Exam DP-203
Question 149

HOTSPOT -

You have a self-hosted integration runtime in Azure Data Factory.

The current status of the integration runtime has the following configurations:

✑ Status: Running

✑ Type: Self-Hosted

✑ Version: 4.4.7292.1

✑ Running / Registered Node(s): 1/1

✑ High Availability Enabled: False

✑ Linked Count: 0

✑ Queue Length: 0

✑ Average Queue Duration: 0.00s

The integration runtime has the following node details:

✑ Name: X-M

✑ Status: Running

✑ Version: 4.4.7292.1

✑ Available Memory: 7697MB

✑ CPU Utilization: 6%

✑ Network (In/Out): 1.21KBps/0.83KBps

✑ Concurrent Jobs (Running/Limit): 2/14

✑ Role: Dispatcher/Worker

✑ Credential Status: In Sync

Use the drop-down menus to select the answer choice that completes each statement based on the information presented.

NOTE: Each correct selection is worth one point.

Hot Area:

    Correct Answer:

    Box 1: fail until the node comes back online

    We see: High Availability Enabled: False

    Note: With high availability across multiple nodes, the self-hosted integration runtime is no longer the single point of failure in your big data solution or cloud data integration with Data Factory. Here, high availability is disabled and only one node is registered, so any job in progress fails until that node comes back online.

    Box 2: lowered

    We see:

    Concurrent Jobs (Running/Limit): 2/14

    CPU Utilization: 6%

    Note: When the processor and available RAM aren't well utilized, but the execution of concurrent jobs reaches a node's limits, scale up by increasing the number of concurrent jobs that a node can run.
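The per-job estimate that several commenters apply to these figures can be sketched as follows. This is a back-of-envelope model, assuming each job consumes an equal CPU share; the numbers come from the question, not from any Microsoft formula:

```python
# Figures from the question's node details.
running_jobs = 2
job_limit = 14
cpu_utilization = 0.06  # 6%

# Assumption: every concurrent job costs the same CPU share.
cpu_per_job = cpu_utilization / running_jobs        # 0.03, i.e. 3% per job
projected_at_limit = cpu_per_job * job_limit        # 0.42, i.e. 42% at 14 jobs

print(f"CPU per job: {cpu_per_job:.0%}")
print(f"Projected CPU if all 14 slots run: {projected_at_limit:.0%}")
```

Under this model the node would sit at roughly 42% CPU even with all 14 job slots busy, which is the figure the discussion below argues over.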

    Reference:

    https://docs.microsoft.com/en-us/azure/data-factory/create-self-hosted-integration-runtime

Discussion
Sunnyb

1/14 = 0.07; 6% = 0.06, so it should be lowered.

romanzdk

0.06 / 2 = 0.03; 0.03 * 14 = 0.42, i.e. at most 42% of the CPU for all jobs. Isn't raising it better?

shachar_ash

The question mentions 2/14, which is 0.14; therefore it can be increased.

semauni

Why is this the calculation you make? I see 6% utilization, so 94% to go, so the amount can be raised.

MirandaL

"We recommend that you increase the concurrent jobs limit only when you see low resource usage with the default values on each node." https://docs.microsoft.com/en-us/azure/data-factory/monitor-integration-runtime

moneytime

In essence, the 6% of the CPU depicts a low resource usage, therefore the concurrent jobs limit should be increased.

Omkarrokee

Based on the information provided, the CPU Utilization is 6% and the Concurrent Jobs (Running/Limit) is 2/14. This indicates that the integration runtime is using only 6% of the available CPU capacity and is currently running 2 out of a maximum of 14 concurrent jobs. Given this, the appropriate answer choice would be: the concurrent jobs limit should be scaled up. Since CPU utilization is low at 6% and there is still capacity for additional jobs, raising the concurrent jobs limit would allow more jobs to run simultaneously and make better use of the available resources.

saqib839

So the math is simple: we have 2 concurrent processes running, causing 6% CPU utilization. The concurrent process limit is 14, so scaling proportionally (2 processes = 6%) gives 14 processes = 42%, which is underutilized. We could raise the concurrent process limit to 26, which gives a CPU utilization of 78%. ALL THESE CALCULATIONS ASSUME EACH CONCURRENT PROCESS CAUSES THE SAME CPU UTILIZATION, WHICH IS 3%.

gplusplus

100% / 3% = 33.33 SHOULD BE THE NEW LIMIT -> RAISED
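Inverting the same linear model reproduces the figure in the comment above. This is only a rough check under the commenters' equal-cost-per-job assumption, not guidance from the docs:

```python
# Back-of-envelope: if each job costs the same CPU share, what limit
# would saturate the CPU? Figures come from the question.
cpu_utilization = 0.06   # 6% with 2 jobs running
running_jobs = 2
cpu_per_job = cpu_utilization / running_jobs   # 0.03

new_limit = 1.00 / cpu_per_job   # limit that would reach ~100% CPU
print(round(new_limit, 2))       # 33.33
```

In practice you would leave headroom well below 100% rather than size the limit to saturate the node.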

Momoanwar

ChatGPT: Given that high availability is not enabled for the self-hosted integration runtime, the correct answer for the first statement is: - **Fail until the node comes back online.** For the second statement regarding the number of concurrent jobs, considering that the CPU utilization is quite low at 6% and there is a significant difference between the number of running jobs (2) and the limit (14), the correct answer should be: - **Left as is.** There is no indication from the given data that the concurrent jobs limit needs to be adjusted, as the system is currently underutilized.

pavankr

When the explanation says "scale up by increasing the number", why is the answer "lowered"?

norbitek

I would leave it as it is. See: https://learn.microsoft.com/en-us/azure/data-factory/monitor-integration-runtime "The default value of the concurrent jobs limit is set based on the machine size. The factors used to calculate this value depend on the amount of RAM and the number of CPU cores of the machine. So the more cores and the more memory, the higher the default limit of concurrent jobs. You scale out by increasing the number of nodes. When you increase the number of nodes, the concurrent jobs limit is the sum of the concurrent job limit values of all the available nodes. For example, if one node lets you run a maximum of twelve concurrent jobs, then adding three more similar nodes lets you run a maximum of 48 concurrent jobs (that is, 4 x 12). We recommend that you increase the concurrent jobs limit only when you see low resource usage with the default values on each node."
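The scale-out rule quoted here is just a sum over the nodes' individual limits. A trivial sketch using the docs' own example figures:

```python
# The quoted docs example: one node allows 12 concurrent jobs; adding
# three more similar nodes gives a combined limit of 4 x 12 = 48.
node_limits = [12, 12, 12, 12]
effective_limit = sum(node_limits)
print(effective_limit)  # 48
```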

OldSchool

Here is my thinking on this. High availability is False, so no scaling out. The second question asks what should be done with the number of concurrent jobs, not about scaling up the CPU. Since only 2 of a possible 14 jobs are running and CPU utilization is only 6%, the number of concurrent jobs should be increased. If left as is, we are overspending; if decreased, we are overspending even more, since CPU utilization will be lowered too.

OldSchool

When the processor and available RAM aren't well utilized, but the execution of concurrent jobs reaches a node's limits, scale up by increasing the number of concurrent jobs that a node can run. https://learn.microsoft.com/en-us/azure/data-factory/create-self-hosted-integration-runtime?tabs=data-factory#scale-up

ELJORDAN23

Got this question on my exam on January 17. I wasn't sure, but I put the same answers as the dump.

phydev

ChatGPT says: *it should be raised* because there's currently a very low CPU utilization (only 6%) and two concurrent jobs running out of a limit of 14. The fact that the CPU utilization is quite low suggests that your integration runtime has available processing capacity.

Ram9198

Left as it is

auwia

We are talking about the max number of jobs running in parallel! If you have available resources, of course it is recommended to raise the current limit to absorb future load. Microsoft also recommends this: https://learn.microsoft.com/en-us/azure/data-factory/monitor-integration-runtime "We recommend that you increase the concurrent jobs limit only when you see low resource usage with the default values on each node." I think this is the case here. The question also asks what you *should* do, not what is mandatory, so I think we should follow the recommendation and raise the limit.

learnwell

The answer provided by edba explains both the question and the answer very well. The second question asks what we should do with the LIMIT of concurrent jobs, which is currently set to 14. Since only 2 jobs are running, the LIMIT of concurrent jobs can be reduced from 14. The question is about the limit of concurrent jobs, not about the actual concurrent jobs being run. Check the link below: https://docs.microsoft.com/en-us/azure/data-factory/create-self-hosted-integration-runtime?tabs=data-factory#scale-considerations

j888

Agreed on the answer for the first part, but not the second. https://docs.microsoft.com/en-us/azure/data-factory/monitor-integration-runtime "We recommend that you increase the concurrent jobs limit only when you see low resource usage with the default values on each node." I think the answer should be raised.

Vanq69

I don't know why this one standard bs answer of "lowered" shows up everywhere. Only 2 concurrent jobs are running out of a possible 14, and CPU usage is at 6%; doesn't it make sense to raise the concurrent jobs, even to 14/14, and still have only 42% CPU usage? Or is this question aiming at something else?

kkk5566

be lowered

martcerv

the concurrent jobs limit is the sum of the concurrent job limit values of all the available nodes