
DP-200 Exam - Question 100


HOTSPOT

You have a self-hosted integration runtime in Azure Data Factory.

The current status of the integration runtime has the following configurations:

✑ Status: Running

✑ Type: Self-Hosted

✑ Version: 4.4.7292.1

✑ Running / Registered Node(s): 1/1

✑ High Availability Enabled: False

✑ Linked Count: 0

✑ Queue Length: 0

✑ Average Queue Duration: 0.00s

The integration runtime has the following node details:

✑ Name: X-M

✑ Status: Running

✑ Version: 4.4.7292.1

✑ Available Memory: 7697MB

✑ CPU Utilization: 6%

✑ Network (In/Out): 1.21KBps/0.83KBps

✑ Concurrent Jobs (Running/Limit): 2/14

✑ Role: Dispatcher/Worker

✑ Credential Status: In Sync

Use the drop-down menus to select the answer choice that completes each statement based on the information presented.

NOTE: Each correct selection is worth one point.

Hot Area:

(Hot area image not reproduced.)

Correct Answer:

(Answer image not reproduced; the correct selections are explained below.)

Box 1: fail until the node comes back online

We see: High Availability Enabled: False

Note: Enabling high availability (by registering additional nodes) means the self-hosted integration runtime is no longer a single point of failure in your big data solution or cloud data integration with Data Factory. Because high availability is disabled here and only one node is registered, any activity dispatched while that node is offline will fail until the node comes back online.
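A minimal sketch, assuming the azure-identity and azure-mgmt-datafactory Python packages, of fetching the authentication key used to register a second node (all resource names below are hypothetical placeholders):

    # Sketch only: fetch the auth key needed to register an additional node.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.datafactory import DataFactoryManagementClient

    # Hypothetical subscription and resource names for illustration.
    client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")
    keys = client.integration_runtimes.list_auth_keys("my-rg", "my-adf", "my-self-hosted-ir")

    # Entering this key in the self-hosted IR installer on a second machine
    # registers it as node 2 of 2, which enables high availability.
    print(keys.auth_key1)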

Box 2: lowered

We see:

Concurrent Jobs (Running/Limit): 2/14

CPU Utilization: 6%

Note: When the processor and available RAM aren't well utilized, but the execution of concurrent jobs reaches a node's limits, scale up by increasing the number of concurrent jobs that a node can run.
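A hedged sketch, again assuming the azure-identity and azure-mgmt-datafactory packages and using hypothetical resource names, of checking the runtime's live status and adjusting the node's concurrent jobs limit in either direction:

    # Sketch only: inspect a self-hosted IR and change a node's concurrent jobs limit.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.datafactory import DataFactoryManagementClient
    from azure.mgmt.datafactory.models import UpdateIntegrationRuntimeNodeRequest

    # Hypothetical names for illustration; "X-M" is the node name from the question.
    client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

    # Live status includes queue length and per-node utilization.
    status = client.integration_runtimes.get_status("my-rg", "my-adf", "my-self-hosted-ir")
    print(status.properties)

    # Lower (or raise) the node's limit; 8 is an arbitrary example value.
    client.integration_runtime_nodes.update(
        "my-rg", "my-adf", "my-self-hosted-ir", "X-M",
        UpdateIntegrationRuntimeNodeRequest(concurrent_jobs_limit=8),
    )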

Reference:

https://docs.microsoft.com/en-us/azure/data-factory/create-self-hosted-integration-runtime

Discussion

4 comments
MMM777
May 8, 2021

If each job is similar and each uses 3% CPU, then even at the current max of 14 that's only 42% CPU utilization. Should INCREASE the limit - which is what the answer is also saying.

111222333
May 15, 2021

Agree with "raise". If we lower the number of concurrent jobs, the CPU will become even more *under*utilized: once the limit of concurrent jobs is reached, they will occupy even less CPU, so the CPU will never be optimally utilized. RAISE the number of concurrent jobs so that their CPU consumption comes closer to the CPU's max. Reference: https://docs.microsoft.com/en-us/azure/data-factory/create-self-hosted-integration-runtime#scale-up

michalS
Aug 25, 2021

You are not increasing the currently running jobs, but the limit of jobs that can run concurrently, which is 14 at the moment. So increasing it would only reduce CPU utilization.

dangal95
May 5, 2021

Shouldn't the second option be "left as is", since the number of concurrent jobs is 2/14? The answer states that if the number of concurrent jobs reaches the limit (14 in this case) and the CPU is underutilized, then we should increase the number of concurrent jobs that can run. In this case it seems that 6% is a reasonable amount of utilization given that 2/14 jobs (14%) are running, no?

cadio30
May 11, 2021

"When the processor and available RAM aren't well utilized, but the execution of concurrent jobs reaches a node's limits, scale up by increasing the number of concurrent jobs that a node can run. You might also want to scale up when activities time out because the self-hosted IR is overloaded. As shown in the following image, you can increase the maximum capacity for a node." The above statement was quoted from the link provided below and on this scenario, it requires to increase the concurrent jobs (running/limit) Reference: https://docs.microsoft.com/en-us/azure/data-factory/create-self-hosted-integration-runtime

lgtiza
Jun 27, 2021

I agree with Dangal, "left as is", because here you are not reaching the node's limits (it's using just 6%). If it were using all 14 jobs at, say, 20% CPU, then I would say raise the number of possible jobs; otherwise not. And I would not lower the number of possible jobs either, because I don't know how many concurrent jobs I might need to run; this is just a snapshot of what was happening at that moment.


Pierwiastek
Jun 14, 2021

I agree with the answer. We should lower the Concurrent Jobs (Running/Limit) value, i.e., lower the limit. We don't need such a big value if we use only 2 of 14.

Simon2021
Jun 15, 2021

We should lower the concurrent jobs limit.