Exam PL-300 All Questions
Question 33

You are creating a report in Power BI Desktop.

You load a data extract that includes a free text field named col1.

You need to analyze the frequency distribution of the string lengths in col1. The solution must not affect the size of the model.

What should you do?

A. From Report view, add a DAX calculated column that calculates the length of col1.
B. From Report view, add a DAX function that calculates the average length of col1.
C. From Power Query Editor, add a column that calculates the length of col1.
D. From Power Query Editor, change the distribution for the Column profile to group by length for col1.

    Correct Answer: D

    To analyze the frequency distribution of string lengths in col1 without affecting the size of the model, you should change the distribution for the Column profile to group by length for col1 in the Power Query Editor. This method does not create any new columns or add data to the model; it merely changes how the column’s data is visualized, providing the required distribution analysis.

Discussion
Muffinshow (Option: D)

Wrong answer. A will affect the size of the model, as would C. B doesn't give you enough information about the distribution (just the average). D is the right answer.

GPerez73

I agree

Jonagan

Why do you think that aggregating in Power Query will not influence the size of the data model? It's getting smaller, isn't it? Measures are the only solution that does not influence the data model: they require CPU, but they neither store additional data in nor reduce the data of the model.

Kai_don

Option A says to use a calculated column, which increases the size of the model. So D is correct.
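For reference, option A's calculated column might look like the DAX below (the table name 'Data' is an assumption; the question only names col1). Calculated columns are materialized row by row and compressed into the model, which is why this option grows the model size:

```dax
-- Hypothetical calculated column; the table name 'Data' is assumed.
-- The result is stored in the model for every row, increasing model size.
col1 Length = LEN ( 'Data'[col1] )
```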

GabryPL

Option B is also correct for me; it's the only one that will not affect the size of the model.

Mubarakbabs

Yes, option B will not affect the size of the model, but it won't show us the frequency distribution, which is what we really need. Option D doesn't create any new column; it only changes how the column distribution is displayed, so it won't affect the size of the model.

lizbette

why doesn't B affect the size of the model but A does?

Elektrolite

D is not aggregating in Power Query; it's viewing the column profile.

Hoeishetmogelijk

I agree completely!

KARELA

For D to be correct we would need to calculate the length of the strings in col1 beforehand, so it is not correct.

sandipnair

If you enable the column profile from the View menu, you can actually group the distribution by text length. It is not grouping the actual column, just grouping the distribution.

Inesd

correct @sandipnair

lukelin08 (Option: D)

It's D. This can easily be tested by going to Power Query Editor > View > Column profile, then clicking the three little dots on the distribution graph and selecting group by text length. This will allow you to view the distribution of text lengths within the column.

HemantGorle

D is correct and it can be tested by following step mentioned by Lukelin08

eloomis

The problem is this method doesn't make the distribution analyzable in the report, which I think is what the question is getting at. It will show you the distribution, but you need a DAX measure to place in your report to visualize it. I would go with option B, as it creates a measure which you can use in the report, and it doesn't contribute to the size of the model the way A does.
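A measure along these lines is presumably what option B describes (again, the table name 'Data' is assumed). A measure is evaluated at query time and persists nothing in the model, but note it returns a single number, not a frequency distribution:

```dax
-- Hypothetical measure; the table name 'Data' is assumed.
-- Computed on the fly at query time; nothing is stored in the model.
Avg col1 Length = AVERAGEX ( 'Data', LEN ( 'Data'[col1] ) )
```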

lifewjola

The question says analyze, not add it to the report...

miro26

Make sure your column type is not "variant" ;)

cs3122

Thank you, just tested and it works. D has to be the answer, as it doesn't impact the model size

Giuditta (Option: D)

This was on the exam on 14/03/2024; I scored 948. My answer was D.

AZFabio (Option: A)

A is right, but B looks correct to me as well

Usm_9

WAS ON THE EXAM 02/03/2024

DANIEL (Option: A)

I think sometimes it's better to stay grounded and read the question for what it is, relying on facts only: you create a REPORT, you load data, and you need to be able to see the frequency distribution of LEN(col1) (supposedly on the report, as you were just asked to create one, makes sense?). In the available answers you have two options from Report view: one calculates the sum, the other the average. Just go for the sum, which is answer A. As everyone knows, DAX creates new info from data ALREADY in your model. In Power Query you need to Close & Apply to use your new info (=> affects the model).

momo1165 (Option: C)

I pick C for performance purposes, based on https://www.sqlbi.com/articles/comparing-dax-calculated-columns-with-power-query-computed-columns/

golia (Option: C)

ChatGPT answer: C. From Power Query Editor, add a column that calculates the length of col1.

Explanation: In Power Query Editor, you can add a custom column to calculate the length of the text in col1 without affecting the size of the Power BI model. This is a more efficient way to perform the operation on the data before it is loaded into the model.

Option A suggests adding a DAX calculated column in the report, but this would affect the size of the model, which is not desired. Option B suggests using a DAX function to calculate the average length, which is not the same as analyzing the frequency distribution of string lengths. Option D refers to changing the distribution for the Column profile, which is a profiling feature and doesn't directly calculate the lengths for the purpose of frequency distribution. The correct approach for this scenario is to add a column in Power Query Editor.
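Option C's step would look roughly like the Power Query M below (the step name "Source" is assumed). Note that, contrary to the ChatGPT explanation, the added column is loaded into the model with the rest of the query once you Close & Apply, so it does consume model size:

```powerquery-m
// Hypothetical custom-column step; "Source" is assumed to be the previous step.
// The new column is loaded into the model along with the table.
AddedLength = Table.AddColumn(
    Source,
    "col1 Length",
    each Text.Length([col1]),
    Int64.Type
)
```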

Kiran37

correct answer

yaya32 (Option: D)

D for me

RoxyRishi (Option: D)

I also think D as it won't affect the data model size

9f73003 (Option: D)

D. It says to analyze, which in the context of this question means look at to determine. It also states it must not affect the model size. A calculated DAX column will affect the size of the model. D is perfect, because with the column profile tool you are able to see the exact information that you are looking for.

benni_ale (Option: D)

D works just fine and DOES NOT affect the model size.

svbz (Option: C)

Option D does not allow you to see the string length of each row; it just shows min and max.

datacert2022 (Option: A)

Why is A the right answer when 91% of the community indicates it's D? That question is for Exam Topics the company, not the community.

JohnChung (Option: D)

I tried. D is the correct answer

Lalith_parsa (Option: C)

C. From Power Query Editor, add a column that calculates the length of col1. Adding a column in Power Query Editor that calculates the length of col1 before the data is loaded into the model is a more efficient approach. This processing is done during the data load, and the calculated length can be used for analysis without increasing the size of the in-memory data model.