
Certified Associate Developer for Apache Spark Exam - Question 30


Which of the following operations fails to return a DataFrame with no duplicate rows?

A. DataFrame.dropDuplicates()
B. DataFrame.distinct()
C. DataFrame.drop_duplicates()
D. DataFrame.drop_duplicates(subset=None)
E. DataFrame.drop_duplicates(subset="all")

Correct Answer: E

The operation DataFrame.drop_duplicates(subset = "all") fails to return a DataFrame with no duplicate rows because the subset parameter of drop_duplicates() expects a list (or tuple) of column names, not a bare string. Passing the string "all" therefore raises an error (PySparkTypeError: [NOT_LIST_OR_TUPLE] on recent PySpark versions) instead of returning a deduplicated DataFrame. The other options all return a DataFrame with duplicate rows removed across all columns.

Discussion

8 comments
TC007 · Option: E
Mar 26, 2023

Option E is incorrect because "all" is not a valid value for the subset parameter of drop_duplicates(); the correct value is a list of column names to use for identifying duplicate rows. All other options (A, B, C, and D) return a DataFrame with no duplicate rows. The dropDuplicates(), distinct(), and drop_duplicates() methods are equivalent and return a new DataFrame with distinct rows. The drop_duplicates() method also accepts a subset parameter to specify the columns used for identifying duplicates; when subset is not specified, all columns are used. Therefore options A and C are both valid, and option D is also valid since it is equivalent to drop_duplicates() with no subset argument.

4be8126 · Option: E
Apr 26, 2023

A. DataFrame.dropDuplicates(): This method returns a new DataFrame with distinct rows based on all columns. It should return a DataFrame with no duplicate rows.
B. DataFrame.distinct(): This method returns a new DataFrame with distinct rows based on all columns. It should also return a DataFrame with no duplicate rows.
C. DataFrame.drop_duplicates(): This is an alias for DataFrame.dropDuplicates(). It should also return a DataFrame with no duplicate rows.
D. DataFrame.drop_duplicates(subset=None): This method returns a new DataFrame with distinct rows based on all columns. It should return a DataFrame with no duplicate rows.
E. DataFrame.drop_duplicates(subset="all"): This method attempts to drop duplicates based on all columns but raises an error, because "all" is not a valid argument for the subset parameter. So this operation fails to return a DataFrame with no duplicate rows.

Therefore, the correct answer is E.

azurearch · Option: E
Mar 7, 2024

Option E: df.drop_duplicates(subset = "all") raises PySparkTypeError: [NOT_LIST_OR_TUPLE] Argument `subset` should be a list or tuple, got str.

ItsAB · Option: E
Jul 9, 2023

the correct answer is E

cookiemonster42 · Option: B
Aug 3, 2023

B is the right one, as TC007 said, the argument for drop_duplicates is a subset of columns:

DataFrame.dropDuplicates(subset: Optional[List[str]] = None) → pyspark.sql.dataframe.DataFrame

Return a new DataFrame with duplicate rows removed, optionally only considering certain columns. For a static batch DataFrame, it just drops duplicate rows. For a streaming DataFrame, it will keep all data across triggers as intermediate state to drop duplicate rows. You can use withWatermark() to limit how late the duplicate data can be, and the system will accordingly limit the state. In addition, data older than the watermark will be dropped to avoid any possibility of duplicates. drop_duplicates() is an alias for dropDuplicates().

Parameters: subset (list of column names, optional). List of columns to use for duplicate comparison (default: all columns).

cookiemonster42
Aug 3, 2023

OMG, I got it all wrong, the answer is E :)

azurearch · Option: E
Mar 7, 2024

DataFrame.drop_duplicates(subset = "all") - passing a single string for subset is pandas-style usage; PySpark requires a list or tuple of column names.
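For contrast, pandas does accept a single string for `subset`, but only as a column label; "all" is not a special keyword there either. A minimal sketch, assuming pandas is installed:

```python
import pandas as pd

pdf = pd.DataFrame({"letter": ["a", "a", "b"], "number": [1, 1, 2]})

# A string naming an existing column works in pandas.
n = len(pdf.drop_duplicates(subset="letter"))

# "all" is treated as a column label; with no column called "all",
# pandas raises KeyError rather than deduplicating on all columns.
raised = False
try:
    pdf.drop_duplicates(subset="all")
except KeyError:
    raised = True

print(n, raised)
```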

dbdantas · Option: E
Apr 9, 2024

E PySparkTypeError: [NOT_LIST_OR_TUPLE] Argument `subset` should be a list or tuple, got str.

dbdantas · Option: E
Apr 15, 2024

the answer is E