Question 6 of 81

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You have the following line-of-business solutions:

✑ ERP system

✑ Online WebStore

✑ Partner extranet

One or more Microsoft SQL Server instances support each solution. Each solution has its own product catalog. You have an additional server that hosts SQL Server Integration Services (SSIS) and a data warehouse. You populate the data warehouse with data from each of the line-of-business solutions. The data warehouse does not store primary key values from the individual source tables.

The database for each solution has a table named Products that stores product information. The Products table in each database uses a separate and unique key for product records. Each table shares a column named ReferenceNr between the databases. This column is used to create queries that involve more than one solution.

You need to load data from the individual solutions into the data warehouse nightly. The following requirements must be met:

✑ If a change is made to the ReferenceNr column in any of the sources, set the value of IsDisabled to True and create a new row in the Products table.

✑ If a row is deleted in any of the sources, set the value of IsDisabled to True in the data warehouse.

Solution: Perform the following actions:

✑ Enable Change Tracking for the Products table in the source databases.

✑ Query the CHANGETABLE function from the sources for the updated rows.

✑ Set the IsDisabled column to True for the listed rows that have the old ReferenceNr value.

✑ Create a new row in the data warehouse Products table with the new ReferenceNr value.

Does the solution meet the goal?

    Correct Answer: B

    The solution must account for both updates and deletions in the source data to meet the requirements. Change Tracking does record which rows were updated or deleted, but the CHANGETABLE function returns only the primary key of each changed row and the type of operation; it does not return the previous column values. Because the data warehouse does not store primary key values from the source tables, there is no way to identify the warehouse row that holds the old ReferenceNr value in order to set its IsDisabled column to True. A mechanism that exposes the before and after values of changed rows, such as Change Data Capture, is required instead.
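A minimal sketch of what Change Tracking actually returns (the ProductID column name and the stored sync version are illustrative, and Change Tracking must already be enabled on the database and table):

```sql
DECLARE @last_sync_version bigint = 0;  -- version saved at the end of the previous nightly load

-- CHANGETABLE returns the primary key of each changed row and the
-- operation type ('I', 'U', or 'D'), but not the old ReferenceNr value.
SELECT CT.ProductID,
       CT.SYS_CHANGE_OPERATION
FROM CHANGETABLE(CHANGES dbo.Products, @last_sync_version) AS CT;
```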

Question 7 of 81

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You have the following line-of-business solutions:

✑ ERP system

✑ Online WebStore

✑ Partner extranet

One or more Microsoft SQL Server instances support each solution. Each solution has its own product catalog. You have an additional server that hosts SQL Server Integration Services (SSIS) and a data warehouse. You populate the data warehouse with data from each of the line-of-business solutions. The data warehouse does not store primary key values from the individual source tables.

The database for each solution has a table named Products that stores product information. The Products table in each database uses a separate and unique key for product records. Each table shares a column named ReferenceNr between the databases. This column is used to create queries that involve more than one solution.

You need to load data from the individual solutions into the data warehouse nightly. The following requirements must be met:

✑ If a change is made to the ReferenceNr column in any of the sources, set the value of IsDisabled to True and create a new row in the Products table.

✑ If a row is deleted in any of the sources, set the value of IsDisabled to True in the data warehouse.

Solution: Perform the following actions:

✑ Enable Change Tracking for the Products table in the source databases.

✑ Query the cdc.fn_cdc_get_all_changes_capture_dbo_products function from the sources for updated rows.

✑ Set the IsDisabled column to True for rows with the old ReferenceNr value.

✑ Create a new row in the data warehouse Products table with the new ReferenceNr value.

Does the solution meet the goal?

    Correct Answer: B

    The solution enables Change Tracking on the source tables but then queries cdc.fn_cdc_get_all_changes_capture_dbo_products, which is a Change Data Capture (CDC) query function. CDC query functions are generated only when CDC is enabled on a table with sys.sp_cdc_enable_table; enabling Change Tracking does not create them, so the query would fail. Because the solution mixes the two features, it cannot retrieve the changed rows and does not meet the goal.
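For reference, a CDC query function like the one named in the solution exists only after CDC itself is enabled, roughly as follows (parameter values are illustrative):

```sql
-- Enable CDC at the database level, then on the source table.
EXEC sys.sp_cdc_enable_db;

EXEC sys.sp_cdc_enable_table
     @source_schema = N'dbo',
     @source_name   = N'Products',
     @role_name     = NULL;

-- Only now is a function such as cdc.fn_cdc_get_all_changes_dbo_Products
-- generated and queryable with an LSN range.
```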

Question 8 of 81

DRAG DROP -

Note: This question is part of a series of questions that use the same scenario. For your convenience, the scenario is repeated in each question. Each question presents a different goal and answer choices, but the text of the scenario is exactly the same in each question in this series.

You have a Microsoft SQL Server data warehouse instance that supports several client applications.

The data warehouse includes the following tables: Dimension.SalesTerritory, Dimension.Customer, Dimension.Date, Fact.Ticket, and Fact.Order. The Dimension.SalesTerritory and Dimension.Customer tables are frequently updated. The Fact.Order table is optimized for weekly reporting, but the company wants to change it to daily. The Fact.Order table is loaded by using an ETL process. Indexes have been added to the table over time, but the presence of these indexes slows data loading.

All data in the data warehouse is stored on a shared SAN. All tables are in a database named DB1. You have a second database named DB2 that contains copies of production data for a development environment. The data warehouse has grown and the cost of storage has increased. Data older than one year is accessed infrequently and is considered historical.

You have the following requirements:

✑ Implement table partitioning to improve the manageability of the data warehouse and to avoid the need to repopulate all transactional data each night. Use a partitioning strategy that is as granular as possible.

✑ Partition the Fact.Order table and retain a total of seven years of data.

✑ Partition the Fact.Ticket table and retain seven years of data. At the end of each month, the partition structure must apply a sliding window strategy to ensure that a new partition is available for the upcoming month, and that the oldest month of data is archived and removed.

✑ Optimize data loading for the Dimension.SalesTerritory, Dimension.Customer, and Dimension.Date tables.

✑ Incrementally load all tables in the database and ensure that all incremental changes are processed.

✑ Maximize the performance during the data loading process for the Fact.Order partition.

✑ Ensure that historical data remains online and available for querying.

✑ Reduce ongoing storage costs while maintaining query performance for current data.

You are not permitted to make changes to the client applications.

You need to configure the Fact.Order table.

Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.

Select and Place:

    Correct Answer:

    From scenario: Partition the Fact.Order table and retain a total of seven years of data. Maximize the performance during the data loading process for the Fact.Order partition.

    Step 1: Create a partition function.

    Using CREATE PARTITION FUNCTION is the first step in creating a partitioned table or index.

    Step 2: Create a partition scheme based on the partition function.

    A partition scheme maps the partitions defined by the partition function to one or more filegroups. The table (or its clustered index) is then created on, or rebuilt onto, that partition scheme.

    Step 3: Execute an ALTER TABLE command to specify the partition function.

    References: https://docs.microsoft.com/en-us/azure/sql-data-warehouse/sql-data-warehouse-tables-partition
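The three steps can be sketched as follows (a daily RANGE RIGHT function fits the "as granular as possible" requirement; boundary dates, object names, and the filegroup are illustrative):

```sql
-- Step 1: partition function with one boundary value per day.
CREATE PARTITION FUNCTION pfOrderDate (date)
    AS RANGE RIGHT FOR VALUES ('2012-01-01', '2012-01-02' /* ...one value per day... */);

-- Step 2: partition scheme mapping every partition to a filegroup.
CREATE PARTITION SCHEME psOrderDate
    AS PARTITION pfOrderDate ALL TO ([PRIMARY]);

-- Step 3: place Fact.Order on the scheme, e.g. by rebuilding its clustered index.
-- CREATE CLUSTERED INDEX cix_FactOrder ON Fact.[Order](OrderDate)
--     WITH (DROP_EXISTING = ON) ON psOrderDate(OrderDate);
```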

Question 9 of 81

DRAG DROP -

Note: This question is part of a series of questions that use the same scenario. For your convenience, the scenario is repeated in each question. Each question presents a different goal and answer choices, but the text of the scenario is exactly the same in each question in this series.

You have a Microsoft SQL Server data warehouse instance that supports several client applications.

The data warehouse includes the following tables: Dimension.SalesTerritory, Dimension.Customer, Dimension.Date, Fact.Ticket, and Fact.Order. The Dimension.SalesTerritory and Dimension.Customer tables are frequently updated. The Fact.Order table is optimized for weekly reporting, but the company wants to change it to daily. The Fact.Order table is loaded by using an ETL process. Indexes have been added to the table over time, but the presence of these indexes slows data loading.

All data in the data warehouse is stored on a shared SAN. All tables are in a database named DB1. You have a second database named DB2 that contains copies of production data for a development environment. The data warehouse has grown and the cost of storage has increased. Data older than one year is accessed infrequently and is considered historical.

You have the following requirements:

✑ Implement table partitioning to improve the manageability of the data warehouse and to avoid the need to repopulate all transactional data each night. Use a partitioning strategy that is as granular as possible.

✑ Partition the Fact.Order table and retain a total of seven years of data.

✑ Partition the Fact.Ticket table and retain seven years of data. At the end of each month, the partition structure must apply a sliding window strategy to ensure that a new partition is available for the upcoming month, and that the oldest month of data is archived and removed.

✑ Optimize data loading for the Dimension.SalesTerritory, Dimension.Customer, and Dimension.Date tables.

✑ Incrementally load all tables in the database and ensure that all incremental changes are processed.

✑ Maximize the performance during the data loading process for the Fact.Order partition.

✑ Ensure that historical data remains online and available for querying.

✑ Reduce ongoing storage costs while maintaining query performance for current data.

You are not permitted to make changes to the client applications.

You need to optimize data loading for the Dimension.Customer table.

Which three Transact-SQL segments should you use to develop the solution? To answer, move the appropriate Transact-SQL segments from the list of Transact-SQL segments to the answer area and arrange them in the correct order.

NOTE: You will not need all of the Transact-SQL segments.

Select and Place:

    Correct Answer:

    Step 1: USE DB1 -

    From Scenario: All tables are in a database named DB1. You have a second database named DB2 that contains copies of production data for a development environment.

    Step 2: EXEC sys.sp_cdc_enable_db

    Before you can enable a table for change data capture, the database must be enabled. To enable the database, use the sys.sp_cdc_enable_db stored procedure. sys.sp_cdc_enable_db has no parameters.

    Step 3: EXEC sys.sp_cdc_enable_table

    @source_schema = N'schema', etc.

    sys.sp_cdc_enable_table enables change data capture for the specified source table in the current database.

    Partial syntax:

    sys.sp_cdc_enable_table

    [ @source_schema = ] 'source_schema',

    [ @source_name = ] 'source_name'

    [ , [ @capture_instance = ] 'capture_instance' ]

    [ , [ @supports_net_changes = ] supports_net_changes ]

    etc.

    References: https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sys-sp-cdc-enable-table-transact-sql https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sys-sp-cdc-enable-db-transact-sql
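Putting the three segments together (parameter values beyond those shown in the answer, such as @role_name, are illustrative; @supports_net_changes = 1 requires a primary key on the table):

```sql
USE DB1;
GO
-- Enable CDC for the database.
EXEC sys.sp_cdc_enable_db;
GO
-- Enable CDC for the Dimension.Customer table.
EXEC sys.sp_cdc_enable_table
     @source_schema        = N'Dimension',
     @source_name          = N'Customer',
     @role_name            = NULL,
     @supports_net_changes = 1;
```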

Question 10 of 81

Note: This question is part of a series of questions that use the same scenario. For your convenience, the scenario is repeated in each question. Each question presents a different goal and answer choices, but the text of the scenario is exactly the same in each question in this series.

You have a Microsoft SQL Server data warehouse instance that supports several client applications.

The data warehouse includes the following tables: Dimension.SalesTerritory, Dimension.Customer, Dimension.Date, Fact.Ticket, and Fact.Order. The Dimension.SalesTerritory and Dimension.Customer tables are frequently updated. The Fact.Order table is optimized for weekly reporting, but the company wants to change it to daily. The Fact.Order table is loaded by using an ETL process. Indexes have been added to the table over time, but the presence of these indexes slows data loading.

All data in the data warehouse is stored on a shared SAN. All tables are in a database named DB1. You have a second database named DB2 that contains copies of production data for a development environment. The data warehouse has grown and the cost of storage has increased. Data older than one year is accessed infrequently and is considered historical.

You have the following requirements:

✑ Implement table partitioning to improve the manageability of the data warehouse and to avoid the need to repopulate all transactional data each night. Use a partitioning strategy that is as granular as possible.

✑ Partition the Fact.Order table and retain a total of seven years of data.

✑ Partition the Fact.Ticket table and retain seven years of data. At the end of each month, the partition structure must apply a sliding window strategy to ensure that a new partition is available for the upcoming month, and that the oldest month of data is archived and removed.

✑ Optimize data loading for the Dimension.SalesTerritory, Dimension.Customer, and Dimension.Date tables.

✑ Incrementally load all tables in the database and ensure that all incremental changes are processed.

✑ Maximize the performance during the data loading process for the Fact.Order partition.

✑ Ensure that historical data remains online and available for querying.

✑ Reduce ongoing storage costs while maintaining query performance for current data.

You are not permitted to make changes to the client applications.

You need to implement the data partitioning strategy.

How should you partition the Fact.Order table?

    Correct Answer: C

    The Fact.Order table needs to be partitioned to support the requirement of changing the reporting frequency from weekly to daily and to retain seven years of data. Given that there are 365 days in a year, partitioning by day would require 2,555 partitions for normal years plus 2 partitions to account for leap years and future data. Thus, a total of 2,557 partitions is required. This granularity will ensure optimal performance for daily reporting and efficient data management.