Question 6 of 331

HOTSPOT -

You have an on-premises Microsoft SQL Server 2016 server named Server1 that contains a database named DB1.

You need to perform an online migration of DB1 to an Azure SQL Database managed instance by using Azure Database Migration Service.

How should you configure the backup of DB1? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Hot Area:

    Correct Answer:

    Box 1: Full and log backups only

    Make sure to take every backup to separate backup media (backup files). Azure Database Migration Service doesn't support backups that are appended to a single backup file, so take the full backup and the log backups to separate backup files.

    Box 2: WITH CHECKSUM -

    Azure Database Migration Service uses the backup and restore method to migrate your on-premises databases to SQL Managed Instance. Azure Database Migration Service only supports backups created using checksum.

    Incorrect Answers:

    NOINIT -

    Indicates that the backup set is appended to the specified media set, preserving existing backup sets. If a media password is defined for the media set, the password must be supplied. NOINIT is the default. Appending backups to a single media set is exactly what Azure Database Migration Service does not support.

    UNLOAD -

    Specifies that the tape is automatically rewound and unloaded when the backup is finished. UNLOAD is the default when a session begins.
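
    For example, a minimal T-SQL sketch of taking the full backup and a log backup to separate files, each created WITH CHECKSUM (the backup share path is hypothetical):

    -- Full backup to its own file, with checksum, as required by Azure Database Migration Service.
    BACKUP DATABASE DB1
        TO DISK = N'\\backupshare\DB1\DB1_full.bak'
        WITH CHECKSUM, FORMAT, INIT;

    -- Each log backup also goes to its own separate file, with checksum.
    BACKUP LOG DB1
        TO DISK = N'\\backupshare\DB1\DB1_log_01.trn'
        WITH CHECKSUM, FORMAT, INIT;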

    Reference:

    https://docs.microsoft.com/en-us/azure/dms/known-issues-azure-sql-db-managed-instance-online

Question 7 of 331

DRAG DROP -

You have a resource group named App1Dev that contains an Azure SQL Database server named DevServer1. DevServer1 contains an Azure SQL database named DB1. The schema and permissions for DB1 are saved in a Microsoft SQL Server Data Tools (SSDT) database project.

You need to populate a new resource group named App1Test with the DB1 database and an Azure SQL Database server named TestServer1. The resources in App1Test must have the same configurations as the resources in App1Dev.

Which four actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.

Select and Place:

    Correct Answer:

Question 8 of 331

HOTSPOT -

You have an Azure Synapse Analytics dedicated SQL pool named Pool1 and an Azure Data Lake Storage Gen2 account named Account1.

You plan to access the files in Account1 by using an external table.

You need to create a data source in Pool1 that you can reference when you create the external table.

How should you complete the Transact-SQL statement? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Hot Area:

    Correct Answer:

    Box 1: dfs -

    For Azure Data Lake Storage Gen2, use the following syntax:

    http[s]://<storage_account>.dfs.core.windows.net/<container>/subfolders

    Incorrect:

    Not blob: blob is used for Azure Blob Storage. Syntax:

    http[s]://<storage_account>.blob.core.windows.net/<container>/subfolders

    Box 2: TYPE = HADOOP -

    External data sources with TYPE = HADOOP are available only in dedicated SQL pools.

    Syntax for CREATE EXTERNAL DATA SOURCE:

    CREATE EXTERNAL DATA SOURCE <data_source_name>
    WITH
    (    LOCATION = '<prefix>://<path>'
         [, CREDENTIAL = <database_scoped_credential> ]
         , TYPE = HADOOP
    )
    [;]
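
    For illustration, a minimal sketch of such a data source (the data source name, the container name files, and the database scoped credential Account1Cred are hypothetical; the credential must already exist):

    CREATE EXTERNAL DATA SOURCE Account1Source
    WITH
    (   -- dfs endpoint of the Data Lake Storage Gen2 account referenced in the question
        LOCATION = 'https://account1.dfs.core.windows.net/files',
        CREDENTIAL = Account1Cred,
        -- TYPE = HADOOP is required for this external data source in a dedicated SQL pool
        TYPE = HADOOP
    );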

    Reference:

    https://docs.microsoft.com/en-us/azure/synapse-analytics/sql/develop-tables-external-tables

Question 9 of 331

HOTSPOT -

You plan to develop a dataset named Purchases by using Azure Databricks. Purchases will contain the following columns:

✑ ProductID

✑ ItemPrice

✑ LineTotal

✑ Quantity

✑ StoreID

✑ Minute

✑ Month

✑ Hour

✑ Year

✑ Day

You need to store the data to support hourly incremental load pipelines that will vary for each StoreID. The solution must minimize storage costs.

How should you complete the code? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Hot Area:

    Correct Answer:

    Box 1: .partitionBy -

    Example:

    df.write.partitionBy("y", "m", "d")
      .mode(SaveMode.Append)
      .parquet("/data/hive/warehouse/db_name.db/" + tableName)

    Box 2: ("Year","Month","Day","Hour","StoreID")

    Box 3: .parquet("/Purchases")
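
    Putting the three selections together, a Scala sketch (purchasesDf stands in for the Purchases DataFrame, which is not named in the question):

    import org.apache.spark.sql.SaveMode

    // Partitioning by Year/Month/Day/Hour and then StoreID lets each hourly,
    // per-store incremental load write only its own partition folders, while
    // Parquet's columnar compression keeps storage costs low.
    purchasesDf.write
      .partitionBy("Year", "Month", "Day", "Hour", "StoreID")
      .mode(SaveMode.Append)
      .parquet("/Purchases")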

    Reference:

    https://intellipaat.com/community/11744/how-to-partition-and-write-dataframe-in-spark-without-deleting-partitions-with-no-new-data

Question 10 of 331

You are designing a streaming data solution that will ingest variable volumes of data.

You need to ensure that you can change the partition count after creation.

Which service should you use to ingest the data?

    Correct Answer: D

    To ingest variable volumes of data while retaining the ability to change the partition count after creation, Azure Event Hubs Dedicated is the appropriate choice. In the standard tier of Azure Event Hubs, the partition count is fixed when the event hub is created and cannot be modified later; the dedicated tier allows the partition count to be increased after the event hub has been created.