DP-300 exam

There is no doubt that DP-300 exam dumps are useful for passing the exam. Preparing for the Microsoft Administering Relational Databases on Microsoft Azure exam with DP-300 dump questions can save a lot of time and energy. However, that does not mean you have to buy DP-300 exam dumps to pass.

Here, I share part of the Microsoft DP-300 exam dump questions and answers for free. If you want the complete Microsoft DP-300 exam dumps, go to https://www.pass4itsure.com/dp-300.html (PDF + VCE).

Microsoft DP-300 exam dumps PDF download

Free Pass4itSure DP-300 exam dumps PDF download (Google Drive): https://drive.google.com/file/d/1j1TA7Rp0VT8qQ2dmMYxQ2kf4eUMxdDjH/view?usp=sharing

Learn with these free DP-300 exam questions and a valid DP-300 practice test

QUESTION 1 #

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains
a unique solution that might meet the stated goals. Some question sets might have more than one correct solution,
while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear on the review screen.

You have two Azure SQL Database servers named Server1 and Server2. Each server contains an Azure SQL database named Database1.

You need to restore Database1 from Server1 to Server2. The solution must replace the existing Database1 on Server2.

Solution: From Microsoft SQL Server Management Studio (SSMS), you rename Database1 on Server2 as Database2.
From the Azure portal, you create a new database on Server2 by restoring the backup of Database1 from Server1, and
then you delete Database2.

Does this meet the goal?

A. Yes
B. No

Correct Answer: B

Instead, restore Database1 from Server1 to Server2 by using the RESTORE Transact-SQL command and the
REPLACE option.

Note: REPLACE should be used rarely and only after careful consideration. Restore normally prevents accidentally
overwriting a database with a different database.

If the database specified in a RESTORE statement already exists on the current server and the specified database family GUID differs from the database family GUID recorded in the backup set, the database is not restored. This is an important safeguard.
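
For illustration, a minimal sketch of the REPLACE option (the backup file path is hypothetical, not from the question):

-- Overwrite the existing Database1 with the backup taken on Server1.
-- WITH REPLACE bypasses the database family GUID safety check.
RESTORE DATABASE Database1
FROM DISK = N'\\backupshare\backups\Database1.bak'  -- hypothetical path
WITH REPLACE;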

Reference: https://docs.microsoft.com/en-us/sql/t-sql/statements/restore-statements-transact-sql

QUESTION 2 #

You have a Microsoft SQL Server 2019 instance in an on-premises data center. The instance contains a 4-TB database named DB1. You plan to migrate DB1 to an Azure SQL Database managed instance.

What should you use to minimize downtime and data loss during the migration?

A. distributed availability groups
B. database mirroring
C. log shipping
D. Data Migration Assistant

Correct Answer: D

QUESTION 3 #

You have an Azure Database for MySQL server running version 8.0.
You need to identify which database queries consume the most resources.
Which tool should you use?

A. Query Store
B. Metrics
C. Query Performance Insight
D. Alerts

Correct Answer: A

The Query Store feature in Azure Database for MySQL provides a way to track query performance over time. Query
Store simplifies performance troubleshooting by helping you quickly find the longest-running and most resource-intensive queries.

Query Store automatically captures a history of queries and runtime statistics and retains them for your review. It separates data by time windows so that you can see database usage patterns. Data for all users, databases, and queries is stored in the mysql schema database of the Azure Database for MySQL instance.
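
For illustration, a minimal sketch of inspecting the captured data (the mysql.query_store view comes from the referenced documentation; any columns you sort or filter on would need to match that view's definition):

-- List a sample of the queries and runtime statistics captured by Query Store.
SELECT *
FROM mysql.query_store
LIMIT 10;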

Reference: https://docs.microsoft.com/en-us/azure/mysql/concepts-query-store

QUESTION 4 #

HOTSPOT
You have an Azure SQL database that contains a table named Customer. The Customer table contains the columns shown in the following table.

You plan to implement a dynamic data mask for the Customer_Phone column. The mask must meet the following requirements:

1. The first six numerals of each customer's phone number must be masked.
2. The last four digits of each customer's phone number must be visible.
3. Hyphens must be preserved and displayed.
How should you configure the dynamic data mask? To answer, select the appropriate options in the answer area.

Hot Area:

Correct Answer:

Box 1: 0

The Custom String masking method exposes the first and last characters and adds a custom padding string in the middle: prefix, [padding], suffix.

Box 2: xxx-xxx

Box 3: 5
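
Put together, a minimal sketch of the resulting mask as a T-SQL partial() function (the table and column names come from the question; the exact format string is assembled from the three boxes above):

-- Mask 555-123-4567 as xxx-xxx-4567: no exposed prefix, the custom
-- padding string, and the last five characters (hyphen plus four digits) exposed.
ALTER TABLE Customer
ALTER COLUMN Customer_Phone
ADD MASKED WITH (FUNCTION = 'partial(0,"xxx-xxx",5)');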

Reference: https://docs.microsoft.com/en-us/sql/relational-databases/security/dynamic-data-masking

QUESTION 5 #

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains
a unique solution that might meet the stated goals. Some question sets might have more than one correct solution,
while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not
appear in the review screen.

You have an Azure Data Lake Storage account that contains a staging zone.
You need to design a daily process to ingest incremental data from the staging zone, transform the data by executing an R script, and then insert the transformed data into a data warehouse in Azure Synapse Analytics.

Solution: You use an Azure Data Factory schedule trigger to execute a pipeline that executes a mapping data flow, and then inserts the data into the data warehouse.

Does this meet the goal?

A. Yes
B. No

Correct Answer: B

If you need to transform data in a way that is not supported by Data Factory, you can create a custom activity, not a mapping data flow, with your own data processing logic, and use the activity in the pipeline. You can create a custom activity to run R scripts on your HDInsight cluster with R installed.

Reference: https://docs.microsoft.com/en-US/azure/data-factory/transform-data

QUESTION 6 #

You are planning disaster recovery for the failover group of an Azure SQL Database managed instance.
Your company's SLA requires that the database in the failover group become available as quickly as possible if a major outage occurs.

You set the Read/Write failover policy to Automatic.

What are the two results of the configuration? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.

A. In the event of a data center or Azure regional outage, the databases will fail over automatically.
B. In the event of an outage, the databases in the primary instance will fail over immediately.
C. In the event of an outage, you can selectively fail over individual databases.
D. In the event of an outage, you can set a different grace period to fail over each database.
E. In the event of an outage, the minimum delay for the databases to fail over in the primary instance will be one hour.

Correct Answer: AE

A: Auto-failover groups allow you to manage replication and failover of a group of databases on a server or all
databases in a managed instance to another region.

E: Because verification of the scale of the outage and how quickly it can be mitigated involves human actions by the
operations team, the grace period cannot be set below one hour. This limitation applies to all databases in the failover
group regardless of their data synchronization state.

Incorrect Answers:
C: Individual SQL Managed Instance databases cannot be added to or removed from a failover group.

Reference: https://docs.microsoft.com/en-us/azure/azure-sql/database/auto-failover-group-overview

QUESTION 7 #

You have an Azure Data Factory that contains 10 pipelines.
You need to label each pipeline with its main purpose: ingest, transform, or load. The labels must be available for grouping and filtering in the Data Factory monitoring experience.

What should you add to each pipeline?

A. an annotation
B. a resource tag
C. a run group ID
D. a user property
E. a correlation ID

Correct Answer: A

Azure Data Factory annotations are tags that help you filter and group Data Factory objects. You can define annotations on a pipeline and then use them in the monitoring experience to track performance or find errors faster.

Reference: https://www.techtalkcorner.com/monitor-azure-data-factory-annotations/

QUESTION 8 #

You have SQL Server on an Azure virtual machine that contains a database named DB1.
You view a plan summary that shows the duration in milliseconds of each execution of query 1178902 as shown in the
following exhibit:

What should you do to ensure that the query uses the execution plan that executes in the least amount of time?

A. Force the query execution plan for plan 1221065.
B. Run the DBCC FREEPROCCACHE command.
C. Force the query execution plan for plan 1220917.
D. Disable parameter sniffing.

Correct Answer: C
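
Assuming the exhibit shows that plan 1220917 has the lowest duration, forcing it through Query Store would look like this sketch (the query and plan IDs are taken from the question and answer choices):

-- Force the fastest execution plan for the query tracked by Query Store.
EXEC sp_query_store_force_plan @query_id = 1178902, @plan_id = 1220917;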

Reference: https://docs.microsoft.com/en-us/sql/relational-databases/performance/query-store-usage-scenarios

QUESTION 9 #

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains
a unique solution that might meet the stated goals. Some question sets might have more than one correct solution,
while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not
appear in the review screen.

You have an Azure Data Lake Storage account that contains a staging zone.
You need to design a daily process to ingest incremental data from the staging zone, transform the data by executing an R script, and then insert the transformed data into a data warehouse in Azure Synapse Analytics.

Solution: You use an Azure Data Factory schedule trigger to execute a pipeline that executes an Azure Databricks
notebook, and then inserts the data into the data warehouse.
Does this meet the goal?

A. Yes
B. No

Correct Answer: B

If you need to transform data in a way that is not supported by Data Factory, you can create a custom activity, not an
Azure Databricks notebook, with your own data processing logic and use the activity in the pipeline. You can create a
custom activity to run R scripts on your HDInsight cluster with R installed.

Reference: https://docs.microsoft.com/en-US/azure/data-factory/transform-data

QUESTION 10 #

You are designing a security model for an Azure Synapse Analytics dedicated SQL pool that will support multiple
companies.
You need to ensure that users from each company can view only the data of their respective companies.

Which two objects should you include in the solution? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.

A. a column encryption key
B. asymmetric keys
C. a function
D. a custom role-based access control (RBAC) role
E. a security policy

Correct Answer: DE

Azure RBAC is used to manage who can create, update, or delete the Synapse workspace and its SQL pools, Apache
Spark pools, and Integration runtimes.

Define and implement network security configurations for resources related to your dedicated SQL pool with Azure
Policy.
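
If the security policy in option E refers to the T-SQL CREATE SECURITY POLICY object used for row-level security, it pairs with an inline table-valued predicate function. A minimal sketch, with a hypothetical dbo.Sales table, CompanyName column, and matching rule:

-- Predicate function: a row is visible only when the current user's name
-- matches the row's company (hypothetical matching rule).
CREATE FUNCTION dbo.fn_securitypredicate (@CompanyName AS NVARCHAR(100))
    RETURNS TABLE
    WITH SCHEMABINDING
AS
    RETURN SELECT 1 AS fn_securitypredicate_result
    WHERE @CompanyName = USER_NAME();
GO

-- Security policy: binds the predicate to the table as a filter.
CREATE SECURITY POLICY CompanyFilter
ADD FILTER PREDICATE dbo.fn_securitypredicate(CompanyName)
ON dbo.Sales
WITH (STATE = ON);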

Reference:
https://docs.microsoft.com/en-us/azure/synapse-analytics/security/synapse-workspace-synapse-rbac
https://docs.microsoft.com/en-us/security/benchmark/azure/baselines/synapse-analytics-security-baseline

QUESTION 11 #

You are designing an enterprise data warehouse in Azure Synapse Analytics that will contain a table named Customers.
Customers will contain credit card information.

You need to recommend a solution that provides salespeople with the ability to view all the entries in Customers. The solution must prevent the salespeople from viewing or inferring the credit card information.

What should you include in the recommendation?

A. row-level security
B. data masking
C. Always Encrypted
D. column-level security

Correct Answer: B

Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse Analytics support dynamic data masking.
Dynamic data masking limits sensitive data exposure by masking it to non-privileged users.
The Credit card masking method exposes the last four digits of the designated fields and adds a constant string as a
prefix in the form of a credit card.

Example: XXXX-XXXX-XXXX-1234
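
In T-SQL, this credit-card style comes from the same partial() masking function; a minimal sketch with a hypothetical column name on the Customers table from the question:

-- Expose only the last four digits, credit-card style.
ALTER TABLE Customers
ALTER COLUMN CreditCardNumber
ADD MASKED WITH (FUNCTION = 'partial(0,"XXXX-XXXX-XXXX-",4)');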

QUESTION 12 #

You have an Azure Data Factory pipeline that is triggered hourly.
The pipeline has had 100% success for the past seven days.
The pipeline execution fails, and two retries that occur 15 minutes apart also fail. The third failure returns the following
error.

What is a possible cause of the error?

A. From 06:00 to 07:00 on January 10, 2021, there was no data in WWI/BIKES/CARBON.
B. The parameter used to generate year=2021/month=01/day=10/hour=06 was incorrect.
C. From 06:00 to 07:00 on January 10, 2021, the file format of data in WWI/BIKES/CARBON was incorrect.
D. The pipeline was triggered too early.

Correct Answer: B

The error indicates a missing file: an incorrect parameter generated the path year=2021/month=01/day=10/hour=06, which points to a location that does not exist.

QUESTION 13 #

You have the following Azure Data Factory pipelines:

1. Ingest Data from System1
2. Ingest Data from System2
3. Populate Dimensions
4. Populate Facts

Ingest Data from System1 and Ingest Data from System2 have no dependencies. Populate Dimensions must execute
after Ingest Data from System1 and Ingest Data from System2. Populate Facts must execute after the Populate
Dimensions pipeline. All the pipelines must execute every eight hours.

What should you do to schedule the pipelines for execution?

A. Add a schedule trigger to all four pipelines.
B. Add an event trigger to all four pipelines.
C. Create a parent pipeline that contains the four pipelines and use an event trigger.
D. Create a parent pipeline that contains the four pipelines and use a schedule trigger.

Correct Answer: D

Reference: https://www.mssqltips.com/sqlservertip/6137/azure-data-factory-control-flow-activities-overview/

QUESTION 14 #

You are designing a dimension table in an Azure Synapse Analytics dedicated SQL pool.
You need to create a surrogate key for the table. The solution must provide the fastest query performance.
What should you use for the surrogate key?

A. an IDENTITY column
B. a GUID column
C. a sequence object

Correct Answer: A

A dedicated SQL pool supports many, but not all, of the table features offered by other databases. There is no built-in surrogate key feature; implement the surrogate key by using an IDENTITY column.
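
A minimal sketch of a dimension table with an IDENTITY surrogate key in a dedicated SQL pool (the table and column names are hypothetical; note that an IDENTITY column cannot also be the distribution column):

CREATE TABLE dbo.DimCustomer
(
    CustomerKey  INT IDENTITY(1, 1) NOT NULL,  -- surrogate key
    CustomerName NVARCHAR(100)      NOT NULL
)
WITH
(
    DISTRIBUTION = REPLICATE,  -- a common choice for dimension tables
    CLUSTERED COLUMNSTORE INDEX
);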

Reference: https://docs.microsoft.com/en-us/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-overview

QUESTION 15 #

DRAG-DROP
You have SQL Server on an Azure virtual machine named SQL1.
SQL1 has an agent job to back up all databases.
You add a user named dbadmin1 as a SQL Server Agent operator.
You need to ensure that dbadmin1 receives an email alert if a job fails.

Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions
to the answer area and arrange them in the correct order.

Select and Place:

Correct Answer:

Step 1: Enable the email settings for the SQL Server Agent.
To send a notification in response to an alert, you must first configure SQL Server Agent to send mail.

Step 2: Create a job alert

Step 3: Create a job notification

Example:

-- Adds an e-mail notification for the specified alert (Test Alert).
-- This example assumes that Test Alert already exists
-- and that François Ajenstat is a valid operator name.
USE msdb;
GO

EXEC dbo.sp_add_notification
    @alert_name = N'Test Alert',
    @operator_name = N'François Ajenstat',
    @notification_method = 1;
GO

Reference:
https://docs.microsoft.com/en-us/sql/ssms/agent/notify-an-operator-of-job-status
https://docs.microsoft.com/en-us/sql/ssms/agent/assign-alerts-to-an-operator

Conclusion:

DP-300 exam dumps can help a lot, but they are not the decisive factor; your careful preparation is. Pass4itSure designed the latest Microsoft DP-300 exam dumps for the DP-300 exam: https://www.pass4itsure.com/dp-300.html

Pass4itSure provides the latest learning materials for a wide range of IT certifications. Practicing correctly with the DP-300 exam dumps will help you pass the Microsoft Azure DP-300 exam.

Click the link for the free DP-300 questions PDF: https://drive.google.com/file/d/1j1TA7Rp0VT8qQ2dmMYxQ2kf4eUMxdDjH/view?usp=sharing