[4-Nov-2019 Update] Exam DP-201 VCE Dumps and DP-201 PDF Dumps from PassLeader

Valid DP-201 Dumps shared by PassLeader for Helping You Pass the DP-201 Exam! PassLeader now offers the newest DP-201 VCE dumps and DP-201 PDF dumps. The PassLeader DP-201 exam questions have been updated and the ANSWERS have been corrected. Get the newest PassLeader DP-201 dumps with VCE and PDF here: https://www.passleader.com/dp-201.html (130 Q&As Dumps –> 179 Q&As Dumps –> 201 Q&As Dumps –> 223 Q&As Dumps)

BTW, DOWNLOAD part of PassLeader DP-201 dumps from Cloud Storage: https://drive.google.com/open?id=1VdzP5HksyU93Arqn65qPe5UFEm2Sxooh

NEW QUESTION 111
You are designing an application that will have an Azure virtual machine. The virtual machine will access an Azure SQL database. The database will not be accessible from the Internet. You need to recommend a solution to provide the required level of access to the database. What should you include in the recommendation?

A.    Deploy an on-premises data gateway.
B.    Add a virtual network to the Azure SQL server that hosts the database.
C.    Add an application gateway to the virtual network that contains the Azure virtual machine.
D.    Add a virtual network gateway to the virtual network that contains the Azure virtual machine.

Answer: B
Explanation:
When you create an Azure virtual machine (VM), you must create a virtual network (VNet) or use an existing VNet, and you must decide how your VMs are intended to be accessed on the VNet. Adding a virtual network rule for that VNet to the Azure SQL server that hosts the database allows traffic from the VM to reach the database without exposing the database to the Internet.
Incorrect:
Not C: Azure Application Gateway is a web traffic load balancer that enables you to manage traffic to your web applications.
Not D: A VPN gateway is a specific type of virtual network gateway that is used to send encrypted traffic between an Azure virtual network and an on-premises location over the public Internet.
https://docs.microsoft.com/en-us/azure/virtual-machines/windows/network-overview
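As a rough sketch of option B, the snippet below adds a virtual network rule to the logical SQL server using the azure-mgmt-sql Python package. The begin_create_or_update call style assumes a recent, track-2 SDK release, and all names and IDs are hypothetical placeholders:

from azure.identity import DefaultAzureCredential
from azure.mgmt.sql import SqlManagementClient

client = SqlManagementClient(DefaultAzureCredential(), "<subscription-id>")
subnet_id = ("/subscriptions/<subscription-id>/resourceGroups/<rg>"
             "/providers/Microsoft.Network/virtualNetworks/<vnet>/subnets/<subnet>")

# Let traffic from the VM's subnet reach the SQL server; the subnet needs
# the Microsoft.Sql service endpoint enabled for this to work
poller = client.virtual_network_rules.begin_create_or_update(
    "<rg>", "<server-name>", "allow-vm-subnet",
    {"virtual_network_subnet_id": subnet_id},
)
poller.result()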

NEW QUESTION 112
You are designing a data store that will store organizational information for a company. The data will be used to identify the relationships between users. The data will be stored in an Azure Cosmos DB database and will contain several million objects. You need to recommend which API to use for the database. The API must minimize the complexity to query the user relationships. The solution must support fast traversals. Which API should you recommend?

A.    MongoDB
B.    Table
C.    Gremlin
D.    Cassandra

Answer: C
Explanation:
Gremlin features fast queries and traversals with the most widely adopted graph query standard.
https://docs.microsoft.com/en-us/azure/cosmos-db/graph-introduction
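For a sense of what a fast relationship traversal looks like, here is a minimal sketch using the gremlinpython driver against a Cosmos DB Gremlin endpoint; the account, key, database, graph, and property names are hypothetical:

from gremlin_python.driver import client, serializer

gremlin_client = client.Client(
    "wss://<account>.gremlin.cosmos.azure.com:443/",
    "g",
    username="/dbs/<database>/colls/<graph>",
    password="<primary-key>",
    message_serializer=serializer.GraphSONSerializersV2d0(),
)

# Traverse relationships: who does user 'alice' work with, two hops out?
query = "g.V().has('user', 'userId', 'alice').out('worksWith').out('worksWith').values('name')"
names = gremlin_client.submit(query).all().result()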

NEW QUESTION 113
You need to recommend a storage solution to store flat files and columnar optimized files. The solution must meet the following requirements:
– Store standardized data that data scientists will explore in a curated folder.
– Ensure that applications cannot access the curated folder.
– Store staged data for import to applications in a raw folder.
– Provide data scientists with access to specific folders in the raw folder and all the content of the curated folder.
Which storage solution should you recommend?

A.    Azure SQL Data Warehouse
B.    Azure Blob storage
C.    Azure Data Lake Storage Gen2
D.    Azure SQL Database

Answer: B
Explanation:
Azure Blob storage is a general-purpose object store for a wide variety of storage scenarios. Blobs are stored in containers, which are similar to folders.
Incorrect:
Not C: Azure Data Lake Storage is storage optimized for big data analytics workloads.
https://docs.microsoft.com/en-us/azure/architecture/data-guide/technology-choices/data-storage
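To make the "containers are similar to folders" point concrete: in Blob storage a folder is just a prefix on a flat blob name. A minimal sketch with the azure-storage-blob package, where the connection string, container, and blob names are hypothetical:

from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<connection-string>")
container = service.get_container_client("analytics")

# "raw" and "curated" are virtual folders: simply prefixes in the blob names
container.upload_blob(name="raw/2019/11/orders.csv", data=b"...")
container.upload_blob(name="curated/orders.parquet", data=b"...")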

NEW QUESTION 114
You have a MongoDB database that you plan to migrate to an Azure Cosmos DB account that uses the MongoDB API. During testing, you discover that the migration takes longer than expected. You need to recommend a solution that will reduce the amount of time it takes to migrate the data. What are two possible recommendations to achieve this goal? (Each correct answer presents a complete solution. Choose two.)

A.    Increase the Request Units (RUs).
B.    Turn off indexing.
C.    Add a write region.
D.    Create unique indexes.
E.    Create compound indexes.

Answer: AB
Explanation:
A: Increase the throughput during the migration by increasing the Request Units (RUs). For customers that are migrating many collections within a database, it is strongly recommended to configure database-level throughput. You must make this choice when you create the database. The minimum database-level throughput capacity is 400 RU/sec. Each collection sharing database-level throughput requires at least 100 RU/sec.
B: By default, Azure Cosmos DB indexes all your data fields upon ingestion. You can modify the indexing policy in Azure Cosmos DB at any time. In fact, it is often recommended to turn off indexing when migrating data, and then turn it back on when the data is already in Cosmos DB.
https://docs.microsoft.com/en-us/azure/cosmos-db/mongodb-pre-migration
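As an illustration of recommendation B, the sketch below disables indexing on a container with the azure-cosmos (Core/SQL API) Python SDK. For an account using the MongoDB API the policy change would be made through the portal or Mongo tooling instead, so treat this only as a sketch of the indexing-policy idea; names and keys are hypothetical:

from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<primary-key>")
database = client.get_database_client("migration")
container = database.get_container_client("users")

# Turn indexing off for the bulk load; switch back to "consistent" afterwards
database.replace_container(
    container,
    partition_key=PartitionKey(path="/id"),
    indexing_policy={"indexingMode": "none", "automatic": False},
)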

NEW QUESTION 115
You need to recommend a storage solution for a sales system that will receive thousands of small files per minute. The files will be in JSON, text, and CSV formats. The files will be processed and transformed before they are loaded into an Azure data warehouse. The files must be stored and secured in folders. Which storage solution should you recommend?

A.    Azure Data Lake Storage Gen2
B.    Azure Cosmos DB
C.    Azure SQL Database
D.    Azure Blob storage

Answer: A
Explanation:
Azure provides several solutions for working with CSV and JSON files, depending on your needs. The primary landing place for these files is either Azure Storage or Azure Data Lake Store. Azure Data Lake Storage is storage optimized for big data analytics workloads.
Incorrect:
Not D: Azure Blob storage is a general-purpose object store for a wide variety of storage scenarios. Blobs are stored in containers, which are similar to folders.
https://docs.microsoft.com/en-us/azure/architecture/data-guide/scenarios/csv-and-json
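A minimal sketch of laying out real folders in a Data Lake Storage Gen2 account with the azure-storage-file-datalake package; the account, key, file system, and folder names are hypothetical:

from azure.storage.filedatalake import DataLakeServiceClient

service = DataLakeServiceClient(
    account_url="https://<account>.dfs.core.windows.net",
    credential="<account-key>",
)
fs = service.get_file_system_client("sales")

# Real directories (hierarchical namespace), one per incoming file format
for folder in ("landing/json", "landing/text", "landing/csv"):
    fs.create_directory(folder)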

NEW QUESTION 116
You are designing an Azure Cosmos DB database that will support vertices and edges. Which Cosmos DB API should you include in the design?

A.    SQL
B.    Cassandra
C.    Gremlin
D.    Table

Answer: C
Explanation:
The Azure Cosmos DB Gremlin API can be used to store massive graphs with billions of vertices and edges.
https://docs.microsoft.com/en-us/azure/cosmos-db/graph-introduction
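Building on the traversal sketch under question 112, vertices and edges are created with Gremlin steps such as addV and addE. A sketch for a non-partitioned graph (a partitioned graph would also need its partition key property); the labels and property names are hypothetical:

# Reusing the hypothetical gremlin_client from the question 112 sketch
for query in (
    "g.addV('person').property('id', 'alice').property('dept', 'it')",
    "g.addV('person').property('id', 'bob').property('dept', 'it')",
    "g.V().has('person', 'id', 'alice').addE('knows').to(g.V().has('person', 'id', 'bob'))",
):
    gremlin_client.submit(query).all().result()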

NEW QUESTION 117
You plan to store delimited text files in an Azure Data Lake Storage account that will be organized into department folders. You need to configure data access so that users see only the files in their respective department folder.
Solution: From the storage account, you enable a hierarchical namespace, and you use RBAC.
Does this meet the goal?

A.    Yes
B.    No

Answer: B
Explanation:
Disable the hierarchical namespace, and use access control lists (ACLs) instead of RBAC. Note: Azure Data Lake Storage implements an access control model that derives from HDFS, which in turn derives from the POSIX access control model. Blob container ACLs do not support the hierarchical namespace, so it must be disabled.
https://docs.microsoft.com/en-us/azure/storage/blobs/data-lake-storage-known-issues
https://docs.microsoft.com/en-us/azure/data-lake-store/data-lake-store-access-control

NEW QUESTION 118
You plan to store delimited text files in an Azure Data Lake Storage account that will be organized into department folders. You need to configure data access so that users see only the files in their respective department folder.
Solution: From the storage account, you disable a hierarchical namespace, and you use access control lists (ACLs).
Does this meet the goal?

A.    Yes
B.    No

Answer: A
Explanation:
Azure Data Lake Storage implements an access control model that derives from HDFS, which in turn derives from the POSIX access control model. Blob container ACLs do not support the hierarchical namespace, so it must be disabled.
https://docs.microsoft.com/en-us/azure/storage/blobs/data-lake-storage-known-issues
https://docs.microsoft.com/en-us/azure/data-lake-store/data-lake-store-access-control
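For a sense of what a POSIX-style ACL looks like in practice, here is a sketch using the azure-storage-file-datalake package, following the owner/group/other pattern described in the access-control doc linked above. The account, folder names, and Azure AD group object ID are hypothetical:

from azure.storage.filedatalake import DataLakeServiceClient

service = DataLakeServiceClient(
    account_url="https://<account>.dfs.core.windows.net",
    credential="<account-key>",
)
fs = service.get_file_system_client("departments")
finance = fs.get_directory_client("finance")

# Owner gets rwx, the finance department's AAD group gets r-x, everyone else nothing
finance.set_access_control(
    acl="user::rwx,group::r-x,group:<finance-group-object-id>:r-x,mask::r-x,other::---"
)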

NEW QUESTION 119
You are designing a data storage solution for a database that is expected to grow to 50 TB. The usage pattern is singleton inserts, singleton updates, and reporting. Which storage solution should you use?

A.    Azure SQL Database elastic pools
B.    Azure SQL Data Warehouse
C.    Azure Cosmos DB that uses the Gremlin API
D.    Azure SQL Database Hyperscale

Answer: D
Explanation:
A Hyperscale database is an Azure SQL database in the Hyperscale service tier that is backed by the Hyperscale scale-out storage technology. A Hyperscale database supports up to 100 TB of data and provides high throughput and performance, as well as rapid scaling to adapt to the workload requirements. Scaling is transparent to the application: connectivity, query processing, and so on work like any other Azure SQL database.
Incorrect:
Not A: SQL Database elastic pools are a simple, cost-effective solution for managing and scaling multiple databases that have varying and unpredictable usage demands. The databases in an elastic pool are on a single Azure SQL Database server and share a set number of resources at a set price. Elastic pools in Azure SQL Database enable SaaS developers to optimize the price performance for a group of databases within a prescribed budget while delivering performance elasticity for each database.
Not B: Rather than SQL Data Warehouse, consider other options for operational (OLTP) workloads that have large numbers of singleton selects.
https://docs.microsoft.com/en-us/azure/sql-database/sql-database-service-tier-hyperscale-faq
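For reference, moving an existing database to the Hyperscale tier is a single T-SQL statement; below is a sketch issued through pyodbc, with the server, database, service objective, and credentials as hypothetical placeholders:

import pyodbc

# Hypothetical connection to the logical server's master database
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=<server>.database.windows.net;DATABASE=master;"
    "UID=<admin-user>;PWD=<password>",
    autocommit=True,
)
# Move the database to the Hyperscale service tier (a one-way migration)
conn.execute(
    "ALTER DATABASE [salesdb] MODIFY (EDITION = 'Hyperscale', SERVICE_OBJECTIVE = 'HS_Gen5_4');"
)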

NEW QUESTION 120
You are designing an Azure Databricks interactive cluster. The cluster will be used infrequently and will be configured for auto-termination. You need to ensure that the cluster configuration is retained indefinitely after the cluster is terminated. The solution must minimize costs. What should you do?

A.    Clone the cluster after it is terminated.
B.    Terminate the cluster manually when processing completes.
C.    Create an Azure runbook that starts the cluster every 90 days.
D.    Pin the cluster.

Answer: D
Explanation:
To keep an interactive cluster configuration even after it has been terminated for more than 30 days, an administrator can pin a cluster to the cluster list.
https://docs.azuredatabricks.net/clusters/clusters-manage.html#automatic-termination
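Pinning can also be done through the Databricks REST API; a minimal sketch with the requests library, where the workspace URL, personal access token, and cluster ID are hypothetical:

import requests

resp = requests.post(
    "https://<workspace>.azuredatabricks.net/api/2.0/clusters/pin",
    headers={"Authorization": "Bearer <personal-access-token>"},
    json={"cluster_id": "<cluster-id>"},
)
resp.raise_for_status()  # a pinned cluster's configuration is kept after termination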

NEW QUESTION 121
You have an Azure SQL database that contains columns storing sensitive Personally Identifiable Information (PII). You need to design a solution that tracks and stores all the queries executed against the PII data. You must be able to review the data in Azure Monitor, and the data must be available for at least 45 days.
Solution: You create a SELECT trigger on the table in SQL Database that writes the query to a new table in the database, and then executes a stored procedure that looks up the column classifications and joins to the query text.
Does this meet the goal?

A.    Yes
B.    No

Answer: B
Explanation:
Instead, add classifications to the columns that contain sensitive data and turn on Auditing. Note: Auditing has been enhanced to log sensitivity classifications or labels of the actual data returned by a query, which enables you to gain insight into who is accessing sensitive data.
https://azure.microsoft.com/en-us/blog/announcing-public-preview-of-data-discovery-classification-for-microsoft-azure-sql-data-warehouse/
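Column classifications themselves can be added with T-SQL; a sketch via pyodbc, with the connection values, table, column, and label names as hypothetical examples:

import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=<server>.database.windows.net;DATABASE=<database>;"
    "UID=<user>;PWD=<password>",
    autocommit=True,
)
# Label a PII column so that Auditing can log access to it
conn.execute(
    "ADD SENSITIVITY CLASSIFICATION TO dbo.Customers.Email "
    "WITH (LABEL = 'Confidential', INFORMATION_TYPE = 'Contact Info');"
)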

NEW QUESTION 122
You have an Azure SQL database that contains columns storing sensitive Personally Identifiable Information (PII). You need to design a solution that tracks and stores all the queries executed against the PII data. You must be able to review the data in Azure Monitor, and the data must be available for at least 45 days.
Solution: You add classifications to the columns that contain sensitive data. You turn on Auditing and set the audit log destination to use Azure Blob storage.
Does this meet the goal?

A.    Yes
B.    No

Answer: A
Explanation:
Auditing has been enhanced to log sensitivity classifications or labels of the actual data returned by a query, which enables you to gain insight into who is accessing sensitive data.
https://azure.microsoft.com/en-us/blog/announcing-public-preview-of-data-discovery-classification-for-microsoft-azure-sql-data-warehouse/
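A sketch of turning on server-level auditing to Blob storage with the azure-mgmt-sql package, keeping 45 days of logs to satisfy the retention requirement. All names and keys are hypothetical, and the begin_create_or_update call style assumes a recent, track-2 SDK release:

from azure.identity import DefaultAzureCredential
from azure.mgmt.sql import SqlManagementClient

client = SqlManagementClient(DefaultAzureCredential(), "<subscription-id>")
poller = client.server_blob_auditing_policies.begin_create_or_update(
    "<resource-group>", "<server-name>",
    {
        "state": "Enabled",
        "storage_endpoint": "https://<storage-account>.blob.core.windows.net/",
        "storage_account_access_key": "<storage-key>",
        "retention_days": 45,  # requirement: available for at least 45 days
    },
)
poller.result()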

NEW QUESTION 123
You need to recommend a security solution for containers in Azure Blob storage. The solution must ensure that only read permissions are granted to a specific user for a specific container. What should you include in the recommendation?

A.    shared access signatures (SAS)
B.    an RBAC role in Azure Active Directory (Azure AD)
C.    public read access for blobs only
D.    access keys

Answer: A
Explanation:
A service SAS can grant a specific user read-only access to a single blob container. Note: A shared access signature (SAS) provides secure delegated access to resources in your storage account without compromising the security of your data. With a SAS, you have granular control over how a client can access your data: you can control which resources the client may access, what permissions they have on those resources, and how long the SAS is valid, among other parameters.
Incorrect:
Not C: You can enable anonymous, public read access to a container and its blobs in Azure Blob storage. By doing so, you can grant read-only access to these resources without sharing your account key and without requiring a shared access signature (SAS). Public read access is best for scenarios where you want certain blobs to always be available for anonymous read access; it cannot be limited to a specific user.
https://docs.microsoft.com/en-us/azure/storage/common/storage-sas-overview
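A sketch of generating a read-only, container-scoped SAS with the azure-storage-blob package; the account, container, and key values are hypothetical:

from datetime import datetime, timedelta
from azure.storage.blob import ContainerSasPermissions, generate_container_sas

sas_token = generate_container_sas(
    account_name="<account>",
    container_name="reports",
    account_key="<account-key>",
    permission=ContainerSasPermissions(read=True),  # read permission only
    expiry=datetime.utcnow() + timedelta(days=7),
)
# Hand this URL (or just the token) to the specific user
url = "https://<account>.blob.core.windows.net/reports?" + sas_token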

NEW QUESTION 124
You store data in an Azure SQL data warehouse. You need to design a solution to ensure that the data warehouse and the most current data are available within one hour of a datacenter failure. Which three actions should you include in the design? (Each correct answer presents part of the solution. Choose three.)

A.    Each day, restore the data warehouse from a geo-redundant backup to an available Azure region.
B.    If a failure occurs, update the connection strings to point to the recovered data warehouse.
C.    If a failure occurs, modify the Azure Firewall rules of the data warehouse.
D.    Each day, create Azure Firewall rules that allow access to the restored data warehouse.
E.    Each day, restore the data warehouse from a user-defined restore point to an available Azure region.

Answer: BDE
Explanation:
E: You can create a user-defined restore point and restore from the newly created restore point to a new data warehouse in a different region. Note: A data warehouse snapshot creates a restore point that you can leverage to recover or copy your data warehouse to a previous state. A data warehouse restore is a new data warehouse that is created from a restore point of an existing or deleted data warehouse. On average, restores within the same region take around 20 minutes.
Incorrect:
Not A: SQL Data Warehouse performs a geo-backup once per day to a paired data center. The RPO for a geo-restore is 24 hours, which does not meet the one-hour requirement. You can restore the geo-backup to a server in any other region where SQL Data Warehouse is supported. A geo-backup ensures you can restore the data warehouse in case you cannot access the restore points in your primary region.
https://docs.microsoft.com/en-us/azure/sql-data-warehouse/backup-and-restore
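Creating the daily user-defined restore point (action E) can be scripted. A sketch with the azure-mgmt-sql package, hedged because the restore-point operation group and its long-running-operation naming may differ between SDK versions; all resource names are hypothetical:

from azure.identity import DefaultAzureCredential
from azure.mgmt.sql import SqlManagementClient

client = SqlManagementClient(DefaultAzureCredential(), "<subscription-id>")
# Snapshot the data warehouse so it can be restored to another region if needed
poller = client.restore_points.begin_create(
    "<resource-group>", "<server-name>", "<datawarehouse-name>",
    {"restore_point_label": "daily-dr-restore-point"},
)
poller.result()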

NEW QUESTION 125
You design data engineering solutions for a company that has locations around the world. You plan to deploy a large set of data to Azure Cosmos DB. The data must be accessible from all company locations. You need to recommend a strategy for deploying the data that minimizes latency for data read operations and minimizes costs. What should you recommend?

A.    Use a single Azure Cosmos DB account. Enable multi-region writes.
B.    Use a single Azure Cosmos DB account. Configure data replication.
C.    Use multiple Azure Cosmos DB accounts. For each account, configure the location to the closest Azure datacenter.
D.    Use a single Azure Cosmos DB account. Enable geo-redundancy.
E.    Use multiple Azure Cosmos DB accounts. Enable multi-region writes.

Answer: A
Explanation:
With Azure Cosmos DB, you can add or remove the regions associated with your account at any time. Multi-region accounts configured with multiple write regions will be highly available for both writes and reads. Regional failovers are instantaneous and don’t require any changes from the application. Using a single account with multiple regions keeps reads local to every office while avoiding the cost of maintaining several accounts, which is why option A minimizes both latency and cost.
https://docs.microsoft.com/en-us/azure/cosmos-db/high-availability
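On the client side, reads are kept local by listing preferred regions. A sketch with the azure-cosmos Python SDK, where the account URI, key, and region names are hypothetical and the multiple_write_locations keyword assumes a v4-era SDK release:

from azure.cosmos import CosmosClient

client = CosmosClient(
    "https://<account>.documents.azure.com:443/",
    credential="<primary-key>",
    preferred_locations=["West Europe", "East US"],  # read from the nearest region first
    multiple_write_locations=True,  # multi-region writes enabled on the account
)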

NEW QUESTION 126
……


Get the newest PassLeader DP-201 VCE dumps here: https://www.passleader.com/dp-201.html (130 Q&As Dumps –> 179 Q&As Dumps –> 201 Q&As Dumps –> 223 Q&As Dumps)

And, DOWNLOAD the newest PassLeader DP-201 PDF dumps from Cloud Storage for free: https://drive.google.com/open?id=1VdzP5HksyU93Arqn65qPe5UFEm2Sxooh