Cloud MCQs
import pandas as pd

# Load the question bank exported from Excel.
ques = pd.read_excel('MCQ by sarang.xlsx')

# Print every question with its options and the marked answer.
for i in range(len(ques)):
    print(str(i + 1) + ". ", ques['Question'][i], "\n")
    print("Options : ")
    print("A. ", ques['Choice1'][i])
    print("B. ", ques['Choice2'][i])
    print("C. ", ques['Choice3'][i])
    print("D. ", ques['Choice4'][i])
    if ques['Answer1'][i] == True:
        print('\t Ans : A. ', ques['Choice1'][i], '\n\n')
    elif ques['Answer2'][i] == True:
        print('\t Ans : B. ', ques['Choice2'][i], '\n\n')
    elif ques['Answer3'][i] == True:
        print('\t Ans : C. ', ques['Choice3'][i], '\n\n')
    elif ques['Answer4'][i] == True:
        print('\t Ans : D. ', ques['Choice4'][i], '\n\n')
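A small variation of the same loop, shown here as a hedged sketch, writes the rendered quiz to a text file instead of printing it; it assumes the same workbook name and column layout (Question, Choice1-4, Answer1-4) as above.

# Hedged variation: render the quiz to a file rather than stdout.
# Assumes the same workbook and columns as the script above.
import pandas as pd

ques = pd.read_excel('MCQ by sarang.xlsx')
with open('mcq_dump.txt', 'w', encoding='utf-8') as fh:
    for i, row in ques.iterrows():
        fh.write(f"{i + 1}. {row['Question']}\n\nOptions :\n")
        for label, col in zip('ABCD', ['Choice1', 'Choice2', 'Choice3', 'Choice4']):
            fh.write(f"{label}. {row[col]}\n")
        for label, col in zip('ABCD', ['Answer1', 'Answer2', 'Answer3', 'Answer4']):
            if row[col] == True:   # the marked answer column holds True
                fh.write(f"\t Ans : {label}. {row['Choice' + col[-1]]}\n\n")
                break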
Options :
A. Analysis Services
B. Azure Event Hub
C. Azure Monitor
D. Stream Analytics
Ans : B. Azure Event Hub
Options :
A. No
B. not sure
C. invalid statement
D. Yes
Ans : D. Yes
Options :
A. Compute
B. Application
C. Storage
D. None of the mentioned
Ans : A. Compute
Options :
A. structured, unstructured and semi structured
B. only structured
C. structured and semi structured
D. only unstructured
Ans : A. structured, unstructured and semi structured
Options :
A. Sqoop
B. Azure Data Factory
C. Developer SDK
D. All of the above
Ans : D. All of the above
Options :
A. Azure Control List
B. Azure Control Lake
C. Access Control List
D. Access Control Lake
Ans : C. Access Control List
Options :
A. metered pricing & self-service management
B. limited storage
C. unsecured connections
D. dedicated hardware
Ans : A. metered pricing & self-service management
Options :
A. structured, unstructured and semi structured
B. only structured
C. structured and semi structured
D. only unstructured
Ans : A. structured, unstructured and semi structured
9. When data is saved in Azure Data Lake, how many copies of data are
saved by default?
Options :
A. 1
B. 2
C. 3
D. 4
Ans : C. 3
10. Which of the following tools can be used for data ingestion in
Azure Data Lake?
Options :
A. Sqoop
B. Azure Data Factory
C. Developer SDK
D. All of the above
Ans : D. All of the above
11. Choose the correct option with respect to Azure Data Lake.
Options :
A. It can only provide high storage capacity but cannot perform analytics operations upon that data
B. It can perform analytics operations on the data stored in blob storage, but cannot store data
C. It can store as well as perform analytics on data
D. It can neither store nor perform analytics on data
Ans : C. It can store as well as perform analytics on data
Options :
A. 1024TB
B. 1024EB
C. 512EB
D. Unlimited
Ans : D. Unlimited
Options :
A. structured, unstructured and semi structured
B. only structured
C. structured and semi structured
D. only unstructured
Ans : A. structured, unstructured and semi structured
14. You need to set up the Azure Data Factory JSON definition for Tier 10 data.
What should you use?
Options :
A. Connection String
B. Linked Service
C. Azure Blob
D. All of the above
15. You need to set up Azure Data Factory pipelines
to meet data movement requirements. Which integration runtime should
you use?
Options :
A. self-hosted integration runtime
B. Azure-SSIS Integration Runtime
C. .NET Common Language Runtime (CLR)
D. All of the above
Ans : A. self-hosted integration runtime
Options :
A. Custom text, default, email, RandomNumber
B. Custom text, email, RandomNumber
C. Custom text, default, RandomNumber
D. All of the above
Ans : A. Custom text, default, email, RandomNumber
17. Each day, the company plans to store hundreds of files in Azure Blob
Storage and Azure Data Lake Storage. The company uses the parquet
format.
You must develop a pipeline that meets the following requirements:
Process data every six hours
Offer interactive data analysis capabilities
Offer the ability to process data using solid-state drive (SSD)
caching
Use Directed Acyclic Graph (DAG) processing mechanisms
Provide support for REST API calls to monitor processes
Provide native support for Python
Integrate with Microsoft Power BI
Options :
A. Azure SQL Data Warehouse
B. HDInsight Apache Storm cluster
C. Azure Stream Analytics
D. All of the above
Ans : B. HDInsight Apache Storm cluster
18. Each day, the company plans to store hundreds of files in Azure Blob
Storage and Azure Data Lake Storage. The company uses the parquet
format.
You must develop a pipeline that meets the following requirements:
Process data every six hours
Offer interactive data analysis capabilities
Offer the ability to process data using solid-state drive (SSD)
caching
Use Directed Acyclic Graph (DAG) processing mechanisms
Provide support for REST API calls to monitor processes
Provide native support for Python
Integrate with Microsoft Power BI
Options :
A. Azure SQL Data Warehouse
B. HDInsight Apache Storm cluster
C. Azure Stream Analytics
D. All of the above
Ans : B. HDInsight Apache Storm cluster
Options :
A. Bus Schema
B. Data Staging
C. Schema Objects
D. Workflow
Ans : B. Data Staging
20. You plan to create an Azure Blob Storage account in the Azure
region of East US 2.
You need to create a storage account that meets the following
requirements:
-> Replicates synchronously.
-> Remains available if a single data center in the region fails.
How should you configure the storage account?
Options :
A. GRS
B. LRS
C. RA GRS
D. ZRS
Ans : D. ZRS (zone-redundant storage replicates synchronously across availability zones and stays available if a single data center fails)
Options :
A. Analysis Services
B. Azure Event Hub
C. Azure Monitor
D. Stream Analytics
Ans : B. Azure Event Hub
Options :
A. Multiple Azure AD
B. multiple resource groups
C. multiple regions
D. Multiple Subscriptions
Ans : B. multiple resource groups
Options :
A. Will have no access
B. Prompted for credentials
C. will have RW access
D. Will have Read only access
Ans : A. Will have no access
Options :
A. Web Role
B. All Mentioned
C. Worker Role
D. VM Role
Ans : B. All Mentioned
Options :
A. Binary objects
B. Binary loaded object
C. Binary big objects
D. Binary Large object
Ans : D. Binary Large object
Options :
A. Introduction to DAX
B. Overview
C. Data Collection
D. Data Preparation
Ans : A. Introduction to DAX
27. Power Query Editor is the platform we use to transform data, such as changing column data types and removing columns and rows.
Options :
A. 1
B. False
C. invalid statement
D. none of the above
Ans : A. 1
28. The Power Query Editor does not save changes made to the data tables; it saves the applied steps separately, so the steps are re-applied every time the table is loaded.
Options :
A. 1
B. False
C. invalid statement
D. none of the above
Ans : A. 1
Options :
A. Yes, I will go to the visualization panel and click on the clustered bar chart
B. Yes, I will go to the visualization panel and click on the column chart, which is the 4th icon on the first row of the pane
C. Yes, I will click on the visualization panel, but I need to click another section to choose the column chart
D. No, this panel is not for creating data visualization. It is used for the formatting of charts.
Ans : B. Yes, I will go to the visualization panel and click on the column chart, which is the 4th icon on the first row of the pane
Options :
A. False
B. True
C. invalid statement
D. none of the above
Ans : B. True
Options :
A. 1
B. False
C. invalid statement
D. none of the above
Ans : A. 1
Options :
A. 1
B. False
C. invalid statement
D. none of the above
Ans : A. 1
Options :
A. False
B. True
C. invalid statement
D. none of the above
Ans : B. True
Options :
A. Many to One
B. One to Many
C. Many to Many
D. One to One
Ans : B. One to Many
Options :
A. 1
B. False
C. invalid statement
D. none of the above
Ans : A. 1
Options :
A. 1 banana tree having many bananas on the tree
B. Many banana trees planted in more than 1 garden
C. Many banana trees planted in a garden
D. Many gardens having no banana trees
Ans : C. Many banana trees planted in a garden
37. In the data view, we are able to view the tables that we have imported. Each table shown is from a specific Excel worksheet.
Options :
A. False
B. True
C. invalid statement
D. none of the above
Ans : B. True
Options :
A. AND
B. ISNUMBER
C. AVERAGE
D. CONCATENATE
Ans : C. AVERAGE
Options :
A. Report View, Data View, Relationship View
B. Data View, Draft View, Canvas View
C. Data view, Relationship View, Canvas View
D. Report View, Relationship View, Draft View
Ans : A. Report View, Data View, Relationship View
40. Which groups of data sources can Power BI connect to?
Options :
A. Files, SQL, Azure
B. Files, Content Packs, Connectors
C. Databases, SQL, Azure
D. Content Packs, Connectors, SQL
Ans : B. Files, Content Packs, Connectors
Options :
A. One Page
B. All Pages
C. One Visualization
D. Multiple Visualizations in same page
Ans : C. One Visualization
Options :
A. 1
B. False
C. invalid statement
D. none of the above
Ans : A. 1
43. What is the SQL command to return the values from a table?
Options :
A. DISTINCT
B. SELECT
C. WHERE
D. ORDER BY
Ans : B. SELECT
44. What is the SQL expression used to count the values in a table?
Options :
A. COUNT
B. SUM
C. AVERAGE
D. DISTINCT
Ans : A. COUNT
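Questions 43-44 cover basic SQL retrieval and aggregation. A self-contained sketch using Python's built-in sqlite3 module (the table name and rows are invented for illustration):

import sqlite3

# Build a throwaway in-memory table to demonstrate SELECT and COUNT.
conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE exam_scores (student TEXT, score INTEGER)")
conn.executemany("INSERT INTO exam_scores VALUES (?, ?)",
                 [("asha", 82), ("ravi", 74), ("meera", 91)])

# SELECT returns the values stored in the table.
print(conn.execute("SELECT student, score FROM exam_scores").fetchall())

# COUNT is the aggregate that counts values (rows here).
print(conn.execute("SELECT COUNT(*) FROM exam_scores").fetchone()[0])  # -> 3
conn.close()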
45. Your company has an ADF pipeline configured and wants to run that pipeline on a storage event, so which kind of trigger will you configure?
Options :
A. Scheduled Trigger
B. Manual Trigger
C. Event Trigger
D. none of the above
Ans : C. Event Trigger
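For context on question 45: a storage event trigger in Data Factory is defined as a JSON document roughly shaped like the sketch below. The trigger name, blob path, pipeline name and scope are all placeholders, and the exact schema should be confirmed against the ADF documentation.

import json

# Illustrative (not exact) shape of an ADF storage event trigger definition.
event_trigger = {
    "name": "OnNewBlobTrigger",                    # placeholder trigger name
    "properties": {
        "type": "BlobEventsTrigger",               # trigger type used for storage events
        "typeProperties": {
            "blobPathBeginsWith": "/input/blobs/",         # fire only for blobs under this path
            "events": ["Microsoft.Storage.BlobCreated"],   # storage event that starts the run
            "scope": "/subscriptions/<sub-id>/resourceGroups/<rg>/"
                     "providers/Microsoft.Storage/storageAccounts/<account>",
        },
        "pipelines": [
            {"pipelineReference": {"referenceName": "MyPipeline",
                                   "type": "PipelineReference"}}
        ],
    },
}
print(json.dumps(event_trigger, indent=2))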
Options :
A. Versioning
B. Data Compression
C. automatic Scaling
D. No change is needed
Ans : D. No change is needed
47. You need to ensure that phone-based polling data can be analyzed in the PollingData database.
How should you configure Azure Data Factory?
Options :
A. Use a tumbling schedule trigger
B. Use an event-based trigger
C. Use a schedule trigger
D. None of the above
Ans : B. Use an event-based trigger
48. You need to set up the Azure Data Factory JSON definition for Tier 10 data.
What should you use?
Options :
A. Connection String
B. Linked Service
C. Azure Blob
D. none of the above
Ans : A. Connection String
49. You need to set up Azure Data Factory pipelines to meet data
movement requirements. Which integration runtime should you use?
Options :
A. self-hosted integration runtime
B. Azure-SSIS Integration Runtime
C. .NET Common Language Runtime (CLR)
D. All of the above
Ans : A. self-hosted integration runtime
50. You need to mask tier 1 data. Which functions should you use?
Options :
A. Custom text, default, email, RandomNumber
B. Custom text, email, RandomNumber
C. Custom text, default, RandomNumber
D. All of the above
Ans : A. Custom text, default, email, RandomNumber
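The four functions named in question 50 correspond to SQL Database dynamic data masking. A hedged sketch of how each would be applied (table and column names are invented; the statements would be executed against the database, not here):

# Hedged sketch of the four dynamic data masking functions; names are placeholders.
masking_ddl = [
    # default(): full mask with a type-appropriate default value
    "ALTER TABLE dbo.Customers ALTER COLUMN Salary ADD MASKED WITH (FUNCTION = 'default()')",
    # email(): keeps the first letter and masks the rest (aXXX@XXXX.com)
    "ALTER TABLE dbo.Customers ALTER COLUMN Email ADD MASKED WITH (FUNCTION = 'email()')",
    # random(): masks a numeric column with a random value from the given range
    "ALTER TABLE dbo.Customers ALTER COLUMN Age ADD MASKED WITH (FUNCTION = 'random(1, 99)')",
    # partial(): the custom-text mask - keep a prefix/suffix, pad the middle
    """ALTER TABLE dbo.Customers ALTER COLUMN Phone ADD MASKED WITH (FUNCTION = 'partial(0,"XXX-XXX-",4)')""",
]
for ddl in masking_ddl:
    print(ddl)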
51. Each day, the company plans to store hundreds of files in Azure Blob
Storage and Azure Data Lake Storage. The company uses the parquet
format.
You must develop a pipeline that meets the following requirements:
Process data every six hours
Offer interactive data analysis capabilities
Offer the ability to process data using solid-state drive (SSD)
caching
Use Directed Acyclic Graph (DAG) processing mechanisms
Provide support for REST API calls to monitor processes
Provide native support for Python
Integrate with Microsoft Power BI
Options :
A. Azure SQL Data Warehouse
B. HDInsight Apache Storm cluster
C. Azure Stream Analytics
D. None of the above
Ans : B. HDInsight Apache Storm cluster
52. You need to develop a pipeline for processing data. The pipeline
must meet the following requirements.
•Scale up and down resources for cost reduction.
•Use an in-memory data processing engine to speed up ETL and machine
learning operations.
•Use streaming capabilities.
•Provide the ability to code in SQL, Python, Scala, and R.
•Integrate workspace collaboration with Git. What should you use?
Options :
A. HDInsight Spark Cluster
B. Azure Stream Analytics
C. HDInsight Hadoop Cluster
D. None of the above
Ans : A. HDInsight Spark Cluster (Spark provides the in-memory engine, streaming, and the SQL/Python/Scala/R support the requirements call for)
53. A company plans to use Azure Storage for file storage purposes. Compliance rules require:
-> A single storage account to store all operations, including reads, writes and deletes
-> Retention of an on-premises copy of historical operations
You need to configure the storage account. Which two actions should you perform? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
Options :
A. Configure the storage account to log read, write and delete operations for service type Blob
B. Use the AzCopy tool to download log data from $logs/blob
C. Configure the storage account to log read, write and delete operations for service type Table
D. None of the above
Ans : A. Configure the storage account to log read, write and delete operations for service type Blob
54. You develop a data ingestion process that will import data to a Microsoft Azure SQL Data Warehouse. The data to be ingested resides in Parquet files stored in an Azure Data Lake Gen 2 storage account. You need to load the data from the Azure Data Lake Gen 2 storage account into the Azure SQL Data Warehouse.
Solution:
1. Create an external data source pointing to the Azure storage
account
2. Create an external file format and external table using the
external data source
3. Load the data using the INSERT…SELECT statement
Does the solution meet the goal?
Options :
A. Yes
B. No
C. Not Sure
D. None of the above
Ans : B. No
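One common reason the INSERT...SELECT variant in question 54 is marked "No" is that the documented PolyBase load pattern for a dedicated SQL pool finishes with CREATE TABLE AS SELECT. A hedged sketch of that pattern, with every identifier and storage location a placeholder (a database-scoped credential is usually also required):

# Hedged sketch of the PolyBase load pattern ending in CTAS; all names are placeholders.
polybase_load = """
CREATE EXTERNAL DATA SOURCE LakeSource
WITH (TYPE = HADOOP, LOCATION = 'abfss://<container>@<account>.dfs.core.windows.net');

CREATE EXTERNAL FILE FORMAT ParquetFormat
WITH (FORMAT_TYPE = PARQUET);

CREATE EXTERNAL TABLE dbo.StagingSales (SaleId INT, Amount DECIMAL(18,2))
WITH (LOCATION = '/sales/', DATA_SOURCE = LakeSource, FILE_FORMAT = ParquetFormat);

-- CREATE TABLE AS SELECT materialises the external data inside the SQL pool.
CREATE TABLE dbo.Sales
WITH (DISTRIBUTION = ROUND_ROBIN)
AS SELECT * FROM dbo.StagingSales;
"""
print(polybase_load)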
55. You need to ensure that Azure Data Factory pipelines can be
deployed. How should you configure authentication and authorization
for
deployments?
Options :
A. RBAC and SPN
B. DAC and Kerberos
C. MAC and Certificate based
D. none of the above
Ans : A. RBAC and SPN
56. You need to set up Azure Data Factory pipelines to meet data
movement requirements. Which integration runtime should you use?
Options :
A. self-hosted integration runtime
B. Azure-SSIS Integration Runtime
C. .NET Common Language Runtime (CLR)
D. none of the above
Ans : A. self-hosted integration runtime
Options :
A. a,b,c
B. b,c,a
C. a,c,b
D. C,B,A
Ans : D. C,B,A
Options :
A. a,b,c
B. b,c,a
C. a,c,b
D. none of the above
Ans : A. a,b,c
59. You need to integrate the on-premises data sources and Azure
Synapse Analytics. The solution must meet the data integration
requirements.
Which type of integration runtime should you use?
Options :
A. Azure-SSIS integration runtime
B. Azure-SSIS integration runtime
C. Azure integration runtime
D. none of the above
Ans : C. Azure integration runtime
Options :
A. Create External Table & Format Options
B. Create View & Format Type
C. Create Table & Range right for values
D. none of the above
Ans : C. Create Table & Range right for values
61. You use Azure Data Factory to prepare data to be queried by Azure
Synapse Analytics serverless SQL pools. Files are initially ingested
into an Azure Data Lake
Storage Gen2 account as 10 small JSON files. Each file contains the
same data attributes and data from a subsidiary of your company.
You need to move the files to a different folder and transform the
data to meet the following requirements: Provide the fastest possible
query times.
Automatically infer the schema from the underlying files.
How should you configure the Data Factory copy activity?
Options :
A. Flatten hierarchy & CSV
B. Merge files & JSON
C. Preserve hierarchy & Parquet
D. none of the above
Ans : C. Preserve hierarchy & Parquet
Options :
A. Fail until nodes come back online & Lowered
B. Switch to another integration runtime & raised
C. Exceed the cpu limit & left as is
D. none of the above
Ans : A. Fail until nodes come back online & Lowered
Options :
A. It is an Apache Spark-based analytics platform
B. It helps to extract, transform and load the data
C. Visualization of data is not possible with it
D. All of the above
Ans : C. Visualization of data is not possible with it
Options :
A. Amazon web-based service
B. Amazon web-store service
C. Amazon Web Services
D. Amazon web-data service
Ans : C. Amazon Web Services
Options :
A. 2
B. 3
C. 4
D. 5
Ans : B. 3
Options :
A. Flexibility
B. Cost-effectiveness
C. Scalability
D. All of the above
Ans : D. All of the above
Options :
A. A region is a geographical area or collection of data centers.
B. A region is an isolated logical data center
C. A region is the end-points for AWS.
D. None of the above
Ans : A. A region is a geographical area or collection of data centers.
Options :
A. An Availability zone is a geographical area or collection of data centers.
B. An Availability zone is an isolated logical data center in a region
C. An Availability zone is the end-points for AWS
D. None of the above
Ans : B. An Availability zone is an isolated logical data center in a region
Options :
A. The edge location is a geographical area or collection of data centers
B. The edge location is an isolated logical data center in a region
C. The edge locations are the end-points for AWS, used to deliver fast content to users
D. None of the above
Ans : C. The edge locations are the end-points for AWS, used to deliver fast content to users
Options :
A. Edge location
B. Regions
C. Availability zone
D. Regional Edge caches
Ans : D. Regional Edge caches
Options :
A. AWS account ID is a 12-digit number that is used to construct Amazon Resource Names (ARNs).
B. AWS account ID is 64-digit hexadecimal used in an Amazon S3 bucket policy.
C. AWS account ID is 32-digit hexadecimal used in an Amazon S3 bucket policy.
D. None of the above
Ans : A. AWS account ID is a 12-digit number that is used to construct Amazon Resource Names (ARNs).
Options :
A. Canonical user ID is a 12-digit number that is used to construct Amazon Resource Names (ARNs).
B. Canonical user ID is 32-digit hexadecimal used in an Amazon S3 bucket policy.
C. Canonical user ID is 64-digit hexadecimal used in an Amazon S3 bucket policy.
D. None of the above
Ans : C. Canonical user ID is 64-digit hexadecimal used in an Amazon S3 bucket policy.
Options :
A. Identity access manager
B. Identity and Access Management
C. Identify user-access management
D. None of the above
Ans : B. Identity and Access Management
Options :
A. Google Cloud Professional
B. Google Cloud Profession
C. Google Cloud Platform
D. Google Compute Platform
Ans : C. Google Cloud Platform
Options :
A. The user has no control over their data
B. Many programs can be run at the same time, regardless of the processing power of your device
C. Accessible anywhere with an internet connection
D. Portability
Ans : C. Accessible anywhere with an internet connection
Options :
A. The third party company
B. The personal computer user
C. The internet
D. Your personal home server
Ans : A. The third party company
Options :
A. Saves storage space on your PC
B. Gives you access to files from any computer
C. Protects your files from being lost due to PC failure
D. Completely protects your information from cloud hackers
Ans : D. Completely protects your information from cloud hackers
78. You are a project owner and need your co-worker to deploy a new
version of your application to App Engine. You want to follow Google’s
recommended practices. Which IAM roles should
you grant your co-worker?
Options :
A. Project Editor
B. App Engine Service Admin
C. App Engine Deployer
D. App Engine Code Viewer
Ans : C. App Engine Deployer
Options :
A. Project, Region, Network, Zones, SubNetworks
B. Project, Network, Region, Zones, SubNetworks
C. Project, Network, SubNetworks, Region, Zones
D. Project, Network, Region, SubNetworks, Zones
Ans : C. Project, Network, SubNetworks, Region, Zones
80. Which of the following is a command line tool that is part of the
Cloud SDK?
Options :
A. git
B. bash
C. gsutil
D. ssh
Ans : C. gsutil
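gsutil is the Cloud SDK's command-line tool for Cloud Storage (gsutil mb creates a bucket, for example). The same operation through the google-cloud-storage Python client library, as a hedged sketch with a placeholder bucket name and credentials taken from the active gcloud configuration:

# Hedged sketch: create a Cloud Storage bucket programmatically,
# the counterpart of `gsutil mb`. The bucket name is a placeholder.
from google.cloud import storage

client = storage.Client()   # uses Application Default Credentials
bucket = client.create_bucket("my-example-bucket-12345", location="us-east1")
print(f"Created bucket {bucket.name} in {bucket.location}")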
Options :
A. gcloud compute
B. gsutil mb
C. bq run
D. gcloud init
Ans : D. gcloud init
82. Why might a GCP customer use resources in several zones within a
region?
Options :
A. For improved fault tolerance
B. For better performance
C. gsutil
D. none of the above
Ans : A. For improved fault tolerance
Options :
A. IaaS
B. PaaS
C. SaaS
D. FaaS
Ans : A. IaaS