Cloud MCQs


import pandas as pd

# Load the question bank; each row holds a question text, four choices,
# and four boolean answer flags.
ques = pd.read_excel('MCQ by sarang.xlsx')

for i in range(len(ques)):
    print(str(i + 1) + ". ", ques['Question'][i], "\n")
    print("Options : ")
    print("A. ", ques['Choice1'][i])
    print("B. ", ques['Choice2'][i])
    print("C. ", ques['Choice3'][i])
    print("D. ", ques['Choice4'][i])
    # Print whichever choice is flagged as the correct answer.
    if ques['Answer1'][i] == True:
        print('\t Ans : A. ', ques['Choice1'][i], '\n\n')
    elif ques['Answer2'][i] == True:
        print('\t Ans : B. ', ques['Choice2'][i], '\n\n')
    elif ques['Answer3'][i] == True:
        print('\t Ans : C. ', ques['Choice3'][i], '\n\n')
    elif ques['Answer4'][i] == True:
        print('\t Ans : D. ', ques['Choice4'][i], '\n\n')

1. Which Azure service should you use to collect all events from
multiple resources?

Options :
A. Analysis Services
B. Azure Event Hub
C. Azure Monitor
D. Stream Analytics
Ans : B. Azure Event Hub

2. The Azure spending limit is fixed and cannot be increased or
decreased.

Options :
A. No
B. not sure
C. invalid statement
D. Yes
Ans : D. Yes

3. Which of the following elements allows you to create and manage
virtual machines that serve in either a Web role or a Worker role?

Options :
A. Compute
B. Application
C. Storage
D. None of the mentioned
Ans : A. Compute

4. What format of data can be stored in Azure Data Lake?

Options :
A. structured, unstructured and semi-structured
B. only structured
C. structured and semi-structured
D. only unstructured
Ans : A. structured, unstructured and semi-structured

5. Which of the following tools can be used for data ingestion in
Azure Data Lake?
Options :
A. Sqoop
B. Azure Data Factory
C. Developer SDK
D. All of the above
Ans : D. All of the above

6. What does ACL stand for?

Options :
A. Azure Control List
B. Azure Control Lake
C. Access Control List
D. Access Control Lake
Ans : C. Access Control List

7. What are two characteristics of the public cloud?

Options :
A. metered pricing & self-service management
B. limited storage
C. unsecured connections
D. dedicated hardware
Ans : A. metered pricing & self-service management

8. What format of data can be stored in Azure Data Lake?

Options :
A. structured, unstructured and semi-structured
B. only structured
C. structured and semi-structured
D. only unstructured
Ans : A. structured, unstructured and semi-structured

9. When data is saved in Azure Data Lake, how many copies of data are
saved by default?

Options :
A. 1
B. 2
C. 3
D. 4
Ans : C. 3

10. Which of the following tools can be used for data ingestion in
Azure Data Lake?

Options :
A. Sqoop
B. Azure Data Factory
C. Developer SDK
D. All of the above
Ans : D. All of the above

11. Choose the correct option with respect to Azure Data Lake.

Options :
A. It can only provide high storage capacity but cannot perform
analytics operations upon that data
B. It can perform analytics operations on the data stored in blob
storage, but cannot store data
C. It can store as well as perform analytics on data
D. It can neither store nor perform analytics on data
Ans : C. It can store as well as perform analytics on data

12. What is the storage capacity of Azure Data Lake?

Options :
A. 1024TB
B. 1024EB
C. 512EB
D. Unlimited
Ans : D. Unlimited

13. What format of data can be stored in Azure Data Lake?

Options :
A. structured, unstructured and semi-structured
B. only structured
C. structured and semi-structured
D. only unstructured
Ans : A. structured, unstructured and semi-structured

14. You need to set up the Azure Data Factory JSON definition for Tier
10 data.
What should you use?

Options :
A. Connection String
B. linked Service
C. Azure Blob
D. All of the above
Ans : A. Connection String

15. You need to set up Azure Data Factory pipelines
to meet data movement requirements. Which integration runtime should
you use?

Options :
A. self-hosted integration runtime
B. Azure-SSIS Integration Runtime
C. .NET Common Language Runtime (CLR)
D. All of the above
Ans : A. self-hosted integration runtime

16. You need to mask tier 1 data.
Which functions should you use?

Options :
A. Custom text, default, email, RandomNumber
B. Custom text, email, RandomNumber
C. Custom text, default, RandomNumber
D. All of the above
Ans : A. Custom text, default, email, RandomNumber

17. Each day, the company plans to store hundreds of files in Azure Blob
Storage and Azure Data Lake Storage. The company uses the parquet
format.
You must develop a pipeline that meets the following requirements:
Process data every six hours
Offer interactive data analysis capabilities
Offer the ability to process data using solid-state drive (SSD)
caching
Use Directed Acyclic Graph(DAG) processing mechanisms
Provide support for REST API calls to monitor processes
Provide native support for Python
Integrate with Microsoft Power BI

Options :
A. Azure SQL Data Warehouse
B. HDInsight Apache Storm cluster
C. Azure Stream Analytics
D. All of the above
Ans : B. HDInsight Apache Storm cluster

18. Each day, the company plans to store hundreds of files in Azure Blob
Storage and Azure Data Lake Storage. The company uses the parquet
format.
You must develop a pipeline that meets the following requirements:
Process data every six hours
Offer interactive data analysis capabilities
Offer the ability to process data using solid-state drive (SSD)
caching
Use Directed Acyclic Graph(DAG) processing mechanisms
Provide support for REST API calls to monitor processes
Provide native support for Python
Integrate with Microsoft Power BI

Options :
A. Azure SQL Data Warehouse
B. HDInsight Apache Storm cluster
C. Azure Stream Analytics
D. All of the above
Ans : B. HDInsight Apache Storm cluster

19. __________ is an area where you hold the data temporarily
on the data warehouse server.

Options :
A. Bus Schema
B. Data Staging
C. Schema Objects
D. Workflow
Ans : B. Data Staging

20. You plan to create an Azure Blob Storage account in the Azure
region of East US 2.
You need to create a storage account that meets the following
requirements:
-> Replicates synchronously.
-> Remains available if a single data center in the region fails.
How should you configure the storage account?

Options :
A. GRS
B. LRS
C. RA GRS
D. ZRS
Ans : A. GRS

21. Which Azure service should you use to collect all events from
multiple resources?

Options :
A. Analysis Services
B. Azure Event Hub
C. Azure Monitor
D. Stream Analytics
Ans : B. Azure Event Hub

22. Your company plans to migrate to Azure. The company has several
departments, and all the Azure resources used by each department will
be managed by a department admin. What is the best possible technique
to segment Azure for the departments?

Options :
A. Multiple Azure AD
B. multiple resource groups
C. multiple regions
D. Multiple Subscriptions
Ans : B. multiple resource groups

23. You have an Azure subscription named Subscription1.
In Subscription1, you create an Azure file share named share1.
You created a shared access signature (SAS) named SAS1. If you now try
to connect to the storage account using this key from the IP
193.77.134, what will be the output?

Options :
A. Will have no access
B. Prompted for credentials
C. Will have RW access
D. Will have Read-only access
Ans : A. Will have no access

24. What are the roles in Windows Azure?

Options :
A. Web Role
B. All Mentioned
C. Worker Role
D. VM Role
Ans : B. All Mentioned

25. BLOB means

Options :
A. Binary objects
B. Binary loaded object
C. Binary big objects
D. Binary Large Object
Ans : D. Binary Large Object

26. The topic of discussion of Power BI Session 4 is

Options :
A. Introduction to DAX
B. Overview
C. Data Collection
D. Data Preparation
Ans : A. Introduction to DAX

27. Power Query Editor is the platform we use to transform data, like
changing column data types and removing columns and rows.

Options :
A. True
B. False
C. invalid statement
D. none of the above
Ans : A. True

28. The Power Query Editor does not save changes made to the data
tables. It saves the applied steps separately, so the steps will
be applied every time the table is loaded.

Options :
A. True
B. False
C. invalid statement
D. none of the above
Ans : A. True

29. I would like to create a clustered column chart. I will go to
this panel to create the chart.

Options :
A. Yes, I will go to the visualization panel and click on the
clustered bar chart
B. Yes, I will go to the visualization panel and click on the column
chart, which is the 4th icon on the first row of the pane
C. Yes, I will click here on the visualization panel, but I need to
click another section to choose the column chart
D. No, this panel is not for creating data visualization. It is used
for the formatting of charts.
Ans : B. Yes, I will go to the visualization panel and click on the
column chart, which is the 4th icon on the first row of the pane

30. Sometimes when analyzing data we may use our dimensions as a
measure to understand values like no. of instances, average pricing or
maximum price.

Options :
A. False
B. True
C. invalid statement
D. none of the above
Ans : B. True

31. The MAP chart is a useful visualization to show locations of
cities and countries, with bubbles to indicate the size of the data
points.

Options :
A. True
B. False
C. invalid statement
D. none of the above
Ans : A. True

32. A dimension is descriptive information that helps us slice the
data. Examples of dimensions include customer name, customer age,
product ID, product description, etc.

Options :
A. True
B. False
C. invalid statement
D. none of the above
Ans : A. True

33. In any analytics report, it is not necessary to have measures

Options :
A. False
B. True
C. invalid statement
D. none of the above
Ans : B. True

34. A customer may purchase 1 or many products. So the relationship
between the customer table and the products table will be

Options :
A. Many to One
B. One to Many
C. Many to Many
D. One to One
Ans : B. One to Many

35. To enable two tables to be joined, a unique key between the two
tables is required. For example, between the customer table and the
sales table, the key would be customer ID.

Options :
A. True
B. False
C. invalid statement
D. none of the above
Ans : A. True
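
The join logic described in questions 34 and 35 can be sketched with pandas; the customer and sales tables below are made up purely for illustration:

```python
import pandas as pd

# Hypothetical tables: one customer row may match many sales rows,
# i.e. a one-to-many relationship (question 34).
customers = pd.DataFrame({'CustomerID': [1, 2],
                          'Name': ['Asha', 'Ben']})
sales = pd.DataFrame({'CustomerID': [1, 1, 2],
                      'Product': ['Laptop', 'Mouse', 'Keyboard']})

# The unique key (customer ID, question 35) is what makes the join work.
joined = customers.merge(sales, on='CustomerID')
print(joined)
```

Each customer's name appears once per matching sale, which is exactly the one-to-many behaviour the questions describe.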

36. Which of the following illustrates a Many to One relationship?

Options :
A. 1 banana tree having many bananas on the tree
B. Many banana trees planted in more than 1 garden
C. Many banana trees planted in a garden
D. Many gardens having no banana trees
Ans : C. Many banana trees planted in a garden

37. In the data view, we are able to view the tables that we have
imported. Each table shown is from a specific Excel worksheet.

Options :
A. False
B. True
C. invalid statement
D. none of the above
Ans : B. True

38. A function that can only work on numeric fields is

Options :
A. AND
B. ISNUMBER
C. AVERAGE
D. CONCATENATE
Ans : C. AVERAGE
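
The distinction behind question 38 can be shown with Python stand-ins for these functions (the spreadsheet functions themselves are not being called here):

```python
import statistics

# An AVERAGE-style function only accepts numeric fields...
nums = [10, 20, 30]
print(statistics.mean(nums))  # 20

# ...while a CONCATENATE-style function works on text fields instead.
print(''.join(['Power', 'BI']))  # PowerBI
```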

39. What are the different views in Power BI Desktop?

Options :
A. Report View, Data View, Relationship View
B. Data View, Draft View, Canvas View
C. Data view, Relationship View, Canvas View
D. Report View, Relationship View, Draft View
Ans : A. Report View, Data View, Relationship View

40. Which groups of data sources can Power BI connect to?

Options :
A. Files, SQL, Azure
B. Files, Content Packs, Connectors
C. Databases, SQL, Azure
D. Content Packs, Connectors, SQL
Ans : B. Files, Content Packs, Connectors

41. Visual-level filter applies to

Options :
A. One Page
B. All Pages
C. One Visualization
D. Multiple Visualizations in same page
Ans : C. One Visualization

42. Only domains with a Power BI license (subscribed through Office
365) can embed the web link of a Power BI report.

Options :
A. True
B. False
C. invalid statement
D. none of the above
Ans : A. True

43. What is the SQL command to return the values from a table?

Options :
A. DISTINCT
B. SELECT
C. WHERE
D. ORDER BY
Ans : B. SELECT

44. What is the SQL expression used to count the values in a table?

Options :
A. COUNT
B. SUM
C. AVERAGE
D. DISTINCT
Ans : A. COUNT
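
The two SQL keywords from questions 43 and 44 can be tried with Python's built-in sqlite3 module; the fruit table here is hypothetical:

```python
import sqlite3

# In-memory database with a throwaway table.
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE fruit (name TEXT)')
conn.executemany('INSERT INTO fruit VALUES (?)',
                 [('apple',), ('pear',), ('apple',)])

# SELECT returns the values from the table (question 43)...
rows = conn.execute('SELECT name FROM fruit').fetchall()

# ...and COUNT counts them (question 44).
total = conn.execute('SELECT COUNT(*) FROM fruit').fetchone()[0]
print(rows, total)
```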

45. Your company has an ADF pipeline configured and wants to run that
pipeline based on storage events. Which kind of trigger will you
configure?

Options :
A. Scheduled Trigger
B. Manual Trigger
C. Event Trigger
D. none of the above
Ans : C. Event Trigger

46. One of the benefits of Azure SQL DW is that High Availability is
built into the platform.
Instructions: Review the underlined text. If it makes the statement
correct, select "No change is needed". If the statement is incorrect,
select the answer choice that makes the statement correct.

Options :
A. Versioning
B. Data Compression
C. automatic Scaling
D. No change is needed
Ans : D. No change is needed

47. You need to ensure that phone-based polling data can be analyzed
in the PollingData database.
How should you configure Azure Data Factory?

Options :
A. Use a tumbling schedule trigger
B. Use an event-based trigger
C. Use a schedule trigger
D. None of the above
Ans : B. Use an event-based trigger

48. You need to set up the Azure Data Factory JSON definition for Tier
10 data.
What should you use?

Options :
A. Connection String
B. linked Service
C. Azure Blob
D. none of the above
Ans : A. Connection String

49. You need to set up Azure Data Factory pipelines to meet data
movement requirements. Which integration runtime should you use?

Options :
A. self-hosted integration runtime
B. Azure-SSIS Integration Runtime
C. .NET Common Language Runtime (CLR)
D. All of the above
Ans : A. self-hosted integration runtime

50. You need to mask tier 1 data. Which functions should you use?

Options :
A. Custom text, default, email, RandomNumber
B. Custom text, email, RandomNumber
C. Custom text, default, RandomNumber
D. All of the above
Ans : A. Custom text, default, email, RandomNumber

51. Each day, the company plans to store hundreds of files in Azure Blob
Storage and Azure Data Lake Storage. The company uses the parquet
format.
You must develop a pipeline that meets the following requirements:
Process data every six hours
Offer interactive data analysis capabilities
Offer the ability to process data using solid-state drive (SSD)
caching
Use Directed Acyclic Graph(DAG) processing mechanisms
Provide support for REST API calls to monitor processes
Provide native support for Python
Integrate with Microsoft Power BI

Options :
A. Azure SQL Data Warehouse
B. HDInsight Apache Storm cluster
C. Azure Stream Analytics
D. None of the above
Ans : B. HDInsight Apache Storm cluster

52. You need to develop a pipeline for processing data. The pipeline
must meet the following requirements.
•Scale up and down resources for cost reduction.
•Use an in-memory data processing engine to speed up ETL and machine
learning operations.
•Use streaming capabilities.
•Provide the ability to code in SQL, Python, Scala, and R.
•Integrate workspace collaboration with Git. What should you use?

Options :
A. HDInsight Spark Cluster
B. Azure Stream Analytics
C. HDInsight Hadoop Cluster
D. None of the above
Ans : B. Azure Stream Analytics

53. A company plans to use Azure Storage for file storage purposes.
Compliance rules require:
-> A single storage account to store all operations including reads,
writes and deletes
-> Retention of an on-premises copy of historical operations
You need to configure the storage account.
Which two actions should you perform? Each correct answer presents
part of the solution.
NOTE: Each correct selection is worth one point.

Options :
A. Configure the storage account to log read, write and delete
operations
for service type Blob
B. Use the AzCopy tool to download log data from $logs/blob
C. Configure the storage account to log read, write and delete
operations for service-type table
D. None of the above
Ans : A. Configure the storage account to log read, write
and delete operations
for service type Blob

54. You develop a data ingestion process that will import data to a
Microsoft Azure SQL Data Warehouse. The data to be ingested resides in
parquet files stored in an
Azure Data Lake Gen 2 storage account. You need to load the data from
the Azure Data Lake Gen 2 storage account into the Azure SQL Data
Warehouse
Solution:
1. Create an external data source pointing to the Azure storage
account
2. Create an external file format and external table using the
external data source
3. Load the data using the INSERT…SELECT statement
Does the solution meet the goal?

Options :
A. Yes
B. No
C. Not Sure
D. None of the above
Ans : B. No

55. You need to ensure that Azure Data Factory pipelines can be
deployed. How should you configure authentication and authorization
for
deployments?

Options :
A. RBAC and SPN
B. DAC and Kerberos
C. MAC and Certificate based
D. none of the above
Ans : A. RBAC and SPN

56. You need to set up Azure Data Factory pipelines to meet data
movement requirements. Which integration runtime should you use?

Options :
A. self-hosted integration runtime
B. Azure-SSIS Integration Runtime
C. .NET Common Language Runtime (CLR)
D. none of the above
Ans : A. self-hosted integration runtime

57. Your company manages on-premises Microsoft SQL Server pipelines
by using a custom solution.
The data engineering team must implement a process to pull data from
SQL Server and migrate it to Azure Blob storage. The process must
orchestrate and
manage the data lifecycle.
You need to configure Azure Data Factory to connect to the on-premises
SQL Server database.
Which three actions should you perform in sequence?
a) Create an Azure Data Factory resource.
b) Configure a self-hosted integration runtime.
c) Create a virtual private network (VPN) connection from on-premises
to Microsoft Azure

Options :
A. a,b,c
B. b,c,a
C. a,c,b
D. C,B,A
Ans : D. C,B,A

58. Your company has an on-premises Microsoft SQL Server instance.
The data engineering team plans to implement a process that copies
data from the SQL Server instance to Azure Blob storage. The process
must orchestrate and
manage the data lifecycle.
You need to configure Azure Data Factory to connect to the SQL Server
instance.
A) Deploy Data Factory
B) Configure a linked service to connect to the SQL Server instance
C) From the on-premises network, install and configure the self-hosted
integration runtime

Options :
A. a,b,c
B. b,c,a
C. a,c,b
D. none of the above
Ans : A. a,b,c

59. You need to integrate the on-premises data sources and Azure
Synapse Analytics. The solution must meet the data integration
requirements.
Which type of integration runtime should you use?

Options :
A. Azure-SSIS integration runtime
B. Self-hosted integration runtime
C. Azure integration runtime
D. none of the above
Ans : C. Azure integration runtime

60. You need to implement an Azure Synapse Analytics database object
for storing the sales transactions data. The solution must meet the
sales transaction dataset requirements.
What should you do?

Options :
A. Create External Table & Format Options
B. Create View & Format Type
C. Create Table & Range right for values
D. none of the above
Ans : C. Create Table & Range right for values

61. You use Azure Data Factory to prepare data to be queried by Azure
Synapse Analytics serverless SQL pools. Files are initially ingested
into an Azure Data Lake
Storage Gen2 account as 10 small JSON files. Each file contains the
same data attributes and data from a subsidiary of your company.
You need to move the files to a different folder and transform the
data to meet the following requirements: Provide the fastest possible
query times.
Automatically infer the schema from the underlying files.
How should you configure the Data Factory copy activity?

Options :
A. Flatten hierarchy & CSV
B. Merge files & JSON
C. Preserve hierarchy & Parquet
D. none of the above
Ans : C. Preserve hierarchy & Parquet

62. You have a self-hosted integration runtime in Azure Data Factory.
The current status of the integration runtime has the following
configurations:
Status: Running
Type: Self-Hosted
Version: 4.4.7292.1
Running / Registered Node(s): 1/1
High Availability Enabled: False
Linked Count: 0
Queue Length: 0
Average Queue Duration: 0.00s
The integration runtime has the following node details:
Name: X-M
Status: Running
Version: 4.4.7292.1
Available Memory: 7697MB
CPU Utilization: 6%
Network (In/Out): 1.21KBps/0.83KBps
Concurrent Jobs (Running/Limit): 2/14
Role: Dispatcher/Worker
Credential Status: In Sync
Use the drop-down menus to select the answer choice that completes
each statement based on the information presented

Options :
A. Fail until nodes come back online & Lowered
B. Switch to another integration runtime & raised
C. Exceed the CPU limit & left as is
D. none of the above
Ans : A. Fail until nodes come back online & Lowered

63. Which one of the following is not an operation that can be
performed using Azure Databricks?

Options :
A. It is an Apache Spark based analytics platform
B. It helps to extract, transform and load the data
C. Visualization of data is not possible with it
D. All of the above
Ans : C. Visualization of data is not possible with it

64. What is the full form of AWS?

Options :
A. Amazon web-based service
B. Amazon web-store service
C. Amazon web service
D. Amazon web-data service
Ans : C. Amazon web service

65. How many types of cloud computing are there?

Options :
A. 2
B. 3
C. 4
D. 5
Ans : B. 3

66. Which of the following are the advantages of AWS?

Options :
A. Flexibility
B. Cost-effectiveness
C. Scalability
D. All of the above
Ans : D. All of the above

67. What is a region in AWS?

Options :
A. A region is a geographical area or collection of data centers.
B. A region is an isolated logical data center.
C. A region is the end-points for AWS.
D. None of the above
Ans : A. A region is a geographical area or collection of data centers.

68. What is an Availability Zone in AWS?

Options :
A. An Availability Zone is a geographical area or collection of data
centers.
B. An Availability Zone is an isolated logical data center in a region.
C. An Availability Zone is the end-points for AWS.
D. None of the above
Ans : B. An Availability Zone is an isolated logical data center in a
region.

69. What are edge locations in AWS?

Options :
A. The edge location is a geographical area or collection of data
centers
B. The edge location is an isolated logical data center in a region
C. The edge locations are the end-points for AWS, used to deliver fast
content to users
D. None of the above
Ans : C. The edge locations are the end-points for AWS, used to
deliver fast content to users

70. Which of the following are the components of AWS infrastructure?

Options :
A. Edge location
B. Regions
C. Availability zone
D. Regional Edge caches
Ans : D. Regional Edge caches

71. What do you mean by AWS account ID?

Options :
A. AWS account ID is a 12-digit number that is used to construct
Amazon Resource Names (ARNs).
B. AWS account ID is a 64-digit hexadecimal used in an Amazon S3
bucket policy.
C. AWS account ID is a 32-digit hexadecimal used in an Amazon S3
bucket policy.
D. None of the above
Ans : A. AWS account ID is a 12-digit number that is used to construct
Amazon Resource Names (ARNs).

72. What do you mean by canonical user ID?

Options :
A. Canonical user ID is a 12-digit number that is used to construct
Amazon Resource Names (ARNs).
B. Canonical user ID is a 32-digit hexadecimal used in an Amazon S3
bucket policy.
C. Canonical user ID is a 64-digit hexadecimal used in an Amazon S3
bucket policy.
D. None of the above
Ans : C. Canonical user ID is a 64-digit hexadecimal used in an Amazon
S3 bucket policy.

73. What does IAM stand for in AWS?

Options :
A. Identity access manager
B. Identity access management
C. Identify user-access management
D. None of the above
Ans : B. Identity access management

74. What does GCP stand for?

Options :
A. Google Cloud Professional
B. Google Cloud Profession
C. Google Cloud Platform
D. Google Compute Platform
Ans : C. Google Cloud Platform

75. Which of these is an advantage of cloud storage?

Options :
A. The user has no control over their data
B. Many programs can be run at the same time, regardless of the
processing power of your device
C. Accessible anywhere with an internet connection
D. Portability
Ans : C. Accessible anywhere with an internet connection

76. What is the back end of cloud computing?

Options :
A. The third-party company
B. The personal computer user
C. The internet
D. Your personal home server
Ans : A. The third-party company

77. Which of these is not a benefit of cloud computing?

Options :
A. Saves storage space on your PC
B. Gives you access to files from any computer
C. Protects your files from being lost due to PC failure
D. Completely protects your information from cloud hackers
Ans : D. Completely protects your information from cloud hackers

78. You are a project owner and need your co-worker to deploy a new
version of your application to App Engine. You want to follow Google’s
recommended practices. Which IAM roles should
you grant your co-worker?

Options :
A. Project Editor
B. App Engine Service Admin
C. App Engine Deployer
D. App Engine Code Viewer
Ans : C. App Engine Deployer

79. Select the correct order in the given list.

Options :
A. Project,Region,Network,zones,SubNetworks
B. Project,Network,Region,Zones,SubNetworks
C. Project,Network,SubNetworks,Region,Zones
D. Project,Network,Region,SubNetworks,Zones
Ans : C. Project,Network,SubNetworks,Region,Zones

80. Which of the following is a command line tool that is part of the
Cloud SDK?

Options :
A. git
B. bash
C. gsutil
D. ssh
Ans : C. gsutil

81. What command would you use to set up the default
configuration of the Cloud SDK?

Options :
A. gcloud compute
B. gsutil mb
C. bq run
D. gcloud init
Ans : D. gcloud init

82. Why might a GCP customer use resources in several zones within a
region?

Options :
A. For improved fault tolerance
B. For better performance
C. gsutil
D. none of the above
Ans : A. For improved fault tolerance

83. What type of cloud computing service provides raw compute,
storage, and network, organized in ways that are familiar from
physical data centers?

Options :
A. IAAS
B. PAAS
C. SAAS
D. FAAS
Ans : A. IAAS
