
Certy IQ

Premium exam material


Get certification quickly with the CertyIQ Premium exam material.
Everything you need to prepare, learn, and pass your certification exam easily. Lifetime free updates.
First attempt guaranteed success.
https://www.CertyIQ.com
Google

(Professional Cloud Developer)

Professional Cloud Developer

Total: 286 Questions


Link: https://certyiq.com/papers?provider=google&exam=professional-cloud-developer
Question: 1 CertyIQ
You want to upload files from an on-premises virtual machine to Google Cloud Storage as part of a data migration.
These files will be consumed by a Cloud Dataproc Hadoop cluster in a GCP environment.
Which command should you use?

A.gsutil cp [LOCAL_OBJECT] gs://[DESTINATION_BUCKET_NAME]/


B.gcloud cp [LOCAL_OBJECT] gs://[DESTINATION_BUCKET_NAME]/
C.hadoop fs cp [LOCAL_OBJECT] gs://[DESTINATION_BUCKET_NAME]/
D.gcloud dataproc cp [LOCAL_OBJECT] gs://[DESTINATION_BUCKET_NAME]/

Answer: A

Explanation:
The gsutil cp command allows you to copy data between your local file system and Cloud Storage, using the credentials in the .boto file generated by running "gsutil config".

Question: 2 CertyIQ
You migrated your applications to Google Cloud Platform and kept your existing monitoring platform. You now find
that your notification system is too slow for time-critical problems.
What should you do?

A.Replace your entire monitoring platform with Stackdriver.


B.Install the Stackdriver agents on your Compute Engine instances.
C.Use Stackdriver to capture and alert on logs, then ship them to your existing platform.
D.Migrate some traffic back to your old platform and perform AB testing on the two platforms concurrently.

Answer: C

Explanation:

Option C addresses the actual problem: the notifications are too slow. Stackdriver can capture the logs and raise alerts immediately, and then ship the same data to the on-premises monitoring platform.

Think twice: you have a working (and expensive) monitoring system, e.g. Splunk, and the problem is an unacceptable delay between an incident and its notification. You need to fix that problem, not stage a revolution by replacing the monitoring system. The logs land in Cloud Logging whether you want them there or not, so you can use GCP's out-of-the-box alerting with little effort: simply implement the alerts and push the logs on to Splunk.
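
A minimal sketch of this pattern, assuming a hypothetical Pub/Sub topic that the existing platform consumes:

# Topic the on-premises platform will pull log entries from
gcloud pubsub topics create logs-to-onprem

# Export matching log entries to that topic
gcloud logging sinks create ship-to-onprem \
  pubsub.googleapis.com/projects/my-project/topics/logs-to-onprem \
  --log-filter='severity>=ERROR'
# (Then grant the sink's writer identity the Pub/Sub Publisher role on the topic.)

Alerting policies on the same logs can be configured in Cloud Monitoring, so notifications fire immediately while the raw entries still flow to Splunk.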

Question: 3 CertyIQ
You are planning to migrate a MySQL database to the managed Cloud SQL database for Google Cloud. You have
Compute Engine virtual machine instances that will connect with this Cloud SQL instance. You do not want to
whitelist IPs for the Compute Engine instances to be able to access Cloud SQL.
What should you do?

A.Enable private IP for the Cloud SQL instance.


B.Whitelist a project to access Cloud SQL, and add Compute Engine instances in the whitelisted project.
C.Create a role in Cloud SQL that allows access to the database from external instances, and assign the
Compute Engine instances to that role.
D.Create a Cloud SQL instance in one project. Create Compute Engine instances in a different project. Create a
VPN between these two projects to allow internal access to Cloud SQL.

Answer: A

Explanation:

The question is about connectivity. A role assignment gives a set of permissions to the Compute Engine instances but does not provide a network path. Enabling private IP lets the instances reach Cloud SQL over the VPC without whitelisting their addresses.
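
A sketch of creating a private-IP-only instance, assuming private services access is already configured on the VPC (instance, network, and project names are hypothetical):

# Create the Cloud SQL instance with a private IP and no public address
gcloud sql instances create my-sql-instance \
  --database-version=MYSQL_8_0 \
  --tier=db-n1-standard-1 \
  --region=us-central1 \
  --network=projects/my-project/global/networks/default \
  --no-assign-ip

Compute Engine instances on the same VPC can then connect to the instance's private IP without any whitelisting.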

Question: 4 CertyIQ
You have deployed an HTTP(s) Load Balancer with the gcloud commands shown below.

Health checks to port 80 on the Compute Engine virtual machine instance are failing and no traffic is sent to your
instances. You want to resolve the problem.
Which commands should you run?

A.gcloud compute instances add-access-config $ NAME -backend-instance-1


B.gcloud compute instances add-tags $ NAME -backend-instance-1 --tags http-server
C.gcloud compute firewall-rules create allow-lb --network load-balancer --allow tcp --source-ranges
130.211.0.0/22,35.191.0.0/16 --direction INGRESS
D.gcloud compute firewall-rules create allow-lb --network load-balancer --allow tcp --destination-ranges
130.211.0.0/22,35.191.0.0/16 --direction EGRESS

Answer: C

Explanation:

The source IP ranges for health checks (including legacy health checks, if used for HTTP(S) Load Balancing) are 35.191.0.0/16 and 130.211.0.0/22. Furthermore, the direction should be INGRESS, since the health-check probe is coming into the load balancer/instance.

Reference:

https://cloud.google.com/vpc/docs/special-configurations
Question: 5 CertyIQ
Your website is deployed on Compute Engine. Your marketing team wants to test conversion rates between 3
different website designs.
Which approach should you use?

A.Deploy the website on App Engine and use traffic splitting.


B.Deploy the website on App Engine as three separate services.
C.Deploy the website on Cloud Functions and use traffic splitting.
D.Deploy the website on Cloud Functions as three separate functions.

Answer: A

Explanation:

A is correct because it keeps a single domain for the site and splits traffic based on IP or cookie. B is not
correct because the domain name would change based on the service.

Reference:

https://cloud.google.com/appengine/docs/standard/python/splitting-traffic
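
For example (service and version names hypothetical), the three designs could be deployed as versions of one App Engine service, with traffic split by cookie so each user consistently sees one design:

# Deploy three versions without shifting traffic to them
gcloud app deploy --version=design-a --no-promote
gcloud app deploy --version=design-b --no-promote
gcloud app deploy --version=design-c --no-promote

# Split traffic across the versions, keyed on a cookie
gcloud app services set-traffic default \
  --splits=design-a=.34,design-b=.33,design-c=.33 \
  --split-by=cookie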

Question: 6 CertyIQ
You need to copy directory local-scripts and all of its contents from your local workstation to a Compute Engine
virtual machine instance.
Which command should you use?

A.gsutil cp --project my-gcp-project -r ~/local-scripts/ gcp-instance-name:~/server-scripts/ --zone us-east1-b


B.gsutil cp --project my-gcp-project -R ~/local-scripts/ gcp-instance-name:~/server-scripts/ --zone us-east1-b
C.gcloud compute scp --project my-gcp-project --recurse ~/local-scripts/ gcp-instance-name:~/server-scripts/
--zone us-east1-b
D.gcloud compute mv --project my-gcp-project --recurse ~/local-scripts/ gcp-instance-name:~/server-scripts/ -
-zone us-east1-b

Answer: C

Explanation:

gcloud compute scp securely copies files to a Compute Engine instance over SSH; the --recurse flag copies the directory and all of its contents.

Reference:
https://cloud.google.com/sdk/gcloud/reference/compute/copy-files

Question: 7 CertyIQ
You are deploying your application to a Compute Engine virtual machine instance with the Stackdriver Monitoring
Agent installed. Your application is a unix process on the instance. You want to be alerted if the unix process has
not run for at least 5 minutes. You are not able to change the application to generate metrics or logs.
Which alert condition should you configure?

A.Uptime check
B.Process health
C.Metric absence
D.Metric threshold
Answer: B

Explanation:

"An uptime check is a request sent to a resource to see if it responds"A is wrongMetric absence and threshold
don't make senseProcess health is correct for sure so answer is B

Reference:

https://cloud.google.com/monitoring/alerts/concepts-indepth

Question: 8 CertyIQ
You have two tables in an ANSI-SQL compliant database with identical columns that you need to quickly combine
into a single table, removing duplicate rows from the result set.
What should you do?

A.Use the JOIN operator in SQL to combine the tables.


B.Use nested WITH statements to combine the tables.
C.Use the UNION operator in SQL to combine the tables.
D.Use the UNION ALL operator in SQL to combine the tables.

Answer: C

Explanation:

C is the correct answer here. The only difference between UNION and UNION ALL is that UNION ALL does not remove duplicate rows: it selects every row from all of the tables that meets the conditions of your query and combines them into the result table. Since duplicates must be removed here, use UNION.

Reference:

https://www.techonthenet.com/sql/union_all.php
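
A small sketch of the difference, assuming (hypothetically) that the two tables live in BigQuery. Note that plain ANSI SQL writes UNION, while BigQuery standard SQL spells the deduplicating form UNION DISTINCT:

# Combines both tables and removes duplicate rows;
# UNION ALL would keep the duplicates
bq query --use_legacy_sql=false '
SELECT * FROM mydataset.table_a
UNION DISTINCT
SELECT * FROM mydataset.table_b'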

Question: 9 CertyIQ
You have an application deployed in production. When a new version is deployed, some issues don't arise until the
application receives traffic from users in production. You want to reduce both the impact and the number of users
affected.
Which deployment strategy should you use?

A.Blue/green deployment
B.Canary deployment
C.Rolling deployment
D.Recreate deployment

Answer: B

Explanation:

The answer is B. A canary deployment releases the new version progressively to a small subset of users first, which reduces both the impact and the number of users affected. Blue/green switches 100% of users to the green environment at once; canary is progressive.


Question: 10 CertyIQ
Your company wants to expand their users outside the United States for their popular application. The company
wants to ensure 99.999% availability of the database for their application and also wants to minimize the read
latency for their users across the globe.
Which two actions should they take? (Choose two.)

A.Create a multi-regional Cloud Spanner instance with "nam-eur-asia1" configuration.


B.Create a multi-regional Cloud Spanner instance with "nam3" configuration.
C.Create a cluster with at least 3 Spanner nodes.
D.Create a cluster with at least 1 Spanner node.
E.Create a minimum of two Cloud Spanner instances in separate regions with at least one node.
F.Create a Cloud Dataflow pipeline to replicate data across different databases.

Answer: AC

Explanation:

The requirements are 99.999% availability and minimal read latency across the globe. Option A provides 99.999% availability with replicas on three continents (the configuration name is garbled in some versions of this question; the real multi-region configuration is nam-eur-asia1). Option C is about compute capacity: more nodes means lower latency (https://cloud.google.com/spanner/docs/instances#compute-capacity). B is wrong because nam3 is a North America-only configuration, which would not minimize read latency for users across the globe. D is wrong because it is better to provision at least 3 nodes, not 1. E and F are over-engineering.

A: global, and provides 99.999% availability. C: more nodes, less latency.

Question: 11 CertyIQ
You need to migrate an internal file upload API with an enforced 500-MB file size limit to App Engine.
What should you do?

A.Use FTP to upload files.


B.Use CPanel to upload files.
C.Use signed URLs to upload files.
D.Change the API to be a multipart file upload API.

Answer: C

Explanation:

App Engine limits HTTP request sizes to 32 MB, so a 500-MB file cannot pass through the application directly. Signed URLs let clients upload the files straight to Cloud Storage instead.

Reference:
https://wiki.christophchamp.com/index.php?title=Google_Cloud_Platform

Question: 12 CertyIQ
You are planning to deploy your application in a Google Kubernetes Engine (GKE) cluster. The application exposes
an HTTP-based health check at /healthz. You want to use this health check endpoint to determine whether traffic
should be routed to the pod by the load balancer.
Which code snippet should you include in your Pod configuration?
A.
B.
C.
D.

(The four Pod configuration snippets are images that are not reproduced in this document; the correct option defines a readinessProbe performing an httpGet check against /healthz.)

Answer: B

Explanation:
For the GKE ingress controller to use your readinessProbes as health checks, the Pods for an Ingress must
exist at the time of Ingress creation. If your replicas are scaled to 0, the default health check will apply.
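
A minimal sketch of what the correct snippet expresses (image name, port, and timings are hypothetical):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: image-api
spec:
  containers:
  - name: app
    image: gcr.io/my-project/image-api:v1
    ports:
    - containerPort: 8080
    readinessProbe:        # gates whether the load balancer routes traffic here
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
EOF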

Question: 13 CertyIQ
Your teammate has asked you to review the code below. Its purpose is to efficiently add a large number of small
rows to a BigQuery table.
Which improvement should you suggest your teammate make?

A.Include multiple rows with each request.


B.Perform the inserts in parallel by creating multiple threads.
C.Write each row to a Cloud Storage object, then load into BigQuery.
D.Write each row to a Cloud Storage object in parallel, then load into BigQuery.

Answer: A

Explanation:

I'd choose A; for me it's the same as the recommended batch insert/update.

A. Include multiple rows with each request. Batched inserts are more efficient than individual inserts and increase write throughput by reducing the overhead of creating and sending a request for every row. Parallel inserts could lead to conflicting writes or resource exhaustion, and writing to Cloud Storage first and then loading into BigQuery adds overhead and complexity.
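
As a hedged sketch of the batched alternative using the bq CLI (dataset, table, and field names are hypothetical):

# rows.json: newline-delimited JSON, one object per row
cat > rows.json <<'EOF'
{"account": "a-1", "amount": 12.5}
{"account": "a-2", "amount": 7.0}
{"account": "a-3", "amount": 3.2}
EOF

# All rows travel in a single streaming-insert request
bq insert mydataset.mytable rows.json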

Question: 14 CertyIQ
You are developing a JPEG image-resizing API hosted on Google Kubernetes Engine (GKE). Callers of the service
will exist within the same GKE cluster. You want clients to be able to get the IP address of the service.
What should you do?

A.Define a GKE Service. Clients should use the name of the A record in Cloud DNS to find the service's cluster
IP address.
B.Define a GKE Service. Clients should use the service name in the URL to connect to the service.
C.Define a GKE Endpoint. Clients should get the endpoint name from the appropriate environment variable in
the client container.
D.Define a GKE Endpoint. Clients should get the endpoint name from Cloud DNS.

Answer: B

Explanation:

Both A and B can work. With option A, a DNS A record maps the service FQDN (e.g. service-name.default.svc.cluster.local) to the cluster IP address. B is easier, though: just use http://service-name.

The answer is B, because the clients are in the same cluster, so the bare service name can be used.

Question: 15 CertyIQ
You are using Cloud Build to build and test application source code stored in Cloud Source Repositories. The build
process requires a build tool not available in the Cloud Build environment.
What should you do?

A.Download the binary from the internet during the build process.
B.Build a custom cloud builder image and reference the image in your build steps.
C.Include the binary in your Cloud Source Repositories repository and reference it in your build scripts.
D.Ask to have the binary added to the Cloud Build environment by filing a feature request against the Cloud
Build public Issue Tracker.

Answer: B

Explanation:

B is the correct answer.

https://cloud.google.com/cloud-build/docs/configuring-builds/use-community-and-custom-builders#creating_a_custom_builder

Question: 16 CertyIQ
You are deploying your application to a Compute Engine virtual machine instance. Your application is configured to
write its log files to disk. You want to view the logs in Stackdriver Logging without changing the application code.
What should you do?

A.Install the Stackdriver Logging Agent and configure it to send the application logs.
B.Use a Stackdriver Logging Library to log directly from the application to Stackdriver Logging.
C.Provide the log file folder path in the metadata of the instance to configure it to send the application logs.
D.Change the application to log to /var/log so that its logs are automatically sent to Stackdriver Logging.

Answer: A

Explanation:

Per https://cloud.google.com/logging/docs/agent/logging/installation: "The Logging agent streams logs from your VM instances and from selected third-party software packages to Cloud Logging." A is correct.

Question: 17 CertyIQ
Your service adds text to images that it reads from Cloud Storage. During busy times of the year, requests to Cloud
Storage fail with an HTTP 429 "Too Many
Requests" status code.
How should you handle this error?

A.Add a cache-control header to the objects.


B.Request a quota increase from the GCP Console.
C.Retry the request with a truncated exponential backoff strategy.
D.Change the storage class of the Cloud Storage bucket to Multi-regional.

Answer: C
Explanation:

"A Cloud Storage JSON API usage limit was exceeded. If your application tries to use more than its limit,
additional requests will fail. Throttle your client's requests, and/or use truncated exponential backoff."C is
correct

Reference:

https://developers.google.com/gmail/api/v1/reference/quota
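
A bash sketch of truncated exponential backoff (the object URL is hypothetical): the wait doubles on each retry, is capped at 32 seconds, and gets a little random jitter:

url="https://storage.googleapis.com/storage/v1/b/my-bucket/o/my-object"
for attempt in $(seq 0 7); do
  code=$(curl -s -o /dev/null -w '%{http_code}' "$url")
  # Stop retrying on success or on a non-retryable client error
  [ "$code" != "429" ] && [ "$code" -lt 500 ] && break
  backoff=$(( 2 ** attempt ))
  [ "$backoff" -gt 32 ] && backoff=32   # truncate the backoff
  sleep $(( backoff + RANDOM % 2 ))     # add jitter
done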

Question: 18 CertyIQ
You are building an API that will be used by Android and iOS apps. The API must:
* Support HTTPS
* Minimize bandwidth cost
* Integrate easily with mobile apps
Which API architecture should you use?

A.RESTful APIs
B.MQTT for APIs
C.gRPC-based APIs
D.SOAP-based APIs

Answer: C

Explanation:

https://www.imaginarycloud.com/blog/grpc-vs-rest/

The gRPC architectural style has promising features that can (and should) be explored. It is an excellent option for multi-language systems, real-time streaming, and, for instance, IoT systems that require the lightweight message transmission that serialized Protobuf messages allow. gRPC should also be considered for mobile applications, since they do not need a browser and benefit from smaller messages that conserve the processing power of mobile devices.

Question: 19 CertyIQ
Your application takes an input from a user and publishes it to the user's contacts. This input is stored in a table in
Cloud Spanner. Your application is more sensitive to latency and less sensitive to consistency.
How should you perform reads from Cloud Spanner for this application?

A.Perform Read-Only transactions.


B.Perform stale reads using single-read methods.
C.Perform strong reads using single-read methods.
D.Perform stale reads using read-write transactions.

Answer: B

Explanation:

Per https://cloud.google.com/spanner/docs/reference/rest/v1/TransactionOptions, the read-write transaction type has no staleness options, and there is no way to make stale reads with that transaction type, so D is definitely wrong. In this question, low latency is more critical than consistency, so C is not an option. Read-only transactions can do stale reads as well as single-read methods can, but the documentation (https://cloud.google.com/spanner/docs/transactions#read-only_transactions) encourages using single-read methods where possible. My vote is B.

Question: 20 CertyIQ
Your application is deployed in a Google Kubernetes Engine (GKE) cluster. When a new version of your application
is released, your CI/CD tool updates the spec.template.spec.containers[0].image value to reference the Docker
image of your new application version. When the Deployment object applies the change, you want to deploy at
least 1 replica of the new version and maintain the previous replicas until the new replica is healthy.
Which change should you make to the GKE Deployment object shown below?

A.Set the Deployment strategy to RollingUpdate with maxSurge set to 0, maxUnavailable set to 1.
B.Set the Deployment strategy to RollingUpdate with maxSurge set to 1, maxUnavailable set to 0.
C.Set the Deployment strategy to Recreate with maxSurge set to 0, maxUnavailable set to 1.
D.Set the Deployment strategy to Recreate with maxSurge set to 1, maxUnavailable set to 0.

Answer: B

Explanation:

"The simplest way to take advantage of surge upgrade is to configure maxSurge=1 maxUnavailable=0. This
means that only 1 surge node can be added to the node pool during an upgrade so only 1 node will be
upgraded at a time. This setting is superior to the existing upgrade configuration (maxSurge=0
maxUnavailable=1) because it speeds up Pod restarts during upgrades while progressing
conservatively."Answer is B

Reference:

https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-upgrades
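
A sketch of applying that strategy to an existing Deployment (the Deployment name is hypothetical):

kubectl patch deployment my-app --patch '
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # create one new-version Pod above the desired count
      maxUnavailable: 0  # never remove an old Pod before the new one is healthy'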

Question: 21 CertyIQ
You plan to make a simple HTML application available on the internet. This site keeps information about FAQs for
your application. The application is static and contains images, HTML, CSS, and Javascript. You want to make this
application available on the internet with as few steps as possible.
What should you do?

A.Upload your application to Cloud Storage.


B.Upload your application to an App Engine environment.
C.Create a Compute Engine instance with Apache web server installed. Configure Apache web server to host
the application.
D.Containerize your application first. Deploy this container to Google Kubernetes Engine (GKE) and assign an
external IP address to the GKE pod hosting the application.

Answer: A

Explanation:

A: if the site is static, the quickest way to serve it is from Cloud Storage.

Reference:

https://cloud.google.com/storage/docs/hosting-static-website
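
A minimal sketch, assuming a hypothetical bucket name and a local ./site directory:

# Create a bucket, upload the site, and make it publicly readable
gsutil mb gs://my-faq-site
gsutil -m cp -r ./site/* gs://my-faq-site/
gsutil iam ch allUsers:objectViewer gs://my-faq-site

# Serve index.html as the main page and 404.html for missing objects
gsutil web set -m index.html -e 404.html gs://my-faq-site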

Question: 22 CertyIQ
Your company has deployed a new API to App Engine Standard environment. During testing, the API is not
behaving as expected. You want to monitor the application over time to diagnose the problem within the
application code without redeploying the application.
Which tool should you use?

A.Stackdriver Trace
B.Stackdriver Monitoring
C.Stackdriver Debug Snapshots
D.Stackdriver Debug Logpoints

Answer: D

Explanation:

Note that this question is becoming obsolete: Cloud Debugger is deprecated and was scheduled for shutdown on May 31, 2023; the suggested alternative is the open source Snapshot Debugger CLI (https://cloud.google.com/debugger/docs/release-notes). In that context, I'll say D.

The question says "you want to monitor the application over time to diagnose the problem within the application code". If it were only about monitoring it would be B, but "within the code" points to Debug Logpoints, so it should be D.

Question: 23 CertyIQ
You want to use the Stackdriver Logging Agent to send an application's log file to Stackdriver from a Compute
Engine virtual machine instance.
After installing the Stackdriver Logging Agent, what should you do first?

A.Enable the Error Reporting API on the project.


B.Grant the instance full access to all Cloud APIs.
C.Configure the application log file as a custom source.
D.Create a Stackdriver Logs Export Sink with a filter that matches the application's log entries.

Answer: C

Explanation:

The answer should be C: unless your application writes its log to a location the agent already watches by default, you must configure the log file as a custom source.

https://cloud.google.com/logging/docs/agent/configuration
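
A sketch of such a custom source, assuming the application writes to /var/log/my-app/*.log (the path and tag are hypothetical):

# Drop a config file into the agent's config directory
sudo tee /etc/google-fluentd/config.d/my-app.conf <<'EOF'
<source>
  @type tail
  format none
  path /var/log/my-app/*.log
  pos_file /var/lib/google-fluentd/pos/my-app.pos
  read_from_head true
  tag my-app
</source>
EOF

# Restart the agent to pick up the new source
sudo service google-fluentd restart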

Question: 24 CertyIQ
Your company has a BigQuery data mart that provides analytics information to hundreds of employees. One user
wants to run jobs without interrupting important workloads. This user isn't concerned about the time it takes to run
these jobs. You want to fulfill this request while minimizing cost to the company and the effort required on your
part.
What should you do?

A.Ask the user to run the jobs as batch jobs.


B.Create a separate project for the user to run jobs.
C.Add the user as a job.user role in the existing project.
D.Allow the user to run jobs when important workloads are not running.

Answer: A

Explanation:

Option A makes the most sense: batch queries are queued and run when idle resources are available, and they don't count against the concurrent interactive query limits. B is wrong since a separate project incurs more cost, which is not what the question wants. C is out, as granting roles is not what the question asks for. D is wrong, as it would not minimize your effort.
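
For instance, the user can submit queries at batch priority with the bq CLI (dataset and table names hypothetical):

bq query --batch --use_legacy_sql=false \
  'SELECT department, SUM(amount) AS total
   FROM mydataset.expenses
   GROUP BY department'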

Question: 25 CertyIQ
You want to notify on-call engineers about a service degradation in production while minimizing development time.
What should you do?

A.Use Cloud Function to monitor resources and raise alerts.


B.Use Cloud Pub/Sub to monitor resources and raise alerts.
C.Use Stackdriver Error Reporting to capture errors and raise alerts.
D.Use Stackdriver Monitoring to monitor resources and raise alerts.
Answer: D

Explanation:

D. Error Reporting is not about service degradation; moreover, Error Reporting uses Monitoring to send its alerts (https://cloud.google.com/error-reporting/docs/notifications).

D is correct for monitoring. (Be wary of the answers originally listed by the site; they are frequently wrong.)

Question: 26 CertyIQ
You are writing a single-page web application with a user-interface that communicates with a third-party API for
content using XMLHttpRequest. The data displayed on the UI by the API results is less critical than other data
displayed on the same web page, so it is acceptable for some requests to not have the API data displayed in the UI.
However, calls made to the API should not delay rendering of other parts of the user interface. You want your
application to perform well when the API response is an error or a timeout.
What should you do?

A.Set the asynchronous option for your requests to the API to false and omit the widget displaying the API
results when a timeout or error is encountered.
B.Set the asynchronous option for your request to the API to true and omit the widget displaying the API results
when a timeout or error is encountered.
C.Catch timeout or error exceptions from the API call and keep trying with exponential backoff until the API
response is successful.
D.Catch timeout or error exceptions from the API call and display the error response in the UI widget.

Answer: B

Explanation:

The answer is B. The API calls should not delay rendering, so the requests must be asynchronous; and for the application to perform well when the API returns an error or times out, simply omit the widget.

Question: 27 CertyIQ
You are creating a web application that runs in a Compute Engine instance and writes a file to any user's Google
Drive. You need to configure the application to authenticate to the Google Drive API. What should you do?

A.Use an OAuth Client ID that uses the https://www.googleapis.com/auth/drive.file scope to obtain an access
token for each user.
B.Use an OAuth Client ID with delegated domain-wide authority.
C.Use the App Engine service account and https://www.googleapis.com/auth/drive.file scope to generate a
signed JSON Web Token (JWT).
D.Use the App Engine service account with delegated domain-wide authority.

Answer: A

Explanation:

A. The application needs to write to any user's Drive, not just users within one domain, so each user must grant access via an OAuth client ID with the drive.file scope; domain-wide delegation does not apply.


Question: 28 CertyIQ
You are creating a Google Kubernetes Engine (GKE) cluster and run this command:

The command fails with the error:

You want to resolve the issue. What should you do?

A.Request additional GKE quota in the GCP Console.


B.Request additional Compute Engine quota in the GCP Console.
C.Open a support case to request additional GKE quota.
D.Decouple services in the cluster, and rewrite new clusters to function with fewer cores.

Answer: B

Explanation:

Per https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-architecture: "A cluster typically has one or more nodes, which are the worker machines that run your containerized applications and other workloads. The individual machines are Compute Engine VM instances that GKE creates on your behalf when you create a cluster." The error message mentions "CPU", which refers to Compute Engine quota, so the answer is B.

Question: 29 CertyIQ
You are parsing a log file that contains three columns: a timestamp, an account number (a string), and a transaction
amount (a number). You want to calculate the sum of all transaction amounts for each unique account number
efficiently.
Which data structure should you use?

A.A linked list


B.A hash table
C.A two-dimensional array
D.A comma-delimited string

Answer: B

Explanation:

Use a hash table with the account number as the key and the running sum as the value; the timestamp is useless for this question, so we can safely discard it.
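
As a quick illustration using awk, whose associative arrays are hash tables (the file name and comma-delimited layout are assumptions):

# Column 2 = account number (key), column 3 = amount (summed value)
awk -F',' '{ sum[$2] += $3 } END { for (acct in sum) print acct, sum[acct] }' \
  transactions.log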

Question: 30 CertyIQ
Your company has a BigQuery dataset named "Master" that keeps information about employee travel and
expenses. This information is organized by employee department. That means employees should only be able to
view information for their department. You want to apply a security framework to enforce this requirement with the
minimum number of steps.
What should you do?

A.Create a separate dataset for each department. Create a view with an appropriate WHERE clause to select
records from a particular dataset for the specific department. Authorize this view to access records from your
Master dataset. Give employees the permission to this department-specific dataset.
B.Create a separate dataset for each department. Create a data pipeline for each department to copy
appropriate information from the Master dataset to the specific dataset for the department. Give employees
the permission to this department-specific dataset.
C.Create a dataset named Master dataset. Create a separate view for each department in the Master dataset.
Give employees access to the specific view for their department.
D.Create a dataset named Master dataset. Create a separate table for each department in the Master dataset.
Give employees access to the specific table for their department.

Answer: C

Explanation:

The correct answer is C: authorized views in the Master dataset answer the access requirement with the minimum number of steps. A is eliminated because creating a dataset per department takes more steps.
Question: 31 CertyIQ
You have an application in production. It is deployed on Compute Engine virtual machine instances controlled by a
managed instance group. Traffic is routed to the instances via a HTTP(s) load balancer. Your users are unable to
access your application. You want to implement a monitoring technique to alert you when the application is
unavailable.
Which technique should you choose?

A.Smoke tests
B.Stackdriver uptime checks
C.Cloud Load Balancing - health checks
D.Managed instance group - health checks

Answer: B

Explanation:

B is the correct answer: uptime checks provide a mechanism to run health checks against a URL and alert you when the application is unavailable.

Reference:

https://medium.com/google-cloud/stackdriver-monitoring-automation-part-3-uptime-checks-476b8507f59c

Question: 32 CertyIQ
You are load testing your server application. During the first 30 seconds, you observe that a previously inactive
Cloud Storage bucket is now servicing 2000 write requests per second and 7500 read requests per second. Your
application is now receiving intermittent 5xx and 429 HTTP responses from the Cloud Storage
JSON API as the demand escalates. You want to decrease the failed responses from the Cloud Storage API.
What should you do?

A.Distribute the uploads across a large number of individual storage buckets.


B.Use the XML API instead of the JSON API for interfacing with Cloud Storage.
C.Pass the HTTP response codes back to clients that are invoking the uploads from your application.
D.Limit the upload rate from your application clients so that the dormant bucket's peak request rate is reached
more gradually.

Answer: D

Explanation:

Limit the upload rate from your application clients so that the dormant bucket's peak request rate is reached more gradually. Cloud Storage's ramp-up guidance is to start below 1000 write requests per second (or 5000 reads per second) and to double the request rate no faster than every 20 minutes.

https://cloud.google.com/storage/docs/request-rate#ramp-up

Question: 33 CertyIQ
Your application is controlled by a managed instance group. You want to share a large read-only data set between
all the instances in the managed instance group. You want to ensure that each instance can start quickly and can
access the data set via its filesystem with very low latency. You also want to minimize the total cost of the solution.
What should you do?

A.Move the data to a Cloud Storage bucket, and mount the bucket on the filesystem using Cloud Storage FUSE.
B.Move the data to a Cloud Storage bucket, and copy the data to the boot disk of the instance via a startup
script.
C.Move the data to a Compute Engine persistent disk, and attach the disk in read-only mode to multiple
Compute Engine virtual machine instances.
D.Move the data to a Compute Engine persistent disk, take a snapshot, create multiple disks from the snapshot,
and attach each disk to its own instance.

Answer: C

Explanation:

Per https://cloud.google.com/compute/docs/disks/sharing-disks-between-vms#use-multi-instances, you can share a disk in read-only mode between multiple VMs: sharing static data between multiple VMs from one persistent disk is "less expensive" than replicating your data to unique disks for individual instances. Option A would also work mechanically: Cloud Storage FUSE (https://cloud.google.com/compute/docs/disks/gcs-buckets#mount_bucket) can mount a bucket so it behaves similarly to a persistent disk, even though buckets are object storage (https://github.com/GoogleCloudPlatform/gcsfuse/). But Cloud Storage FUSE has performance issues, namely latency and rate limits, so it does not meet the very-low-latency requirement. The answer is C.
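
A sketch of option C with hypothetical instance and disk names:

# Attach one persistent disk to several instances in read-only mode
gcloud compute instances attach-disk instance-1 \
  --disk=shared-dataset --mode=ro --zone=us-central1-a
gcloud compute instances attach-disk instance-2 \
  --disk=shared-dataset --mode=ro --zone=us-central1-a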

Question: 34 CertyIQ
You are developing an HTTP API hosted on a Compute Engine virtual machine instance that needs to be invoked by
multiple clients within the same Virtual
Private Cloud (VPC). You want clients to be able to get the IP address of the service.
What should you do?

A.Reserve a static external IP address and assign it to an HTTP(S) load balancing service's forwarding rule.
Clients should use this IP address to connect to the service.
B.Reserve a static external IP address and assign it to an HTTP(S) load balancing service's forwarding rule.
Then, define an A record in Cloud DNS. Clients should use the name of the A record to connect to the service.
C.Ensure that clients use Compute Engine internal DNS by connecting to the instance name with the url
https://[INSTANCE_NAME].[ZONE].c.[PROJECT_ID].internal/.
D.Ensure that clients use Compute Engine internal DNS by connecting to the instance name with the url
https://[API_NAME]/[API_VERSION]/.

Answer: C

Explanation:

answer C)"Virtual Private Cloud networks on Google Cloud have an internal DNS service that lets instances in
the same network access each other by using internal DNS names" This name can be used for access:
[INSTANCE_NAME].[ZONE].c.[PROJECT_ID].internal https://cloud.google.com/compute/docs/internal-
dns#access_by_internal_DNS
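
For example (instance, zone, and project names are hypothetical), a client VM in the same VPC could call:

curl http://api-vm.us-central1-a.c.my-project.internal/api/v1/status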

Question: 35 CertyIQ
Your application is logging to Stackdriver. You want to get the count of all requests on all /api/alpha/* endpoints.
What should you do?

A.Add a Stackdriver counter metric for path:/api/alpha/.


B.Add a Stackdriver counter metric for endpoint:/api/alpha/*.
C.Export the logs to Cloud Storage and count lines matching /api/alpha.
D.Export the logs to Cloud Pub/Sub and count lines matching /api/alpha.

Answer: B

Explanation:

B. Create a log-based counter metric filtered on the endpoint path; exporting the logs to Cloud Storage or Pub/Sub just to count matching lines adds needless cost and complexity.
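
A sketch of creating such a counter metric (the metric name is hypothetical; the filter field assumes HTTP request logs):

gcloud logging metrics create api_alpha_requests \
  --description="Requests to /api/alpha/* endpoints" \
  --log-filter='httpRequest.requestUrl:"/api/alpha/"'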

Question: 36 CertyIQ
You want to re-architect a monolithic application so that it follows a microservices model. You want to accomplish
this efficiently while minimizing the impact of this change to the business.
Which approach should you take?

A.Deploy the application to Compute Engine and turn on autoscaling.


B.Replace the application's features with appropriate microservices in phases.
C.Refactor the monolithic application with appropriate microservices in a single effort and deploy it.
D.Build a new application with the appropriate microservices separate from the monolith and replace it when it
is complete.

Answer: B

Explanation:

Migrating a monolithic service is best done feature by feature: replacing each feature with a microservice in phases minimizes the impact on the business.

Question: 37 CertyIQ
Your existing application keeps user state information in a single MySQL database. This state information is very
user-specific and depends heavily on how long a user has been using an application. The MySQL database is
causing challenges to maintain and enhance the schema for various users.
Which storage option should you choose?

A.Cloud SQL
B.Cloud Storage
C.Cloud Spanner
D.Cloud Datastore/Firestore

Answer: D

Explanation:

The question is a bit misleading. If it were asking to keep a MySQL-style relational store, Cloud SQL or Spanner would be the only options. However, since the company wants to move away from a rigid schema and needs a stateful, user-specific store, Cloud Datastore/Firestore is the better fit. Per https://cloud.google.com/datastore/docs/concepts/overview#what_its_good_for, it is good for "user profiles that deliver a customized experience based on the user's past activities and preferences". The answer is D.

Question: 38 CertyIQ
You are building a new API. You want to minimize the cost of storing and reduce the latency of serving images.
Which architecture should you use?

A.App Engine backed by Cloud Storage


B.Compute Engine backed by Persistent Disk
C.Transfer Appliance backed by Cloud Filestore
D.Cloud Content Delivery Network (CDN) backed by Cloud Storage

Answer: D

Explanation:

D. Cloud Content Delivery Network (CDN) backed by Cloud Storage. Cloud CDN uses Google's globally distributed edge points of presence to accelerate content delivery for websites and applications served out of Google Cloud. Backing it with Cloud Storage provides efficient, low-cost storage for the images as well as low latency when serving them. The other options do not offer low latency and cost-effective storage as their primary benefits.

Question: 39 CertyIQ
Your company's development teams want to use Cloud Build in their projects to build and push Docker images to
Container Registry. The operations team requires all Docker images to be published to a centralized, securely
managed Docker registry that the operations team manages.
What should you do?

A.Use Container Registry to create a registry in each development team's project. Configure the Cloud Build
build to push the Docker image to the project's registry. Grant the operations team access to each development
team's registry.
B.Create a separate project for the operations team that has Container Registry configured. Assign appropriate
permissions to the Cloud Build service account in each developer team's project to allow access to the
operation team's registry.
C.Create a separate project for the operations team that has Container Registry configured. Create a Service
Account for each development team and assign the appropriate permissions to allow it access to the operations
team's registry. Store the service account key file in the source code repository and use it to authenticate
against the operations team's registry.
D.Create a separate project for the operations team that has the open source Docker Registry deployed on a
Compute Engine virtual machine instance. Create a username and password for each development team. Store
the username and password in the source code repository and use it to authenticate against the operations
team's Docker registry.

Answer: B

Explanation:

The correct answer is B. Container Registry is a good choice for storing containers in a secure, manageable way. It is possible to have Container Registry in one project and push to it from Cloud Build in another project by adding the appropriate Cloud Build service account as a member on the Cloud Storage bucket used to host the containers, with a role that allows writing objects.

Question: 40 CertyIQ
You are planning to deploy your application in a Google Kubernetes Engine (GKE) cluster. Your application can
scale horizontally, and each instance of your application needs to have a stable network identity and its own
persistent disk.
Which GKE object should you use?

A.Deployment
B.StatefulSet
C.ReplicaSet
D.ReplicaController

Answer: B

Explanation:

Once created, the StatefulSet ensures that the desired number of Pods are running and available at all times.
The StatefulSet automatically replaces Pods that fail or are evicted from their nodes, and automatically
associates new Pods with the storage resources, resource requests and limits, and other configurations
defined in the StatefulSet's Pod specification

Reference:

https://livebook.manning.com/book/kubernetes-in-action/chapter-10/46
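
A minimal StatefulSet sketch (names, image, and sizes are hypothetical); the headless Service gives each Pod a stable network identity, and the volumeClaimTemplates give each replica its own persistent disk:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-app
spec:
  serviceName: my-app          # headless Service for stable per-Pod DNS names
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: app
        image: gcr.io/my-project/my-app:v1
        volumeMounts:
        - name: data
          mountPath: /var/data
  volumeClaimTemplates:        # one persistent disk per replica
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
EOF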

Question: 41 CertyIQ
You are using Cloud Build to build a Docker image. You need to modify the build to execute unit and run integration
tests. When there is a failure, you want the build history to clearly display the stage at which the build failed.
What should you do?

A.Add RUN commands in the Dockerfile to execute unit and integration tests.
B.Create a Cloud Build build config file with a single build step to compile unit and integration tests.
C.Create a Cloud Build build config file that will spawn a separate cloud build pipeline for unit and integration
tests.
D.Create a Cloud Build build config file with separate cloud builder steps to compile and execute unit and
integration tests.
Answer: D

Explanation:

Create a Cloud Build build config file with separate cloud builder steps to compile and execute unit and
integration tests. This is the best option because it allows you to clearly specify and separate the different
stages of the build process (compiling unit tests, executing unit tests, compiling integration tests, executing
integration tests). This makes it easier to understand the build history and identify any failures that may occur.
In addition, using separate build steps allows you to specify different properties (such as timeout values or
environment variables) for each stage of the build process.
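
A hedged sketch of such a config (image name and test commands are hypothetical); each step appears as its own stage in the build history:

cat > cloudbuild.yaml <<'EOF'
steps:
- id: build-image
  name: gcr.io/cloud-builders/docker
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-app', '.']
- id: unit-tests
  name: gcr.io/cloud-builders/docker
  args: ['run', 'gcr.io/$PROJECT_ID/my-app', 'make', 'unit-test']
- id: integration-tests
  name: gcr.io/cloud-builders/docker
  args: ['run', 'gcr.io/$PROJECT_ID/my-app', 'make', 'integration-test']
images: ['gcr.io/$PROJECT_ID/my-app']
EOF

gcloud builds submit --config=cloudbuild.yaml .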

Question: 42 CertyIQ
Your code is running on Cloud Functions in project A. It is supposed to write an object in a Cloud Storage bucket
owned by project B. However, the write call is failing with the error "403 Forbidden".
What should you do to correct the problem?

A.Grant your user account the roles/storage.objectCreator role for the Cloud Storage bucket.
B.Grant your user account the roles/iam.serviceAccountUser role for the project-a@appspot.gserviceaccount.com service account.
C.Grant the project-a@appspot.gserviceaccount.com service account the roles/storage.objectCreator role for the Cloud Storage
bucket.
D.Enable the Cloud Storage API in project B.
D.Enable the Cloud Storage API in project B.

Answer: C

Explanation:

The answer is C: the default service account used by Cloud Functions is PROJECT_ID@appspot.gserviceaccount.com (cf. https://cloud.google.com/functions/docs/concepts/iam#troubleshooting_permission_errors), so that account needs roles/storage.objectCreator on project B's bucket.
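
A sketch of the fix, with hypothetical project and bucket names:

# Grant project A's Cloud Functions default service account permission
# to create objects in project B's bucket
gsutil iam ch \
  serviceAccount:project-a@appspot.gserviceaccount.com:roles/storage.objectCreator \
  gs://project-b-bucket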

Question: 43 CertyIQ
Case study -
This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to
complete each case. However, there may be additional case studies and sections on this exam. You must manage
your time to ensure that you are able to complete all questions included on this exam in the time provided.
To answer the questions included in a case study, you will need to reference information that is provided in the
case study. Case studies might contain exhibits and other resources that provide more information about the
scenario that is described in the case study. Each question is independent of the other questions in this case study.
At the end of this case study, a review screen will appear. This screen allows you to review your answers and to
make changes before you move to the next section of the exam. After you begin a new section, you cannot return
to this section.

To start the case study -


To display the first question in this case study, click the Next button. Use the buttons in the left pane to explore
the content of the case study before you answer the questions. Clicking these buttons displays information such
as business requirements, existing environment, and problem statements. If the case study has an
All Information tab, note that the information displayed is identical to the information displayed on the subsequent
tabs. When you are ready to answer a question, click the Question button to return to the question.

Company Overview -
HipLocal is a community application designed to facilitate communication between people in close proximity. It is
used for event planning and organizing sporting events, and for businesses to connect with their local
communities. HipLocal launched recently in a few neighborhoods in Dallas and is rapidly growing into a global
phenomenon. Its unique style of hyper-local community communication and business outreach is in demand
around the world.

Executive Statement -
We are the number one local community app; it's time to take our local community services global. Our venture
capital investors want to see rapid growth and the same great experience for new local and virtual communities
that come online, whether their members are 10 or 10000 miles away from each other.

Solution Concept -
HipLocal wants to expand their existing service, with updated functionality, in new regions to better serve their
global customers. They want to hire and train a new team to support these regions in their time zones. They will
need to ensure that the application scales smoothly and provides clear uptime data.

Existing Technical Environment -


HipLocal's environment is a mix of on-premises hardware and infrastructure running in Google Cloud Platform. The
HipLocal team understands their application well, but has limited experience in global scale applications. Their
existing technical environment is as follows:
* Existing APIs run on Compute Engine virtual machine instances hosted in GCP.
* State is stored in a single instance MySQL database in GCP.
* Data is exported to an on-premises Teradata/Vertica data warehouse.
* Data analytics is performed in an on-premises Hadoop environment.
* The application has no logging.
* There are basic indicators of uptime; alerts are frequently fired when the APIs are unresponsive.

Business Requirements -
HipLocal's investors want to expand their footprint and support the increase in demand they are seeing. Their
requirements are:
* Expand availability of the application to new regions.
* Increase the number of concurrent users that can be supported.
* Ensure a consistent experience for users when they travel to different regions.
* Obtain user activity metrics to better understand how to monetize their product.
* Ensure compliance with regulations in the new regions (for example, GDPR).
* Reduce infrastructure management time and cost.
* Adopt the Google-recommended practices for cloud computing.

Technical Requirements -
* The application and backend must provide usage metrics and monitoring.
* APIs require strong authentication and authorization.
* Logging must be increased, and data should be stored in a cloud analytics platform.
* Move to serverless architecture to facilitate elastic scaling.
* Provide authorized access to internal apps in a secure manner.
HipLocal's .net-based auth service fails under intermittent load.
What should they do?

A.Use App Engine for autoscaling.


B.Use Cloud Functions for autoscaling.
C.Use a Compute Engine cluster for the service.
D.Use a dedicated Compute Engine virtual machine instance for the service.

Answer: A

Explanation:

A. Use App Engine for autoscaling: it scales automatically with load and matches the requirement to move to a serverless architecture, and the flexible environment can run the .NET-based service.

Question: 44 CertyIQ
Case study -
This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to
complete each case. However, there may be additional case studies and sections on this exam. You must manage
your time to ensure that you are able to complete all questions included on this exam in the time provided.
To answer the questions included in a case study, you will need to reference information that is provided in the
case study. Case studies might contain exhibits and other resources that provide more information about the
scenario that is described in the case study. Each question is independent of the other questions in this case study.
At the end of this case study, a review screen will appear. This screen allows you to review your answers and to
make changes before you move to the next section of the exam. After you begin a new section, you cannot return
to this section.

To start the case study -


To display the first question in this case study, click the Next button. Use the buttons in the left pane to explore
the content of the case study before you answer the questions. Clicking these buttons displays information such
as business requirements, existing environment, and problem statements. If the case study has an
All Information tab, note that the information displayed is identical to the information displayed on the subsequent
tabs. When you are ready to answer a question, click the Question button to return to the question.

Company Overview -
HipLocal is a community application designed to facilitate communication between people in close proximity. It is
used for event planning and organizing sporting events, and for businesses to connect with their local
communities. HipLocal launched recently in a few neighborhoods in Dallas and is rapidly growing into a global
phenomenon. Its unique style of hyper-local community communication and business outreach is in demand
around the world.

Executive Statement -
We are the number one local community app; it's time to take our local community services global. Our venture
capital investors want to see rapid growth and the same great experience for new local and virtual communities
that come online, whether their members are 10 or 10000 miles away from each other.

Solution Concept -
HipLocal wants to expand their existing service, with updated functionality, in new regions to better serve their
global customers. They want to hire and train a new team to support these regions in their time zones. They will
need to ensure that the application scales smoothly and provides clear uptime data.

Existing Technical Environment -


HipLocal's environment is a mix of on-premises hardware and infrastructure running in Google Cloud Platform. The
HipLocal team understands their application well, but has limited experience in global scale applications. Their
existing technical environment is as follows:
* Existing APIs run on Compute Engine virtual machine instances hosted in GCP.
* State is stored in a single instance MySQL database in GCP.
* Data is exported to an on-premises Teradata/Vertica data warehouse.
* Data analytics is performed in an on-premises Hadoop environment.
* The application has no logging.
* There are basic indicators of uptime; alerts are frequently fired when the APIs are unresponsive.

Business Requirements -
HipLocal's investors want to expand their footprint and support the increase in demand they are seeing. Their
requirements are:
* Expand availability of the application to new regions.
* Increase the number of concurrent users that can be supported.
* Ensure a consistent experience for users when they travel to different regions.
* Obtain user activity metrics to better understand how to monetize their product.
* Ensure compliance with regulations in the new regions (for example, GDPR).
* Reduce infrastructure management time and cost.
* Adopt the Google-recommended practices for cloud computing.

Technical Requirements -
* The application and backend must provide usage metrics and monitoring.
* APIs require strong authentication and authorization.
* Logging must be increased, and data should be stored in a cloud analytics platform.
* Move to serverless architecture to facilitate elastic scaling.
* Provide authorized access to internal apps in a secure manner.
HipLocal's APIs are having occasional application failures. They want to collect application information specifically
to troubleshoot the issue. What should they do?

A.Take frequent snapshots of the virtual machines.


B.Install the Cloud Logging agent on the virtual machines.
C.Install the Cloud Monitoring agent on the virtual machines.
D.Use Cloud Trace to look for performance bottlenecks.
Answer: B

Explanation:

The application has no logging, so the first step is to install the Cloud Logging agent so there are logs to study. Cloud Trace is for latency issues, which is not the problem here.

They don't have any logging, so it should be B.

Question: 45 CertyIQ
Case study -
This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to
complete each case. However, there may be additional case studies and sections on this exam. You must manage
your time to ensure that you are able to complete all questions included on this exam in the time provided.
To answer the questions included in a case study, you will need to reference information that is provided in the
case study. Case studies might contain exhibits and other resources that provide more information about the
scenario that is described in the case study. Each question is independent of the other questions in this case study.
At the end of this case study, a review screen will appear. This screen allows you to review your answers and to
make changes before you move to the next section of the exam. After you begin a new section, you cannot return
to this section.

To start the case study -


To display the first question in this case study, click the Next button. Use the buttons in the left pane to explore
the content of the case study before you answer the questions. Clicking these buttons displays information such
as business requirements, existing environment, and problem statements. If the case study has an
All Information tab, note that the information displayed is identical to the information displayed on the subsequent
tabs. When you are ready to answer a question, click the Question button to return to the question.

Company Overview -
HipLocal is a community application designed to facilitate communication between people in close proximity. It is
used for event planning and organizing sporting events, and for businesses to connect with their local
communities. HipLocal launched recently in a few neighborhoods in Dallas and is rapidly growing into a global
phenomenon. Its unique style of hyper-local community communication and business outreach is in demand
around the world.

Executive Statement -
We are the number one local community app; it's time to take our local community services global. Our venture
capital investors want to see rapid growth and the same great experience for new local and virtual communities
that come online, whether their members are 10 or 10000 miles away from each other.

Solution Concept -
HipLocal wants to expand their existing service, with updated functionality, in new regions to better serve their
global customers. They want to hire and train a new team to support these regions in their time zones. They will
need to ensure that the application scales smoothly and provides clear uptime data.

Existing Technical Environment -


HipLocal's environment is a mix of on-premises hardware and infrastructure running in Google Cloud Platform. The
HipLocal team understands their application well, but has limited experience in global scale applications. Their
existing technical environment is as follows:
* Existing APIs run on Compute Engine virtual machine instances hosted in GCP.
* State is stored in a single instance MySQL database in GCP.
* Data is exported to an on-premises Teradata/Vertica data warehouse.
* Data analytics is performed in an on-premises Hadoop environment.
* The application has no logging.
* There are basic indicators of uptime; alerts are frequently fired when the APIs are unresponsive.

Business Requirements -
HipLocal's investors want to expand their footprint and support the increase in demand they are seeing. Their
requirements are:
* Expand availability of the application to new regions.
* Increase the number of concurrent users that can be supported.
* Ensure a consistent experience for users when they travel to different regions.
* Obtain user activity metrics to better understand how to monetize their product.
* Ensure compliance with regulations in the new regions (for example, GDPR).
* Reduce infrastructure management time and cost.
* Adopt the Google-recommended practices for cloud computing.

Technical Requirements -
* The application and backend must provide usage metrics and monitoring.
* APIs require strong authentication and authorization.
* Logging must be increased, and data should be stored in a cloud analytics platform.
* Move to serverless architecture to facilitate elastic scaling.
* Provide authorized access to internal apps in a secure manner.
HipLocal has connected their Hadoop infrastructure to GCP using Cloud Interconnect in order to query data stored
on persistent disks.
Which IP strategy should they use?

A.Create manual subnets.


B.Create an auto mode subnet.
C.Create multiple peered VPCs.
D.Provision a single instance for NAT.

Answer: A

Explanation:

A. You need to take control of IP assignment through manual (custom) subnets, especially when establishing connectivity between on-premises infrastructure and the cloud, to avoid overlapping ranges.

Question: 46 CertyIQ
Case study -
This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to
complete each case. However, there may be additional case studies and sections on this exam. You must manage
your time to ensure that you are able to complete all questions included on this exam in the time provided.
To answer the questions included in a case study, you will need to reference information that is provided in the
case study. Case studies might contain exhibits and other resources that provide more information about the
scenario that is described in the case study. Each question is independent of the other questions in this case study.
At the end of this case study, a review screen will appear. This screen allows you to review your answers and to
make changes before you move to the next section of the exam. After you begin a new section, you cannot return
to this section.

To start the case study -


To display the first question in this case study, click the Next button. Use the buttons in the left pane to explore
the content of the case study before you answer the questions. Clicking these buttons displays information such
as business requirements, existing environment, and problem statements. If the case study has an
All Information tab, note that the information displayed is identical to the information displayed on the subsequent
tabs. When you are ready to answer a question, click the Question button to return to the question.

Company Overview -
HipLocal is a community application designed to facilitate communication between people in close proximity. It is
used for event planning and organizing sporting events, and for businesses to connect with their local
communities. HipLocal launched recently in a few neighborhoods in Dallas and is rapidly growing into a global
phenomenon. Its unique style of hyper-local community communication and business outreach is in demand
around the world.

Executive Statement -
We are the number one local community app; it's time to take our local community services global. Our venture
capital investors want to see rapid growth and the same great experience for new local and virtual communities
that come online, whether their members are 10 or 10000 miles away from each other.

Solution Concept -
HipLocal wants to expand their existing service, with updated functionality, in new regions to better serve their
global customers. They want to hire and train a new team to support these regions in their time zones. They will
need to ensure that the application scales smoothly and provides clear uptime data.

Existing Technical Environment -


HipLocal's environment is a mix of on-premises hardware and infrastructure running in Google Cloud Platform. The
HipLocal team understands their application well, but has limited experience in global scale applications. Their
existing technical environment is as follows:
* Existing APIs run on Compute Engine virtual machine instances hosted in GCP.
* State is stored in a single instance MySQL database in GCP.
* Data is exported to an on-premises Teradata/Vertica data warehouse.
* Data analytics is performed in an on-premises Hadoop environment.
* The application has no logging.
* There are basic indicators of uptime; alerts are frequently fired when the APIs are unresponsive.

Business Requirements -
HipLocal's investors want to expand their footprint and support the increase in demand they are seeing. Their
requirements are:
* Expand availability of the application to new regions.
* Increase the number of concurrent users that can be supported.
* Ensure a consistent experience for users when they travel to different regions.
* Obtain user activity metrics to better understand how to monetize their product.
* Ensure compliance with regulations in the new regions (for example, GDPR).
* Reduce infrastructure management time and cost.
* Adopt the Google-recommended practices for cloud computing.

Technical Requirements -
* The application and backend must provide usage metrics and monitoring.
* APIs require strong authentication and authorization.
* Logging must be increased, and data should be stored in a cloud analytics platform.
* Move to serverless architecture to facilitate elastic scaling.
* Provide authorized access to internal apps in a secure manner.
Which service should HipLocal use to enable access to internal apps?

A.Cloud VPN
B.Cloud Armor
C.Virtual Private Cloud
D.Cloud Identity-Aware Proxy

Answer: D

Explanation:

D. Cloud Identity-Aware Proxy (IAP) provides identity-based, authorized access to internal apps without a VPN, which matches the requirement to "provide authorized access to internal apps in a secure manner."

Reference:
https://cloud.google.com/iap/docs/cloud-iap-for-on-prem-apps-overview
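
As a hedged sketch (the group address and backend service name are assumptions), access to an IAP-protected app can be restricted to a specific group:

gcloud iap web add-iam-policy-binding \
    --resource-type=backend-services \
    --service=internal-app-backend \
    --member=group:internal-users@hiplocal.example \
    --role=roles/iap.httpsResourceAccessor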

Question: 47 CertyIQ
Case study -
This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to
complete each case. However, there may be additional case studies and sections on this exam. You must manage
your time to ensure that you are able to complete all questions included on this exam in the time provided.
To answer the questions included in a case study, you will need to reference information that is provided in the
case study. Case studies might contain exhibits and other resources that provide more information about the
scenario that is described in the case study. Each question is independent of the other questions in this case study.
At the end of this case study, a review screen will appear. This screen allows you to review your answers and to
make changes before you move to the next section of the exam. After you begin a new section, you cannot return
to this section.

To start the case study -


To display the first question in this case study, click the Next button. Use the buttons in the left pane to explore
the content of the case study before you answer the questions. Clicking these buttons displays information such
as business requirements, existing environment, and problem statements. If the case study has an
All Information tab, note that the information displayed is identical to the information displayed on the subsequent
tabs. When you are ready to answer a question, click the Question button to return to the question.

Company Overview -
HipLocal is a community application designed to facilitate communication between people in close proximity. It is
used for event planning and organizing sporting events, and for businesses to connect with their local
communities. HipLocal launched recently in a few neighborhoods in Dallas and is rapidly growing into a global
phenomenon. Its unique style of hyper-local community communication and business outreach is in demand
around the world.

Executive Statement -
We are the number one local community app; it's time to take our local community services global. Our venture
capital investors want to see rapid growth and the same great experience for new local and virtual communities
that come online, whether their members are 10 or 10000 miles away from each other.

Solution Concept -
HipLocal wants to expand their existing service, with updated functionality, in new regions to better serve their
global customers. They want to hire and train a new team to support these regions in their time zones. They will
need to ensure that the application scales smoothly and provides clear uptime data.

Existing Technical Environment -


HipLocal's environment is a mix of on-premises hardware and infrastructure running in Google Cloud Platform. The
HipLocal team understands their application well, but has limited experience in global scale applications. Their
existing technical environment is as follows:
* Existing APIs run on Compute Engine virtual machine instances hosted in GCP.
* State is stored in a single instance MySQL database in GCP.
* Data is exported to an on-premises Teradata/Vertica data warehouse.
* Data analytics is performed in an on-premises Hadoop environment.
* The application has no logging.
* There are basic indicators of uptime; alerts are frequently fired when the APIs are unresponsive.

Business Requirements -
HipLocal's investors want to expand their footprint and support the increase in demand they are seeing. Their
requirements are:
* Expand availability of the application to new regions.
* Increase the number of concurrent users that can be supported.
* Ensure a consistent experience for users when they travel to different regions.
* Obtain user activity metrics to better understand how to monetize their product.
* Ensure compliance with regulations in the new regions (for example, GDPR).
* Reduce infrastructure management time and cost.
* Adopt the Google-recommended practices for cloud computing.

Technical Requirements -
* The application and backend must provide usage metrics and monitoring.
* APIs require strong authentication and authorization.
* Logging must be increased, and data should be stored in a cloud analytics platform.
* Move to serverless architecture to facilitate elastic scaling.
* Provide authorized access to internal apps in a secure manner.
HipLocal wants to reduce the number of on-call engineers and eliminate manual scaling.
Which two services should they choose? (Choose two.)

A.Use Google App Engine services.


B.Use serverless Google Cloud Functions.
C.Use Knative to build and deploy serverless applications.
D.Use Google Kubernetes Engine for automated deployments.
E.Use a large Google Compute Engine cluster for deployments.

Answer: AB

Explanation:

A and B. App Engine and Cloud Functions are fully managed serverless platforms: Google handles provisioning and autoscaling, which removes manual scaling and reduces the on-call burden. Knative (C) and GKE (D) still require operating a Kubernetes cluster, and a large Compute Engine cluster (E) maximizes, rather than minimizes, infrastructure management.
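
As an illustrative sketch (the function name, runtime, and region are assumptions), deploying an HTTP-triggered Cloud Function requires no capacity planning at all:

gcloud functions deploy process-event \
    --runtime=python39 \
    --trigger-http \
    --region=us-central1

Scaling up and down, including to zero, is handled entirely by the platform.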

Question: 48 CertyIQ
Case study -
This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to
complete each case. However, there may be additional case studies and sections on this exam. You must manage
your time to ensure that you are able to complete all questions included on this exam in the time provided.
To answer the questions included in a case study, you will need to reference information that is provided in the
case study. Case studies might contain exhibits and other resources that provide more information about the
scenario that is described in the case study. Each question is independent of the other questions in this case study.
At the end of this case study, a review screen will appear. This screen allows you to review your answers and to
make changes before you move to the next section of the exam. After you begin a new section, you cannot return
to this section.

To start the case study -


To display the first question in this case study, click the Next button. Use the buttons in the left pane to explore
the content of the case study before you answer the questions. Clicking these buttons displays information such
as business requirements, existing environment, and problem statements. If the case study has an
All Information tab, note that the information displayed is identical to the information displayed on the subsequent
tabs. When you are ready to answer a question, click the Question button to return to the question.

Company Overview -
HipLocal is a community application designed to facilitate communication between people in close proximity. It is
used for event planning and organizing sporting events, and for businesses to connect with their local
communities. HipLocal launched recently in a few neighborhoods in Dallas and is rapidly growing into a global
phenomenon. Its unique style of hyper-local community communication and business outreach is in demand
around the world.

Executive Statement -
We are the number one local community app; it's time to take our local community services global. Our venture
capital investors want to see rapid growth and the same great experience for new local and virtual communities
that come online, whether their members are 10 or 10000 miles away from each other.

Solution Concept -
HipLocal wants to expand their existing service, with updated functionality, in new regions to better serve their
global customers. They want to hire and train a new team to support these regions in their time zones. They will
need to ensure that the application scales smoothly and provides clear uptime data.

Existing Technical Environment -


HipLocal's environment is a mix of on-premises hardware and infrastructure running in Google Cloud Platform. The
HipLocal team understands their application well, but has limited experience in global scale applications. Their
existing technical environment is as follows:
* Existing APIs run on Compute Engine virtual machine instances hosted in GCP.
* State is stored in a single instance MySQL database in GCP.
* Data is exported to an on-premises Teradata/Vertica data warehouse.
* Data analytics is performed in an on-premises Hadoop environment.
* The application has no logging.
* There are basic indicators of uptime; alerts are frequently fired when the APIs are unresponsive.

Business Requirements -
HipLocal's investors want to expand their footprint and support the increase in demand they are seeing. Their
requirements are:
* Expand availability of the application to new regions.
* Increase the number of concurrent users that can be supported.
* Ensure a consistent experience for users when they travel to different regions.
* Obtain user activity metrics to better understand how to monetize their product.
* Ensure compliance with regulations in the new regions (for example, GDPR).
* Reduce infrastructure management time and cost.
* Adopt the Google-recommended practices for cloud computing.

Technical Requirements -
* The application and backend must provide usage metrics and monitoring.
* APIs require strong authentication and authorization.
* Logging must be increased, and data should be stored in a cloud analytics platform.
* Move to serverless architecture to facilitate elastic scaling.
* Provide authorized access to internal apps in a secure manner.
In order to meet their business requirements, how should HipLocal store their application state?

A.Use local SSDs to store state.


B.Put a memcache layer in front of MySQL.
C.Move the state storage to Cloud Spanner.
D.Replace the MySQL instance with Cloud SQL.

Answer: C

Explanation:

C. Local SSDs (A) are ephemeral and lose data when the instance stops, so they cannot hold application state. A memcache layer (B) and Cloud SQL (D) are regional solutions that cannot guarantee a consistent, low-latency experience for users traveling between regions. Cloud Spanner supports multi-region configurations with strong consistency, meeting both the scaling and the consistent-experience requirements.
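
As a minimal sketch (the instance name and node count are assumptions; nam-eur-asia1 is one of the available multi-region configurations), a globally replicated Spanner instance could be provisioned with:

gcloud spanner instances create hiplocal-state \
    --config=nam-eur-asia1 \
    --description="HipLocal application state" \
    --nodes=3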

Question: 49 CertyIQ
Case study -
This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to
complete each case. However, there may be additional case studies and sections on this exam. You must manage
your time to ensure that you are able to complete all questions included on this exam in the time provided.
To answer the questions included in a case study, you will need to reference information that is provided in the
case study. Case studies might contain exhibits and other resources that provide more information about the
scenario that is described in the case study. Each question is independent of the other questions in this case study.
At the end of this case study, a review screen will appear. This screen allows you to review your answers and to
make changes before you move to the next section of the exam. After you begin a new section, you cannot return
to this section.

To start the case study -


To display the first question in this case study, click the Next button. Use the buttons in the left pane to explore
the content of the case study before you answer the questions. Clicking these buttons displays information such
as business requirements, existing environment, and problem statements. If the case study has an
All Information tab, note that the information displayed is identical to the information displayed on the subsequent
tabs. When you are ready to answer a question, click the Question button to return to the question.

Company Overview -
HipLocal is a community application designed to facilitate communication between people in close proximity. It is
used for event planning and organizing sporting events, and for businesses to connect with their local
communities. HipLocal launched recently in a few neighborhoods in Dallas and is rapidly growing into a global
phenomenon. Its unique style of hyper-local community communication and business outreach is in demand
around the world.

Executive Statement -
We are the number one local community app; it's time to take our local community services global. Our venture
capital investors want to see rapid growth and the same great experience for new local and virtual communities
that come online, whether their members are 10 or 10000 miles away from each other.

Solution Concept -
HipLocal wants to expand their existing service, with updated functionality, in new regions to better serve their
global customers. They want to hire and train a new team to support these regions in their time zones. They will
need to ensure that the application scales smoothly and provides clear uptime data.

Existing Technical Environment -


HipLocal's environment is a mix of on-premises hardware and infrastructure running in Google Cloud Platform. The
HipLocal team understands their application well, but has limited experience in global scale applications. Their
existing technical environment is as follows:
* Existing APIs run on Compute Engine virtual machine instances hosted in GCP.
* State is stored in a single instance MySQL database in GCP.
* Data is exported to an on-premises Teradata/Vertica data warehouse.
* Data analytics is performed in an on-premises Hadoop environment.
* The application has no logging.
* There are basic indicators of uptime; alerts are frequently fired when the APIs are unresponsive.

Business Requirements -
HipLocal's investors want to expand their footprint and support the increase in demand they are seeing. Their
requirements are:
* Expand availability of the application to new regions.
* Increase the number of concurrent users that can be supported.
* Ensure a consistent experience for users when they travel to different regions.
* Obtain user activity metrics to better understand how to monetize their product.
* Ensure compliance with regulations in the new regions (for example, GDPR).
* Reduce infrastructure management time and cost.
* Adopt the Google-recommended practices for cloud computing.

Technical Requirements -
* The application and backend must provide usage metrics and monitoring.
* APIs require strong authentication and authorization.
* Logging must be increased, and data should be stored in a cloud analytics platform.
* Move to serverless architecture to facilitate elastic scaling.
* Provide authorized access to internal apps in a secure manner.
Which service should HipLocal use for their public APIs?

A.Cloud Armor
B.Cloud Functions
C.Cloud Endpoints
D.Shielded Virtual Machines

Answer: C

Explanation:

C. Cloud Endpoints is an API management layer that supports the strong authentication and authorization (API keys, JWT validation) required for HipLocal's public APIs. Cloud Armor (A) provides DDoS/WAF protection rather than API management, Cloud Functions (B) is a compute platform, and Shielded VMs (D) address boot integrity, not APIs.
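
As a hedged sketch, an Endpoints configuration is an OpenAPI document that can declare API-key security and is then deployed with gcloud (the file name is an assumption):

# Fragment of openapi.yaml (OpenAPI 2.0, as used by Cloud Endpoints)
securityDefinitions:
  api_key:
    type: "apiKey"
    name: "key"
    in: "query"

gcloud endpoints services deploy openapi.yaml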

Question: 50 CertyIQ
Case study -
This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to
complete each case. However, there may be additional case studies and sections on this exam. You must manage
your time to ensure that you are able to complete all questions included on this exam in the time provided.
To answer the questions included in a case study, you will need to reference information that is provided in the
case study. Case studies might contain exhibits and other resources that provide more information about the
scenario that is described in the case study. Each question is independent of the other questions in this case study.
At the end of this case study, a review screen will appear. This screen allows you to review your answers and to
make changes before you move to the next section of the exam. After you begin a new section, you cannot return
to this section.

To start the case study -


To display the first question in this case study, click the Next button. Use the buttons in the left pane to explore
the content of the case study before you answer the questions. Clicking these buttons displays information such
as business requirements, existing environment, and problem statements. If the case study has an
All Information tab, note that the information displayed is identical to the information displayed on the subsequent
tabs. When you are ready to answer a question, click the Question button to return to the question.

Company Overview -
HipLocal is a community application designed to facilitate communication between people in close proximity. It is
used for event planning and organizing sporting events, and for businesses to connect with their local
communities. HipLocal launched recently in a few neighborhoods in Dallas and is rapidly growing into a global
phenomenon. Its unique style of hyper-local community communication and business outreach is in demand
around the world.
Executive Statement -
We are the number one local community app; it's time to take our local community services global. Our venture
capital investors want to see rapid growth and the same great experience for new local and virtual communities
that come online, whether their members are 10 or 10000 miles away from each other.

Solution Concept -
HipLocal wants to expand their existing service, with updated functionality, in new regions to better serve their
global customers. They want to hire and train a new team to support these regions in their time zones. They will
need to ensure that the application scales smoothly and provides clear uptime data.

Existing Technical Environment -


HipLocal's environment is a mix of on-premises hardware and infrastructure running in Google Cloud Platform. The
HipLocal team understands their application well, but has limited experience in global scale applications. Their
existing technical environment is as follows:
* Existing APIs run on Compute Engine virtual machine instances hosted in GCP.
* State is stored in a single instance MySQL database in GCP.
* Data is exported to an on-premises Teradata/Vertica data warehouse.
* Data analytics is performed in an on-premises Hadoop environment.
* The application has no logging.
* There are basic indicators of uptime; alerts are frequently fired when the APIs are unresponsive.

Business Requirements -
HipLocal's investors want to expand their footprint and support the increase in demand they are seeing. Their
requirements are:
* Expand availability of the application to new regions.
* Increase the number of concurrent users that can be supported.
* Ensure a consistent experience for users when they travel to different regions.
* Obtain user activity metrics to better understand how to monetize their product.
* Ensure compliance with regulations in the new regions (for example, GDPR).
* Reduce infrastructure management time and cost.
* Adopt the Google-recommended practices for cloud computing.

Technical Requirements -
* The application and backend must provide usage metrics and monitoring.
* APIs require strong authentication and authorization.
* Logging must be increased, and data should be stored in a cloud analytics platform.
* Move to serverless architecture to facilitate elastic scaling.
* Provide authorized access to internal apps in a secure manner.
HipLocal wants to improve the resilience of their MySQL deployment, while also meeting their business and
technical requirements.
Which configuration should they choose?

A.Use the current single instance MySQL on Compute Engine and several read-only MySQL servers on
Compute Engine.
B.Use the current single instance MySQL on Compute Engine, and replicate the data to Cloud SQL in an
external master configuration.
C.Replace the current single instance MySQL instance with Cloud SQL, and configure high availability.
D.Replace the current single instance MySQL instance with Cloud SQL, and Google provides redundancy
without further configuration.

Answer: C

Explanation:

C. Replacing the self-managed MySQL instance with Cloud SQL reduces management overhead, but high availability is not enabled by default (which rules out D): the instance must be explicitly configured with a standby in another zone. Options A and B keep the self-managed single instance in the critical path.
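
As an illustrative sketch (the instance name, tier, and region are assumptions), a regional (high-availability) Cloud SQL instance can be created like this:

gcloud sql instances create hiplocal-mysql \
    --database-version=MYSQL_8_0 \
    --tier=db-n1-standard-2 \
    --region=us-central1 \
    --availability-type=REGIONAL   # provisions a standby in a second zone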

Question: 51 CertyIQ
Your application is running in multiple Google Kubernetes Engine clusters. It is managed by a Deployment in each
cluster. The Deployment has created multiple replicas of your Pod in each cluster. You want to view the logs sent
to stdout for all of the replicas in your Deployment in all clusters.
Which command should you use?

A.kubectl logs [PARAM]


B.gcloud logging read [PARAM]
C.kubectl exec "it [PARAM] journalctl
D.gcloud compute ssh [PARAM] "-command= sudo journalctl

Answer: B

Explanation:

B. Use gcloud logging read. Because the Deployment spans multiple clusters, Cloud Logging is the only place where the stdout logs of all replicas are aggregated; the query can filter on cluster, namespace, pod, and container labels. kubectl logs (A) reads from a single cluster (and by default a single pod) at a time, and the journalctl approaches (C, D) inspect one node each.
https://cloud.google.com/stackdriver/docs/solutions/gke/using-logs
(Note that kubectl log output for GKE structured logs can be hard to read as raw JSON lines; see https://medium.com/google-cloud/display-gke-logs-in-a-text-format-with-kubectl-db0169be0282.)
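
As a hedged sketch (the container name is an assumption), the aggregated stdout of every replica can be read with a filter such as:

gcloud logging read \
    'resource.type="k8s_container" AND resource.labels.container_name="my-app"' \
    --limit=100

Adding resource.labels.cluster_name to the filter narrows the output to a single cluster when needed.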

Question: 52 CertyIQ
You are using Cloud Build to create a new Docker image on each source code commit to a Cloud Source
Repositories repository. Your application is built on every commit to the master branch. You want to release
specific commits made to the master branch in an automated method.
What should you do?

A.Manually trigger the build for new releases.


B.Create a build trigger on a Git tag pattern. Use a Git tag convention for new releases.
C.Create a build trigger on a Git branch name pattern. Use a Git branch naming convention for new releases.
D.Commit your source code to a second Cloud Source Repositories repository with a second Cloud Build trigger.
Use this repository for new releases only.

Answer: B

Explanation:

B. A trigger on a Git tag pattern lets you release specific commits in an automated way: tagging a commit on the master branch (for example, v1.2.0) fires the release build only for that commit. A branch-name pattern (C) does not identify individual commits on master, manual triggering (A) is not automated, and a second repository (D) adds needless duplication.
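
As an illustrative sketch (the repository name and tag convention are assumptions; older gcloud releases expose this under the beta component), such a trigger can be created from the CLI:

gcloud builds triggers create cloud-source-repositories \
    --repo=my-app \
    --tag-pattern="v.*" \
    --build-config=cloudbuild.yaml

Pushing a tag like v1.2.0 to the repository then starts the release build for exactly that commit.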

Question: 53 CertyIQ
You are designing a schema for a table that will be moved from MySQL to Cloud Bigtable. The MySQL table is as
follows:
How should you design a row key for Cloud Bigtable for this table?

A.Set Account_id as a key.


B.Set Account_id_Event_timestamp as a key.
C.Set Event_timestamp_Account_id as a key.
D.Set Event_timestamp as a key.

Answer: B

Explanation:

B. Prefixing the row key with a high-cardinality value such as Account_id and appending Event_timestamp (for example, account123#1591012800) avoids write hotspots while still allowing efficient per-account time-range scans. From the schema-design guidance: "Row keys that start with a timestamp ... cause sequential writes to be pushed onto a single node, creating a hotspot. If you put a timestamp in a row key, precede it with a high-cardinality value like a user ID to avoid hotspotting."
https://cloud.google.com/bigtable/docs/schema-design#row-keys

Question: 54 CertyIQ
You want to view the memory usage of your application deployed on Compute Engine.
What should you do?

A.Install the Stackdriver Client Library.


B.Install the Stackdriver Monitoring Agent.
C.Use the Stackdriver Metrics Explorer.
D.Use the Google Cloud Platform Console.

Answer: B

Explanation:

B. By default, Compute Engine does not collect memory metrics; the Monitoring agent must be installed on the instance. https://cloud.google.com/monitoring/api/metrics_agent#agent-memory
To confirm, open Console > Operations > Monitoring > Dashboards > VM Instances > Memory tab for a running VM: without the agent it shows "No agents detected. Monitoring agents collect memory metrics, disk metrics, and more."
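
As a hedged sketch for a Debian/Ubuntu instance (the newer Ops Agent has since superseded this legacy agent), the Monitoring agent could be installed with:

curl -sSO https://dl.google.com/cloudagents/add-monitoring-agent-repo.sh
sudo bash add-monitoring-agent-repo.sh
sudo apt-get update
sudo apt-get install -y stackdriver-agent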

Question: 55 CertyIQ
You have an analytics application that runs hundreds of queries on BigQuery every few minutes using BigQuery
API. You want to find out how much time these queries take to execute.
What should you do?

A.Use Stackdriver Monitoring to plot slot usage.


B.Use Stackdriver Trace to plot API execution time.
C.Use Stackdriver Trace to plot query execution time.
D.Use Stackdriver Monitoring to plot query execution times.

Answer: D

Explanation:

D. Stackdriver Monitoring exposes BigQuery metrics such as bigquery.googleapis.com/query/execution_times, which can be plotted over time. Stackdriver Trace (B, C) instruments your own application's request handling rather than BigQuery's internal query execution, and slot usage (A) measures capacity consumption, not duration.

https://cloud.google.com/bigquery/docs/monitoring

Question: 56 CertyIQ
You are designing a schema for a Cloud Spanner customer database. You want to store a phone number array field
in a customer table. You also want to allow users to search customers by phone number.
How should you design this schema?

A.Create a table named Customers. Add an Array field in a table that will hold phone numbers for the customer.
B.Create a table named Customers. Create a table named Phones. Add a CustomerId field in the Phones table to
find the CustomerId from a phone number.
C.Create a table named Customers. Add an Array field in a table that will hold phone numbers for the customer.
Create a secondary index on the Array field.
D.Create a table named Customers as a parent table. Create a table named Phones, and interleave this table
into the Customer table. Create an index on the phone number field in the Phones table.

Answer: D

Explanation:

D. Interleaving a Phones child table in the Customers parent table co-locates each customer's phone rows with the customer row, and a secondary index on the phone number field supports lookup by phone number. Secondary indexes cannot be created on ARRAY columns, which rules out C, and scanning arrays without an index (A) would be slow; a plain foreign-key-style table (B) forgoes the locality benefits of interleaving.

Question: 57 CertyIQ
You are deploying a single website on App Engine that needs to be accessible via the URL
http://www.altostrat.com/.
What should you do?

A.Verify domain ownership with Webmaster Central. Create a DNS CNAME record to point to the App Engine
canonical name ghs.googlehosted.com.
B.Verify domain ownership with Webmaster Central. Define an A record pointing to the single global App
Engine IP address.
C.Define a mapping in dispatch.yaml to point the domain www.altostrat.com to your App Engine service. Create
a DNS CNAME record to point to the App Engine canonical name ghs.googlehosted.com.
D.Define a mapping in dispatch.yaml to point the domain www.altostrat.com to your App Engine service. Define
an A record pointing to the single global App Engine IP address.

Answer: A

Explanation:
A. With a single service there is nothing to route, so dispatch.yaml (C, D) is unnecessary: verify domain ownership with Webmaster Central, then create a CNAME record pointing www.altostrat.com to ghs.googlehosted.com. App Engine does not expose a single global IP address, so an A record (B, D) is not an option.

Reference:
https://cloud.google.com/appengine/docs/flexible/dotnet/mapping-custom-domains
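
As a hedged sketch, after ownership verification the mapping can also be created from the CLI:

gcloud app domain-mappings create 'www.altostrat.com'

The command output lists the DNS resource records (including the ghs.googlehosted.com CNAME) to configure at the registrar.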

Question: 58 CertyIQ
You are running an application on App Engine that you inherited. You want to find out whether the application is
using insecure binaries or is vulnerable to XSS attacks.
Which service should you use?

A.Cloud Armor
B.Stackdriver Debugger
C.Cloud Security Scanner
D.Stackdriver Error Reporting

Answer: C

Explanation:

https://cloud.google.com/appengine/docs/standard/python/application-security:"The Google Cloud Web


Security Scanner discovers vulnerabilities by crawling your App Engine app, following all that links within the
scope of your starting URLs, and attempting to exercise as many user inputs and event handlers as
possible."https://cloud.google.com/security-command-center/docs/concepts-web-security-scanner-
overview:"Web Security Scanner custom scans provide granular information about application vulnerability
findings, like outdated libraries, cross-site scripting, or use of mixed content"C is correct

Reference:

https://cloud.google.com/security-scanner

Question: 59 CertyIQ
You are working on a social media application. You plan to add a feature that allows users to upload images. These images will be 2 MB to 1 GB in size. You want to minimize the infrastructure operations overhead for this feature.
What should you do?

A.Change the application to accept images directly and store them in the database that stores other user
information.
B.Change the application to create signed URLs for Cloud Storage. Transfer these signed URLs to the client
application to upload images to Cloud Storage.
C.Set up a web server on GCP to accept user images and create a file store to keep uploaded files. Change the
application to retrieve images from the file store.
D.Create a separate bucket for each user in Cloud Storage. Assign a separate service account to allow write
access on each bucket. Transfer service account credentials to the client application based on user information.
The application uses this service account to upload images to Cloud Storage.

Answer: B

Explanation:

B. Signed URLs let clients upload directly to Cloud Storage, so large files never transit your own servers and no credentials are distributed to clients (D); running your own upload server (C) adds operations overhead, and storing 1 GB blobs in the user database (A) is impractical.

Reference:
https://cloud.google.com/blog/products/storage-data-transfer/uploading-images-directly-to-cloud-storage-by-using-signed-url
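
As an illustrative sketch (the key file, bucket, and object names are assumptions), a short-lived upload URL can be generated server-side and handed to the client:

gsutil signurl -m PUT -d 10m service-account-key.json gs://user-images/uploads/photo-123.jpg

The client then performs an HTTP PUT of the image bytes to the returned URL; no Google credentials ever reach the client.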
Question: 60 CertyIQ
Your application is built as a custom machine image. You have multiple unique deployments of the machine image.
Each deployment is a separate managed instance group with its own template. Each deployment requires a unique
set of configuration values. You want to provide these unique values to each deployment but use the same custom
machine image in all deployments. You want to use out-of-the-box features of Compute Engine.
What should you do?

A.Place the unique configuration values in the persistent disk.


B.Place the unique configuration values in a Cloud Bigtable table.
C.Place the unique configuration values in the instance template startup script.
D.Place the unique configuration values in the instance template instance metadata.

Answer: D

Explanation:

D. Instance metadata is key-value data attached to the instance template: each deployment's template can carry its own values while all templates reference the same machine image, and the application reads those values from the metadata server at startup. Persistent disks (A) couple configuration to storage devices, Cloud Bigtable (B) is an external database rather than an out-of-the-box Compute Engine feature, and embedding values in the startup script (C) mixes configuration with code (startup scripts are themselves delivered through metadata).
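
As a hedged sketch (the template, image, and metadata key names are assumptions), each deployment's template passes its own values while sharing the image:

gcloud compute instance-templates create app-template-eu \
    --image=custom-app-image --image-project=my-project \
    --metadata=region-config=eu,max-workers=10

# Inside the instance, the application reads its configuration:
curl -H "Metadata-Flavor: Google" \
    "http://metadata.google.internal/computeMetadata/v1/instance/attributes/region-config"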

Question: 61 CertyIQ
Your application performs well when tested locally, but it runs significantly slower after you deploy it to a
Compute Engine instance. You need to diagnose the problem. What should you do?

A.File a ticket with Cloud Support indicating that the application performs faster locally.
B.Use Cloud Debugger snapshots to look at a point-in-time execution of the application.
C.Use Cloud Profiler to determine which functions within the application take the longest amount of time.
D.Add logging commands to the application and use Cloud Logging to check where the latency problem occurs.

Answer: C

Explanation:

C. Cloud Profiler reports how much time each function consumes, with historical data, which is exactly what is needed to find where the deployed application spends its time. Filing a support ticket because "it works locally" (A) is not a supportable case, Debugger snapshots (B) show a single point-in-time state rather than where time is spent, and manual logging (D) works but requires far more effort than Profiler.

Question: 62 CertyIQ
You have an application running in App Engine. Your application is instrumented with Stackdriver Trace. The
/product-details request reports details about four known unique products at /sku-details as shown below. You
want to reduce the time it takes for the request to complete.
What should you do?

A.Increase the size of the instance class.


B.Change the Persistent Disk type to SSD.
C.Change /product-details to perform the requests in parallel.
D.Store the /sku-details information in a database, and replace the webservice call with a database query.

Answer: C

Explanation:

C. The trace shows /product-details calling /sku-details for the four products sequentially; issuing those four requests in parallel cuts the overall latency to roughly that of the slowest single call. A bigger instance class (A) or SSD disks (B) do not address the sequential fan-out, and moving the data into a database (D) adds complexity without fixing the call pattern.

Question: 63 CertyIQ
Your company has a data warehouse that keeps your application information in BigQuery. The BigQuery data
warehouse keeps 2 PBs of user data. Recently, your company expanded your user base to include EU users and
needs to comply with these requirements:
✑ Your company must be able to delete all user account information upon user request.
✑ All EU user data must be stored in a single region specifically for EU users.
Which two actions should you take? (Choose two.)

A.Use BigQuery federated queries to query data from Cloud Storage.


B.Create a dataset in the EU region that will keep information about EU users only.
C.Create a Cloud Storage bucket in the EU region to store information for EU users only.
D.Re-upload your data using to a Cloud Dataflow pipeline by filtering your user records out.
E.Use DML statements in BigQuery to update/delete user records based on their requests.

Answer: BE
Explanation:

B & E. Data is already stored in BigQuery, I do not see any reason to have anything to do with Cloud Storage.
Also, BigQuery allows DML to do updates and deletes. So I would choose B & E
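
As an illustrative sketch (the project, dataset, table, and user ID are assumptions):

# Create an EU-located dataset for EU user data
bq --location=EU mk --dataset my-project:eu_user_data

# Delete a user's records upon request
bq query --use_legacy_sql=false \
    'DELETE FROM `my-project.eu_user_data.accounts` WHERE user_id = "12345"'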

Question: 64 CertyIQ
Your App Engine standard configuration is as follows:
service: production
instance_class: B1
You want to limit the application to 5 instances.
Which code snippet should you include in your configuration?

A.manual_scaling: instances: 5 min_pending_latency: 30ms


B.manual_scaling: max_instances: 5 idle_timeout: 10m
C.basic_scaling: instances: 5 min_pending_latency: 30ms
D.basic_scaling: max_instances: 5 idle_timeout: 10m

Answer: D

Explanation:

D. With the B1 (basic) instance class, the service uses basic scaling, whose configuration keys are max_instances and idle_timeout. manual_scaling (A, B) takes a fixed instances count rather than max_instances, and min_pending_latency (A, C) is an automatic-scaling setting, which B-class instances do not support.

https://cloud.google.com/appengine/docs/legacy/standard/python/how-instances-are-managed#scaling_types
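
As a hedged sketch, the complete app.yaml would then look like:

service: production
instance_class: B1
basic_scaling:
  max_instances: 5
  idle_timeout: 10m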

Question: 65 CertyIQ
Your analytics system executes queries against a BigQuery dataset. The SQL query is executed in batch and
passes the contents of a SQL file to the BigQuery
CLI. Then it redirects the BigQuery CLI output to another process. However, you are getting a permission error
from the BigQuery CLI when the queries are executed.
You want to resolve the issue. What should you do?

A.Grant the service account BigQuery Data Viewer and BigQuery Job User roles.
B.Grant the service account BigQuery Data Editor and BigQuery Data Viewer roles.
C.Create a view in BigQuery from the SQL query and SELECT* from the view in the CLI.
D.Create a new dataset in BigQuery, and copy the source table to the new dataset Query the new dataset and
table from the CLI.

Answer: A

Explanation:

A. The service account needs BigQuery Job User to run query jobs and BigQuery Data Viewer to read the tables being queried. Data Editor (B) grants write access that querying does not require, and neither a view (C) nor a copied dataset (D) changes the missing job-execution permission.
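
As an illustrative sketch (the project and service account names are assumptions), the two roles can be granted like this:

gcloud projects add-iam-policy-binding my-project \
    --member="serviceAccount:analytics@my-project.iam.gserviceaccount.com" \
    --role="roles/bigquery.jobUser"
gcloud projects add-iam-policy-binding my-project \
    --member="serviceAccount:analytics@my-project.iam.gserviceaccount.com" \
    --role="roles/bigquery.dataViewer"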

Question: 66 CertyIQ
Your application is running on Compute Engine and is showing sustained failures for a small number of requests.
You have narrowed the cause down to a single
Compute Engine instance, but the instance is unresponsive to SSH.
What should you do next?

A.Reboot the machine.


B.Enable and check the serial port output.
C.Delete the machine and create a new one.
D.Take a snapshot of the disk and attach it to a new machine.

Answer: B

Explanation:

B. The serial port output is a log of kernel and system messages that can be read even when the instance does not respond to SSH, so it is the right first diagnostic step. Rebooting (A) or deleting (C) the machine destroys the evidence, and snapshotting the disk (D) is a heavier recovery step to try after diagnosis.
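
As a hedged sketch (the instance name and zone are assumptions), the output can be retrieved without SSH:

gcloud compute instances get-serial-port-output failing-instance \
    --zone=us-central1-a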

Question: 67 CertyIQ
You configured your Compute Engine instance group to scale automatically according to overall CPU usage.
However, your application's response latency increases sharply before the cluster has finished adding up
instances. You want to provide a more consistent latency experience for your end users by changing the
configuration of the instance group autoscaler.
Which two configuration changes should you make? (Choose two.)

A.Add the label AUTOSCALE to the instance group template.


B.Decrease the cool-down period for instances added to the group.
C.Increase the target CPU usage for the instance group autoscaler.
D.Decrease the target CPU usage for the instance group autoscaler.
E.Remove the health-check for individual VMs in the instance group.

Answer: BD

Explanation:

B and D. Decreasing the cool-down period lets the autoscaler act on new instances' metrics sooner, and decreasing the target CPU utilization makes scaling start earlier, before latency degrades. A label (A) has no effect on scaling, increasing the CPU target (C) delays scaling further, and removing health checks (E) hides failures rather than fixing latency.
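
As an illustrative sketch (the group name, zone, and thresholds are assumptions), both changes can be applied in one command:

gcloud compute instance-groups managed set-autoscaling web-mig \
    --zone=us-central1-a \
    --max-num-replicas=20 \
    --target-cpu-utilization=0.6 \
    --cool-down-period=60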

Question: 68 CertyIQ
You have an application controlled by a managed instance group. When you deploy a new version of the
application, costs should be minimized and the number of instances should not increase. You want to ensure that,
when each new instance is created, the deployment only continues if the new instance is healthy.
What should you do?

A.Perform a rolling-action with maxSurge set to 1, maxUnavailable set to 0.


B.Perform a rolling-action with maxSurge set to 0, maxUnavailable set to 1
C.Perform a rolling-action with maxHealthy set to 1, maxUnhealthy set to 0.
D.Perform a rolling-action with maxHealthy set to 0, maxUnhealthy set to 1.

Answer: B

Explanation:

B. maxSurge=0 means no additional instances are created during the update, so cost and instance count never rise; maxUnavailable=1 replaces one instance at a time, and the rolling action proceeds only once each replacement instance passes health checks. maxHealthy and maxUnhealthy (C, D) are not real rolling-update parameters.
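
As a hedged sketch (the group, template, and zone names are assumptions):

gcloud compute instance-groups managed rolling-action start-update web-mig \
    --version=template=app-template-v2 \
    --max-surge=0 \
    --max-unavailable=1 \
    --zone=us-central1-a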

Question: 69 CertyIQ
Your application requires service accounts to be authenticated to GCP products via credentials stored on its host
Compute Engine virtual machine instances. You want to distribute these credentials to the host instances as
securely as possible.
What should you do?

A.Use HTTP signed URLs to securely provide access to the required resources.
B.Use the instance's service account Application Default Credentials to authenticate to the required resources.
C.Generate a P12 file from the GCP Console after the instance is deployed, and copy the credentials to the host
instance before starting the application.
D.Commit the credential JSON file into your application's source repository, and have your CI/CD process
package it with the software that is deployed to the instance.

Answer: B

Explanation:

B. Application Default Credentials (ADC) let code on the instance authenticate as the attached service account via the metadata server, so no key material is ever stored on the VM or shipped over the network; this eliminates the risk of the credentials being compromised or exposed. Signed URLs (A) address object access, not service authentication, while P12 files (C) and committed JSON keys (D) both expose long-lived credentials that can leak.

Reference:

https://cloud.google.com/compute/docs/api/how-tos/authorization
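
As an illustrative sketch (the instance and service account names are assumptions), attaching the service account at instance creation is all the "distribution" that is needed:

gcloud compute instances create app-vm \
    --service-account=app-sa@my-project.iam.gserviceaccount.com \
    --scopes=cloud-platform

Client libraries on the VM then pick up ADC automatically from the metadata server.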

Question: 70 CertyIQ
Your application is deployed in a Google Kubernetes Engine (GKE) cluster. You want to expose this application
publicly behind a Cloud Load Balancing HTTP(S) load balancer.
What should you do?

A.Configure a GKE Ingress resource.


B.Configure a GKE Service resource.
C.Configure a GKE Ingress resource with type: LoadBalancer.
D.Configure a GKE Service resource with type: LoadBalancer.

Answer: A

Explanation:

A. Google Kubernetes Engine (GKE) provides a built-in, managed Ingress controller that implements Ingress resources as Google Cloud HTTP(S) load balancers, which is exactly what is asked for. A Service of type LoadBalancer (D) provisions a network (layer 4) load balancer, not HTTP(S), and type: LoadBalancer is not a valid field on an Ingress resource (C).

Reference:

https://cloud.google.com/kubernetes-engine/docs/concepts/ingress
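
As a hedged sketch (resource names are assumptions; the referenced Service must be reachable by the controller, e.g. type NodePort or container-native NEGs, and the API version depends on your cluster version), a minimal Ingress looks like:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
spec:
  defaultBackend:
    service:
      name: my-app-service
      port:
        number: 80

Applying this manifest causes GKE to provision an external HTTP(S) load balancer pointing at the service.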

Question: 71 CertyIQ
Your company is planning to migrate their on-premises Hadoop environment to the cloud. Increasing storage cost
and maintenance of data stored in HDFS is a major concern for your company. You also want to make minimal
changes to existing data analytics jobs and existing architecture.
How should you proceed with the migration?

A.Migrate your data stored in Hadoop to BigQuery. Change your jobs to source their information from BigQuery
instead of the on-premises Hadoop environment.
B.Create Compute Engine instances with HDD instead of SSD to save costs. Then perform a full migration of
your existing environment into the new one in Compute Engine instances.
C.Create a Cloud Dataproc cluster on Google Cloud Platform, and then migrate your Hadoop environment to the
new Cloud Dataproc cluster. Move your HDFS data into larger HDD disks to save on storage costs.
D.Create a Cloud Dataproc cluster on Google Cloud Platform, and then migrate your Hadoop code objects to
the new cluster. Move your data to Cloud Storage and leverage the Cloud Dataproc connector to run jobs on
that data.

Answer: D

Explanation:

D. From the migration guide: "Keeping your data in a persistent HDFS cluster using Dataproc is more expensive than storing your data in Cloud Storage, which is what we recommend ... Keeping data in an HDFS cluster also limits your ability to use your data with other Google Cloud products." And: "You can use Dataproc to run most of your existing jobs with minimal alteration, so you don't need to move away from all of the Hadoop tools you already know." Moving the data to Cloud Storage and running jobs on Dataproc through the Cloud Storage connector therefore cuts storage cost while minimizing changes to existing jobs.

Reference:

https://cloud.google.com/architecture/hadoop/hadoop-gcp-migration-overview
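
As a hedged sketch (paths and bucket names are assumptions, and the on-premises cluster needs the Cloud Storage connector installed), HDFS data can be copied to Cloud Storage with DistCp, after which existing jobs simply switch hdfs:// paths for gs:// paths:

hadoop distcp hdfs:///user/analytics/events gs://hiplocal-hadoop-data/events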

Question: 72 CertyIQ
Your data is stored in Cloud Storage buckets. Fellow developers have reported that data downloaded from Cloud
Storage is resulting in slow API performance.
You want to research the issue to provide details to the GCP support team.
Which command should you run?
A.gsutil test "o output.json gs://my-bucket
B.gsutil perfdiag "o output.json gs://my-bucket
C.gcloud compute scp example-instance:~/test-data "o output.json gs://my-bucket
D.gcloud services test "o output.json gs://my-bucket

Answer: B

Explanation:

B. gsutil perfdiag -o output.json gs://my-bucket
The gsutil perfdiag command diagnoses Cloud Storage performance by running download, upload, and metadata tests; the -o flag writes the results to a JSON file that can be handed to the GCP support team to help them investigate the issue.

Reference:

https://groups.google.com/forum/#!topic/gce-discussion/xBl9Jq5HDsY

Question: 73 CertyIQ
You are using Cloud Build build to promote a Docker image to Development, Test, and Production environments.
You need to ensure that the same Docker image is deployed to each of these environments.
How should you identify the Docker image in your build?

A.Use the latest Docker image tag.


B.Use a unique Docker image name.
C.Use the digest of the Docker image.
D.Use a semantic version Docker image tag.

Answer: C

Explanation:

C. The digest is a content-addressed hash of the image, so referencing it guarantees that the exact same image is deployed to Development, Test, and Production. Tags such as latest (A) or a semantic version (D) are mutable pointers that can be overwritten between deployments, and a unique image name (B) is harder to manage and still relies on mutable tags.
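
As an illustrative sketch (the image path is an assumption), the digest can be captured once and then used for every environment:

gcloud container images describe gcr.io/my-project/my-app:latest \
    --format='value(image_summary.digest)'
# -> sha256:3f5a...   deploy everywhere as:
#    gcr.io/my-project/my-app@sha256:3f5a...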

Question: 74 CertyIQ
Your company has created an application that uploads a report to a Cloud Storage bucket. When the report is
uploaded to the bucket, you want to publish a message to a Cloud Pub/Sub topic. You want to implement a solution
that will take a small amount to effort to implement.
What should you do?

A.Configure the Cloud Storage bucket to trigger Cloud Pub/Sub notifications when objects are modified.
B.Create an App Engine application to receive the file; when it is received, publish a message to the Cloud
Pub/Sub topic.
C.Create a Cloud Function that is triggered by the Cloud Storage bucket. In the Cloud Function, publish a
message to the Cloud Pub/Sub topic.
D.Create an application deployed in a Google Kubernetes Engine cluster to receive the file; when it is received,
publish a message to the Cloud Pub/Sub topic.

Answer: A

Explanation:

A. Cloud Storage can publish Pub/Sub notifications natively whenever objects in a bucket are created or changed, so no code has to be written or deployed; that makes it the least-effort option. A Cloud Function (C) also works but adds a component that merely re-publishes a message the bucket can already send itself.

https://cloud.google.com/storage/docs/reporting-changes#enabling
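
As a hedged sketch (the topic and bucket names are assumptions), the notification is a single command:

gsutil notification create -t report-uploads -f json \
    -e OBJECT_FINALIZE gs://report-bucket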

Question: 75 CertyIQ
Your teammate has asked you to review the code below, which is adding a credit to an account balance in Cloud
Datastore.
Which improvement should you suggest your teammate make?

A.Get the entity with an ancestor query.


B.Get and put the entity in a transaction.
C.Use a strongly consistent transactional database.
D.Don't return the account entity from the function.

Answer: B

Explanation:

B. From the Datastore documentation on transactions: "This requires a transaction because the value of balance in an entity may be updated by another user after this code fetches the object, but before it saves the modified object. Without a transaction, the user's request uses the value of balance prior to the other user's update, and the save overwrites the new value. With a transaction, the application is told about the other user's update."
https://cloud.google.com/datastore/docs/concepts/transactions#uses_for_transactions
Question: 76 CertyIQ
Your company stores their source code in a Cloud Source Repositories repository. Your company wants to build
and test their code on each source code commit to the repository and requires a solution that is managed and has
minimal operations overhead.
Which method should they use?

A.Use Cloud Build with a trigger configured for each source code commit.
B.Use Jenkins deployed via the Google Cloud Platform Marketplace, configured to watch for source code
commits.
C.Use a Compute Engine virtual machine instance with an open source continuous integration tool, configured
to watch for source code commits.
D.Use a source code commit trigger to push a message to a Cloud Pub/Sub topic that triggers an App Engine
service to build the source code.

Answer: A

Explanation:

A. Use Cloud Build with a trigger configured for each source code commit.
Cloud Build is a fully managed service for building, testing, and deploying software quickly. It integrates with Cloud Source Repositories and can be triggered by source code commits, which makes it an ideal solution for building and testing code on each commit with minimal operations overhead, since Google manages the service.
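
As a hedged sketch (the builder images and step commands are assumptions about the codebase), the trigger would run a cloudbuild.yaml such as:

steps:
- name: 'gcr.io/cloud-builders/npm'
  args: ['test']
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/app:$COMMIT_SHA', '.']
images: ['gcr.io/$PROJECT_ID/app:$COMMIT_SHA']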

Question: 77 CertyIQ
You are writing a Compute Engine hosted application in project A that needs to securely authenticate to a Cloud
Pub/Sub topic in project B.
What should you do?

A.Configure the instances with a service account owned by project B. Add the service account as a Cloud
Pub/Sub publisher to project A.
B.Configure the instances with a service account owned by project A. Add the service account as a publisher on
the topic.
C.Configure Application Default Credentials to use the private key of a service account owned by project B. Add
the service account as a Cloud Pub/Sub publisher to project A.
D.Configure Application Default Credentials to use the private key of a service account owned by project A. Add
the service account as a publisher on the topic

Answer: B

Explanation:

B. The Compute Engine instances in project A run as a service account owned by project A; that service account is then granted the publisher role on the topic in project B. From the access-control documentation: "For example, suppose a service account in Cloud Project A wants to publish messages to a topic in Cloud Project B. You could accomplish this by granting the service account Edit permission in Cloud Project B." Exporting private keys (C, D) is unnecessary on Compute Engine, and A reverses the direction of the grant.
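
As an illustrative sketch (the topic, project, and service account names are assumptions), the cross-project grant is a single IAM binding on the topic:

gcloud pubsub topics add-iam-policy-binding reports-topic \
    --project=project-b \
    --member="serviceAccount:app-sa@project-a.iam.gserviceaccount.com" \
    --role="roles/pubsub.publisher"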

Question: 78 CertyIQ
You are developing a corporate tool on Compute Engine for the finance department, which needs to authenticate
users and verify that they are in the finance department. All company employees use G Suite.
What should you do?

A.Enable Cloud Identity-Aware Proxy on the HTTP(s) load balancer and restrict access to a Google Group
containing users in the finance department. Verify the provided JSON Web Token within the application.
B.Enable Cloud Identity-Aware Proxy on the HTTP(s) load balancer and restrict access to a Google Group
containing users in the finance department. Issue client-side certificates to everybody in the finance team and
verify the certificates in the application.
C.Configure Cloud Armor Security Policies to restrict access to only corporate IP address ranges. Verify the
provided JSON Web Token within the application.
D.Configure Cloud Armor Security Policies to restrict access to only corporate IP address ranges. Issue client
side certificates to everybody in the finance team and verify the certificates in the application.

Answer: A

Explanation:

A. Cloud IAP authenticates users with their G Suite identities, and access can be restricted to a Google Group containing the finance department. The application should additionally verify the IAP-signed JSON Web Token (passed in the x-goog-iap-jwt-assertion header) to guard against misconfiguration or header spoofing. Cloud Armor (C, D) filters by IP address and does not authenticate individual users, and client certificates (B, D) add management overhead that IAP makes unnecessary.

Question: 79 CertyIQ
Your API backend is running on multiple cloud providers. You want to generate reports for the network latency of
your API.
Which two steps should you take? (Choose two.)

A.Use Zipkin collector to gather data.


B.Use Fluentd agent to gather data.
C.Use Stackdriver Trace to generate reports.
D.Use Stackdriver Debugger to generate report.
E.Use Stackdriver Profiler to generate report.

Answer: AC

Explanation:

A and C.
A. Use a Zipkin collector to gather data: Zipkin is an open source distributed tracing system, and a Zipkin collector can gather latency data from services instrumented with Zipkin tracers on any cloud provider, forwarding it to Stackdriver Trace.
C. Use Stackdriver Trace to generate reports: once the trace data is ingested, Stackdriver Trace can visualize and report on the network latency of the API and its dependencies. Debugger (D) and Profiler (E) address code state and CPU/memory profiles, not cross-provider request latency.

https://cloud.google.com/trace/docs/zipkin

Question: 80 CertyIQ
Case study -
This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to
complete each case. However, there may be additional case studies and sections on this exam. You must manage
your time to ensure that you are able to complete all questions included on this exam in the time provided.
To answer the questions included in a case study, you will need to reference information that is provided in the
case study. Case studies might contain exhibits and other resources that provide more information about the
scenario that is described in the case study. Each question is independent of the other questions in this case study.
At the end of this case study, a review screen will appear. This screen allows you to review your answers and to
make changes before you move to the next section of the exam. After you begin a new section, you cannot return
to this section.

To start the case study -


To display the first question in this case study, click the Next button. Use the buttons in the left pane to explore
the content of the case study before you answer the questions. Clicking these buttons displays information such
as business requirements, existing environment, and problem statements. If the case study has an
All Information tab, note that the information displayed is identical to the information displayed on the subsequent
tabs. When you are ready to answer a question, click the Question button to return to the question.

Company Overview -
HipLocal is a community application designed to facilitate communication between people in close proximity. It is
used for event planning and organizing sporting events, and for businesses to connect with their local
communities. HipLocal launched recently in a few neighborhoods in Dallas and is rapidly growing into a global
phenomenon. Its unique style of hyper-local community communication and business outreach is in demand
around the world.

Executive Statement -
We are the number one local community app; it's time to take our local community services global. Our venture
capital investors want to see rapid growth and the same great experience for new local and virtual communities
that come online, whether their members are 10 or 10,000 miles away from each other.

Solution Concept -
HipLocal wants to expand their existing service, with updated functionality, in new regions to better serve their
global customers. They want to hire and train a new team to support these regions in their time zones. They will
need to ensure that the application scales smoothly and provides clear uptime data.

Existing Technical Environment -


HipLocal's environment is a mix of on-premises hardware and infrastructure running in Google Cloud Platform. The
HipLocal team understands their application well, but has limited experience in global scale applications. Their
existing technical environment is as follows:
* Existing APIs run on Compute Engine virtual machine instances hosted in GCP.
* State is stored in a single instance MySQL database in GCP.
* Data is exported to an on-premises Teradata/Vertica data warehouse.
* Data analytics is performed in an on-premises Hadoop environment.
* The application has no logging.
* There are basic indicators of uptime; alerts are frequently fired when the APIs are unresponsive.

Business Requirements -
HipLocal's investors want to expand their footprint and support the increase in demand they are seeing. Their
requirements are:
* Expand availability of the application to new regions.
* Increase the number of concurrent users that can be supported.
* Ensure a consistent experience for users when they travel to different regions.
* Obtain user activity metrics to better understand how to monetize their product.
* Ensure compliance with regulations in the new regions (for example, GDPR).
* Reduce infrastructure management time and cost.
* Adopt the Google-recommended practices for cloud computing.

Technical Requirements -
* The application and backend must provide usage metrics and monitoring.
* APIs require strong authentication and authorization.
* Logging must be increased, and data should be stored in a cloud analytics platform.
* Move to serverless architecture to facilitate elastic scaling.
* Provide authorized access to internal apps in a secure manner.
Which database should HipLocal use for storing user activity?

A.BigQuery
B.Cloud SQL
C.Cloud Spanner
D.Cloud Datastore

Answer: A

Explanation:

The case study states: "Obtain user activity metrics to better understand how to monetize their product," which means HipLocal needs to analyze user activity at scale, so answer A (BigQuery) is the best fit.

BigQuery is designed for exactly this kind of analytics workload: user activity is raw event data that will be sliced and aggregated to segment users by attributes such as age or preferences, and BigQuery also satisfies the technical requirement that logging data be stored in a cloud analytics platform.
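
To make the recommendation concrete, here is a minimal sketch of streaming user activity events into BigQuery with the google-cloud-bigquery client. The project, dataset, table, and event fields are hypothetical names for the example.

```python
# pip install google-cloud-bigquery  (assumed environment)
from google.cloud import bigquery

client = bigquery.Client()

# Hypothetical table; schema assumed to be
# (user_id STRING, event STRING, region STRING, ts TIMESTAMP).
TABLE_ID = "hiplocal-project.analytics.user_activity"

rows = [
    {"user_id": "u-123", "event": "event_rsvp", "region": "us-south1",
     "ts": "2024-01-01T12:00:00Z"},
]

# Streaming insert; returns a list of per-row errors (empty on success).
errors = client.insert_rows_json(TABLE_ID, rows)
if errors:
    raise RuntimeError(f"BigQuery insert failed: {errors}")
```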
Thank you for being so interested in the premium exam material.
I'm glad to hear that you found it informative and helpful.

But Wait

I wanted to let you know that there is more content available in the full version.
The full paper contains additional sections and information that you may find helpful,
and I encourage you to download it to get a more comprehensive and detailed view of
all the subject matter.

Download Full Version Now
