professional-cloud-developer_6b51dff7a7c4
Answer: A
Explanation:
The gsutil cp command allows you to copy data between your local file system and Cloud Storage. The .boto
configuration file, generated by running "gsutil config", stores the credentials and settings that gsutil uses.
Question: 2 CertyIQ
You migrated your applications to Google Cloud Platform and kept your existing monitoring platform. You now find
that your notification system is too slow for time-critical problems.
What should you do?
Answer: C
Explanation:
You have a problem with notifications. Option C lets you use Stackdriver (Cloud Monitoring) to send alerts
immediately, and then forward the same data to your on-premises monitoring platform.
Think twice: you already have a working, expensive monitoring system (e.g. Splunk), and the problem is an
unacceptable delay between an incident and its notification. You need to fix that problem, not stage a
revolution by replacing the monitoring system. You can leverage Cloud Monitoring's out-of-the-box alerting
with little effort, because the logs end up in Cloud Logging regardless. Simply implement alerts and keep
pushing logs to Splunk. Simple.
Question: 3 CertyIQ
You are planning to migrate a MySQL database to the managed Cloud SQL database for Google Cloud. You have
Compute Engine virtual machine instances that will connect with this Cloud SQL instance. You do not want to
whitelist IPs for the Compute Engine instances to be able to access Cloud SQL.
What should you do?
Answer: A
Explanation:
The question is about "connection". Role assignment gives a set of permission to compute engine but doesn't
allow connection.
Question: 4 CertyIQ
You have deployed an HTTP(s) Load Balancer with the gcloud commands shown below.
Health checks to port 80 on the Compute Engine virtual machine instance are failing and no traffic is sent to your
instances. You want to resolve the problem.
Which commands should you run?
Answer: C
Explanation:
The source IP ranges for health checks (including legacy health checks used for HTTP(S) Load Balancing)
are 35.191.0.0/16 and 130.211.0.0/22. Furthermore, the firewall rule should use direction INGRESS, since the
health-check probe is coming into the load balancer/instance.
Reference:
https://cloud.google.com/vpc/docs/special-configurations
Question: 5 CertyIQ
Your website is deployed on Compute Engine. Your marketing team wants to test conversion rates between 3
different website designs.
Which approach should you use?
Answer: A
Explanation:
A is correct because it allows routing traffic to a single domain and splitting traffic based on IP or cookie. B is
not correct because the domain name would change based on the service.
Reference:
https://cloud.google.com/appengine/docs/standard/python/splitting-traffic
Question: 6 CertyIQ
You need to copy directory local-scripts and all of its contents from your local workstation to a Compute Engine
virtual machine instance.
Which command should you use?
Answer: C
Explanation:
Reference:
https://cloud.google.com/sdk/gcloud/reference/compute/copy-files
Question: 7 CertyIQ
You are deploying your application to a Compute Engine virtual machine instance with the Stackdriver Monitoring
Agent installed. Your application is a unix process on the instance. You want to be alerted if the unix process has
not run for at least 5 minutes. You are not able to change the application to generate metrics or logs.
Which alert condition should you configure?
A.Uptime check
B.Process health
C.Metric absence
D.Metric threshold
Answer: B
Explanation:
"An uptime check is a request sent to a resource to see if it responds"A is wrongMetric absence and threshold
don't make senseProcess health is correct for sure so answer is B
Reference:
https://cloud.google.com/monitoring/alerts/concepts-indepth
Question: 8 CertyIQ
You have two tables in an ANSI-SQL compliant database with identical columns that you need to quickly combine
into a single table, removing duplicate rows from the result set.
What should you do?
Answer: C
Explanation:
C is the correct answer here. The only difference between UNION (DISTINCT) and UNION ALL is that UNION ALL does
not remove duplicate rows or records; instead, it simply selects all rows from all the tables that meet the
conditions of your query and combines them into the result table. Since duplicates must be removed, use the
DISTINCT form.
Reference:
https://www.techonthenet.com/sql/union_all.php
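For illustration, here is a minimal sketch of combining two identically structured tables while removing duplicates, assuming the intended SQL is a UNION DISTINCT (plain UNION in many ANSI-SQL databases) and run through the BigQuery Python client. The project, dataset, and table names are hypothetical.

```python
from google.cloud import bigquery

client = bigquery.Client()

# UNION DISTINCT removes duplicate rows; UNION ALL would keep them.
query = """
    SELECT * FROM `my_project.my_dataset.table_a`
    UNION DISTINCT
    SELECT * FROM `my_project.my_dataset.table_b`
"""

for row in client.query(query).result():
    print(dict(row))
```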
Question: 9 CertyIQ
You have an application deployed in production. When a new version is deployed, some issues don't arise until the
application receives traffic from users in production. You want to reduce both the impact and the number of users
affected.
Which deployment strategy should you use?
A.Blue/green deployment
B.Canary deployment
C.Rolling deployment
D.Recreate deployment
Answer: B
Explanation:
Question: 10 CertyIQ
Answer: AC
Explanation:
The requirements are 99.999% availability and reduced latency. Option A gives us 99.999% availability (there
appears to be a typo in the region name). Option C is about compute capacity: more nodes means lower latency
(https://cloud.google.com/spanner/docs/instances#compute-capacity). B: there is no multi-region configuration
with that name. D: it is better to create the instance with 3 nodes, not 1. E and F are over-engineering.
Question: 11 CertyIQ
You need to migrate an internal file upload API with an enforced 500-MB file size limit to App Engine.
What should you do?
Answer: C
Explanation:
Reference:
https://wiki.christophchamp.com/index.php?title=Google_Cloud_Platform
Question: 12 CertyIQ
You are planning to deploy your application in a Google Kubernetes Engine (GKE) cluster. The application exposes
an HTTP-based health check at /healthz. You want to use this health check endpoint to determine whether traffic
should be routed to the pod by the load balancer.
Which code snippet should you include in your Pod configuration?
A.
B.
C.
D.
Answer: B
Explanation:
For the GKE ingress controller to use your readinessProbes as health checks, the Pods for an Ingress must
exist at the time of Ingress creation. If your replicas are scaled to 0, the default health check will apply.
Question: 13 CertyIQ
Your teammate has asked you to review the code below. Its purpose is to efficiently add a large number of small
rows to a BigQuery table.
Which improvement should you suggest your teammate make?
Answer: A
Explanation:
A. Include multiple rows with each request. Batch inserts are more efficient than individual inserts and will
increase write performance by reducing the overhead of creating and sending individual requests for each
row. Parallel inserts could potentially lead to conflicting writes or cause resource exhaustion, and adding a
step of writing to Cloud Storage and then loading into BigQuery can add additional overhead and complexity.
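As a hedged illustration of batching, the sketch below sends several rows in one streaming-insert call with the BigQuery Python client rather than one call per row. The table ID and row fields are hypothetical.

```python
from google.cloud import bigquery

client = bigquery.Client()
table_id = "my_project.my_dataset.events"  # hypothetical table

# Accumulate small rows and send them together in a single request.
rows = [
    {"user_id": "u1", "value": 42},
    {"user_id": "u2", "value": 17},
    {"user_id": "u3", "value": 8},
]

errors = client.insert_rows_json(table_id, rows)  # one API call for the whole batch
if errors:
    print("Insert errors:", errors)
```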
Question: 14 CertyIQ
You are developing a JPEG image-resizing API hosted on Google Kubernetes Engine (GKE). Callers of the service
will exist within the same GKE cluster. You want clients to be able to get the IP address of the service.
What should you do?
A.Define a GKE Service. Clients should use the name of the A record in Cloud DNS to find the service's cluster
IP address.
B.Define a GKE Service. Clients should use the service name in the URL to connect to the service.
C.Define a GKE Endpoint. Clients should get the endpoint name from the appropriate environment variable in
the client container.
D.Define a GKE Endpoint. Clients should get the endpoint name from Cloud DNS.
Answer: B
Explanation:
Both A and B are plausible. With option A, a DNS A record maps the service FQDN to its IP address, with an FQDN
like service-name.default.svc.cluster.local. Option B is simpler: just use http://service-name.
The answer is B because the clients are in the same cluster, so the service name can be used directly.
Question: 15 CertyIQ
You are using Cloud Build to build and test application source code stored in Cloud Source Repositories. The build
process requires a build tool not available in the Cloud Build environment.
What should you do?
A.Download the binary from the internet during the build process.
B.Build a custom cloud builder image and reference the image in your build steps.
C.Include the binary in your Cloud Source Repositories repository and reference it in your build scripts.
D.Ask to have the binary added to the Cloud Build environment by filing a feature request against the Cloud
Build public Issue Tracker.
Answer: B
Explanation:
B is the correct answer.
https://cloud.google.com/cloud-build/docs/configuring-builds/use-community-and-custom-
builders#creating_a_custom_builder
Question: 16 CertyIQ
You are deploying your application to a Compute Engine virtual machine instance. Your application is configured to
write its log files to disk. You want to view the logs in Stackdriver Logging without changing the application code.
What should you do?
A.Install the Stackdriver Logging Agent and configure it to send the application logs.
B.Use a Stackdriver Logging Library to log directly from the application to Stackdriver Logging.
C.Provide the log file folder path in the metadata of the instance to configure it to send the application logs.
D.Change the application to log to /var/log so that its logs are automatically sent to Stackdriver Logging.
Answer: A
Explanation:
https://cloud.google.com/logging/docs/agent/logging/installation:
"The Logging agent streams logs from your VM instances and from selected third-party software packages to
Cloud Logging." A is correct.
Question: 17 CertyIQ
Your service adds text to images that it reads from Cloud Storage. During busy times of the year, requests to Cloud
Storage fail with an HTTP 429 "Too Many
Requests" status code.
How should you handle this error?
Answer: C
Explanation:
"A Cloud Storage JSON API usage limit was exceeded. If your application tries to use more than its limit,
additional requests will fail. Throttle your client's requests, and/or use truncated exponential backoff."C is
correct
Reference:
https://developers.google.com/gmail/api/v1/reference/quota
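A minimal sketch of client-side throttling with truncated exponential backoff for Cloud Storage reads, assuming the google-cloud-storage Python client and a hypothetical bucket name:

```python
import random
import time

from google.api_core.exceptions import TooManyRequests
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("example-bucket")  # hypothetical bucket


def download_with_backoff(blob_name, max_retries=5):
    """Retry HTTP 429 responses with truncated exponential backoff plus jitter."""
    for attempt in range(max_retries):
        try:
            return bucket.blob(blob_name).download_as_bytes()
        except TooManyRequests:
            # Wait 2^attempt seconds (capped) plus random jitter before retrying.
            time.sleep(min(2 ** attempt, 32) + random.random())
    raise RuntimeError(f"Retry budget exhausted for {blob_name}")
```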
Question: 18 CertyIQ
You are building an API that will be used by Android and iOS apps. The API must:
* Support HTTPS
* Minimize bandwidth cost
* Integrate easily with mobile apps
Which API architecture should you use?
A.RESTful APIs
B.MQTT for APIs
C.gRPC-based APIs
D.SOAP-based APIs
Answer: C
Explanation:
Question: 19 CertyIQ
Your application takes an input from a user and publishes it to the user's contacts. This input is stored in a table in
Cloud Spanner. Your application is more sensitive to latency and less sensitive to consistency.
How should you perform reads from Cloud Spanner for this application?
Answer: B
Explanation:
Question: 20 CertyIQ
Your application is deployed in a Google Kubernetes Engine (GKE) cluster. When a new version of your application
is released, your CI/CD tool updates the spec.template.spec.containers[0].image value to reference the Docker
image of your new application version. When the Deployment object applies the change, you want to deploy at
least 1 replica of the new version and maintain the previous replicas until the new replica is healthy.
Which change should you make to the GKE Deployment object shown below?
A.Set the Deployment strategy to RollingUpdate with maxSurge set to 0, maxUnavailable set to 1.
B.Set the Deployment strategy to RollingUpdate with maxSurge set to 1, maxUnavailable set to 0.
C.Set the Deployment strategy to Recreate with maxSurge set to 0, maxUnavailable set to 1.
D.Set the Deployment strategy to Recreate with maxSurge set to 1, maxUnavailable set to 0.
Answer: B
Explanation:
"The simplest way to take advantage of surge upgrade is to configure maxSurge=1 maxUnavailable=0. This
means that only 1 surge node can be added to the node pool during an upgrade so only 1 node will be
upgraded at a time. This setting is superior to the existing upgrade configuration (maxSurge=0
maxUnavailable=1) because it speeds up Pod restarts during upgrades while progressing
conservatively."Answer is B
Reference:
https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-upgrades
Question: 21 CertyIQ
You plan to make a simple HTML application available on the internet. This site keeps information about FAQs for
your application. The application is static and contains images, HTML, CSS, and Javascript. You want to make this
application available on the internet with as few steps as possible.
What should you do?
Answer: A
Explanation:
Reference:
https://cloud.google.com/storage/docs/hosting-static-website
Question: 22 CertyIQ
Your company has deployed a new API to App Engine Standard environment. During testing, the API is not
behaving as expected. You want to monitor the application over time to diagnose the problem within the
application code without redeploying the application.
Which tool should you use?
A.Stackdriver Trace
B.Stackdriver Monitoring
C.Stackdriver Debug Snapshots
D.Stackdriver Debug Logpoints
Answer: D
Explanation:
This question may become obsolete, since Cloud Debugger is deprecated and was scheduled for shutdown on
May 31, 2023; the suggested alternative is the open-source Snapshot Debugger CLI tool
(https://cloud.google.com/debugger/docs/release-notes). In this context the answer is D.
"You want to monitor the application over time to diagnose the problem within the application code." If it were
only about monitoring, the answer would be B, but because it mentions "within the code," it should be D (Debug
Logpoints, which let you inject log statements without redeploying the application).
Question: 23 CertyIQ
You want to use the Stackdriver Logging Agent to send an application's log file to Stackdriver from a Compute
Engine virtual machine instance.
After installing the Stackdriver Logging Agent, what should you do first?
Answer: C
Explanation:
The answer should be C, unless your application log is already in a location that the agent monitors by default.
https://cloud.google.com/logging/docs/agent/configuration
Question: 24 CertyIQ
Your company has a BigQuery data mart that provides analytics information to hundreds of employees. One of the users
wants to run jobs without interrupting important workloads. This user isn't concerned about the time it takes to run
these jobs. You want to fulfill this request while minimizing cost to the company and the effort required on your
part.
What should you do?
Answer: A
Explanation:
Option A makes the most sense. B is wrong since it would incur more cost, which is not what the question wants. C is
definitely out, as creating roles is not what the question is asking for. D is wrong, as it would not minimize effort.
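Assuming option A means having the user run these jobs at batch priority (consistent with minimizing cost and tolerating longer run times), here is a minimal sketch using the BigQuery Python client; the query and table are hypothetical.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Batch-priority queries start when idle resources are available, so they do not
# interrupt interactive workloads and incur no additional on-demand cost.
job_config = bigquery.QueryJobConfig(priority=bigquery.QueryPriority.BATCH)

job = client.query(
    "SELECT COUNT(*) AS n FROM `my_project.my_dataset.my_table`",  # hypothetical query
    job_config=job_config,
)
for row in job.result():  # blocks until the batch job eventually runs
    print(row.n)
```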
Question: 25 CertyIQ
You want to notify on-call engineers about a service degradation in production while minimizing development time.
What should you do?
Answer: D
Explanation:
D. Error Reporting is not about service degradation; moreover, Error Reporting uses Monitoring to send alerts
(https://cloud.google.com/error-reporting/docs/notifications).
D (Monitoring) is correct. I'm baffled by the "correct" answers given by the site; 80% of the time they are wrong.
Question: 26 CertyIQ
You are writing a single-page web application with a user-interface that communicates with a third-party API for
content using XMLHttpRequest. The data displayed on the UI by the API results is less critical than other data
displayed on the same web page, so it is acceptable for some requests to not have the API data displayed in the UI.
However, calls made to the API should not delay rendering of other parts of the user interface. You want your
application to perform well when the API response is an error or a timeout.
What should you do?
A.Set the asynchronous option for your requests to the API to false and omit the widget displaying the API
results when a timeout or error is encountered.
B.Set the asynchronous option for your request to the API to true and omit the widget displaying the API results
when a timeout or error is encountered.
C.Catch timeout or error exceptions from the API call and keep trying with exponential backoff until the API
response is successful.
D.Catch timeout or error exceptions from the API call and display the error response in the UI widget.
Answer: B
Explanation:
The answer is B. The API call should not delay rendering, so the request must be asynchronous. For the application
to perform well when the API returns an error or times out, simply omit the widget.
Question: 27 CertyIQ
You are creating a web application that runs in a Compute Engine instance and writes a file to any user's Google
Drive. You need to configure the application to authenticate to the Google Drive API. What should you do?
A.Use an OAuth Client ID that uses the https://www.googleapis.com/auth/drive.file scope to obtain an access
token for each user.
B.Use an OAuth Client ID with delegated domain-wide authority.
C.Use the App Engine service account and https://www.googleapis.com/auth/drive.file scope to generate a
signed JSON Web Token (JWT).
D.Use the App Engine service account with delegated domain-wide authority.
Answer: A
Explanation:
Question: 28 CertyIQ
Answer: B
Explanation:
https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-architecture:
"A cluster typically has one or more nodes, which are the worker machines that run your containerized
applications and other workloads. The individual machines are Compute Engine VM instances that GKE
creates on your behalf when you create a cluster." The error message mentions "CPU," so this refers to the
Compute Engine VMs. The answer is B.
Question: 29 CertyIQ
You are parsing a log file that contains three columns: a timestamp, an account number (a string), and a transaction
amount (a number). You want to calculate the sum of all transaction amounts for each unique account number
efficiently.
Which data structure should you use?
Answer: B
Explanation:
Use a hash table with the account number as the key; the timestamp is not needed for this question, so we can
safely discard it.
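To illustrate, a minimal sketch of the hash-table approach in Python; the log file name and the comma-separated column layout are assumptions.

```python
from collections import defaultdict

# Hash table (dict): account number -> running total of transaction amounts.
totals = defaultdict(float)

with open("transactions.log") as log:                        # hypothetical file
    for line in log:
        timestamp, account, amount = line.strip().split(",") # assumed CSV layout
        totals[account] += float(amount)                     # timestamp is irrelevant for the sum

for account, total in totals.items():
    print(account, total)
```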
Question: 30 CertyIQ
Your company has a BigQuery dataset named "Master" that keeps information about employee travel and
expenses. This information is organized by employee department. That means employees should only be able to
view information for their department. You want to apply a security framework to enforce this requirement with the
minimum number of steps.
What should you do?
A.Create a separate dataset for each department. Create a view with an appropriate WHERE clause to select
records from a particular dataset for the specific department. Authorize this view to access records from your
Master dataset. Give employees the permission to this department-specific dataset.
B.Create a separate dataset for each department. Create a data pipeline for each department to copy
appropriate information from the Master dataset to the specific dataset for the department. Give employees
the permission to this department-specific dataset.
C.Create a dataset named Master dataset. Create a separate view for each department in the Master dataset.
Give employees access to the specific view for their department.
D.Create a dataset named Master dataset. Create a separate table for each department in the Master dataset.
Give employees access to the specific table for their department.
Answer: C
Explanation:
The correct answer is C: a view answers the need for controlled access. A is eliminated because creating a dataset
per department requires more steps.
Question: 31 CertyIQ
You have an application in production. It is deployed on Compute Engine virtual machine instances controlled by a
managed instance group. Traffic is routed to the instances via a HTTP(s) load balancer. Your users are unable to
access your application. You want to implement a monitoring technique to alert you when the application is
unavailable.
Which technique should you choose?
A.Smoke tests
B.Stackdriver uptime checks
C.Cloud Load Balancing - health checks
D.Managed instance group - health checks
Answer: B
Explanation:
Reference:
https://medium.com/google-cloud/stackdriver-monitoring-automation-part-3-uptime-checks-476b8507f59c
Question: 32 CertyIQ
You are load testing your server application. During the first 30 seconds, you observe that a previously inactive
Cloud Storage bucket is now servicing 2000 write requests per second and 7500 read requests per second. Your
application is now receiving intermittent 5xx and 429 HTTP responses from the Cloud Storage
JSON API as the demand escalates. You want to decrease the failed responses from the Cloud Storage API.
What should you do?
Answer: D
Explanation:
Limit the upload rate from your application clients so that the dormant bucket's peak request rate is reached
more gradually.
https://cloud.google.com/storage/docs/request-rate#ramp-up
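A rough sketch, under assumed starting values, of how a client could ramp its request rate up gradually instead of hitting the dormant bucket at full load; the actual recommended limits are in the ramp-up guidance linked above.

```python
import time

START_RATE = 500          # assumed initial requests per second
DOUBLE_EVERY = 20 * 60    # double no faster than every 20 minutes (per ramp-up guidance)


def allowed_rate(load_start_time):
    """Return the request rate the client should not exceed at this moment."""
    elapsed = time.time() - load_start_time
    doublings = int(elapsed // DOUBLE_EVERY)
    return START_RATE * (2 ** doublings)
```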
Question: 33 CertyIQ
Your application is controlled by a managed instance group. You want to share a large read-only data set between
all the instances in the managed instance group. You want to ensure that each instance can start quickly and can
access the data set via its filesystem with very low latency. You also want to minimize the total cost of the solution.
What should you do?
A.Move the data to a Cloud Storage bucket, and mount the bucket on the filesystem using Cloud Storage FUSE.
B.Move the data to a Cloud Storage bucket, and copy the data to the boot disk of the instance via a startup
script.
C.Move the data to a Compute Engine persistent disk, and attach the disk in read-only mode to multiple
Compute Engine virtual machine instances.
D.Move the data to a Compute Engine persistent disk, take a snapshot, create multiple disks from the snapshot,
and attach each disk to its own instance.
Answer: C
Explanation:
https://cloud.google.com/compute/docs/disks/sharing-disks-between-vms#use-multi-instances: "Share a disk in
read-only mode between multiple VMs. Sharing static data between multiple VMs from one persistent disk is less
expensive than replicating your data to unique disks for individual instances."
https://cloud.google.com/compute/docs/disks/gcs-buckets#mount_bucket: "Mounting a bucket as a file system. You
can use the Cloud Storage FUSE tool to mount a Cloud Storage bucket to your Compute Engine instance. The mounted
bucket behaves similarly to a persistent disk even though Cloud Storage buckets are object storage."
(https://github.com/GoogleCloudPlatform/gcsfuse/) However, Cloud Storage FUSE has known performance issues
(latency and rate limits), so the shared read-only persistent disk in option C is the better fit.
Question: 34 CertyIQ
You are developing an HTTP API hosted on a Compute Engine virtual machine instance that needs to be invoked by
multiple clients within the same Virtual
Private Cloud (VPC). You want clients to be able to get the IP address of the service.
What should you do?
A.Reserve a static external IP address and assign it to an HTTP(S) load balancing service's forwarding rule.
Clients should use this IP address to connect to the service.
B.Reserve a static external IP address and assign it to an HTTP(S) load balancing service's forwarding rule.
Then, define an A record in Cloud DNS. Clients should use the name of the A record to connect to the service.
C.Ensure that clients use Compute Engine internal DNS by connecting to the instance name with the url
https://[INSTANCE_NAME].[ZONE].c.[PROJECT_ID].internal/.
D.Ensure that clients use Compute Engine internal DNS by connecting to the instance name with the url
https://[API_NAME]/[API_VERSION]/.
Answer: C
Explanation:
answer C)"Virtual Private Cloud networks on Google Cloud have an internal DNS service that lets instances in
the same network access each other by using internal DNS names" This name can be used for access:
[INSTANCE_NAME].[ZONE].c.[PROJECT_ID].internal https://cloud.google.com/compute/docs/internal-
dns#access_by_internal_DNS
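For illustration, a client inside the same VPC could call the API by its Compute Engine internal DNS name; the instance name, zone, project, and path below are hypothetical.

```python
import requests

# Internal DNS name pattern: [INSTANCE_NAME].[ZONE].c.[PROJECT_ID].internal
url = "http://api-vm.us-central1-a.c.my-project.internal/v1/status"  # hypothetical values
response = requests.get(url, timeout=5)
print(response.status_code, response.text)
```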
Question: 35 CertyIQ
Your application is logging to Stackdriver. You want to get the count of all requests on all /api/alpha/* endpoints.
What should you do?
Answer: B
Explanation:
Question: 36 CertyIQ
You want to re-architect a monolithic application so that it follows a microservices model. You want to accomplish
this efficiently while minimizing the impact of this change to the business.
Which approach should you take?
Answer: B
Explanation:
Question: 37 CertyIQ
Your existing application keeps user state information in a single MySQL database. This state information is very
user-specific and depends heavily on how long a user has been using an application. The MySQL database is
causing challenges to maintain and enhance the schema for various users.
Which storage option should you choose?
A.Cloud SQL
B.Cloud Storage
C.Cloud Spanner
D.Cloud Datastore/Firestore
Answer: D
Explanation:
The question is a bit misleading. If it were asking to keep a MySQL-compatible storage option, then Cloud SQL or
Spanner would be the only choices. However, assuming they want to move away from a rigid schema while still needing
a stateful database, Datastore/Firestore is the right choice.
Question: 38 CertyIQ
You are building a new API. You want to minimize the cost of storing and reduce the latency of serving images.
Which architecture should you use?
Answer: D
Explanation:
D. Cloud Content Delivery Network (CDN) backed by Cloud Storage. Cloud CDN is a content delivery network
that uses Google's globally distributed edge points of presence to accelerate content delivery for websites
and applications served out of Google Cloud. Cloud CDN stores and serves content from Google Cloud
Storage, which allows for efficient and low-cost storage of images, as well as low latency in serving the
images. The other options do not mention low latency or cost-effective storage as their primary benefits.
Question: 39 CertyIQ
Your company's development teams want to use Cloud Build in their projects to build and push Docker images to
Container Registry. The operations team requires all Docker images to be published to a centralized, securely
managed Docker registry that the operations team manages.
What should you do?
A.Use Container Registry to create a registry in each development team's project. Configure the Cloud Build
build to push the Docker image to the project's registry. Grant the operations team access to each development
team's registry.
B.Create a separate project for the operations team that has Container Registry configured. Assign appropriate
permissions to the Cloud Build service account in each developer team's project to allow access to the
operation team's registry.
C.Create a separate project for the operations team that has Container Registry configured. Create a Service
Account for each development team and assign the appropriate permissions to allow it access to the operations
team's registry. Store the service account key file in the source code repository and use it to authenticate
against the operations team's registry.
D.Create a separate project for the operations team that has the open source Docker Registry deployed on a
Compute Engine virtual machine instance. Create a username and password for each development team. Store
the username and password in the source code repository and use it to authenticate against the operations
team's Docker registry.
Answer: B
Explanation:
The correct answer is B. Container Registry is a good choice for storing containers in a secure, manageable way.
It is possible to have Container Registry in one project and push to it from Cloud Build in another project by
granting the other project's Cloud Build service account the appropriate permissions on the Cloud Storage bucket
that hosts the container images.
Question: 40 CertyIQ
You are planning to deploy your application in a Google Kubernetes Engine (GKE) cluster. Your application can
scale horizontally, and each instance of your application needs to have a stable network identity and its own
persistent disk.
Which GKE object should you use?
A.Deployment
B.StatefulSet
C.ReplicaSet
D.ReplicaController
Answer: B
Explanation:
Once created, the StatefulSet ensures that the desired number of Pods are running and available at all times.
The StatefulSet automatically replaces Pods that fail or are evicted from their nodes, and automatically
associates new Pods with the storage resources, resource requests and limits, and other configurations
defined in the StatefulSet's Pod specification
Reference:
https://livebook.manning.com/book/kubernetes-in-action/chapter-10/46
Question: 41 CertyIQ
You are using Cloud Build to build a Docker image. You need to modify the build to execute unit and run integration
tests. When there is a failure, you want the build history to clearly display the stage at which the build failed.
What should you do?
A.Add RUN commands in the Dockerfile to execute unit and integration tests.
B.Create a Cloud Build build config file with a single build step to compile unit and integration tests.
C.Create a Cloud Build build config file that will spawn a separate cloud build pipeline for unit and integration
tests.
D.Create a Cloud Build build config file with separate cloud builder steps to compile and execute unit and
integration tests.
Answer: D
Explanation:
Create a Cloud Build build config file with separate cloud builder steps to compile and execute unit and
integration tests. This is the best option because it allows you to clearly specify and separate the different
stages of the build process (compiling unit tests, executing unit tests, compiling integration tests, executing
integration tests). This makes it easier to understand the build history and identify any failures that may occur.
In addition, using separate build steps allows you to specify different properties (such as timeout values or
environment variables) for each stage of the build process.
Question: 42 CertyIQ
Your code is running on Cloud Functions in project A. It is supposed to write an object in a Cloud Storage bucket
owned by project B. However, the write call is failing with the error "403 Forbidden".
What should you do to correct the problem?
A.Grant your user account the roles/storage.objectCreator role for the Cloud Storage bucket.
B.Grant your user account the roles/iam.serviceAccountUser role for the [email protected] service account.
C.Grant the [email protected] service account the roles/storage.objectCreator role for the Cloud Storage
bucket.
D.Enable the Cloud Storage API in project B.
Answer: C
Explanation:
The answer is C: the default service account used by Cloud Functions is [email protected] (see
https://cloud.google.com/functions/docs/concepts/iam#troubleshooting_permission_errors).
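A minimal sketch of granting the Cloud Functions runtime service account the objectCreator role on the project B bucket, using the google-cloud-storage client. The bucket name and project ID are hypothetical; the service account shown follows the default App Engine/Cloud Functions account pattern.

```python
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("project-b-bucket")  # hypothetical bucket owned by project B

# Grant project A's Cloud Functions service account permission to create objects.
policy = bucket.get_iam_policy(requested_policy_version=3)
policy.bindings.append(
    {
        "role": "roles/storage.objectCreator",
        "members": {"serviceAccount:project-a@appspot.gserviceaccount.com"},  # hypothetical
    }
)
bucket.set_iam_policy(policy)
```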
Question: 43 CertyIQ
Case study -
This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to
complete each case. However, there may be additional case studies and sections on this exam. You must manage
your time to ensure that you are able to complete all questions included on this exam in the time provided.
To answer the questions included in a case study, you will need to reference information that is provided in the
case study. Case studies might contain exhibits and other resources that provide more information about the
scenario that is described in the case study. Each question is independent of the other questions in this case study.
At the end of this case study, a review screen will appear. This screen allows you to review your answers and to
make changes before you move to the next section of the exam. After you begin a new section, you cannot return
to this section.
Company Overview -
HipLocal is a community application designed to facilitate communication between people in close proximity. It is
used for event planning and organizing sporting events, and for businesses to connect with their local
communities. HipLocal launched recently in a few neighborhoods in Dallas and is rapidly growing into a global
phenomenon. Its unique style of hyper-local community communication and business outreach is in demand
around the world.
Executive Statement -
We are the number one local community app; it's time to take our local community services global. Our venture
capital investors want to see rapid growth and the same great experience for new local and virtual communities
that come online, whether their members are 10 or 10000 miles away from each other.
Solution Concept -
HipLocal wants to expand their existing service, with updated functionality, in new regions to better serve their
global customers. They want to hire and train a new team to support these regions in their time zones. They will
need to ensure that the application scales smoothly and provides clear uptime data.
Business Requirements -
HipLocal's investors want to expand their footprint and support the increase in demand they are seeing. Their
requirements are:
* Expand availability of the application to new regions.
* Increase the number of concurrent users that can be supported.
* Ensure a consistent experience for users when they travel to different regions.
* Obtain user activity metrics to better understand how to monetize their product.
* Ensure compliance with regulations in the new regions (for example, GDPR).
* Reduce infrastructure management time and cost.
* Adopt the Google-recommended practices for cloud computing.
Technical Requirements -
* The application and backend must provide usage metrics and monitoring.
* APIs require strong authentication and authorization.
* Logging must be increased, and data should be stored in a cloud analytics platform.
* Move to serverless architecture to facilitate elastic scaling.
* Provide authorized access to internal apps in a secure manner.
HipLocal's .net-based auth service fails under intermittent load.
What should they do?
Answer: A
Explanation:
Question: 44 CertyIQ
Case study -
This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to
complete each case. However, there may be additional case studies and sections on this exam. You must manage
your time to ensure that you are able to complete all questions included on this exam in the time provided.
To answer the questions included in a case study, you will need to reference information that is provided in the
case study. Case studies might contain exhibits and other resources that provide more information about the
scenario that is described in the case study. Each question is independent of the other questions in this case study.
At the end of this case study, a review screen will appear. This screen allows you to review your answers and to
make changes before you move to the next section of the exam. After you begin a new section, you cannot return
to this section.
Company Overview -
HipLocal is a community application designed to facilitate communication between people in close proximity. It is
used for event planning and organizing sporting events, and for businesses to connect with their local
communities. HipLocal launched recently in a few neighborhoods in Dallas and is rapidly growing into a global
phenomenon. Its unique style of hyper-local community communication and business outreach is in demand
around the world.
Executive Statement -
We are the number one local community app; it's time to take our local community services global. Our venture
capital investors want to see rapid growth and the same great experience for new local and virtual communities
that come online, whether their members are 10 or 10000 miles away from each other.
Solution Concept -
HipLocal wants to expand their existing service, with updated functionality, in new regions to better serve their
global customers. They want to hire and train a new team to support these regions in their time zones. They will
need to ensure that the application scales smoothly and provides clear uptime data.
Business Requirements -
HipLocal's investors want to expand their footprint and support the increase in demand they are seeing. Their
requirements are:
* Expand availability of the application to new regions.
* Increase the number of concurrent users that can be supported.
* Ensure a consistent experience for users when they travel to different regions.
* Obtain user activity metrics to better understand how to monetize their product.
* Ensure compliance with regulations in the new regions (for example, GDPR).
* Reduce infrastructure management time and cost.
* Adopt the Google-recommended practices for cloud computing.
Technical Requirements -
* The application and backend must provide usage metrics and monitoring.
* APIs require strong authentication and authorization.
* Logging must be increased, and data should be stored in a cloud analytics platform.
* Move to serverless architecture to facilitate elastic scaling.
* Provide authorized access to internal apps in a secure manner.
HipLocal's APIs are having occasional application failures. They want to collect application information specifically
to troubleshoot the issue. What should they do?
Explanation:
They don't have application logging, so they need to add the logging agent so there are logs to study. Trace is for
latency issues, which is not the problem here.
Question: 45 CertyIQ
Case study -
This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to
complete each case. However, there may be additional case studies and sections on this exam. You must manage
your time to ensure that you are able to complete all questions included on this exam in the time provided.
To answer the questions included in a case study, you will need to reference information that is provided in the
case study. Case studies might contain exhibits and other resources that provide more information about the
scenario that is described in the case study. Each question is independent of the other questions in this case study.
At the end of this case study, a review screen will appear. This screen allows you to review your answers and to
make changes before you move to the next section of the exam. After you begin a new section, you cannot return
to this section.
Company Overview -
HipLocal is a community application designed to facilitate communication between people in close proximity. It is
used for event planning and organizing sporting events, and for businesses to connect with their local
communities. HipLocal launched recently in a few neighborhoods in Dallas and is rapidly growing into a global
phenomenon. Its unique style of hyper-local community communication and business outreach is in demand
around the world.
Executive Statement -
We are the number one local community app; it's time to take our local community services global. Our venture
capital investors want to see rapid growth and the same great experience for new local and virtual communities
that come online, whether their members are 10 or 10000 miles away from each other.
Solution Concept -
HipLocal wants to expand their existing service, with updated functionality, in new regions to better serve their
global customers. They want to hire and train a new team to support these regions in their time zones. They will
need to ensure that the application scales smoothly and provides clear uptime data.
Business Requirements -
HipLocal's investors want to expand their footprint and support the increase in demand they are seeing. Their
requirements are:
* Expand availability of the application to new regions.
* Increase the number of concurrent users that can be supported.
* Ensure a consistent experience for users when they travel to different regions.
* Obtain user activity metrics to better understand how to monetize their product.
* Ensure compliance with regulations in the new regions (for example, GDPR).
* Reduce infrastructure management time and cost.
* Adopt the Google-recommended practices for cloud computing.
Technical Requirements -
* The application and backend must provide usage metrics and monitoring.
* APIs require strong authentication and authorization.
* Logging must be increased, and data should be stored in a cloud analytics platform.
* Move to serverless architecture to facilitate elastic scaling.
* Provide authorized access to internal apps in a secure manner.
HipLocal has connected their Hadoop infrastructure to GCP using Cloud Interconnect in order to query data stored
on persistent disks.
Which IP strategy should they use?
Answer: A
Explanation:
A. They need to take control of IP assignment through manually created (custom-mode) subnets, especially when
establishing connectivity between on-premises and cloud networks, to avoid address overlaps.
Question: 46 CertyIQ
Case study -
This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to
complete each case. However, there may be additional case studies and sections on this exam. You must manage
your time to ensure that you are able to complete all questions included on this exam in the time provided.
To answer the questions included in a case study, you will need to reference information that is provided in the
case study. Case studies might contain exhibits and other resources that provide more information about the
scenario that is described in the case study. Each question is independent of the other questions in this case study.
At the end of this case study, a review screen will appear. This screen allows you to review your answers and to
make changes before you move to the next section of the exam. After you begin a new section, you cannot return
to this section.
Company Overview -
HipLocal is a community application designed to facilitate communication between people in close proximity. It is
used for event planning and organizing sporting events, and for businesses to connect with their local
communities. HipLocal launched recently in a few neighborhoods in Dallas and is rapidly growing into a global
phenomenon. Its unique style of hyper-local community communication and business outreach is in demand
around the world.
Executive Statement -
We are the number one local community app; it's time to take our local community services global. Our venture
capital investors want to see rapid growth and the same great experience for new local and virtual communities
that come online, whether their members are 10 or 10000 miles away from each other.
Solution Concept -
HipLocal wants to expand their existing service, with updated functionality, in new regions to better serve their
global customers. They want to hire and train a new team to support these regions in their time zones. They will
need to ensure that the application scales smoothly and provides clear uptime data.
Business Requirements -
HipLocal's investors want to expand their footprint and support the increase in demand they are seeing. Their
requirements are:
* Expand availability of the application to new regions.
* Increase the number of concurrent users that can be supported.
* Ensure a consistent experience for users when they travel to different regions.
* Obtain user activity metrics to better understand how to monetize their product.
* Ensure compliance with regulations in the new regions (for example, GDPR).
* Reduce infrastructure management time and cost.
* Adopt the Google-recommended practices for cloud computing.
Technical Requirements -
* The application and backend must provide usage metrics and monitoring.
* APIs require strong authentication and authorization.
* Logging must be increased, and data should be stored in a cloud analytics platform.
* Move to serverless architecture to facilitate elastic scaling.
* Provide authorized access to internal apps in a secure manner.
Which service should HipLocal use to enable access to internal apps?
A.Cloud VPN
B.Cloud Armor
C.Virtual Private Cloud
D.Cloud Identity-Aware Proxy
Answer: D
Explanation:
Reference:
https://cloud.google.com/iap/docs/cloud-iap-for-on-prem-apps-overview
Question: 47 CertyIQ
Case study -
This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to
complete each case. However, there may be additional case studies and sections on this exam. You must manage
your time to ensure that you are able to complete all questions included on this exam in the time provided.
To answer the questions included in a case study, you will need to reference information that is provided in the
case study. Case studies might contain exhibits and other resources that provide more information about the
scenario that is described in the case study. Each question is independent of the other questions in this case study.
At the end of this case study, a review screen will appear. This screen allows you to review your answers and to
make changes before you move to the next section of the exam. After you begin a new section, you cannot return
to this section.
Company Overview -
HipLocal is a community application designed to facilitate communication between people in close proximity. It is
used for event planning and organizing sporting events, and for businesses to connect with their local
communities. HipLocal launched recently in a few neighborhoods in Dallas and is rapidly growing into a global
phenomenon. Its unique style of hyper-local community communication and business outreach is in demand
around the world.
Executive Statement -
We are the number one local community app; it's time to take our local community services global. Our venture
capital investors want to see rapid growth and the same great experience for new local and virtual communities
that come online, whether their members are 10 or 10000 miles away from each other.
Solution Concept -
HipLocal wants to expand their existing service, with updated functionality, in new regions to better serve their
global customers. They want to hire and train a new team to support these regions in their time zones. They will
need to ensure that the application scales smoothly and provides clear uptime data.
Business Requirements -
HipLocal's investors want to expand their footprint and support the increase in demand they are seeing. Their
requirements are:
* Expand availability of the application to new regions.
* Increase the number of concurrent users that can be supported.
* Ensure a consistent experience for users when they travel to different regions.
* Obtain user activity metrics to better understand how to monetize their product.
* Ensure compliance with regulations in the new regions (for example, GDPR).
* Reduce infrastructure management time and cost.
* Adopt the Google-recommended practices for cloud computing.
Technical Requirements -
* The application and backend must provide usage metrics and monitoring.
* APIs require strong authentication and authorization.
* Logging must be increased, and data should be stored in a cloud analytics platform.
* Move to serverless architecture to facilitate elastic scaling.
* Provide authorized access to internal apps in a secure manner.
HipLocal wants to reduce the number of on-call engineers and eliminate manual scaling.
Which two services should they choose? (Choose two.)
Answer: AB
Explanation:
Question: 48 CertyIQ
Case study -
This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to
complete each case. However, there may be additional case studies and sections on this exam. You must manage
your time to ensure that you are able to complete all questions included on this exam in the time provided.
To answer the questions included in a case study, you will need to reference information that is provided in the
case study. Case studies might contain exhibits and other resources that provide more information about the
scenario that is described in the case study. Each question is independent of the other questions in this case study.
At the end of this case study, a review screen will appear. This screen allows you to review your answers and to
make changes before you move to the next section of the exam. After you begin a new section, you cannot return
to this section.
Company Overview -
HipLocal is a community application designed to facilitate communication between people in close proximity. It is
used for event planning and organizing sporting events, and for businesses to connect with their local
communities. HipLocal launched recently in a few neighborhoods in Dallas and is rapidly growing into a global
phenomenon. Its unique style of hyper-local community communication and business outreach is in demand
around the world.
Executive Statement -
We are the number one local community app; it's time to take our local community services global. Our venture
capital investors want to see rapid growth and the same great experience for new local and virtual communities
that come online, whether their members are 10 or 10000 miles away from each other.
Solution Concept -
HipLocal wants to expand their existing service, with updated functionality, in new regions to better serve their
global customers. They want to hire and train a new team to support these regions in their time zones. They will
need to ensure that the application scales smoothly and provides clear uptime data.
Business Requirements -
HipLocal's investors want to expand their footprint and support the increase in demand they are seeing. Their
requirements are:
* Expand availability of the application to new regions.
* Increase the number of concurrent users that can be supported.
* Ensure a consistent experience for users when they travel to different regions.
* Obtain user activity metrics to better understand how to monetize their product.
* Ensure compliance with regulations in the new regions (for example, GDPR).
* Reduce infrastructure management time and cost.
* Adopt the Google-recommended practices for cloud computing.
Technical Requirements -
* The application and backend must provide usage metrics and monitoring.
* APIs require strong authentication and authorization.
* Logging must be increased, and data should be stored in a cloud analytics platform.
* Move to serverless architecture to facilitate elastic scaling.
* Provide authorized access to internal apps in a secure manner.
In order to meet their business requirements, how should HipLocal store their application state?
Answer: C
Explanation:
The answer is C. A is not valid because a local SSD is ephemeral storage. B and D are poor solutions because they
are regional and do not reduce latency for a worldwide user base.
Question: 49 CertyIQ
Case study -
This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to
complete each case. However, there may be additional case studies and sections on this exam. You must manage
your time to ensure that you are able to complete all questions included on this exam in the time provided.
To answer the questions included in a case study, you will need to reference information that is provided in the
case study. Case studies might contain exhibits and other resources that provide more information about the
scenario that is described in the case study. Each question is independent of the other questions in this case study.
At the end of this case study, a review screen will appear. This screen allows you to review your answers and to
make changes before you move to the next section of the exam. After you begin a new section, you cannot return
to this section.
Company Overview -
HipLocal is a community application designed to facilitate communication between people in close proximity. It is
used for event planning and organizing sporting events, and for businesses to connect with their local
communities. HipLocal launched recently in a few neighborhoods in Dallas and is rapidly growing into a global
phenomenon. Its unique style of hyper-local community communication and business outreach is in demand
around the world.
Executive Statement -
We are the number one local community app; it's time to take our local community services global. Our venture
capital investors want to see rapid growth and the same great experience for new local and virtual communities
that come online, whether their members are 10 or 10000 miles away from each other.
Solution Concept -
HipLocal wants to expand their existing service, with updated functionality, in new regions to better serve their
global customers. They want to hire and train a new team to support these regions in their time zones. They will
need to ensure that the application scales smoothly and provides clear uptime data.
Business Requirements -
HipLocal's investors want to expand their footprint and support the increase in demand they are seeing. Their
requirements are:
* Expand availability of the application to new regions.
* Increase the number of concurrent users that can be supported.
* Ensure a consistent experience for users when they travel to different regions.
* Obtain user activity metrics to better understand how to monetize their product.
* Ensure compliance with regulations in the new regions (for example, GDPR).
* Reduce infrastructure management time and cost.
* Adopt the Google-recommended practices for cloud computing.
Technical Requirements -
* The application and backend must provide usage metrics and monitoring.
* APIs require strong authentication and authorization.
* Logging must be increased, and data should be stored in a cloud analytics platform.
* Move to serverless architecture to facilitate elastic scaling.
* Provide authorized access to internal apps in a secure manner.
Which service should HipLocal use for their public APIs?
A.Cloud Armor
B.Cloud Functions
C.Cloud Endpoints
D.Shielded Virtual Machines
Answer: C
Explanation:
Question: 50 CertyIQ
Case study -
This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to
complete each case. However, there may be additional case studies and sections on this exam. You must manage
your time to ensure that you are able to complete all questions included on this exam in the time provided.
To answer the questions included in a case study, you will need to reference information that is provided in the
case study. Case studies might contain exhibits and other resources that provide more information about the
scenario that is described in the case study. Each question is independent of the other questions in this case study.
At the end of this case study, a review screen will appear. This screen allows you to review your answers and to
make changes before you move to the next section of the exam. After you begin a new section, you cannot return
to this section.
Company Overview -
HipLocal is a community application designed to facilitate communication between people in close proximity. It is
used for event planning and organizing sporting events, and for businesses to connect with their local
communities. HipLocal launched recently in a few neighborhoods in Dallas and is rapidly growing into a global
phenomenon. Its unique style of hyper-local community communication and business outreach is in demand
around the world.
Executive Statement -
We are the number one local community app; it's time to take our local community services global. Our venture
capital investors want to see rapid growth and the same great experience for new local and virtual communities
that come online, whether their members are 10 or 10000 miles away from each other.
Solution Concept -
HipLocal wants to expand their existing service, with updated functionality, in new regions to better serve their
global customers. They want to hire and train a new team to support these regions in their time zones. They will
need to ensure that the application scales smoothly and provides clear uptime data.
Business Requirements -
HipLocal's investors want to expand their footprint and support the increase in demand they are seeing. Their
requirements are:
* Expand availability of the application to new regions.
* Increase the number of concurrent users that can be supported.
* Ensure a consistent experience for users when they travel to different regions.
* Obtain user activity metrics to better understand how to monetize their product.
* Ensure compliance with regulations in the new regions (for example, GDPR).
* Reduce infrastructure management time and cost.
* Adopt the Google-recommended practices for cloud computing.
Technical Requirements -
* The application and backend must provide usage metrics and monitoring.
* APIs require strong authentication and authorization.
* Logging must be increased, and data should be stored in a cloud analytics platform.
* Move to serverless architecture to facilitate elastic scaling.
* Provide authorized access to internal apps in a secure manner.
HipLocal wants to improve the resilience of their MySQL deployment, while also meeting their business and
technical requirements.
Which configuration should they choose?
A.Use the current single instance MySQL on Compute Engine and several read-only MySQL servers on
Compute Engine.
B.Use the current single instance MySQL on Compute Engine, and replicate the data to Cloud SQL in an
external master configuration.
C.Replace the current single instance MySQL instance with Cloud SQL, and configure high availability.
D.Replace the current single instance MySQL instance with Cloud SQL, and Google provides redundancy
without further configuration.
Answer: C
Explanation:
Replace the current single instance MySQL instance with Cloud SQL, and configure high availability.
Question: 51 CertyIQ
Your application is running in multiple Google Kubernetes Engine clusters. It is managed by a Deployment in each
cluster. The Deployment has created multiple replicas of your Pod in each cluster. You want to view the logs sent
to stdout for all of the replicas in your Deployment in all clusters.
Which command should you use?
Answer: B
Explanation:
B: gcloud logging read. Using the "gcloud logging read" command, you can select the appropriate cluster, node, pod, and container logs; because every cluster sends its stdout logs to Cloud Logging, a single query covers all clusters. If you instead use "kubectl logs" on the CLI, you only see one cluster at a time and each line is printed as a JSON object, which is hard to read.
Reference:
https://cloud.google.com/stackdriver/docs/solutions/gke/using-logs#accessing_your_logs
https://medium.com/google-cloud/display-gke-logs-in-a-text-format-with-kubectl-db0169be0282
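A hedged sketch of such a query, assuming all clusters write their logs to the same project and that the Pods carry an app label matching the Deployment name (both names here are hypothetical):
gcloud logging read \
  'resource.type="k8s_container" AND resource.labels.pod_name:"my-deployment"' \
  --project=my-project --limit=50 --format=json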
Question: 52 CertyIQ
You are using Cloud Build to create a new Docker image on each source code commit to a Cloud Source
Repositories repository. Your application is built on every commit to the master branch. You want to release
specific commits made to the master branch in an automated method.
What should you do?
Answer: B
Explanation:
B is correct. The question states that commits are made to the master branch, so option C does not make sense: a commit to a feature branch cannot trigger a build of the master branch.
Question: 53 CertyIQ
You are designing a schema for a table that will be moved from MySQL to Cloud Bigtable. The MySQL table is as
follows:
How should you design a row key for Cloud Bigtable for this table?
Answer: B
Explanation:
Question: 54 CertyIQ
You want to view the memory usage of your application deployed on Compute Engine.
What should you do?
Answer: B
Explanation:
Question: 55 CertyIQ
You have an analytics application that runs hundreds of queries on BigQuery every few minutes using BigQuery
API. You want to find out how much time these queries take to execute.
What should you do?
Answer: D
Explanation:
https://cloud.google.com/bigquery/docs/monitoring
Question: 56 CertyIQ
You are designing a schema for a Cloud Spanner customer database. You want to store a phone number array field
in a customer table. You also want to allow users to search customers by phone number.
How should you design this schema?
A.Create a table named Customers. Add an Array field in a table that will hold phone numbers for the customer.
B.Create a table named Customers. Create a table named Phones. Add a CustomerId field in the Phones table to
find the CustomerId from a phone number.
C.Create a table named Customers. Add an Array field in a table that will hold phone numbers for the customer.
Create a secondary index on the Array field.
D.Create a table named Customers as a parent table. Create a table named Phones, and interleave this table
into the Customer table. Create an index on the phone number field in the Phones table.
Answer: D
Explanation:
The correct answer is D. You should create a table named Customers as a parent table and a table named
Phones, and interleave this table into the Customer table. You should also create an index on the phone
number field in the Phones table. This allows you to store the phone number array field in the Customers table
and search for customers by phone number using the index on the Phones table.
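A hedged sketch of what that schema could look like in Cloud Spanner DDL; the column names and types are hypothetical:
CREATE TABLE Customers (
  CustomerId INT64 NOT NULL,
  Name       STRING(MAX),
) PRIMARY KEY (CustomerId);

CREATE TABLE Phones (
  CustomerId  INT64 NOT NULL,
  PhoneNumber STRING(32) NOT NULL,
) PRIMARY KEY (CustomerId, PhoneNumber),
  INTERLEAVE IN PARENT Customers ON DELETE CASCADE;

CREATE INDEX PhonesByNumber ON Phones(PhoneNumber);
Interleaving keeps each customer's phone rows physically close to the customer row, while the secondary index supports lookups by phone number.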
Question: 57 CertyIQ
You are deploying a single website on App Engine that needs to be accessible via the URL
http://www.altostrat.com/.
What should you do?
A.Verify domain ownership with Webmaster Central. Create a DNS CNAME record to point to the App Engine
canonical name ghs.googlehosted.com.
B.Verify domain ownership with Webmaster Central. Define an A record pointing to the single global App
Engine IP address.
C.Define a mapping in dispatch.yaml to point the domain www.altostrat.com to your App Engine service. Create
a DNS CNAME record to point to the App Engine canonical name ghs.googlehosted.com.
D.Define a mapping in dispatch.yaml to point the domain www.altostrat.com to your App Engine service. Define
an A record pointing to the single global App Engine IP address.
Answer: A
Explanation:
Reference:
https://cloud.google.com/appengine/docs/flexible/dotnet/mapping-custom-domains?hl=fa
Question: 58 CertyIQ
You are running an application on App Engine that you inherited. You want to find out whether the application is
using insecure binaries or is vulnerable to XSS attacks.
Which service should you use?
A.Cloud Amor
B.Stackdriver Debugger
C.Cloud Security Scanner
D.Stackdriver Error Reporting
Answer: C
Explanation:
Reference:
https://cloud.google.com/security-scanner
Question: 59 CertyIQ
You are working on a social media application. You plan to add a feature that allows users to upload images. These
images will be 2 MB to 1 GB in size. You want to minimize the infrastructure operations overhead for this feature.
What should you do?
A.Change the application to accept images directly and store them in the database that stores other user
information.
B.Change the application to create signed URLs for Cloud Storage. Transfer these signed URLs to the client
application to upload images to Cloud Storage.
C.Set up a web server on GCP to accept user images and create a file store to keep uploaded files. Change the
application to retrieve images from the file store.
D.Create a separate bucket for each user in Cloud Storage. Assign a separate service account to allow write
access on each bucket. Transfer service account credentials to the client application based on user information.
The application uses this service account to upload images to Cloud Storage.
Answer: B
Explanation:
Reference:
https://cloud.google.com/blog/products/storage-data-transfer/uploading-images-directly-to-cloud-storage-b
y-using-signed-url
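A hedged sketch of generating a V4 signed upload URL with the Python client library; the bucket and object names are hypothetical, and the credentials in use must be able to sign URLs:
import datetime
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("user-images-bucket")
blob = bucket.blob("uploads/user-123/photo.jpg")

# The client application can PUT the image bytes directly to this URL,
# so upload traffic never passes through the application servers.
url = blob.generate_signed_url(
    version="v4",
    expiration=datetime.timedelta(minutes=15),
    method="PUT",
    content_type="image/jpeg",
)
print(url)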
Question: 60 CertyIQ
Your application is built as a custom machine image. You have multiple unique deployments of the machine image.
Each deployment is a separate managed instance group with its own template. Each deployment requires a unique
set of configuration values. You want to provide these unique values to each deployment but use the same custom
machine image in all deployments. You want to use out-of-the-box features of Compute Engine.
What should you do?
Answer: D
Explanation:
Option D is the correct answer. Instance metadata is metadata that is associated with a Compute Engine
instance and can be used to pass configuration values to the instance at startup. It can be accessed from
within the instance itself, allowing you to use the same custom machine image in all deployments and still
provide unique configuration values to each deployment. Option A is not a good solution because the
persistent disk is not automatically attached to the instance at startup and is not intended for storing
configuration values. Option B is not a good solution because Cloud Bigtable is a NoSQL database, which is
not well-suited for storing configuration values. Option C is not a good solution because the startup script is
executed after the instance has started, so it cannot be used to pass configuration values to the instance at
startup.
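A hedged sketch of passing a per-deployment value through instance metadata and reading it from inside the instance; the template name, metadata key, and image name are hypothetical:
# Set a custom metadata value on the deployment's instance template:
gcloud compute instance-templates create frontend-eu-template \
  --image=my-custom-image --image-project=my-project \
  --metadata=app-config=eu-settings

# Inside any instance created from that template, read the value back:
curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/attributes/app-config"
Each managed instance group gets its own template with its own metadata values, while all templates reference the same custom machine image.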
Question: 61 CertyIQ
Your application performs well when tested locally, but it runs significantly slower after you deploy it to a
Compute Engine instance. You need to diagnose the problem.
What should you do?
A.File a ticket with Cloud Support indicating that the application performs faster locally.
B.Use Cloud Debugger snapshots to look at a point-in-time execution of the application.
C.Use Cloud Profiler to determine which functions within the application take the longest amount of time.
D.Add logging commands to the application and use Cloud Logging to check where the latency problem occurs.
Answer: C
Explanation:
A is incorrect because the argument "it worked on my machine" is never valid.
B is incorrect because Debugger snapshots only let us review the application at a single point in time.
C is correct because it provides latency per function and historical latency information.
D is incorrect because, while it would work, it requires a lot of effort and is not the optimal choice.
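As an illustration, a minimal sketch of enabling Cloud Profiler in a Python application (the service name and version are hypothetical):
import googlecloudprofiler

# Starts the profiling agent; it continuously samples CPU and wall time
# and sends the data to Cloud Profiler for per-function flame graphs.
googlecloudprofiler.start(
    service="my-service",
    service_version="1.0.0",
    verbose=3,
)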
Question: 62 CertyIQ
You have an application running in App Engine. Your application is instrumented with Stackdriver Trace. The
/product-details request reports details about four known unique products at /sku-details as shown below. You
want to reduce the time it takes for the request to complete.
What should you do?
Answer: C
Explanation:
Option C is the correct answer. By changing /product-details to perform the requests in parallel, you can
reduce the time it takes for the request to complete by making multiple requests at the same time rather than
sequentially. This will allow you to retrieve the information for all four products more quickly. Option A is not a
good solution because increasing the size of the instance class may not necessarily reduce the time it takes
for the request to complete. Option B is not a good solution because changing the Persistent Disk type to SSD
will not have any impact on the time it takes for the request to complete. Option D is not a good solution
because storing the /sku-details information in a database and replacing the webservice call with a database
query will not necessarily reduce the time it takes for the request to complete, and it will add unnecessary
complexity to the application.
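A minimal sketch of making the four /sku-details calls in parallel from Python; the URL and SKU list are hypothetical, and the requests library is assumed to be available:
import concurrent.futures
import requests

SKUS = ["sku-1", "sku-2", "sku-3", "sku-4"]  # hypothetical product SKUs

def fetch_sku(sku):
    # Each call was previously made sequentially; here they run concurrently.
    return requests.get(f"https://example.com/sku-details/{sku}").json()

with concurrent.futures.ThreadPoolExecutor(max_workers=4) as executor:
    details = list(executor.map(fetch_sku, SKUS))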
Question: 63 CertyIQ
Your company has a data warehouse that keeps your application information in BigQuery. The BigQuery data
warehouse keeps 2 PBs of user data. Recently, your company expanded your user base to include EU users and
needs to comply with these requirements:
* Your company must be able to delete all user account information upon user request.
* All EU user data must be stored in a single region specifically for EU users.
Which two actions should you take? (Choose two.)
Answer: BE
Explanation:
B & E. Data is already stored in BigQuery, I do not see any reason to have anything to do with Cloud Storage.
Also, BigQuery allows DML to do updates and deletes. So I would choose B & E
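For illustration only, a hedged sketch of a per-user delete with BigQuery DML; the dataset, table, and column names are hypothetical:
DELETE FROM `eu_dataset.user_activity`
WHERE user_id = 'user-123';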
Question: 64 CertyIQ
Your App Engine standard configuration is as follows:
service: production
instance_class: B1
You want to limit the application to 5 instances.
Which code snippet should you include in your configuration?
Answer: D
Explanation:
D is correct. The B1 instance class only supports basic or manual scaling, so the instance count is capped with basic_scaling and its max_instances setting.
https://cloud.google.com/appengine/docs/legacy/standard/python/how-instances-are-
managed#scaling_types
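A sketch of the resulting configuration, assuming option D refers to basic scaling with a maximum instance count:
service: production
instance_class: B1
basic_scaling:
  max_instances: 5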
Question: 65 CertyIQ
Your analytics system executes queries against a BigQuery dataset. The SQL query is executed in batch and
passes the contents of a SQL file to the BigQuery
CLI. Then it redirects the BigQuery CLI output to another process. However, you are getting a permission error
from the BigQuery CLI when the queries are executed.
You want to resolve the issue. What should you do?
A.Grant the service account BigQuery Data Viewer and BigQuery Job User roles.
B.Grant the service account BigQuery Data Editor and BigQuery Data Viewer roles.
C.Create a view in BigQuery from the SQL query and SELECT* from the view in the CLI.
D.Create a new dataset in BigQuery, and copy the source table to the new dataset Query the new dataset and
table from the CLI.
Answer: A
Explanation:
The correct answer is Option A. In order to allow the analytics system to execute queries against the BigQuery
dataset, the service account must be granted the BigQuery Data Viewer and BigQuery Job User roles. The
BigQuery Data Viewer role allows the service account to read data from tables, and the BigQuery Job User
role allows the service account to run jobs, which includes executing queries. Option B is not a good solution
because the BigQuery Data Editor role allows the service account to modify data in tables, which is not
necessary to execute queries. Option C is not a good solution because creating a view in BigQuery and
selecting from the view in the CLI will not resolve the permission issue. Option D is not a good solution
because creating a new dataset and copying the source table to the new dataset will not resolve the
permission issue.
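A hedged sketch of granting the two roles with gcloud; the project ID and service account address are hypothetical:
gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:analytics-sa@my-project.iam.gserviceaccount.com" \
  --role="roles/bigquery.dataViewer"
gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:analytics-sa@my-project.iam.gserviceaccount.com" \
  --role="roles/bigquery.jobUser"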
Question: 66 CertyIQ
Your application is running on Compute Engine and is showing sustained failures for a small number of requests.
You have narrowed the cause down to a single
Compute Engine instance, but the instance is unresponsive to SSH.
What should you do next?
Answer: B
Explanation:
Option B is correct. According to Google Cloud documentation, if a Compute Engine instance is unresponsive
to SSH and you have narrowed the cause down to a single instance, you should enable and check the serial
port output. The serial port output is a log of system messages and can help you diagnose the issue causing
the instance to become unresponsive. You can view the serial port output in the Cloud Console or retrieve it with gcloud, and you can also enable the interactive serial console to log in and troubleshoot even when SSH is unavailable. This will allow you to review the logs and potentially identify the cause of the problem.
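For example, retrieving the serial port output with gcloud (instance name and zone are hypothetical):
gcloud compute instances get-serial-port-output my-instance \
  --zone=us-central1-a --port=1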
Question: 67 CertyIQ
You configured your Compute Engine instance group to scale automatically according to overall CPU usage.
However, your application's response latency increases sharply before the cluster has finished adding up
instances. You want to provide a more consistent latency experience for your end users by changing the
configuration of the instance group autoscaler.
Which two configuration changes should you make? (Choose two.)
Answer: BD
Explanation:
Adding a label won't solve the issue, so A is wrong. Removing the health check is not recommended, so E is wrong as well. Increasing the CPU target is wrong, since scaling would then only start at a higher usage level, which is not what we want. B and D are the correct options.
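Assuming the two changes are a shorter cool-down period and a lower target CPU utilization (the answer options themselves are not shown here), a hedged sketch of applying them with gcloud; the group name, zone, and values are hypothetical:
gcloud compute instance-groups managed set-autoscaling my-mig \
  --zone=us-central1-a \
  --max-num-replicas=10 \
  --target-cpu-utilization=0.6 \
  --cool-down-period=60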
Question: 68 CertyIQ
You have an application controlled by a managed instance group. When you deploy a new version of the
application, costs should be minimized and the number of instances should not increase. You want to ensure that,
when each new instance is created, the deployment only continues if the new instance is healthy.
What should you do?
Answer: B
Explanation:
As others suggested, B is the correct option. I am adding this to highlight the community choice.
Question: 69 CertyIQ
Your application requires service accounts to be authenticated to GCP products via credentials stored on its host
Compute Engine virtual machine instances. You want to distribute these credentials to the host instances as
securely as possible.
What should you do?
A.Use HTTP signed URLs to securely provide access to the required resources.
B.Use the instance's service account Application Default Credentials to authenticate to the required resources.
C.Generate a P12 file from the GCP Console after the instance is deployed, and copy the credentials to the host
instance before starting the application.
D.Commit the credential JSON file into your application's source repository, and have your CI/CD process
package it with the software that is deployed to the instance.
Answer: B
Explanation:
Use the instance's service account Application Default Credentials to authenticate to the required
resources.Using the instance's service account Application Default Credentials is the most secure method for
distributing credentials to the host instances. This method allows the instance to automatically authenticate
with the required resources using the instance's built-in service account, without requiring the credentials to
be stored on the instance or transmitted over the network. This eliminates the risk of the credentials being
compromised or exposed. Additionally, this method is the most convenient, as it requires no manual steps to
set up the credentials on the instance.
Reference:
https://cloud.google.com/compute/docs/api/how-tos/authorization
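As an illustration, a minimal sketch in Python: a client library running on the instance picks up the attached service account automatically through Application Default Credentials, so no key file is ever copied to the VM (the bucket name is hypothetical):
from google.cloud import storage

# No explicit credentials: the client uses the VM's attached service account
# via Application Default Credentials.
client = storage.Client()
bucket = client.bucket("my-example-bucket")
for blob in bucket.list_blobs():
    print(blob.name)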
Question: 70 CertyIQ
Your application is deployed in a Google Kubernetes Engine (GKE) cluster. You want to expose this application
publicly behind a Cloud Load Balancing HTTP(S) load balancer.
What should you do?
Answer: A
Explanation:
Ingress for HTTP(S) Load Balancing: Google Kubernetes Engine (GKE) provides a built-in and managed Ingress controller called GKE Ingress. This controller implements Ingress resources as Google Cloud load balancers for HTTP(S) workloads in GKE.
Reference:
https://cloud.google.com/kubernetes-engine/docs/concepts/ingress
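A minimal sketch of an Ingress that GKE turns into an external HTTP(S) load balancer; the Service name and port are hypothetical:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
spec:
  defaultBackend:
    service:
      name: my-app-service   # Service exposing the Deployment's Pods
      port:
        number: 80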
Question: 71 CertyIQ
Your company is planning to migrate their on-premises Hadoop environment to the cloud. Increasing storage cost
and maintenance of data stored in HDFS is a major concern for your company. You also want to make minimal
changes to existing data analytics jobs and existing architecture.
How should you proceed with the migration?
A.Migrate your data stored in Hadoop to BigQuery. Change your jobs to source their information from BigQuery
instead of the on-premises Hadoop environment.
B.Create Compute Engine instances with HDD instead of SSD to save costs. Then perform a full migration of
your existing environment into the new one in Compute Engine instances.
C.Create a Cloud Dataproc cluster on Google Cloud Platform, and then migrate your Hadoop environment to the
new Cloud Dataproc cluster. Move your HDFS data into larger HDD disks to save on storage costs.
D.Create a Cloud Dataproc cluster on Google Cloud Platform, and then migrate your Hadoop code objects to
the new cluster. Move your data to Cloud Storage and leverage the Cloud Dataproc connector to run jobs on
that data.
Answer: D
Explanation:
"Keeping your data in a persistent HDFS cluster using Dataproc is more expensive than storing your data in Cloud Storage, which is what we recommend, as explained later. Keeping data in an HDFS cluster also limits your ability to use your data with other Google Cloud products."
"Google Cloud includes Dataproc, which is a managed Hadoop and Spark environment. You can use Dataproc to run most of your existing jobs with minimal alteration, so you don't need to move away from all of the Hadoop tools you already know."
D is the answer.
Reference:
https://cloud.google.com/architecture/hadoop/hadoop-gcp-migration-overview
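In practice, existing jobs usually only need their input and output paths changed from hdfs:// to gs://. A hedged sketch of submitting an existing Spark job on Dataproc against Cloud Storage data; the cluster, bucket, jar, and class names are hypothetical:
gcloud dataproc jobs submit spark \
  --cluster=my-dataproc-cluster --region=us-central1 \
  --jars=gs://my-bucket/jobs/analytics-job.jar \
  --class=com.example.AnalyticsJob \
  -- gs://my-bucket/input/ gs://my-bucket/output/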
Question: 72 CertyIQ
Your data is stored in Cloud Storage buckets. Fellow developers have reported that data downloaded from Cloud
Storage is resulting in slow API performance.
You want to research the issue to provide details to the GCP support team.
Which command should you run?
A.gsutil test -o output.json gs://my-bucket
B.gsutil perfdiag -o output.json gs://my-bucket
C.gcloud compute scp example-instance:~/test-data -o output.json gs://my-bucket
D.gcloud services test -o output.json gs://my-bucket
Answer: B
Explanation:
gsutil perfdiag -o output.json gs://my-bucketThe gsutil perfdiag command is used to diagnose performance
issues with Cloud Storage. It can be used to perform various tests such as download, upload, and metadata
operations. By using the -o flag, you can specify an output file where the results of the tests will be stored in
JSON format. This output file can then be provided to the GCP support team to help them investigate the
issue.
Reference:
https://groups.google.com/forum/#!topic/gce-discussion/xBl9Jq5HDsY
Question: 73 CertyIQ
You are using Cloud Build to promote a Docker image to Development, Test, and Production environments.
You need to ensure that the same Docker image is deployed to each of these environments.
How should you identify the Docker image in your build?
Answer: C
Explanation:
C. Use the digest of the Docker image. Using the digest of the Docker image is the most reliable way to ensure that the exact same Docker image is deployed to each environment. The digest is a hash of the image content and metadata, which is unique to each image. This means that even if the image is tagged with different versions or names, the digest will remain the same as long as the content and metadata are identical.
On the other hand, using the latest Docker image tag or a semantic version tag may not guarantee that the exact same image is deployed to each environment. These tags are mutable and can be overwritten or updated, which could result in different images being deployed to different environments. Using a unique Docker image name could work, but it is more difficult to manage and track multiple images with different names, especially with many environments or frequent updates.
In short, the answer is C because you need to be sure the same image is used across all three environments, and a version tag can change between deployments to those environments.
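As an illustration, a hedged sketch of resolving the digest once and then referencing that immutable image everywhere; the project and image names are hypothetical, as is the assumption (for illustration only) that the environments are Cloud Run services:
# Resolve the digest once after the build:
gcloud container images describe gcr.io/my-project/my-app:latest \
  --format='value(image_summary.digest)'

# Reference the same immutable image in every environment:
gcloud run deploy my-app-prod \
  --image=gcr.io/my-project/my-app@sha256:<digest> --region=us-central1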
Question: 74 CertyIQ
Your company has created an application that uploads a report to a Cloud Storage bucket. When the report is
uploaded to the bucket, you want to publish a message to a Cloud Pub/Sub topic. You want to implement a solution
that will take a small amount of effort to implement.
What should you do?
A.Configure the Cloud Storage bucket to trigger Cloud Pub/Sub notifications when objects are modified.
B.Create an App Engine application to receive the file; when it is received, publish a message to the Cloud
Pub/Sub topic.
C.Create a Cloud Function that is triggered by the Cloud Storage bucket. In the Cloud Function, publish a
message to the Cloud Pub/Sub topic.
D.Create an application deployed in a Google Kubernetes Engine cluster to receive the file; when it is received,
publish a message to the Cloud Pub/Sub topic.
Answer: A
Explanation:
https://cloud.google.com/storage/docs/reporting-changes#enabling
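For example, a sketch of enabling the notifications with gsutil; the topic and bucket names are hypothetical:
gsutil notification create -t report-uploads -f json -e OBJECT_FINALIZE gs://my-report-bucket
The OBJECT_FINALIZE event fires when a new object (the uploaded report) is created in the bucket, and Cloud Storage publishes the message to the Pub/Sub topic with no custom code to maintain.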
Question: 75 CertyIQ
Your teammate has asked you to review the code below, which is adding a credit to an account balance in Cloud
Datastore.
Which improvement should you suggest your teammate make?
Answer: B
Explanation:
https://cloud.google.com/datastore/docs/concepts/transactions#uses_for_transactions
"This requires a transaction because the value of balance in an entity may be updated by another user after this code fetches the object, but before it saves the modified object. Without a transaction, the user's request uses the value of balance prior to the other user's update, and the save overwrites the new value. With a transaction, the application is told about the other user's update."
B is the answer.
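A minimal sketch of the suggested improvement using the google-cloud-datastore Python client; the kind, key, and amount are hypothetical:
from google.cloud import datastore

client = datastore.Client()

def add_credit(account_id, amount):
    key = client.key("Account", account_id)
    # The read and the write happen inside one transaction, so a concurrent
    # update to the balance causes this transaction to retry or fail instead
    # of silently overwriting the other user's change.
    with client.transaction():
        account = client.get(key)
        account["balance"] += amount
        client.put(account)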
Question: 76 CertyIQ
Your company stores their source code in a Cloud Source Repositories repository. Your company wants to build
and test their code on each source code commit to the repository and requires a solution that is managed and has
minimal operations overhead.
Which method should they use?
A.Use Cloud Build with a trigger configured for each source code commit.
B.Use Jenkins deployed via the Google Cloud Platform Marketplace, configured to watch for source code
commits.
C.Use a Compute Engine virtual machine instance with an open source continuous integration tool, configured
to watch for source code commits.
D.Use a source code commit trigger to push a message to a Cloud Pub/Sub topic that triggers an App Engine
service to build the source code.
Answer: A
Explanation:
Use Cloud Build with a trigger configured for each source code commit.Cloud Build is a fully managed service
for building, testing, and deploying software quickly. It integrates with Cloud Source Repositories and can be
triggered by source code commits, which makes it an ideal solution for building and testing code on each
commit. It requires minimal operations overhead as it is fully managed by Google Cloud.
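A hedged sketch of a cloudbuild.yaml that builds and tests on each commit; the image name and test command are hypothetical:
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-app:$COMMIT_SHA', '.']
- name: 'gcr.io/cloud-builders/docker'
  args: ['run', 'gcr.io/$PROJECT_ID/my-app:$COMMIT_SHA', 'npm', 'test']
images:
- 'gcr.io/$PROJECT_ID/my-app:$COMMIT_SHA'
A Cloud Build trigger on the Cloud Source Repositories repository runs this configuration automatically on every commit.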
Question: 77 CertyIQ
You are writing a Compute Engine hosted application in project A that needs to securely authenticate to a Cloud
Pub/Sub topic in project B.
What should you do?
A.Configure the instances with a service account owned by project B. Add the service account as a Cloud
Pub/Sub publisher to project A.
B.Configure the instances with a service account owned by project A. Add the service account as a publisher on
the topic.
C.Configure Application Default Credentials to use the private key of a service account owned by project B. Add
the service account as a Cloud Pub/Sub publisher to project A.
D.Configure Application Default Credentials to use the private key of a service account owned by project A. Add
the service account as a publisher on the topic
Answer: B
Explanation:
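The instances in project A run as a service account owned by project A; granting that account the publisher role on the topic in project B is what enables cross-project publishing. A hedged sketch of the grant; the topic, project, and service account names are hypothetical:
gcloud pubsub topics add-iam-policy-binding my-topic --project=project-b \
  --member="serviceAccount:app-sa@project-a.iam.gserviceaccount.com" \
  --role="roles/pubsub.publisher"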
Question: 78 CertyIQ
You are developing a corporate tool on Compute Engine for the finance department, which needs to authenticate
users and verify that they are in the finance department. All company employees use G Suite.
What should you do?
A.Enable Cloud Identity-Aware Proxy on the HTTP(s) load balancer and restrict access to a Google Group
containing users in the finance department. Verify the provided JSON Web Token within the application.
B.Enable Cloud Identity-Aware Proxy on the HTTP(s) load balancer and restrict access to a Google Group
containing users in the finance department. Issue client-side certificates to everybody in the finance team and
verify the certificates in the application.
C.Configure Cloud Armor Security Policies to restrict access to only corporate IP address ranges. Verify the
provided JSON Web Token within the application.
D.Configure Cloud Armor Security Policies to restrict access to only corporate IP address ranges. Issue client
side certificates to everybody in the finance team and verify the certificates in the application.
Answer: A
Explanation:
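With IAP in front of the load balancer, the application should still verify the signed header it receives. A hedged sketch of verifying the IAP JWT in Python; the expected audience value is hypothetical:
from google.auth.transport import requests
from google.oauth2 import id_token

def verify_iap_jwt(iap_jwt, expected_audience):
    # IAP signs the x-goog-iap-jwt-assertion header with keys published at
    # the certs URL below; verify_token checks signature, expiry and audience.
    return id_token.verify_token(
        iap_jwt,
        requests.Request(),
        audience=expected_audience,
        certs_url="https://www.gstatic.com/iap/verify/public_key",
    )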
Question: 79 CertyIQ
Your API backend is running on multiple cloud providers. You want to generate reports for the network latency of
your API.
Which two steps should you take? (Choose two.)
Answer: AC
Explanation:
The two steps you should take to generate reports for the network latency of your API running on multiple
cloud providers are:A. Use Zipkin collector to gather data: Zipkin is a distributed tracing system that helps you
gather data about the latency of requests made to your API. It allows you to trace requests as they flow
through your system, and provides insight into the performance of your services. You can use Zipkin collectors
to collect data from multiple cloud providers, and then generate reports to analyze the latency of your API.C.
Use Stackdriver Trace to generate reports: Stackdriver Trace is a distributed tracing system that helps you
trace requests across multiple services and provides detailed performance data about your applications. It
allows you to visualize and analyze the performance of your API and its dependencies. You can use
Stackdriver Trace to generate reports about the network latency of your API running on multiple cloud
providers.Therefore, the correct options are A and C.
https://cloud.google.com/trace/docs/zipkin
Question: 80 CertyIQ
Case study -
This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to
complete each case. However, there may be additional case studies and sections on this exam. You must manage
your time to ensure that you are able to complete all questions included on this exam in the time provided.
To answer the questions included in a case study, you will need to reference information that is provided in the
case study. Case studies might contain exhibits and other resources that provide more information about the
scenario that is described in the case study. Each question is independent of the other questions in this case study.
At the end of this case study, a review screen will appear. This screen allows you to review your answers and to
make changes before you move to the next section of the exam. After you begin a new section, you cannot return
to this section.
Company Overview -
HipLocal is a community application designed to facilitate communication between people in close proximity. It is
used for event planning and organizing sporting events, and for businesses to connect with their local
communities. HipLocal launched recently in a few neighborhoods in Dallas and is rapidly growing into a global
phenomenon. Its unique style of hyper-local community communication and business outreach is in demand
around the world.
Executive Statement -
We are the number one local community app; it's time to take our local community services global. Our venture
capital investors want to see rapid growth and the same great experience for new local and virtual communities
that come online, whether their members are 10 or 10000 miles away from each other.
Solution Concept -
HipLocal wants to expand their existing service, with updated functionality, in new regions to better serve their
global customers. They want to hire and train a new team to support these regions in their time zones. They will
need to ensure that the application scales smoothly and provides clear uptime data.
Business Requirements -
HipLocal's investors want to expand their footprint and support the increase in demand they are seeing. Their
requirements are:
* Expand availability of the application to new regions.
* Increase the number of concurrent users that can be supported.
* Ensure a consistent experience for users when they travel to different regions.
* Obtain user activity metrics to better understand how to monetize their product.
* Ensure compliance with regulations in the new regions (for example, GDPR).
* Reduce infrastructure management time and cost.
* Adopt the Google-recommended practices for cloud computing.
Technical Requirements -
* The application and backend must provide usage metrics and monitoring.
* APIs require strong authentication and authorization.
* Logging must be increased, and data should be stored in a cloud analytics platform.
* Move to serverless architecture to facilitate elastic scaling.
* Provide authorized access to internal apps in a secure manner.
Which database should HipLocal use for storing user activity?
A.BigQuery
B.Cloud SQL
C.Cloud Spanner
D.Cloud Datastore
Answer: A
Explanation:
The case study states: "Obtain user activity metrics to better understand how to monetize their product," which means the team will need to analyze user activity, so the answer is A (BigQuery).
BigQuery is the best fit for user activity analysis: the activity data is essentially raw event data that will be used to segment users by age, preferences, and so on, which is exactly the kind of analytics workload BigQuery is designed for.