Practice Test 3
Question 1: Correct
You have two compute instances in the same VPC but in different regions. You can
SSH from one instance to another instance using their internal IP address but not
their external IP address. What could be the reason for SSH failing on external IP
address?
The combination of compute instance network tags and VPC firewall rules only
allows SSH from the subnet's IP range.
(Correct)
The compute instances are not using the right cross region SSH IAM permissions
Explanation
The compute instances have a static IP for their external IP. is not right.
Whether the external IP is static or ephemeral is not a reason for failed SSH connections.
When the firewall rules are set up correctly, SSH works fine on compute instances with
either static or ephemeral external IP addresses.
The combination of compute instance network tags and VPC firewall rules only
allows SSH from the subnet's IP range. is the right answer.
The combination of compute instance network tags and VPC firewall rules can certainly
result in SSH traffic being allowed only from the subnet's IP range. The firewall rule can
be configured to allow SSH traffic from just the VPC range, e.g. 10.0.0.0/8. In this
scenario, all SSH traffic from within the VPC is accepted but external SSH traffic is blocked.
Ref: https://cloud.google.com/vpc/docs/using-firewalls
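For illustration only (not part of the original question), a firewall rule producing this behaviour could be created along the following lines; the network name, tag, and CIDR range are placeholders:
gcloud compute firewall-rules create allow-ssh-from-subnet --network=my-vpc --allow=tcp:22 --source-ranges=10.0.0.0/8 --target-tags=allow-internal-ssh
With this rule (and no rule allowing tcp:22 from 0.0.0.0/0) applied to the instances via the allow-internal-ssh tag, SSH over internal IP addresses succeeds while SSH over external IP addresses is blocked.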
Question 2: Incorrect
You have asked your supplier to send you a purchase order and you want to
enable them to upload the file to a cloud storage bucket within the next 4 hours.
Your supplier does not have a Google account. You want to follow Google
recommended practices. What should you do?
Create a service account with just the permissions to upload files to the bucket. Create a
JSON key for the service account. Execute the command gsutil signurl -m PUT -d 4h
{JSON Key File} gs://{bucket}/**.
(Correct)
Create a service account with just the permissions to upload files to the bucket. Create a
JSON key for the service account. Execute the command gsutil signurl -d 4h {JSON Key
File} gs://{bucket}/.
(Incorrect)
Create a service account with just the permissions to upload files to the bucket. Create a
JSON key for the service account. Execute the command gsutil signurl -httpMethod PUT
-d 4h {JSON Key File} gs://{bucket}/**.
Create a JSON key for the Default Compute Engine Service Account. Execute the
command gsutil signurl -m PUT -d 4h {JSON Key File} gs://{bucket}/**.
Explanation
Create a service account with just the permissions to upload files to the
bucket. Create a JSON key for the service account. Execute the command
gsutil signurl -d 4h {JSON Key File} gs://{bucket}/. is not right.
This command creates signed URLs for retrieving existing objects. It does not specify an
HTTP method, and in the absence of one, the default HTTP method is GET.
Ref: https://cloud.google.com/storage/docs/gsutil/commands/signurl
Create a service account with just the permissions to upload files to the
bucket. Create a JSON key for the service account. Execute the command
gsutil signurl -httpMethod PUT -d 4h {JSON Key File} gs://{bucket}/**. is not
right.
gsutil signurl does not accept -httpMethod parameter.
Create a JSON key for the Default Compute Engine Service Account. Execute
the command gsutil signurl -m PUT -d 4h {JSON Key File} gs://{bucket}/**. is
not right.
Using the default compute engine service account violates the principle of least
privilege. The recommended approach is to create a service account with just the right
permissions needed and create JSON keys for this service account to use with gsutil
signurl command.
Create a service account with just the permissions to upload files to the
bucket. Create a JSON key for the service account. Execute the command
gsutil signurl -m PUT -d 4h {JSON Key File} gs://{bucket}/**. is the right
answer.
This command correctly creates a signed url that is valid for 4 hours and allows PUT
(through the -m flag) operations on the bucket. The supplier can then use the signed
URL to upload a file to this bucket within 4 hours.
Ref: https://cloud.google.com/storage/docs/gsutil/commands/signurl
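For illustration only, once the signed URL has been generated, the supplier could upload the purchase order with a plain HTTP PUT, for example with curl (the file name and the URL placeholder below are assumptions):
curl -X PUT --upload-file purchase-order.pdf "<SIGNED_URL>"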
Question 3: Incorrect
You have two Kubernetes resource configuration files.
1. deployment.yaml - creates a deployment
2. service.yaml - sets up a LoadBalancer service to expose the pods.
You don't have a GKE cluster in the development project and you need to provision one.
Which of the commands below would you run in Cloud Shell to create a GKE cluster and
deploy the yaml configuration files to create a deployment and service?
(Incorrect)
1. gcloud container clusters create cluster-1 --zone=us-central1-a
(Correct)
Explanation
1. kubectl container clusters create cluster-1 --zone=us-central1-a
2. kubectl container clusters get-credentials cluster-1 --zone=us-central1-a
3. kubectl apply -f deployment.yaml
4. kubectl apply -f service.yaml. is not right.
kubectl does not support a kubectl container clusters create command and cannot be
used to create GKE clusters. To create a GKE cluster, you need to execute the gcloud
container clusters create command.
Ref: https://cloud.google.com/sdk/gcloud/reference/container/clusters/create
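For completeness, the full sequence implied by the correct option would look roughly like the following (cluster name and zone are taken from the option above):
1. gcloud container clusters create cluster-1 --zone=us-central1-a
2. gcloud container clusters get-credentials cluster-1 --zone=us-central1-a
3. kubectl apply -f deployment.yaml
4. kubectl apply -f service.yaml
gcloud creates the cluster and fetches kubectl credentials; kubectl then applies the two configuration files to create the deployment and service.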
Question 4: Correct
Your company plans to store sensitive PII data in a cloud storage bucket. Your
compliance department doesn’t like encrypting sensitive PII data with Google
managed keys and has asked you to ensure the new objects uploaded to this
bucket are encrypted by customer managed encryption keys. What should you do?
In the bucket advanced settings, select Customer-managed key and then select a
Cloud KMS encryption key.
(Correct)
In the bucket advanced settings, select Customer-supplied key and then select a
Cloud KMS encryption key.
Explanation
In the bucket advanced settings, select the Customer-supplied key and then
select a Cloud KMS encryption key. is not right.
The customer-supplied key is not an option when selecting the encryption method in
the console.
In the bucket advanced settings, select Customer-managed key and then select
a Cloud KMS encryption key. is the right answer.
Our compliance department wants us to use customer-managed encryption keys. We
can select the Customer-managed key option and provide a Cloud KMS encryption key to
encrypt objects with the customer-managed key. This fits our requirements.
Ref: https://cloud.google.com/storage/docs/encryption/using-customer-managed-
keys#add-object-key
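The same default-key configuration can also be applied from the command line rather than the console; a rough equivalent, with a placeholder key resource name and bucket, is:
gsutil kms encryption -k projects/[PROJECT]/locations/[LOCATION]/keyRings/[KEYRING]/cryptoKeys/[KEY] gs://[BUCKET]
This sets the bucket's default encryption key so that new objects are encrypted with the customer-managed key.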
Question 5: Correct
You want to create a Google Cloud Storage regional bucket logs-archive in the Los
Angeles region (us-west2). You want to use coldline storage class to minimize
costs and you want to retain files for 10 years. Which of the following commands
should you run to create this bucket?
(Correct)
Explanation
gsutil mb -l us-west2 -s nearline --retention 10y gs://logs-archive. is not
right.
This command creates a bucket that uses the Nearline storage class, whereas we want the
Coldline storage class.
Ref: https://cloud.google.com/storage/docs/gsutil/commands/mb
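By contrast, the likely intended command simply swaps the storage class, along these lines:
gsutil mb -l us-west2 -s coldline --retention 10y gs://logs-archive
Here -l sets the location, -s the storage class, and --retention the bucket retention period.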
Question 6: Incorrect
You want to migrate an application from Google App Engine Standard to Google
App Engine Flex. Your application is currently serving live traffic and you want to
ensure everything is working in Google App Engine Flex before migrating all
traffic. You want to minimize effort and ensure availability of service. What should
you do?
(Correct)
1. Set env: app-engine-flex in app.yaml
(Incorrect)
Explanation
1. Set env: flex in app.yaml
2. gcloud app deploy --version=[NEW_VERSION]
3. Validate [NEW_VERSION] in App Engine Flex
4. gcloud app versions migrate [NEW_VERSION]. is not right.
Executing gcloud app deploy --version=[NEW_VERSION] without --no-promote would
deploy the new version and immediately promote it to serve traffic. We don't want this
version to receive traffic as we would like to validate the version first before sending it
traffic.
Ref: https://cloud.google.com/sdk/gcloud/reference/app/versions/migrate
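A sketch of the approach the explanation points towards - deploy without promoting, validate, then migrate traffic - would look roughly like this (the version name v2 is a placeholder):
1. Set env: flex in app.yaml
2. gcloud app deploy --version=v2 --no-promote
3. Validate v2 in App Engine Flex
4. gcloud app versions migrate v2
--no-promote deploys the new version without routing traffic to it, and gcloud app versions migrate then shifts traffic once validation passes, keeping the service available throughout.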
Question 7: Incorrect
You developed an application that lets users upload statistical files and
subsequently run analytics on this data. You chose to use Google Cloud Storage
and BigQuery respectively for these requirements as they are highly available and
scalable. You have a docker image for your application code, and you plan to
deploy on your on-premises Kubernetes clusters. Your on-prem kubernetes cluster
needs to connect to Google Cloud Storage and BigQuery and you want to do this
in a secure way following Google recommended practices. What should you do?
Create a new service account, grant it the least viable privileges to the required
services, generate and download a JSON key. Use the JSON key to authenticate
inside the application.
(Correct)
Use the default service account for App Engine, which already has the required
permissions.
Create a new service account, with editor permissions, generate and download a
key. Use the key to authenticate inside the application.
(Incorrect)
Use the default service account for Compute Engine, which already has the
required permissions.
Explanation
Use the default service account for Compute Engine, which already has the
required permissions. is not right.
The Compute Engine default service account is created with the Cloud IAM project
editor role.
Ref: https://cloud.google.com/compute/docs/access/service-
accounts#default_service_account
The project editor role includes all viewer permissions, plus permissions for actions that
modify state, such as changing existing resources. Using a service account that is over-
privileged falls foul of the principle of least privilege. Google recommends you enforce
the principle of least privilege by ensuring that members have only the permissions that
they actually need.
Ref: https://cloud.google.com/iam/docs/understanding-roles
Use the default service account for App Engine, which already has the
required permissions. is not right.
App Engine default service account has the Editor role in the project (Same as the
default service account for Compute Engine).
Ref: https://cloud.google.com/appengine/docs/standard/python/service-account
The project editor role includes all viewer permissions, plus permissions for actions that
modify state, such as changing existing resources. Using a service account that is over-
privileged falls foul of the principle of least privilege. Google recommends you enforce
the principle of least privilege by ensuring that members have only the permissions that
they actually need.
Ref: https://cloud.google.com/iam/docs/understanding-roles
Create a new service account, with editor permissions, generate and download
a key. Use the key to authenticate inside the application. is not right.
The project editor role includes all viewer permissions, plus permissions for actions that
modify state, such as changing existing resources. Using a service account that is over-
privileged falls foul of the principle of least privilege. Google recommends you enforce
the principle of least privilege by ensuring that members have only the permissions that
they actually need.
Ref: https://cloud.google.com/iam/docs/understanding-roles
Create a new service account, grant it the least viable privileges to the
required services, generate and download a JSON key. Use the JSON key to
authenticate inside the application. is the right answer.
Using a new service account with just the least viable privileges for the required services
follows the principle of least privilege. To use a service account outside of Google Cloud,
such as on other platforms or on-premises, you must first establish the identity of the
service account. Public/private key pairs provide a secure way of accomplishing this
goal. Once you have the key, you can use it in your application to authenticate
connections to Cloud Storage and BigQuery.
Ref: https://cloud.google.com/iam/docs/creating-managing-service-account-
keys#creating_service_account_keys
Ref: https://cloud.google.com/iam/docs/recommender-overview
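As a rough sketch of that flow (the service account name, project ID, and the specific roles below are assumptions, not from the original question):
1. gcloud iam service-accounts create onprem-analytics --display-name="On-prem analytics app"
2. gcloud projects add-iam-policy-binding [PROJECT_ID] --member="serviceAccount:onprem-analytics@[PROJECT_ID].iam.gserviceaccount.com" --role="roles/storage.objectAdmin"
3. gcloud projects add-iam-policy-binding [PROJECT_ID] --member="serviceAccount:onprem-analytics@[PROJECT_ID].iam.gserviceaccount.com" --role="roles/bigquery.dataEditor"
4. gcloud iam service-accounts keys create key.json --iam-account=onprem-analytics@[PROJECT_ID].iam.gserviceaccount.com
The key.json file is then supplied to the application running on the on-premises Kubernetes cluster.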
Question 8: Incorrect
Your company wants to move 200 TB of your website clickstream logs from your
on-premises data center to Google Cloud Platform. These logs need to be retained
in GCP for compliance requirements. Your business analysts also want to run
analytics on these logs to understand user click behaviour on your website. Which
of the below would enable you to meet these requirements? (Select Two)
Upload log files into Google Cloud Storage.
(Correct)
(Incorrect)
(Correct)
Explanation
Load logs into Google Cloud SQL. is not right.
Cloud SQL is a fully-managed relational database service. Storing logs in Google Cloud
SQL is very expensive. Cloud SQL doesn't help us with analytics. Moreover, Google
Cloud Platform offers several storage classes in Google Cloud Storage that are more apt
for storing logs at a much cheaper cost.
Ref: https://cloud.google.com/sql/docs
Ref: https://cloud.google.com/sql/pricing#sql-storage-networking-prices
Ref: https://cloud.google.com/storage/pricing
Upload log files into Google Cloud Storage. is the right answer.
Google Cloud Platform offers several storage classes in Google Cloud Storage that are
suitable for storing/archiving logs at a reasonable cost. GCP recommends the Coldline
storage class if you access the data infrequently, e.g. once a quarter.
Ref: https://cloud.google.com/storage/docs/storage-classes
Question 9: Incorrect
You deployed a workload to your GKE cluster by running the command kubectl
apply -f app.yaml. You also enabled a LoadBalancer service to expose the
deployment by running kubectl apply -f service.yaml. Your pods are struggling
due to increased load so you decided to enable horizontal pod autoscaler by
running kubectl autoscale deployment [YOUR DEPLOYMENT] --cpu-percent=50 --
min=1 --max=10. You noticed the autoscaler has launched several new pods but
the new pods have failed with the message "Insufficient cpu". What should you do
to resolve this issue?
Edit the managed instance group of the cluster and increase the number of VMs
by 1.
Use "kubectl container clusters resize" to add more nodes to the node pool.
Use "gcloud container clusters resize" to add more nodes to the node pool.
(Correct)
Edit the managed instance group of the cluster and enable autoscaling.
(Incorrect)
Explanation
Use "kubectl container clusters resize" to add more nodes to the node
pool. is not right.
kubectl doesn't support the command kubectl container clusters resize. You have to use
gcloud container clusters resize to resize a cluster.
Ref: https://cloud.google.com/sdk/gcloud/reference/container/clusters/resize
Edit the managed instance group of the cluster and increase the number of
VMs by 1. is not right.
Node pools in a GKE cluster are backed by managed instance groups, but you should not
edit those groups directly. The cluster master (control plane) handles the lifecycle of
nodes in the node pools. The cluster master is responsible
for managing the workloads' lifecycle, scaling, and upgrades. The master also manages
network and storage resources for those workloads.
Ref: https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-architecture
Edit the managed instance group of the cluster and enable autoscaling. is not
right.
Node pools in a GKE cluster are backed by managed instance groups, but you should not
edit those groups directly. The cluster master (control plane) handles the lifecycle of
nodes in the node pools. The cluster master is responsible
for managing the workloads' lifecycle, scaling, and upgrades. The master also manages
network and storage resources for those workloads.
Ref: https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-architecture
Use "gcloud container clusters resize" to add more nodes to the node pool. is
the right answer.
Your pods are failing with "Insufficient cpu". This is because the existing nodes in the
node pool are maxed out, therefore, you need to add more nodes to your node pool.
For such scenarios, enabling cluster autoscaling is ideal, however, this is not in any of the
answer options. In the absence of cluster autoscaling, the next best approach is to add
more nodes to the cluster manually. This is achieved by running the command gcloud
container clusters resize which resizes an existing cluster for running containers.
Ref: https://cloud.google.com/sdk/gcloud/reference/container/clusters/resize
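A concrete resize command would look something like the following (the cluster name, node pool, zone, and node count are placeholders):
gcloud container clusters resize cluster-1 --node-pool=default-pool --num-nodes=5 --zone=us-central1-a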
Create a new managed instance group (MIG) based on a new template. Add the
group to the backend service for the load balancer. When all instances in the new
managed instance group are healthy, delete the old managed instance group
Delete instances in the managed instance group (MIG) one at a time and rely on
autohealing to provision an additional instance.
Explanation
Perform a rolling-action start-update with max-unavailable set to 1 and max-
surge set to 0. is not right.
You can carry out a rolling action start update to fully replace the template by executing
a command like
gcloud compute instance-groups managed rolling-action start-update instance-group-1 --zone=us-central1-a --version template=instance-template-1 --canary-version template=instance-template-2,target-size=100%
maxSurge specifies the maximum number of instances that can be created over the
desired number of instances. If maxSurge is set to 0, the rolling update can not create
additional instances and is forced to update existing instances. This results in a
reduction in capacity and therefore does not satisfy our requirement to ensure that the
available capacity does not decrease during the deployment.
Create a new managed instance group (MIG) based on a new template. Add the
group to the backend service for the load balancer. When all instances in
the new managed instance group are healthy, delete the old managed instance
group. is not right.
While the end result is the same, there is a period of time where traffic is served by
instances from both the old and the new managed instance groups (MIGs), which doubles
our cost and increases effort and complexity.
Delete instances in the managed instance group (MIG) one at a time and rely
on auto-healing to provision an additional instance. is not right.
While this would result in the same eventual outcome, there are two issues with this
approach. First, deleting an instance one at a time would result in a reduction in capacity
which is against our requirements. Secondly, deleting instances manually one at a time
is error-prone and time-consuming. One of our requirements is to "minimize the effort"
but deleting instances manually and relying on auto-healing health checks to provision
them back is time-consuming and could take a lot of time depending on the number of
instances in the MIG and the startup scripts executed during bootstrap.
Ref: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#max-
unavailable
Ref: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#max-
surge
Ref: https://cloud.google.com/sdk/gcloud/reference/alpha/compute/instance-
groups/managed/rolling-action/replace
Your deployment is now serving live traffic but is suffering from performance issues. You
want to increase the number of replicas to 5. What should you do in order to update the
replicas in existing Kubernetes deployment objects?
Disregard the YAML file. Use the kubectl scale command to scale the replicas to 5.
kubectl scale --replicas=5 -f app-deployment.yaml
Modify the current configuration of the deployment by using kubectl edit to open
the YAML file of the current configuration, modify and save the configuration.
kubectl edit deployment/app-deployment -o yaml --save-config
Disregard the YAML file. Enable autoscaling on the deployment to trigger on CPU
usage and set max pods to 5. kubectl autoscale myapp --max=5 --cpu-percent=80
Edit the number of replicas in the YAML file and rerun the kubectl apply. kubectl
apply -f app-deployment.yaml
(Correct)
Explanation
Disregard the YAML file. Use the kubectl scale command to scale the replicas
to 5. kubectl scale --replicas=5 -f app-deployment.yaml. is not right.
While the outcome is the same, this approach doesn't update the change in the desired
state configuration (YAML file). If you were to make some changes in your app-
deployment.yaml and apply it, the update would scale back the replicas to 2. This is
undesirable.
Ref: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#scaling-a-
deployment
Edit the number of replicas in the YAML file and rerun the kubectl apply.
kubectl apply -f app-deployment.yaml. is the right answer.
This is the only approach that guarantees that you use desired state configuration. By
updating the YAML file to have 5 replicas and applying it using kubectl apply, you are
preserving the intended state of Kubernetes cluster in the YAML file.
Ref: https://kubernetes.io/docs/concepts/cluster-administration/manage-
deployment/#in-place-updates-of-resources
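Concretely, the change amounts to editing one field in the manifest and re-applying it; a minimal sketch of the relevant part of app-deployment.yaml is:
spec:
  replicas: 5
Then run kubectl apply -f app-deployment.yaml to reconcile the cluster with the updated file.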
gcloud compute instances get --filter="zone:( us-central1-b europe-west1-d )"
(Correct)
Explanation
gcloud compute instances get --filter="zone:( us-central1-b europe-west1-d
)". is not right.
gcloud compute instances command does not support get action.
Ref: https://cloud.google.com/sdk/gcloud/reference/compute/instances
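The presumably intended command uses list rather than get, for example:
gcloud compute instances list --filter="zone:( us-central1-b europe-west1-d )"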
Stop instances in the managed instance group (MIG) one at a time and rely on
autohealing to bring them back up.
Explanation
Perform a rolling-action reboot with max-surge set to 20%. is not right.
reboot is not a supported action for rolling updates. The supported actions are replace,
restart, start-update and stop-proactive-update.
Ref: https://cloud.google.com/sdk/gcloud/reference/beta/compute/instance-
groups/managed/rolling-action
Stop instances in the managed instance group (MIG) one at a time and rely on
autohealing to bring them back up. is not right.
While this would result in the same eventual outcome, it is manual, error-prone and
time-consuming. One of our requirements is to "do this at the earliest" but stopping
instances manually is time-consuming and could take a lot of time depending on the
number of instances in the MIG. Also, relying on autohealing health checks to detect the
failure and spin up the instance adds to the delay.
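If the intended correct option is a rolling replacement with a 20% surge (this is an inference, since the correct option text is not shown above), the command would be along the lines of:
gcloud compute instance-groups managed rolling-action replace instance-group-1 --max-surge=20% --zone=us-central1-a
--max-surge=20% lets the MIG temporarily create up to 20% additional instances, so capacity is not reduced while instances are replaced.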
Cloud Filestore.
(Correct)
Explanation
Cloud Datastore database. is not right.
Cloud Datastore is a NoSQL document database built for automatic scaling, high
performance, and ease of application development. We want to store objects/files and
Cloud Datastore is not a suitable storage option for such data.
Ref: https://cloud.google.com/datastore/docs/concepts/overview
(Correct)
Explanation
gcloud app versions list. is not right
This command lists all the versions of all services that are currently deployed to the App
Engine server. While this list includes all versions that are receiving traffic, it also
includes versions that are not receiving traffic.
Ref: https://cloud.google.com/sdk/gcloud/reference/app/versions/list
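The likely intended command filters out versions that receive no traffic, for example:
gcloud app versions list --hide-no-traffic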
VM A1, VM A2, VM B2
(Correct)
VM A1, VM A2
VM A1, VM A2, VM B1
Explanation
VM A1 can access Google APIs and services, including Cloud Storage because its
network interface is located in subnet-a, which has Private Google Access enabled.
Private Google Access applies to the instance because it only has an internal IP address.
VM B1 cannot access Google APIs and services because it only has an internal IP
address and Private Google Access is disabled for subnet-b.
VM A2 and VM B2 can both access Google APIs and services, including Cloud Storage,
because they each have external IP addresses. Private Google Access has no effect on
whether or not these instances can access Google APIs and services because both have
external IP addresses.
Ref: https://cloud.google.com/vpc/docs/private-access-options#example
(Incorrect)
Explanation
gcloud compute instances list-ip. is not right.
"gcloud compute instances" doesn't support the action list-ip.
Ref: https://cloud.google.com/sdk/gcloud/reference/compute/instances/list
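The usual way to see both addresses is a plain listing, for example:
gcloud compute instances list
whose default output includes INTERNAL_IP and EXTERNAL_IP columns for each instance.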
1. In the bucket advanced settings, select Customer-supplied key and then select a
Cloud KMS encryption key.
2. Delete all existing objects and upload them again so they use the new customer-
supplied key for encryption.
1. In the bucket advanced settings, select Customer-managed key and then select a
Cloud KMS encryption key.
2. Rewrite all existing objects using gsutil rewrite to encrypt them with the new
Customer-managed key.
(Correct)
1. In the bucket advanced settings, select Customer-managed key and then select a
Cloud KMS encryption key.
2. Existing objects encrypted by Google-managed keys can still be decrypted by the new
Customer-managed key.
1. Rewrite all existing objects using gsutil rewrite to encrypt them with the new
Customer-managed key.
2. In the bucket advanced settings, select Customer-managed key and then select a
Cloud KMS encryption key.
Explanation
1. In the bucket advanced settings, select Customer-managed key and then
select a Cloud KMS encryption key.
2. Existing objects encrypted by Google-managed keys can still be decrypted
by the new Customer-managed key. is not right.
While changing the bucket encryption to use the Customer-managed key ensures all
new objects use this key, existing objects are still encrypted by the Google-managed
key. This doesn't satisfy our compliance requirements. Moreover, the customer
managed key can't decrypt objects created by Google-managed keys.
Ref: https://cloud.google.com/storage/docs/encryption/using-customer-managed-
keys#add-default-key
1. Rewrite all existing objects using gsutil rewrite to encrypt them with
the new Customer-managed key.
2. In the bucket advanced settings, select Customer-managed key and then
select a Cloud KMS encryption key. is not right.
While changing the bucket encryption to use the Customer-managed key ensures all
new objects use this key, rewriting existing objects before changing the bucket
encryption would result in the objects being encrypted by the encryption method in use
at that point - which is still Google-managed.
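For illustration, re-encrypting the existing objects after the bucket's default key has been changed could be done with gsutil rewrite; the key resource name and bucket below are placeholders:
gsutil rewrite -k projects/[PROJECT]/locations/[LOCATION]/keyRings/[KEYRING]/cryptoKeys/[KEY] gs://[BUCKET]/**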
In Stackdriver logging, create a new logging metric with the required filters, edit
the application code to set the metric value when needed, and create an alert in
Stackdriver based on the new metric.
In Stackdriver Logging, create a custom monitoring metric from log data and
create an alert in Stackdriver based on the new metric.
(Correct)
Add the Stackdriver monitoring and logging agent to the instances running the
code.
Create a custom monitoring metric in code, edit the application code to set the
metric value when needed, create an alert in Stackdriver based on the new metric.
Explanation
In Stackdriver logging, create a new logging metric with the required
filters, edit the application code to set the metric value when needed, and
create an alert in Stackdriver based on the new metric. is not right.
You don't need to edit the application code to send the metric values. The application
already pushes error logs whenever the application times out. Since you already have
the required entries in the Stackdriver logs, you don't need to edit the application code
to send the metric values. You just need to create metrics from log data.
Ref: https://cloud.google.com/logging
Create a custom monitoring metric in code, edit the application code to set
the metric value when needed, create an alert in Stackdriver based on the
new metric. is not right.
You don't create a custom monitoring metric in code. Stackdriver Logging allows you to
easily create metrics from log data. Since the application already pushes error logs to
Stackdriver Logging, we just need to create metrics from log data in Stackdriver
Logging.
Ref: https://cloud.google.com/logging
Add the Stackdriver monitoring and logging agent to the instances running
the code. is not right.
The Stackdriver Monitoring agent gathers system and application metrics from your VM
instances and sends them to Monitoring. In order to make use of this approach, you
need application metrics but our application doesn't generate metrics. It just logs errors
whenever the upload times out and these are then ingested to Stackdriver logging. We
can update our application to enable custom metrics for these scenarios, but that is a lot
more work than creating metrics from log data in Stackdriver Logging
Ref: https://cloud.google.com/logging
In Stackdriver Logging, create a custom monitoring metric from log data and
create an alert in Stackdriver based on the new metric. is the right answer.
Our application adds entries to error logs whenever the application times out during
image upload and these logs are ingested to Stackdriver Logging. Since we already have
the required data in logs, we just need to create metrics from this log data in Stackdriver
Logging. And we can then set up an alert based on this metric. We can trigger an alert if
the number of occurrences of the relevant error message is greater than a predefined
value. Based on the alert, you can manually add more compute resources.
Ref: https://cloud.google.com/logging
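A logs-based metric of this kind can also be created from the command line; a sketch, where the metric name and the log filter are assumptions about what the error entries might look like:
gcloud logging metrics create image-upload-timeouts --description="Image upload timeout errors" --log-filter='severity>=ERROR AND textPayload:"upload timed out"'
An alerting policy can then be created on this metric with a threshold on the count of matching log entries.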
Question 20: Correct
You created a kubernetes deployment by running kubectl run nginx --
image=nginx --replicas=1. After a few days, you decided you no longer want this
deployment. You identified the pod and deleted it by running kubectl delete pod.
You noticed the pod got recreated.
$ kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx-84748895c4-nqqmt   1/1     Running   0          9m41s
$ kubectl delete pod nginx-84748895c4-nqqmt
pod "nginx-84748895c4-nqqmt" deleted
$ kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx-84748895c4-k6bzl   1/1     Running   0          25s
What should you do to delete the deployment and avoid the pod getting recreated?
(Correct)
Explanation
kubectl delete pod nginx-84748895c4-k6bzl --no-restart. is not right.
kubectl delete pod command does not support the flag --no-restart. The command fails
to execute due to the presence of an invalid flag.
$ kubectl delete pod nginx-84748895c4-k6bzl --no-restart
Error: unknown flag: --no-restart
Ref: https://kubernetes.io/docs/reference/kubectl/cheatsheet/#deleting-resources
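Deleting the deployment itself (rather than the pod) removes the underlying ReplicaSet and its pods and prevents them from being recreated, for example:
kubectl delete deployment nginx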
Question 21: Incorrect
Your company recently migrated all infrastructure to Google Cloud Platform (GCP) and
you want to use Google Cloud Build to build all container images. You want to store the
build logs in a specific Google Cloud Storage bucket. You also have a requirement to
push the images to Google Container Registry. You wrote a cloud build YAML
configuration file with the following contents.
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/[PROJECT_ID]/[IMAGE_NAME]', '.']
images: ['gcr.io/[PROJECT_ID]/[IMAGE_NAME]']
(Incorrect)
(Correct)
Explanation
Execute gcloud builds push --config=[CONFIG_FILE_PATH] [SOURCE]. is not right.
gcloud builds command does not support push operation. The correct operation to
build images and push them to gcr is submit.
Ref: https://cloud.google.com/sdk/gcloud/reference/builds/submit
--config flag specifies the YAML or JSON file to use as the build configuration file.
--gcs-log-dir specifies the directory in Google Cloud Storage to hold build logs.
[SOURCE] is the location of the source to build. The location can be a directory on a
local disk or a gzipped archive file (.tar.gz) in Google Cloud Storage.
Ref: https://cloud.google.com/sdk/gcloud/reference/builds/submit
Ref: https://cloud.google.com/cloud-build/docs/building/build-containers
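Putting the pieces together, the likely intended command (with placeholder file and bucket names) is:
gcloud builds submit --config=cloudbuild.yaml --gcs-log-dir=gs://[LOGS_BUCKET]/logs [SOURCE]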
What should you do to delete the instance that was created in the wrong project and
recreate it in gcp-ace-proj-266520 project?
(Correct)
Explanation
1. gcloud compute instances delete instance1
2. gcloud compute instances create instance1. is not right.
The default core/project property is set to gcp-ace-lab-266520 in our current
configuration so the instance would have been created in this project. Running the first
command to delete the instance correctly deletes it from this project but we haven't
modified the core/project property before executing the second command so the
instance is recreated in the same project which is not what we want.
Ref: https://cloud.google.com/sdk/gcloud/reference/compute/instances/create
Ref: https://cloud.google.com/sdk/gcloud/reference/compute/instances/delete
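The presumably correct sequence switches the active project between the two commands; a sketch, assuming the default zone is already set in the configuration:
1. gcloud compute instances delete instance1
2. gcloud config set project gcp-ace-proj-266520
3. gcloud compute instances create instance1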
Configure High Availability (HA) for Cloud SQL and Create a Failover replica in the
same region but in a different zone.
(Correct)
Configure High Availability (HA) for Cloud SQL and Create a Failover replica in a
different region.
(Incorrect)
Explanation
Create a Read replica in the same region but in a different zone. is not right.
Read replicas do not provide failover capability. To provide failover capability, you need
to configure Cloud SQL Instance for High Availability.
Ref: https://cloud.google.com/sql/docs/mysql/replication
Configure High Availability (HA) for Cloud SQL and Create a Failover replica
in a different region. is not right.
A Cloud SQL instance configured for HA is called a regional instance because its
primary and secondary instances are in the same region. They are located in different
zones but within the same region. It is not possible to create a Failover replica in a
different region.
Ref: https://cloud.google.com/sql/docs/mysql/high-availability
Configure High Availability (HA) for Cloud SQL and Create a Failover replica
in the same region but in a different zone. is the right answer.
If a HA-configured instance becomes unresponsive, Cloud SQL automatically switches to
serving data from the standby instance. The HA configuration provides data
redundancy. A Cloud SQL instance configured for HA has instances in the primary zone
(Master node) and secondary zone (standby/failover node) within the configured region.
Through synchronous replication to each zone's persistent disk, all writes made to the
primary instance are also made to the standby instance. If the primary goes down, the
standby/failover node takes over and your data continues to be available to client
applications.
Ref: https://cloud.google.com/sql/docs/mysql/high-availability
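As an aside, enabling HA on an existing instance can be done with a single command; the instance name below is a placeholder:
gcloud sql instances patch prod-db --availability-type=REGIONAL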
(Correct)
Explanation
1. gcloud regions list.
2. gcloud images list. is not right.
The correct command to list compute regions is gcloud compute regions list.
Ref: https://cloud.google.com/sdk/gcloud/reference/compute/regions/list
The correct command to list compute images is gcloud compute images list.
Ref: https://cloud.google.com/sdk/gcloud/reference/compute/images/list
In GCP Console under IAM Roles, select both roles and combine them into a new
custom role. Grant the role to the SME team group at the organization level.
(Correct)
In GCP Console under IAM Roles, select both roles and combine them into a new
custom role. Grant the role to the SME team group at project. Repeat this step for
each project.
In GCP Console under IAM Roles, select both roles and combine them into a new
custom role. Grant the role to the SME team group at project. Use gcloud iam
promote-role to promote the role to all other projects and grant the role in each
project to the SME team group.
Execute command gcloud iam combineroles --global to combine the 2 roles into a
new custom role and grant them globally to SME team group.
Explanation
We want to create a new role and grant it to a team. Since you want to minimize
operational overhead, we need to grant it to a group - so that new users who join the
team just need to be added to the group and they inherit all the permissions. Also, this
team needs to have the role for all projects in the organization. And since we want to
minimize the operational overhead, we need to grant it at the organization level so that
all current projects, as well as future projects, have the role granted to them.
In GCP Console under IAM Roles, select both roles and combine them into a
new custom role. Grant the role to the SME team group at project. Repeat
this step for each project. is not right.
Repeating the step for all projects is a manual, error-prone and time-consuming task.
Also, if any projects were to be created in the future, we have to repeat the same
process again. This increases operational overhead.
In GCP Console under IAM Roles, select both roles and combine them into a
new custom role. Grant the role to the SME team group at project. Use gcloud
iam promote-role to promote the role to all other projects and grant the
role in each project to the SME team group. is not right.
Repeating the step for all projects is a manual, error-prone and time-consuming task.
Also, if any projects were to be created in the future, we have to repeat the same
process again. This increases operational overhead.
In GCP Console under IAM Roles, select both roles and combine them into a
new custom role. Grant the role to the SME team group at the organization
level. is the right answer.
This correctly creates the role and assigns the role to the group at the organization.
When any new users join the team, the only additional task is to add them to the group.
Also, when a new project is created under the organization, no additional human
intervention is needed. Since the role is granted at the organization level, it
automatically is granted to all the current and future projects belonging to the
organization.
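A rough command-line equivalent of creating the custom role at the organization level and granting it to the group (the role ID, permission list, organization ID, and group address are all placeholders):
1. gcloud iam roles create smeCombinedRole --organization=[ORG_ID] --title="SME Combined Role" --permissions=[PERMISSION_LIST]
2. gcloud organizations add-iam-policy-binding [ORG_ID] --member="group:sme-team@example.com" --role="organizations/[ORG_ID]/roles/smeCombinedRole"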
Question 26: Incorrect
Your company stores customer PII data in Cloud Storage buckets. A subset of this
data is regularly imported into a BigQuery dataset to carry out analytics. You want
to make sure the access to this bucket is strictly controlled. Your analytics team
needs read access on the bucket so that they can import data in BigQuery. Your
operations team needs read/write access to both the bucket and BigQuery dataset
to add Customer PII data of new customers on an ongoing basis. Your Data
Vigilance officers need Administrator access to the Storage bucket and BigQuery
dataset. You want to follow Google recommended practices. What should you do?
At the Organization level, add your Data Vigilance officers user accounts to the
Owner role, add your operations team user accounts to the Editor role, and add
your analytics team user accounts to the Viewer role.
Use the appropriate predefined IAM roles for each of the access levels needed for
Cloud Storage and BigQuery. Add your users to those roles for each of the
services.
(Correct)
Create 3 custom IAM roles with appropriate permissions for the access levels
needed for Cloud Storage and BigQuery. Add your users to the appropriate roles.
At the Project level, add your Data Vigilance officers user accounts to the Owner
role, add your operations team user accounts to the Editor role, and add your
analytics team user accounts to the Viewer role.
(Incorrect)
Explanation
At the Organization level, add your Data Vigilance officers user accounts to
the Owner role, add your operations team user accounts to the Editor role,
and add your analytics team user accounts to the Viewer role. is not right.
Google recommends we apply the security principle of least privilege, where we grant
only necessary permissions to access specific resources.
Ref: https://cloud.google.com/iam/docs/overview
Providing these primitive roles at the organization levels grants them permissions on all
resources in all projects under the organization which violates the security principle of
least privilege.
Ref: https://cloud.google.com/iam/docs/understanding-roles
At the Project level, add your Data Vigilance officers user accounts to the
Owner role, add your operations team user accounts to the Editor role, and
add your analytics team user accounts to the Viewer role. is not right.
Google recommends we apply the security principle of least privilege, where we grant
only necessary permissions to access specific resources.
Ref: https://cloud.google.com/iam/docs/overview
Providing these primitive roles at the project level grants them permissions on all
resources in the project which violates the security principle of least privilege.
Ref: https://cloud.google.com/iam/docs/understanding-roles
Create 3 custom IAM roles with appropriate permissions for the access levels
needed for Cloud Storage and BigQuery. Add your users to the appropriate
roles. is not right.
While this has the intended outcome, it is not very efficient particularly when there are
predefined roles that can be used. Secondly, if Google adds/modifies permissions for
these services in the future, we would have to update our roles to reflect the
modifications. This results in operational overhead and increases costs.
Ref: https://cloud.google.com/storage/docs/access-control/iam-roles#primitive-roles-
intrinsic
Ref: https://cloud.google.com/bigquery/docs/access-control
Use the appropriate predefined IAM roles for each of the access levels
needed for Cloud Storage and BigQuery. Add your users to those roles for
each of the services. is the right answer.
For the Cloud Storage service, Google provides predefined roles such as
roles/storage.objectViewer, roles/storage.objectAdmin, and roles/storage.admin that
match the access levels we need. Similarly, Google provides the roles
roles/bigquery.dataViewer, roles/bigquery.dataOwner, and roles/bigquery.admin that
match the access levels we need. We can assign these
predefined IAM roles to the respective users. Should Google add/modify permissions for
these services in the future, we don't need to modify the roles above as Google does
this for us; and this helps future proof our solution.
Ref: https://cloud.google.com/storage/docs/access-control/iam-roles#primitive-roles-
intrinsic
Ref: https://cloud.google.com/bigquery/docs/access-control
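For illustration, granting a couple of the predefined roles mentioned above might look like the following (the group addresses, bucket, and project are placeholders, and the exact role chosen per team is an assumption):
1. gsutil iam ch group:analytics-team@example.com:objectViewer gs://[PII_BUCKET]
2. gsutil iam ch group:operations-team@example.com:objectAdmin gs://[PII_BUCKET]
3. gcloud projects add-iam-policy-binding [PROJECT_ID] --member="group:operations-team@example.com" --role="roles/bigquery.dataEditor"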
You want to create two compute instances - one in europe-west2-a and another in
europe-west2-b. What should you do? (Select 2)
(Correct)
(Correct)
Explanation
1. gcloud compute instances create instance1
2. gcloud compute instances create instance2. is not right.
The default compute/zone property is set to europe-west2-a in the current gcloud
configuration. Executing the two commands above would create two compute instances
in the default zone i.e. europe-west2-a which doesn't satisfy our requirement.
Ref: https://cloud.google.com/sdk/gcloud/reference/compute/instances/create
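The likely intended commands override the default zone explicitly, one per instance:
1. gcloud compute instances create instance1 --zone=europe-west2-a
2. gcloud compute instances create instance2 --zone=europe-west2-b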
Create a Service of type NodePort for dispatch-order and an Ingress Resource for
that Service. Have create-order use the Ingress IP address.
Create a Service of type ClusterIP for dispatch-order. Have create-order use the
Service IP address.
(Correct)
Explanation
Create a Service of type LoadBalancer for dispatch-order. Have create-order
use the Service IP address. is not right.
When you create a Service of type LoadBalancer, the Google Cloud controller configures
a network load balancer that is publicly available. Since we don't want our service to be
publicly available, we shouldn't create a Service of type LoadBalancer
Ref: https://cloud.google.com/kubernetes-engine/docs/how-to/exposing-apps
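A minimal sketch of a ClusterIP Service for dispatch-order (the selector label and port numbers are assumptions):
apiVersion: v1
kind: Service
metadata:
  name: dispatch-order
spec:
  type: ClusterIP
  selector:
    app: dispatch-order
  ports:
  - port: 80
    targetPort: 8080
Because the type is ClusterIP, the Service is reachable only from inside the cluster, which is what create-order needs.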
Create JSON keys for the service account and execute gcloud authenticate
activate-service-account --key-file [KEY_FILE]
Create JSON keys for the service account and execute gcloud auth service-account
--key-file [KEY_FILE]
Create JSON keys for the service account and execute gcloud authenticate service-
account --key-file [KEY_FILE]
Create JSON keys for the service account and execute gcloud auth activate-
service-account --key-file [KEY_FILE]
(Correct)
Explanation
Create JSON keys for the service account and execute gcloud authenticate
activate-service-account --key-file [KEY_FILE]. is not right.
gcloud doesn't support using "authenticate" to grant/revoke credentials for Cloud SDK.
The correct service is "auth".
Ref: https://cloud.google.com/sdk/gcloud/reference/auth
Create JSON keys for the service account and execute gcloud authenticate
service-account --key-file [KEY_FILE]. is not right.
gcloud doesn't support using "authenticate" to grant/revoke credentials for Cloud SDK.
The correct service is "auth".
Ref: https://cloud.google.com/sdk/gcloud/reference/auth
Create JSON keys for the service account and execute gcloud auth service-
account --key-file [KEY_FILE]. is not right.
gcloud auth does not support service-account action. The correct action to authenticate
a service account is activate-service-account.
Ref: https://cloud.google.com/sdk/gcloud/reference/auth/activate-service-account
Create JSON keys for the service account and execute gcloud auth activate-
service-account --key-file [KEY_FILE]. is the right answer.
This command correctly authenticates access to Google Cloud Platform with a service
account using its JSON key file. To allow gcloud (and other tools in Cloud SDK) to use
service account credentials to make requests, use this command to import these
credentials from a file that contains a private authorization key, and activate them for
use in gcloud
Ref: https://cloud.google.com/sdk/gcloud/reference/auth/activate-service-account
Execute gcloud app versions stop v2 --service="pt-createOrder" and gcloud app
versions start v3 --service="pt-createOrder"
(Correct)
Execute gcloud app versions stop v2 and gcloud app versions start v3
Explanation
Execute gcloud app versions migrate v3. is not right.
gcloud app versions migrate v3 migrates all services to version v3. In our scenario, we
have multiple services with each service potentially being on a different version. We
don't want to migrate all services to v3, instead, we only want to migrate the pt-
createOrder service to v3.
Ref: https://cloud.google.com/sdk/gcloud/reference/app/versions/migrate
Execute gcloud app versions stop v2 and gcloud app versions start v3. is not
right.
Stopping version v2 and starting version v3 would result in migrating all services to
version v3 which is undesirable. We don't want to migrate all services to v3, instead, we
only want to migrate the pt-createOrder service to v3. Moreover, stopping version v2
before starting version v3 results in service being unavailable until v3 is ready to receive
traffic. As we want to "ensure availability", this option is not suitable.
Ref: https://cloud.google.com/sdk/gcloud/reference/app/versions/migrate
Execute gcloud app versions migrate v3 --service="pt-createOrder". is the
right answer.
This command correctly migrates the service pt-createOrder to use version 3 and
produces the intended outcome while minimizing effort and ensuring the availability of
service.
Ref: https://cloud.google.com/sdk/gcloud/reference/app/versions/migrate
The compute instances are not using the right cross region SSH IAM permissions
The combination of compute instance network tags and VPC firewall rules allows
SSH from 0.0.0.0/0 but denies SSH from the VPC subnet's IP range.
(Correct)
Explanation
The compute instances have a static IP for their internal IP. is not right.
Static internal IPs shouldn't be a reason for failed SSH connections. With all networking
set up correctly, SSH works fine on Static internal IPs.
Ref: https://cloud.google.com/compute/docs/ip-addresses#networkaddresses
The compute instances are not using the right cross-region SSH IAM
permissions. is not right.
There is no such thing as cross region SSH IAM permissions.
The combination of compute instance network tags and VPC firewall rules allows
SSH from 0.0.0.0/0 but denies SSH from the VPC subnet's IP range. is the
right answer.
The combination of compute instance network tags and VPC firewall rules can certainly
result in SSH traffic being allowed on the external IP but blocked from the subnet's IP
range. The firewall rule can be configured to allow SSH traffic from 0.0.0.0/0 but deny
traffic from the VPC range e.g. 10.0.0.0/8. In this case, all SSH traffic from within the VPC
is denied but external SSH traffic (i.e. on external IP) is allowed.
Ref: https://cloud.google.com/vpc/docs/using-firewalls
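A combination of rules producing exactly this behaviour could look like the following sketch (the network name, tag, and priority are placeholders):
1. gcloud compute firewall-rules create allow-ssh-external --network=my-vpc --allow=tcp:22 --source-ranges=0.0.0.0/0 --target-tags=ssh-enabled
2. gcloud compute firewall-rules create deny-ssh-internal --network=my-vpc --action=DENY --rules=tcp:22 --source-ranges=10.0.0.0/8 --priority=900 --target-tags=ssh-enabled
The deny rule has the lower priority number (higher precedence), so internal SSH is blocked while external SSH is still allowed.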
In the bucket advanced settings, select Customer-supplied key and then select a
Cloud KMS encryption key.
In the bucket advanced settings, select Google-managed key and then select a
Cloud KMS encryption key.
Recreate the bucket to use a Customer-managed key. Encryption can only be
specified at the time of bucket creation.
In the bucket advanced settings, select Customer-managed key and then select a
Cloud KMS encryption key.
(Correct)
Explanation
In the bucket advanced settings, select Customer-supplied key and then
select a Cloud KMS encryption key. is not right.
Customer-Supplied key is not an option when selecting the encryption method in the
console. Moreover, we want to use customer managed encryption keys and not
customer supplied encryption keys. This does not fit our requirements.
In the bucket advanced settings, select Google-managed key and then select a
Cloud KMS encryption key. is not right.
While Google-managed key is an option when selecting the encryption method in
console, we want to use customer managed encryption keys and not Google Managed
encryption keys. This does not fit our requirements.
In the bucket advanced settings, select Customer-managed key and then select
a Cloud KMS encryption key. is the right answer.
This option correctly selects the Customer-managed key and then the Cloud KMS key to
use, which satisfies our requirement.
Ref: https://cloud.google.com/storage/docs/encryption/using-customer-managed-
keys#add-default-key
Question 34: Correct
Users of your application are complaining of slowness when loading the
application. You realize the slowness is because the App Engine deployment
serving the application is deployed in us-central whereas all users of this
application are closest to europe-west3. You want to change the region of the App
Engine application to europe-west3 to minimize latency. What's the best way to
change the App Engine region?
From the console, under the App Engine page, click edit, and change the region
drop-down.
Use the gcloud app region set command and supply the name of the new region.
(Correct)
Explanation
Use the gcloud app region set command and supply the name of the new
region. is not right.
gcloud app region command does not provide a set action. The only action gcloud app
region command currently supports is list which lists the availability of flex and standard
environments for each region.
Ref: https://cloud.google.com/sdk/gcloud/reference/app/regions/list
Contact Google Cloud Support and request the change. is not right.
Unfortunately, Google Cloud Support isn't of much use here as they would not be able
to change the region of an App Engine Deployment. App engine is a regional service,
which means the infrastructure that runs your app(s) is located in a specific region and is
managed by Google to be redundantly available across all the zones within that region.
Once an app engine deployment is created in a region, it can't be changed.
Ref: https://cloud.google.com/appengine/docs/locations
From the console, Click edit in App Engine dashboard page and change the
region drop-down. is not right.
The settings mentioned in this option aren't available in the App Engine dashboard. App
Engine is a regional service. Once an App Engine deployment is created in a region, it
can't be changed; the Region field in the App Engine settings is greyed out.
App engine is a regional service, which means the infrastructure that runs your app(s) is
located in a specific region and is managed by Google to be redundantly available
across all the zones within that region. Once an app engine deployment is created in a
region, it can't be changed. The only way is to create a new project and create an App
Engine instance in europe-west3, send all user traffic to this instance and delete the app
engine instance in us-central.
Ref: https://cloud.google.com/appengine/docs/locations
Question 35: Correct
You want to ingest and analyze large volumes of stream data from sensors in real
time, matching the high speeds of IoT data to track normal and abnormal
behavior. You want to run it through a data processing pipeline and store the
results. Finally, you want to enable customers to build dashboards and drive
analytics on their data in real time. What services should you use for this task?
(Correct)
Explanation
You want to ingest large volumes of streaming data at high speeds. So you need to use
Cloud Pub/Sub. Cloud Pub/Sub provides a simple and reliable staging location for your
event data on its journey towards processing, storage, and analysis. Cloud Pub/Sub is
serverless and you can ingest events at any scale.
Ref: https://cloud.google.com/pubsub
Next, you want to analyze this data. Cloud Dataflow is a fully managed streaming
analytics service that minimizes latency, processing time, and cost through autoscaling
and batch processing. Dataflow enables fast, simplified streaming data pipeline
development with lower data latency.
Ref: https://cloud.google.com/dataflow
Next, you want to store these results. BigQuery is an ideal place to store these results as
BigQuery supports the querying of streaming data in real-time. This assists in real-time
predictive analytics.
Ref: https://cloud.google.com/bigquery
Therefore the correct answer is Cloud Pub/Sub, Cloud Dataflow, BigQuery.
Here’s more information from Google docs about the Stream analytics use case. Google
recommends we use Dataflow along with Pub/Sub and BigQuery.
https://cloud.google.com/dataflow#section-6
Google’s stream analytics makes data more organized, useful, and accessible from the
instant it’s generated. Built on Dataflow along with Pub/Sub and BigQuery, our
streaming solution provisions the resources you need to ingest, process, and analyze
fluctuating volumes of real-time data for real-time business insights. This abstracted
provisioning reduces complexity and makes stream analytics accessible to both data
analysts and data engineers.
and
https://cloud.google.com/solutions/stream-analytics
Ingest, process, and analyze event streams in real time. Stream analytics from Google
Cloud makes data more organized, useful, and accessible from the instant it’s generated.
Built on the autoscaling infrastructure of Pub/Sub, Dataflow, and BigQuery, our
streaming solution provisions the resources you need to ingest, process, and analyze
fluctuating volumes of real-time data for real-time business insights.
Question 36: Incorrect
You want to deploy a python application to an autoscaled managed instance
group on Compute Engine. You want to use GCP deployment manager to do this.
What is the fastest way to get the application onto the instances without
introducing undue complexity?
(Correct)
Once the instance starts up, connect over SSH and install the application.
(Incorrect)
Explanation
Include a startup script to bootstrap the python application when creating
instance template by running gcloud compute instance-templates create app-
template --startup-script=/scripts/install_app.sh. is not right.
gcloud compute instance-templates create command does not accept a flag called --
startup-script. When creating compute engine instances or instance templates, the startup
script can be provided through a special metadata key called startup-script, which specifies a script
that will be executed by the instances once they start running. For convenience, --
metadata-from-file can be used to pull the value from a file.
Ref: https://cloud.google.com/sdk/gcloud/reference/compute/instance-
templates/create
Once the instance starts up, connect over SSH and install the application. is
not right.
The managed instances group has auto-scaling enabled. If we are to connect over SSH
and install the application, we have to repeat this task on all current instances and on
future instances the autoscaler adds to the group. This process is manual, error-prone,
time consuming and should be avoided.
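The flag the explanation alludes to would be used roughly as follows, reusing the script path from the question:
gcloud compute instance-templates create app-template --metadata-from-file=startup-script=/scripts/install_app.sh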
3. Created a Cloud DNS managed public zone for storage.cloud.google.com that maps
to 199.36.153.4/30 and authorize the zone for use by VPC network
3. Add a custom static route to the VPC network to direct traffic with the destination
199.36.153.4/30 to the default internet gateway.
4. Created a Cloud DNS managed private zone for storage.cloud.google.com that maps
to 199.36.153.4/30 and authorize the zone for use by VPC network
3. Created a Cloud DNS managed public zone for *.googleapis.com that maps to
199.36.153.4/30 and authorize the zone for use by VPC network
3. Add a custom static route to the VPC network to direct traffic with the destination
199.36.153.4/30 to the default internet gateway.
4. Created a Cloud DNS managed private zone for *.googleapis.com that maps to
199.36.153.4/30 and authorize the zone for use by VPC network
(Correct)
Explanation
While Google APIs are accessible on *.googleapis.com, to restrict Private Google Access
within a service perimeter to only VPC Service Controls supported Google APIs and
services, hosts must send their requests to the restricted.googleapis.com domain name
instead of *.googleapis.com. The restricted.googleapis.com domain resolves to a VIP
(virtual IP address) range 199.36.153.4/30. This IP address range is not announced to the
Internet. If you require access to other Google APIs and services that aren't supported
by VPC Service Controls, you can use 199.36.153.8/30 (private.googleapis.com).
However, we recommend that you use restricted.googleapis.com, which integrates with
VPC Service Controls and mitigates data exfiltration risks. In either case, VPC Service
Controls service perimeters are always enforced on APIs and services that support VPC
Service Controls.
Ref: https://cloud.google.com/vpc-service-controls/docs/set-up-private-connectivity
Here’s more information about how to set up private connectivity to Google’s services
through VPC.
Ref: https://cloud.google.com/vpc/docs/private-access-options#private-vips
In the following example, the on-premises network is connected to a VPC network
through a Cloud VPN tunnel. Traffic from on-premises hosts to Google APIs travels
through the tunnel to the VPC network. After traffic reaches the VPC network, it is sent
through a route that uses the default internet gateway as its next hop. The next hop
allows traffic to leave the VPC network and be delivered to restricted.googleapis.com
(199.36.153.4/30).
Cloud Router has been configured to advertise the 199.36.153.4/30 IP address range
through the Cloud VPN tunnel by using a custom route advertisement. Traffic going to
Google APIs is routed through the tunnel to the VPC network.
A custom static route was added to the VPC network that directs traffic with the
destination 199.36.153.4/30 to the default internet gateway (as the next hop). Google
then routes traffic to the appropriate API or service.
If you created a Cloud DNS managed private zone for *.googleapis.com that maps to
199.36.153.4/30 and have authorized that zone for use by your VPC network, requests to
anything in the googleapis.com domain are sent to the IP addresses that are used by
restricted.googleapis.com
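As a rough sketch of the route and private-zone steps described above (the VPC name my-vpc and the zone name restricted-apis are placeholders):
$ gcloud compute routes create restricted-apis-route \
    --network=my-vpc \
    --destination-range=199.36.153.4/30 \
    --next-hop-gateway=default-internet-gateway
$ gcloud dns managed-zones create restricted-apis \
    --dns-name=googleapis.com. \
    --visibility=private \
    --networks=my-vpc \
    --description="Maps googleapis.com to the restricted.googleapis.com VIP range"
A records for restricted.googleapis.com (199.36.153.4 through 199.36.153.7) and a CNAME for *.googleapis.com pointing at restricted.googleapis.com are then added to the private zone so that all googleapis.com requests resolve to the restricted VIP range.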
Cloud Run
Cloud Functions
(Correct)
Explanation
The GCP serverless compute portfolio includes four services, which are all listed in the answer options. Our requirement is to identify the GCP serverless service that best fits the scenario described in the question.
Create a new GKE cluster by running the command gcloud container clusters
create [CLUSTER_NAME] --enable-autoscaling --min-nodes=1 --max-nodes=10.
Redeploy your application
(Correct)
Set up a Stackdriver alert to detect slowness in the application. When the alert is triggered, increase the nodes in the cluster by running the command gcloud container clusters resize CLUSTER_Name --size <new size>.
(Incorrect)
To enable autoscaling, add a tag to the instances in the cluster by running the
command gcloud compute instances add-tags [INSTANCE] --tags=enable-
autoscaling,min-nodes=1,max-nodes=10
Explanation
Set up a Stackdriver alert to detect slowness in the application. When the alert is triggered, increase nodes in the cluster by running the command gcloud container clusters resize CLUSTER_Name --size {new size}. is not right.
The gcloud container clusters resize command resizes an existing cluster for running containers. While it is possible to manually increase the number of nodes in the cluster by running this command, the scale-up is not automatic; it is a manual process. There is also no scale-down, so it doesn't fit our requirement of "scale up as traffic increases and scale down when the traffic goes down".
Ref: https://cloud.google.com/sdk/gcloud/reference/container/clusters/resize
Create a new GKE cluster by running the command gcloud container clusters
create [CLUSTER_NAME] --enable-autoscaling --min-nodes=1 --max-nodes=10.
Redeploy your application. is not right.
The command gcloud container clusters create creates a GKE cluster; the flag --enable-autoscaling enables autoscaling, and the parameters --min-nodes=1 --max-nodes=10 define the minimum and maximum number of nodes in the node pool. However, we want to configure cluster autoscaling for the existing GKE cluster, not create a new GKE cluster.
Ref: https://cloud.google.com/sdk/gcloud/reference/container/clusters/create
Update existing GKE cluster to enable autoscaling by running the command
gcloud container clusters update [CLUSTER_NAME] --enable-autoscaling --min-
nodes=1 --max-nodes=10. is the right answer.
The command gcloud container clusters update updates an existing GKE cluster. The flag --enable-autoscaling enables autoscaling, and the parameters --min-nodes=1 --max-nodes=10 define the minimum and maximum number of nodes in the node pool. This enables cluster autoscaling, which automatically scales the node pool up and down between 1 and 10 nodes.
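For example, assuming an existing cluster named my-cluster in us-central1-a with a node pool called default-pool (all placeholder names), the command could look like this:
$ gcloud container clusters update my-cluster \
    --zone=us-central1-a \
    --node-pool=default-pool \
    --enable-autoscaling \
    --min-nodes=1 --max-nodes=10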
SSH to each node and run a script to install the forwarder agent.
(Correct)
Explanation
SSH to each node and run a script to install the forwarder agent. is not right.
While this can be done, this approach does not scale. Every time the cluster autoscaler adds a new node, we have to SSH to the new instance and run the script, which is manual, error-prone, and adds operational overhead. We need a solution that automates this task.
You want to use Cloud Deployment Manager to create this cluster in GKE. What should
you do?
gcloud deployment-manager deployments create my-gcp-ace-cluster --config
cluster.yaml
(Correct)
Explanation
gcloud deployment-manager deployments apply my-gcp-ace-cluster --config
cluster.yaml. is not right.
"gcloud deployment-manager deployments" doesn't support action apply. With Google
cloud in general, the action for creating is create and the action for retrieving is list. With
Kubernetes resources, the corresponding actions are apply and get respectively.
Ref: https://cloud.google.com/sdk/gcloud/reference/deployment-
manager/deployments/create
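As a rough sketch, cluster.yaml could declare the GKE cluster with the container.v1.cluster resource type; the zone and node count below are assumptions, not values from the question:
resources:
- name: my-gcp-ace-cluster
  type: container.v1.cluster
  properties:
    zone: us-central1-a
    cluster:
      initialNodeCount: 3
The deployment is then created with:
$ gcloud deployment-manager deployments create my-gcp-ace-cluster --config cluster.yaml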
(Correct)
Create a key with the ssh-keygen command. Upload the key to the instance. Run
gcloud compute instances list to get the IP address of the instance, then use the
ssh command.
Run gcloud compute instances list to get the IP address of the instance, then use
the ssh command.
Create a key with the ssh-keygen command. Then use the gcloud compute ssh
command.
Explanation
Create a key with the ssh-keygen command. Upload the key to the instance.
Run gcloud compute instances list to get the IP address of the instance,
then use the ssh command. is not right.
This approach certainly works. You can create a key pair with ssh-keygen, update the instance metadata with the public key, and SSH to the instance. But it is not the way to SSH to the instance with the fewest possible steps; as we will see below, another option achieves the same result with less effort. You can find more information about this approach here:
https://cloud.google.com/compute/docs/instances/adding-removing-ssh-keys#block-
project-keys
Create a key with the ssh-keygen command. Then use the gcloud compute ssh
command. is not right.
This works, but it involves more work (manually creating the key) than necessary. gcloud compute ssh ensures that the user's public SSH key is present in the project's metadata. If the user does not have a public SSH key, one is generated using ssh-keygen and added to the project's metadata automatically.
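For reference, a single command is enough; the instance name and zone below are placeholders:
$ gcloud compute ssh my-instance --zone=us-central1-a
On first use, gcloud generates the key pair (stored as ~/.ssh/google_compute_engine by default), adds the public key to the project metadata, and then opens the SSH session.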
Run gcloud compute instances list to get the IP address of the instance,
then use the ssh command. is not right.
We can get the IP of the instance by executing gcloud compute instances list, but unless an SSH key is generated and added to the project metadata, you would not be able to SSH to the instance. User access to a Linux instance through third-party tools is determined by which public SSH keys are available to the instance. You can control the public SSH keys that are available to a Linux instance by editing metadata, which is where your public SSH keys and related information are stored.
Ref: https://cloud.google.com/compute/docs/instances/adding-removing-ssh-
keys#block-project-keys
Use the kubectl run msg-translator-22 /bin/bash command to run a shell on that container.
Use the kubectl exec -it msg-translator-22 -- /bin/bash command to run a shell on
that container.
(Correct)
Use the kubectl exec -it -- /bin/bash command to run a shell on that container.
Explanation
Use the kubectl run command to run a shell on that container. is not right.
kubectl run creates and runs a new deployment or job to manage the created container(s). It cannot be used to connect to an existing container.
https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#run
Use the kubectl run msg-translator-22 /bin/bash command to run a shell on that container. is not right.
kubectl run creates and runs a new deployment or job to manage the created container(s). It cannot be used to connect to an existing container.
https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#run
Use the kubectl exec -it -- /bin/bash command to run a shell on that
container. is not right.
While kubectl exec is used to execute a command in a container, the command above doesn't work because it is missing the name of the pod whose container we want to connect to.
Ref: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#exec
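The working form of the command therefore names the pod explicitly, for example:
$ kubectl exec -it msg-translator-22 -- /bin/bash
Here -i keeps stdin open and -t allocates a TTY, which together give an interactive shell inside the container.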
Cloud Functions
Cloud Run
(Correct)
Explanation
App Engine Standard. is not right.
Google App Engine Standard offers a limited number of runtimes (Java, Node.js, Python, Go, PHP, and Ruby) and does not support WebSockets.
Ref: https://cloud.google.com/appengine/docs/standard
(Correct)
Explanation
Import data into Google Cloud SQL. is not right.
Cloud SQL is a fully managed relational database service. It supports MySQL, so migrating the data from your data center to the cloud can be straightforward, but Cloud SQL cannot handle petabyte-scale data. The current second-generation instances limit storage to approximately 30 TB.
Ref: https://cloud.google.com/sql#overview
Ref: https://cloud.google.com/sql/docs/quotas
(Incorrect)
(Correct)
Explanation
gcloud get pods --selector="app=prod". is not right.
You cannot retrieve pods from a Kubernetes cluster by using gcloud. You list pods by using the Kubernetes CLI: kubectl get pods.
Ref: https://kubernetes.io/docs/tasks/access-application-cluster/list-all-running-
container-images/
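For comparison, a sketch of the equivalent kubectl command, assuming the pods are labelled app=prod as in the option:
$ kubectl get pods --selector="app=prod"
$ kubectl get pods -l app=prod    # -l is the short form of --selector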
You don't have a GKE cluster in the development project and you need to provision one. Which of the commands fail with an error in Cloud Shell when you attempt to create a GKE cluster and deploy the YAML configuration files to create a deployment and service? (Select Two)
(Correct)
(Correct)
Explanation
1. gcloud container clusters create cluster-1 --zone=us-central1-a
2. gcloud container clusters get-credentials cluster-1 --zone=us-central1-a
3. kubectl apply -f deployment.yaml
4. kubectl apply -f service.yaml. is not right (i.e. these commands execute successfully).
You create a cluster by running the gcloud container clusters create command. You then fetch credentials for the running cluster by running the gcloud container clusters get-credentials command. Finally, you apply the Kubernetes resource configuration by running kubectl apply -f.
Ref: https://cloud.google.com/sdk/gcloud/reference/container/clusters/create
Ref: https://cloud.google.com/sdk/gcloud/reference/container/clusters/get-credentials
Ref: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
Both compute instances were created in europe-west2-a zone but you want to create
them in other zones. Your active gcloud configuration is as shown below.
$ gcloud config list
[component_manager]
disable_update_check = True
[compute]
gce_metadata_read_timeout_sec = 5
zone = europe-west2-a
[core]
account = gcp-ace-lab-user@gmail.com
disable_usage_reporting = False
project = gcp-ace-lab-266520
[metrics]
environment = devshell
You want to modify the gcloud configuration such that you are prompted for zone
when you execute the create instance commands above. What should you do?
gcloud config set zone ""
(Correct)
Explanation
gcloud config unset zone. is not right.
gcloud config does not have a core/zone property. The syntax for this command is gcloud config unset SECTION/PROPERTY; if SECTION is omitted, it defaults to core. We are effectively running gcloud config unset core/zone, but the core section doesn't have a property called zone, so the command fails.
$ gcloud config unset zone
ERROR: (gcloud.config.unset) Section [core] has no property [zone].
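For reference, the zone property lives in the compute section, so referencing the property explicitly does work (output may vary slightly between SDK versions):
$ gcloud config unset compute/zone
Unset property [compute/zone].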
Ref: https://cloud.google.com/sdk/gcloud/reference/config/unset
Ref: https://cloud.google.com/sdk/gcloud/reference/config/set
Create a service account with just the permissions to access files in the bucket. Create a
JSON key for the service account. Execute the command gsutil signurl -m 1h {JSON Key
File} gs://{bucket}/*.
Create a JSON key for the Default Compute Engine Service Account. Execute the command gsutil signurl -t 60m {JSON Key File} gs://{bucket}/*.*.
Create a service account with just the permissions to access files in the bucket. Create a
JSON key for the service account. Execute the command gsutil signurl -d 1h {JSON Key
File} gs://{bucket}/**.
(Correct)
Create a service account with just the permissions to access files in the bucket. Create a
JSON key for the service account. Execute the command gsutil signurl -p 60m {JSON Key
File} gs://{bucket}/.
Explanation
Create a JSON key for the Default Compute Engine Service Account. Execute
the command gsutil signurl -t 60m {JSON Key File} gs://{bucket}/*.* is not
right.
gsutil signurl does not support -t flag. Executing the command with -t flag fails as
shown.
$ gsutil signurl -t 60m keys.json gs://gcp-ace-lab-255520/*.*
CommandException: Incorrect option(s) specified. Usage:
Ref: https://cloud.google.com/storage/docs/gsutil/commands/signurl
Also, using the default compute engine service account violates the principle of least
privilege. The recommended approach is to create a service account with just the right
permissions needed and create JSON keys for this service account to use with gsutil
signurl command.
Create a service account with just the permissions to access files in the
bucket. Create a JSON key for the service account. Execute the command
gsutil signurl -p 60m {JSON Key File} gs://{bucket}/. is not right.
With gsutil signurl, -p is used to specify the keystore password instead of prompting for it. It cannot be used to pass a time value, so executing the command with the -p flag in this way fails.
Ref: https://cloud.google.com/storage/docs/gsutil/commands/signurl
Create a service account with just the permissions to access files in the
bucket. Create a JSON key for the service account. Execute the command
gsutil signurl -m 1h {JSON Key File} gs://{bucket}/*. is not right.
With gsutil signurl, -m is used to specify the HTTP method, e.g. PUT or GET. It cannot be used to pass a time value, so executing the command with -m 1h fails.
Ref: https://cloud.google.com/storage/docs/gsutil/commands/signurl
Create a service account with just the permissions to access files in the
bucket. Create a JSON key for the service account. Execute the command
gsutil signurl -d 1h {JSON Key File} gs://{bucket}/**. is the right answer.
This command correctly specifies the duration that the signed URL should be valid for by using the -d flag. The default is 1 hour, so omitting the -d flag would have also resulted in the same outcome. Times may be specified with no suffix (default hours), or with s = seconds, m = minutes, h = hours, d = days. The maximum duration allowed is 7d.
Ref: https://cloud.google.com/storage/docs/gsutil/commands/signurl
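As a concrete sketch, with key.json and example-bucket as placeholders for the service account key file and the bucket name:
$ gsutil signurl -d 1h key.json gs://example-bucket/**
This prints a signed URL for each matching object, valid for one hour.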
(Correct)
Google Domains, Cloud DNS private zone, SSL Proxy Load Balancer
Once the www.example.com zone is set up, we need to create a DNS (A) record to point
to the public IP of the Load Balancer. This is also carried out in Cloud DNS.
Finally, we need a load balancer to front the autoscaled managed instance group.
Google recommends we use HTTP(S) Load Balancer for this requirement as "SSL Proxy
Load Balancing is intended for non-HTTP(S) traffic. For HTTP(S) traffic, we recommend
that you use HTTP(S) Load Balancing."
Ref: https://cloud.google.com/load-balancing/docs/ssl
So Google Domains, Cloud DNS, HTTP(S) Load Balancer is the right answer.
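As a quick sketch of the DNS step, with example-zone and 203.0.113.10 as placeholders for the managed zone name and the load balancer's external IP:
$ gcloud dns record-sets transaction start --zone=example-zone
$ gcloud dns record-sets transaction add 203.0.113.10 \
    --name=www.example.com. --type=A --ttl=300 --zone=example-zone
$ gcloud dns record-sets transaction execute --zone=example-zone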