
Practice Test 3 - Results

Attempt 1
Question 1: Correct
You have two compute instances in the same VPC but in different regions. You can
SSH from one instance to another instance using their internal IP address but not
their external IP address. What could be the reason for SSH failing on the external IP
address?

The compute instances have a static IP for their external IP.

The combination of compute instance network tags and VPC firewall rules only allows SSH from the subnet's IP range.

(Correct)

The external IP address is disabled.

The compute instances are not using the right cross region SSH IAM permissions

Explanation
The compute instances have a static IP for their external IP. is not right.
Not having a static IP is not a reason for failed SSH connections. When the firewall rules
are set up correctly, SSH works fine on compute instances with an ephemeral IP address.

The external IP address is disabled. is not right.


Our question states SSH doesn't work on external IP addresses so it is safe to assume
they already have an external IP. Therefore, this option is not correct.
The compute instances are not using the right cross-region SSH IAM
permissions. is not right.
There is no such thing as cross-region SSH IAM permissions.

The combination of compute instance network tags and VPC firewall rules only allows SSH from the subnet's IP range. is the right answer.
The combination of compute instance network tags and VPC firewall rules can certainly result in SSH traffic being allowed from only the subnet's IP range. The firewall rule can be configured to allow SSH traffic from just the VPC range, e.g. 10.0.0.0/8. In this scenario, all SSH traffic from within the VPC is accepted but external SSH traffic is blocked.
Ref: https://cloud.google.com/vpc/docs/using-firewalls
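For illustration, a rule like the one below (the VPC, tag, and rule names are placeholders) would allow SSH only from the internal range, producing exactly the behaviour described in the question; a separate rule with a broader source range, such as your office IP range, would be needed before SSH over the external IP could succeed.

# Allows SSH only from the VPC's internal range to instances tagged ssh-internal (hypothetical names)
$ gcloud compute firewall-rules create allow-ssh-internal \
    --network=my-vpc --direction=INGRESS --action=ALLOW --rules=tcp:22 \
    --source-ranges=10.0.0.0/8 --target-tags=ssh-internal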

Question 2: Incorrect
You have asked your supplier to send you a purchase order and you want to
enable them to upload the file to a cloud storage bucket within the next 4 hours.
Your supplier does not have a Google account. You want to follow Google
recommended practices. What should you do?

Create a service account with just the permissions to upload files to the bucket. Create a
JSON key for the service account. Execute the command gsutil signurl -m PUT -d 4h
{JSON Key File} gs://{bucket}/**.

(Correct)

Create a service account with just the permissions to upload files to the bucket. Create a
JSON key for the service account. Execute the command gsutil signurl -d 4h {JSON Key
File} gs://{bucket}/.

(Incorrect)

Create a service account with just the permissions to upload files to the bucket. Create a
JSON key for the service account. Execute the command gsutil signurl -httpMethod PUT
-d 4h {JSON Key File} gs://{bucket}/**.


Create a JSON key for the Default Compute Engine Service Account. Execute the
command gsutil signurl -m PUT -d 4h {JSON Key File} gs://{bucket}/**.
Explanation
Create a service account with just the permissions to upload files to the
bucket. Create a JSON key for the service account. Execute the command
gsutil signurl -d 4h {JSON Key File} gs://{bucket}/. is not right.
This command creates signed URLs for retrieving existing objects. This command does
not specify an HTTP method and, in the absence of one, the default HTTP method is GET.
Ref: https://cloud.google.com/storage/docs/gsutil/commands/signurl

Create a service account with just the permissions to upload files to the
bucket. Create a JSON key for the service account. Execute the command
gsutil signurl -httpMethod PUT -d 4h {JSON Key File} gs://{bucket}/**. is not
right.
gsutil signurl does not accept -httpMethod parameter.

$ gsutil signurl -d 4h -httpMethod PUT keys.json gs://gcp-ace-lab-255520/*


CommandException: Incorrect option(s) specified. Usage:

The HTTP method can be provided through the -m flag.


Ref: https://cloud.google.com/storage/docs/gsutil/commands/signurl

Create a JSON key for the Default Compute Engine Service Account. Execute
the command gsutil signurl -m PUT -d 4h {JSON Key File} gs://{bucket}/**. is
not right.
Using the default compute engine service account violates the principle of least
privilege. The recommended approach is to create a service account with just the right
permissions needed and create JSON keys for this service account to use with gsutil
signurl command.

Create a service account with just the permissions to upload files to the
bucket. Create a JSON key for the service account. Execute the command
gsutil signurl -m PUT -d 4h {JSON Key File} gs://{bucket}/**. is the right
answer.
This command correctly creates a signed url that is valid for 4 hours and allows PUT
(through the -m flag) operations on the bucket. The supplier can then use the signed
URL to upload a file to this bucket within 4 hours.
Ref: https://cloud.google.com/storage/docs/gsutil/commands/signurl
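Putting the correct answer together end to end, a sketch with placeholder project, bucket, and file names might look like this; the signed URL printed by the last command is what you would send to the supplier.

$ gcloud iam service-accounts create po-uploader --display-name="PO uploader"
$ gsutil iam ch serviceAccount:po-uploader@my-project.iam.gserviceaccount.com:objectCreator gs://my-po-bucket
$ gcloud iam service-accounts keys create key.json --iam-account=po-uploader@my-project.iam.gserviceaccount.com
# Signed URL valid for 4 hours that allows a PUT of the named object
$ gsutil signurl -m PUT -d 4h key.json gs://my-po-bucket/purchase-order.pdf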
Question 3: Incorrect
You have two Kubernetes resource configuration files.
1. deployment.yaml - creates a deployment
2. service.yaml - sets up a LoadBalancer service to expose the pods.

You don't have a GKE cluster in the development project and you need to provision one.
Which of the commands below would you run in Cloud Shell to create a GKE cluster and
deploy the yaml configuration files to create a deployment and service?

1. kubectl container clusters create cluster-1 --zone=us-central1-a

2. kubectl container clusters get-credentials cluster-1 --zone=us-central1-a

3. kubectl apply -f deployment.yaml

4. kubectl apply -f service.yaml

1. gcloud container clusters create cluster-1 --zone=us-central1-a

2. gcloud container clusters get-credentials cluster-1 --zone=us-central1-a

3. kubectl deploy -f deployment.yaml

4. kubectl deploy -f service.yaml

(Incorrect)

1. gcloud container clusters create cluster-1 --zone=us-central1-a

2. gcloud container clusters get-credentials cluster-1 --zone=us-central1-a

3. gcloud gke apply -f deployment.yaml

4. gcloud gke apply -f service.yaml


1. gcloud container clusters create cluster-1 --zone=us-central1-a

2. gcloud container clusters get-credentials cluster-1 --zone=us-central1-a

3. kubectl apply -f deployment.yaml

4. kubectl apply -f service.yaml

(Correct)

Explanation
1. kubectl container clusters create cluster-1 --zone=us-central1-a
2. kubectl container clusters get-credentials cluster-1 --zone=us-central1-a
3. kubectl apply -f deployment.yaml
4. kubectl apply -f service.yaml. is not right.
kubectl doesn't support a kubectl container clusters create command. kubectl cannot be
used to create GKE clusters. To create a GKE cluster, you need to execute the gcloud
container clusters create command.
Ref: https://cloud.google.com/sdk/gcloud/reference/container/clusters/create

1. gcloud container clusters create cluster-1 --zone=us-central1-a


2. gcloud container clusters get-credentials cluster-1 --zone=us-central1-a
3. kubectl deploy -f deployment.yaml
4. kubectl deploy -f service.yaml. is not right.
kubectl doesn't support a kubectl deploy command. The YAML file contains the cluster
resource configuration. You don't create the configuration; instead, you apply the
configuration to the cluster. The configuration can be applied by running the kubectl apply
command.
Ref: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply

1. gcloud container clusters create cluster-1 --zone=us-central1-a


2. gcloud container clusters get-credentials cluster-1 --zone=us-central1-a
3. gcloud gke apply -f deployment.yaml
4. gcloud gke apply -f service.yaml. is not right.
gcloud doesn't support a gcloud gke apply command. The YAML file contains the cluster
resource configuration. You don't create the configuration; instead, you apply the
configuration to the cluster. The configuration can be applied by running the kubectl apply
command.
Ref: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
1. gcloud container clusters create cluster-1 --zone=us-central1-a
2. gcloud container clusters get-credentials cluster-1 --zone=us-central1-a
3. kubectl apply -f deployment.yaml
4. kubectl apply -f service.yaml. is the right answer.
You create a cluster by running gcloud container clusters create command. You then
fetch credentials for a running cluster by running gcloud container clusters get-
credentials command. Finally, you apply the Kubernetes resource configuration by
running kubectl apply -f.
Ref: https://cloud.google.com/sdk/gcloud/reference/container/clusters/create
Ref: https://cloud.google.com/sdk/gcloud/reference/container/clusters/get-credentials
Ref: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply

Question 4: Correct
Your company plans to store sensitive PII data in a cloud storage bucket. Your
compliance department doesn’t like encrypting sensitive PII data with Google
managed keys and has asked you to ensure the new objects uploaded to this
bucket are encrypted by customer managed encryption keys. What should you do?
(Select Three)

Use gsutil with -o "GSUtil:encryption_key=[KEY_RESOURCE]" when uploading objects to the bucket.

(Correct)

Use gsutil with --encryption-key=[ENCRYPTION_KEY] when uploading objects to the bucket.

In the bucket advanced settings, select Customer-managed key and then select a
Cloud KMS encryption key.

(Correct)


In the bucket advanced settings, select Customer-supplied key and then select a
Cloud KMS encryption key.

Modify .boto configuration to include encryption_key = [KEY_RESOURCE] when uploading objects to the bucket.

(Correct)

Explanation
In the bucket advanced settings, select the Customer-supplied key and then
select a Cloud KMS encryption key. is not right.
The customer-supplied key is not an option when selecting the encryption method in
the console.

Use gsutil with --encryption-key=[ENCRYPTION_KEY] when uploading objects to the bucket. is not right.
gsutil doesn't accept the flag --encryption-key. gsutil can be set up to use an encryption
key by modifying boto configuration or by specifying a top-level -o flag but neither of
these is included in this option.
Ref: https://cloud.google.com/storage/docs/encryption/using-customer-managed-keys

In the bucket advanced settings, select Customer-managed key and then select
a Cloud KMS encryption key. is the right answer.
Our compliance department wants us to use customer-managed encryption keys. We
can select Customer-Managed radio and provide a cloud KMS encryption key to encrypt
objects with the customer-managed key. This fits our requirements.

Use gsutil with -o "GSUtil:encryption_key=[KEY_RESOURCE]" when uploading objects to the bucket. is the right answer.
We can have gsutil use an encryption key by using the -o top-level flag: -o
"GSUtil:encryption_key=[KEY_RESOURCE]".
Ref: https://cloud.google.com/storage/docs/encryption/using-customer-managed-
keys#add-object-key

Modify .boto configuration to include encryption_key = [KEY_RESOURCE] when uploading objects to the bucket. is the right answer.
As an alternative to the -o top-level flag, gsutil can also use an encryption key if .boto
configuration is modified to specify the encryption key.
encryption_key = [KEY_RESOURCE]

Ref: https://cloud.google.com/storage/docs/encryption/using-customer-managed-
keys#add-object-key
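As a concrete sketch (the key resource, file, and bucket names are placeholders), either of the following makes gsutil encrypt uploads with a customer-managed Cloud KMS key.

# Option 1: pass the key per invocation with the -o top-level flag
$ gsutil -o "GSUtil:encryption_key=projects/my-project/locations/us/keyRings/my-kr/cryptoKeys/my-key" cp pii-report.csv gs://my-pii-bucket/
# Option 2: set it once in the [GSUtil] section of the .boto configuration file:
#   encryption_key = projects/my-project/locations/us/keyRings/my-kr/cryptoKeys/my-key
$ gsutil cp pii-report.csv gs://my-pii-bucket/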
Question 5: Correct
You want to create a Google Cloud Storage regional bucket logs-archive in the Los
Angeles region (us-west2). You want to use coldline storage class to minimize
costs and you want to retain files for 10 years. Which of the following commands
should you run to create this bucket?

gsutil mb -l us-west2 -s nearline --retention 10y gs://logs-archive

gsutil mb -l us-west2 -s coldline --retention 10m gs://logs-archive

gsutil mb -l us-west2 -s coldline --retention 10y gs://logs-archive

(Correct)

gsutil mb -l los-angeles -s coldline --retention 10m gs://logs-archive

Explanation
gsutil mb -l us-west2 -s nearline --retention 10y gs://logs-archive. is not
right.
This command creates a bucket that uses nearline storage class whereas we want to use
Coldline storage class.
Ref: https://cloud.google.com/storage/docs/gsutil/commands/mb

gsutil mb -l los-angeles -s coldline --retention 10m gs://logs-archive. is not right.
This command uses los-angeles as the location but los-angeles is not a supported
region name. The region name for Los Angeles is us-west2.
Ref: https://cloud.google.com/storage/docs/locations
gsutil mb -l us-west2 -s coldline --retention 10m gs://logs-archive. is not
right.
This command creates a bucket with retention set to 10 months whereas we want to
retain the objects for 10 years.
Ref: https://cloud.google.com/storage/docs/gsutil/commands/mb

gsutil mb -l us-west2 -s coldline --retention 10y gs://logs-archive. is the right answer.
This command correctly creates a bucket in Los Angeles, uses Coldline storage class and
retains objects for 10 years.
Ref: https://cloud.google.com/storage/docs/gsutil/commands/mb
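After creating the bucket, you could confirm the location, storage class, and retention policy with a bucket listing, for example:

$ gsutil ls -L -b gs://logs-archive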

Question 6: Incorrect
You want to migrate an application from Google App Engine Standard to Google
App Engine Flex. Your application is currently serving live traffic and you want to
ensure everything is working in Google App Engine Flex before migrating all
traffic. You want to minimize effort and ensure availability of service. What should
you do?

1. Set env: flex in app.yaml

2. gcloud app deploy --no-promote --version=[NEW_VERSION]

3. Validate [NEW_VERSION] in App Engine Flex

4. gcloud app versions migrate [NEW_VERSION]

(Correct)

1. Set env: flex in app.yaml

2. gcloud app deploy --version=[NEW_VERSION]

3. Validate [NEW_VERSION] in App Engine Flex

4. gcloud app versions migrate [NEW_VERSION]


1. Set env: app-engine-flex in app.yaml

2. gcloud app deploy --version=[NEW_VERSION]

3. Validate [NEW_VERSION] in App Engine Flex

4. gcloud app versions start [NEW_VERSION]

1. Set env: app-engine-flex in app.yaml

2. gcloud app deploy --no-promote --version=[NEW_VERSION]

3. Validate [NEW_VERSION] in App Engine Flex

4. gcloud app versions start [NEW_VERSION]

(Incorrect)

Explanation
1. Set env: flex in app.yaml
2. gcloud app deploy --version=[NEW_VERSION]
3. Validate [NEW_VERSION] in App Engine Flex
4. gcloud app versions migrate [NEW_VERSION]. is not right.
Executing gcloud app deploy --version=[NEW_VERSION] without --no-promote would
deploy the new version and immediately promote it to serve traffic. We don't want this
version to receive traffic as we would like to validate the version first before sending it
traffic.
Ref: https://cloud.google.com/sdk/gcloud/reference/app/versions/migrate

1. Set env: app-engine-flex in app.yaml
2. gcloud app deploy --version=[NEW_VERSION]
3. Validate [NEW_VERSION] in App Engine Flex
4. gcloud app versions start [NEW_VERSION]. is not right.
env: app-engine-flex is an invalid setting. The correct syntax for using the flex engine is
env: flex. Also, executing gcloud app deploy --version=[NEW_VERSION] without --no-
promote would deploy the new version and immediately promote it to serve traffic. We
don't want this version to receive traffic as we would like to validate the version first
before sending it traffic.
Ref: https://cloud.google.com/sdk/gcloud/reference/app/versions/migrate

1. Set env: app-engine-flex in app.yaml
2. gcloud app deploy --no-promote --version=[NEW_VERSION]
3. Validate [NEW_VERSION] in App Engine Flex
4. gcloud app versions start [NEW_VERSION]. is not right.
env: app-engine-flex is an invalid setting. The correct syntax for using the flex engine is
env: flex.
Ref: https://cloud.google.com/sdk/gcloud/reference/app/versions/migrate

1. Set env: flex in app.yaml
2. gcloud app deploy --no-promote --version=[NEW_VERSION]
3. Validate [NEW_VERSION] in App Engine Flex
4. gcloud app versions migrate [NEW_VERSION]. is the right answer.
These commands together achieve the end goal while satisfying our requirements.
Setting env: flex in app.yaml and executing gcloud app deploy --no-promote --version=[NEW_VERSION] results in a new version deployed to the flexible environment, but the new version is not configured to serve traffic. This gives us the opportunity to review the version before migrating it to serve live traffic by running gcloud app versions migrate [NEW_VERSION].
Ref: https://cloud.google.com/sdk/gcloud/reference/app/versions/migrate
Ref: https://cloud.google.com/sdk/gcloud/reference/app/deploy
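Put together, the flow might look like the following sketch, where v2 is just an example version name.

# app.yaml: switch to the flexible environment by setting
#   env: flex
$ gcloud app deploy app.yaml --no-promote --version=v2
# Validate v2 via its version-specific URL, then move traffic to it
$ gcloud app versions migrate v2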

Question 7: Incorrect
You developed an application that lets users upload statistical files and
subsequently run analytics on this data. You chose to use Google Cloud Storage
and BigQuery respectively for these requirements as they are highly available and
scalable. You have a docker image for your application code, and you plan to
deploy on your on-premises Kubernetes clusters. Your on-prem kubernetes cluster
needs to connect to Google Cloud Storage and BigQuery and you want to do this
in a secure way following Google recommended practices. What should you do?

Create a new service account, grant it the least viable privileges to the required
services, generate and download a JSON key. Use the JSON key to authenticate
inside the application.

(Correct)

Use the default service account for App Engine, which already has the required
permissions.

Create a new service account, with editor permissions, generate and download a
key. Use the key to authenticate inside the application.

(Incorrect)

Use the default service account for Compute Engine, which already has the
required permissions.

Explanation
Use the default service account for Compute Engine, which already has the
required permissions. is not right.
The Compute Engine default service account is created with the Cloud IAM project
editor role
Ref: https://cloud.google.com/compute/docs/access/service-
accounts#default_service_account
The project editor role includes all viewer permissions, plus permissions for actions that
modify state, such as changing existing resources. Using a service account that is over-
privileged falls foul of the principle of least privilege. Google recommends you enforce
the principle of least privilege by ensuring that members have only the permissions that
they actually need.
Ref: https://cloud.google.com/iam/docs/understanding-roles

Use the default service account for App Engine, which already has the
required permissions. is not right.
App Engine default service account has the Editor role in the project (Same as the
default service account for Compute Engine).
Ref: https://cloud.google.com/appengine/docs/standard/python/service-account
The project editor role includes all viewer permissions, plus permissions for actions that
modify state, such as changing existing resources. Using a service account that is over-
privileged falls foul of the principle of least privilege. Google recommends you enforce
the principle of least privilege by ensuring that members have only the permissions that
they actually need.
Ref: https://cloud.google.com/iam/docs/understanding-roles

Create a new service account, with editor permissions, generate and download
a key. Use the key to authenticate inside the application. is not right.
The project editor role includes all viewer permissions, plus permissions for actions that
modify state, such as changing existing resources. Using a service account that is over-
privileged falls foul of the principle of least privilege. Google recommends you enforce
the principle of least privilege by ensuring that members have only the permissions that
they actually need.
Ref: https://cloud.google.com/iam/docs/understanding-roles

Create a new service account, grant it the least viable privileges to the
required services, generate and download a JSON key. Use the JSON key to
authenticate inside the application. is the right answer.
Using a new service account with just the least viable privileges for the required services
follows the principle of least privilege. To use a service account outside of Google Cloud,
such as on other platforms or on-premises, you must first establish the identity of the
service account. Public/private key pairs provide a secure way of accomplishing this
goal. Once you have the key, you can use it in your application to authenticate
connections to Cloud Storage and BigQuery.
Ref: https://cloud.google.com/iam/docs/creating-managing-service-account-
keys#creating_service_account_keys
Ref: https://cloud.google.com/iam/docs/recommender-overview
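A minimal sketch of that setup, using placeholder names and example least-privilege roles (choose the narrowest roles your application really needs), could be:

$ gcloud iam service-accounts create stats-app --display-name="Statistics uploader and analytics"
$ gsutil iam ch serviceAccount:stats-app@my-project.iam.gserviceaccount.com:objectAdmin gs://my-stats-bucket
$ gcloud projects add-iam-policy-binding my-project \
    --member=serviceAccount:stats-app@my-project.iam.gserviceaccount.com --role=roles/bigquery.jobUser
$ gcloud iam service-accounts keys create key.json --iam-account=stats-app@my-project.iam.gserviceaccount.com
# Make the key available to the on-premises pods, e.g. as a Kubernetes secret
$ kubectl create secret generic gcp-sa-key --from-file=key.json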
Question 8: Incorrect
Your company wants to move 200 TB of your website clickstream logs from your
on-premises data center to Google Cloud Platform. These logs need to be retained
in GCP for compliance requirements. Your business analysts also want to run
analytics on these logs to understand user click behaviour on your website. Which
of the below would enable you to meet these requirements? (Select Two)

Load logs into Google Cloud SQL.

Insert logs into Google Cloud Bigtable.


Upload log files into Google Cloud Storage.

(Correct)

Import logs into Google Stackdriver.

(Incorrect)

Load logs into Google BigQuery.

(Correct)

Explanation
Load logs into Google Cloud SQL. is not right.
Cloud SQL is a fully-managed relational database service. Storing logs in Google Cloud
SQL is very expensive. Cloud SQL doesn't help us with analytics. Moreover, Google
Cloud Platform offers several storage classes in Google Cloud Storage that are more apt
for storing logs at a much cheaper cost.
Ref: https://cloud.google.com/sql/docs
Ref: https://cloud.google.com/sql/pricing#sql-storage-networking-prices
Ref: https://cloud.google.com/storage/pricing

Import logs into Google Stackdriver. is not right.


You can push custom logs to Stackdriver and set custom retention periods to store the
logs for longer durations. However, Stackdriver doesn't help us with analytics. You could
create a sink and export data into Cloud BigQuery for analytics but that is more work.
Moreover, Google Cloud Platform offers several storage classes in Google Cloud
Storage that are more apt for storing logs at a much cheaper cost.
Ref: https://cloud.google.com/logging
Ref: https://cloud.google.com/storage/pricing

Insert logs into Google Cloud Bigtable. is not right.


Cloud Bigtable is a petabyte-scale, fully managed NoSQL database service for large
analytical and operational workloads. Storing data in Bigtable (approx $0.17/GB +
$0.65/hr per node) is very expensive compared to storing data in Cloud Storage (approx
$0.02/GB in standard storage class) - which can go down further if you transition to
Nearline/Coldline after running analytics.
Ref: https://cloud.google.com/bigtable/

Upload log files into Google Cloud Storage. is the right answer.
Google Cloud Platform offers several storage classes in Google Cloud Storage that are
suitable for storing/archiving logs at a reasonable cost. GCP recommends you use

Standard storage class if you need to access objects frequently

Nearline storage class if you access infrequently i.e. once a month

Coldline storage class if you access even less frequently e.g. once a quarter

Archive storage for logs archival.

Ref: https://cloud.google.com/storage/docs/storage-classes

Load logs into Google BigQuery. is the right answer.


By loading logs into Google BigQuery, you can securely run and share analytical insights
in your organization with a few clicks. BigQuery’s high-speed streaming insertion API
provides a powerful foundation for real-time analytics, making your latest business data
immediately available for analysis.
Ref: https://cloud.google.com/bigquery#marketing-analytics
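As a rough sketch with hypothetical bucket, dataset, and path names, the two steps could look like this:

# Copy the on-premises logs into a Cloud Storage bucket
$ gsutil -m cp -r /var/log/clickstream/ gs://my-clickstream-logs/
# Load the uploaded logs from Cloud Storage into a BigQuery table for analysis
$ bq load --source_format=CSV --autodetect clickstream.events "gs://my-clickstream-logs/clickstream/*.csv"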

Question 9: Incorrect
You deployed a workload to your GKE cluster by running the command kubectl
apply -f app.yaml. You also enabled a LoadBalancer service to expose the
deployment by running kubectl apply -f service.yaml. Your pods are struggling
due to increased load so you decided to enable horizontal pod autoscaler by
running kubectl autoscale deployment [YOUR DEPLOYMENT] --cpu-percent=50 --
min=1 --max=10. You noticed the autoscaler has launched several new pods but
the new pods have failed with the message "Insufficient cpu". What should you do
to resolve this issue?

Edit the managed instance group of the cluster and increase the number of VMs
by 1.

Use "kubectl container clusters resize" to add more nodes to the node pool.

Use "gcloud container clusters resize" to add more nodes to the node pool.

(Correct)

Edit the managed instance group of the cluster and enable autoscaling.

(Incorrect)

Explanation
Use "kubectl container clusters resize" to add more nodes to the node
pool. is not right.
kubectl doesn't support the command kubectl container clusters resize. You have to use
gcloud container clusters resize to resize a cluster.
Ref: https://cloud.google.com/sdk/gcloud/reference/container/clusters/resize

Edit the managed instance group of the cluster and increase the number of VMs by 1. is not right.
Although GKE node pools are backed by managed instance groups, those groups are created and managed by GKE and should not be modified directly. The cluster master (control plane) handles the lifecycle of the nodes in the node pools and manages the workloads' lifecycle, scaling, and upgrades, as well as network and storage resources for those workloads. Resize the node pool through GKE instead.
Ref: https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-architecture

Edit the managed instance group of the cluster and enable autoscaling. is not right.
As above, the managed instance groups backing GKE node pools are managed by GKE and should not be edited directly. To scale nodes automatically, you would enable the GKE cluster autoscaler on the node pool rather than enabling autoscaling on the underlying instance group.
Ref: https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-architecture

Use "gcloud container clusters resize" to add more nodes to the node pool. is
the right answer.
Your pods are failing with "Insufficient cpu". This is because the existing nodes in the
node pool are maxed out, therefore, you need to add more nodes to your node pool.
For such scenarios, enabling cluster autoscaling is ideal, however, this is not in any of the
answer options. In the absence of cluster autoscaling, the next best approach is to add
more nodes to the cluster manually. This is achieved by running the command gcloud
container clusters resize which resizes an existing cluster for running containers.
Ref: https://cloud.google.com/sdk/gcloud/reference/container/clusters/resize
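For example, assuming a hypothetical cluster and node pool:

$ gcloud container clusters resize cluster-1 --node-pool=default-pool --num-nodes=5 --zone=us-central1-a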

Question 10: Correct


You have a web application deployed as a managed instance group based on an
instance template. You modified the startup script used in the instance template
and would like the existing instances to pick up changes from the new startup
scripts. Your web application is currently serving live web traffic. You want to
propagate the startup script changes to all instances in the managed instances
group while minimizing effort, minimizing cost and ensuring that the available
capacity does not decrease. What would you do?

Create a new managed instance group (MIG) based on a new template. Add the
group to the backend service for the load balancer. When all instances in the new
managed instance group are healthy, delete the old managed instance group

Perform a rolling-action replace with max-unavailable set to 0 and max-surge set to 1
(Correct)

Delete instances in the managed instance group (MIG) one at a time and rely on
autohealing to provision an additional instance.

Perform a rolling-action start-update with max-unavailable set to 1 and max-surge set to 0

Explanation
Perform a rolling-action start-update with max-unavailable set to 1 and max-
surge set to 0. is not right.
You can carry out a rolling action start update to fully replace the template by executing
a command like
gcloud compute instance-groups managed rolling-action start-update instance-group-1 --zone=us-central1-a --version template=instance-template-1 --canary-version template=instance-template-2,target-size=100%

This updates instance-group-1 to use instance-template-2 instead of instance-template-1 and has instances created from instance-template-2 serve 100% of traffic. However, the values specified for maxSurge and maxUnavailable mean that we will lose capacity, which is against our requirements.

maxSurge specifies the maximum number of instances that can be created over the
desired number of instances. If maxSurge is set to 0, the rolling update can not create
additional instances and is forced to update existing instances. This results in a
reduction in capacity and therefore does not satisfy our requirement to ensure that the
available capacity does not decrease during the deployment.

maxUnavailable specifies the maximum number of instances that can be unavailable during the update process. When maxUnavailable is set to 1, the rolling update updates 1 instance at a time, i.e. it takes 1 instance out of service, updates it, and puts it back into service. This results in a reduction in capacity while the instance is out of service. Example: if we have 10 instances in service, this combination of settings results in 1 instance at a time being taken out of service for replacement while the remaining 9 continue to serve live traffic. That is a reduction of 10% in available capacity.
Ref: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#max-
unavailable
Ref: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#max-
surge

Create a new managed instance group (MIG) based on a new template. Add the
group to the backend service for the load balancer. When all instances in
the new managed instance group are healthy, delete the old managed instance
group. is not right.
While the end result is the same, there is a period of time where traffic is served by instances from both the old and the new managed instance groups (MIGs), which doubles our cost and increases effort and complexity.

Delete instances in the managed instance group (MIG) one at a time and rely
on auto-healing to provision an additional instance. is not right.
While this would result in the same eventual outcome, there are two issues with this
approach. First, deleting an instance one at a time would result in a reduction in capacity
which is against our requirements. Secondly, deleting instances manually one at a time
is error-prone and time-consuming. One of our requirements is to "minimize the effort"
but deleting instances manually and relying on auto-healing health checks to provision
them back is time-consuming and could take a lot of time depending on the number of
instances in the MIG and the startup scripts executed during bootstrap.

Perform a rolling-action replace with max-unavailable set to 0 and max-surge set to 1. is the right answer.
This option achieves the outcome in the most optimal manner. The replace action is
used to replace instances in a managed instance group. When maxUnavailable is set to
0, the rolling update can not take existing instances out of service. And when maxSurge
is set to 1, we let the rolling update spin a single additional instance. The rolling update
then puts the additional instance into service and takes one of the existing instances out
of service for replacement. There is no reduction in capacity at any point in time.

Ref: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#max-
unavailable
Ref: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#max-
surge
Ref: https://cloud.google.com/sdk/gcloud/reference/alpha/compute/instance-
groups/managed/rolling-action/replace
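A sketch of the command, with a placeholder group name and zone, might be:

$ gcloud compute instance-groups managed rolling-action replace my-mig \
    --max-unavailable=0 --max-surge=1 --zone=us-central1-a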

Question 11: Correct


Your team is working towards using desired state configuration for your application
deployed on GKE cluster. You have YAML files for the kubernetes Deployment and
Service objects. Your application is designed to have 2 pods, which is defined by the
replicas parameter in app-deployment.yaml. Your service uses a GKE Load Balancer, which is defined in app-service.yaml.

You created the kubernetes resources by running

1. kubectl apply -f app-deployment.yaml


2. kubectl apply -f app-service.yaml

Your deployment is now serving live traffic but is suffering from performance issues. You
want to increase the number of replicas to 5. What should you do in order to update the
replicas in existing Kubernetes deployment objects?

Disregard the YAML file. Use the kubectl scale command to scale the replicas to 5.
kubectl scale --replicas=5 -f app-deployment.yaml

Modify the current configuration of the deployment by using kubectl edit to open
the YAML file of the current configuration, modify and save the configuration.
kubectl edit deployment/app-deployment -o yaml --save-config

Disregard the YAML file. Enable autoscaling on the deployment to trigger on CPU
usage and set max pods to 5. kubectl autoscale myapp --max=5 --cpu-percent=80

Edit the number of replicas in the YAML file and rerun the kubectl apply. kubectl
apply -f app-deployment.yaml

(Correct)

Explanation
Disregard the YAML file. Use the kubectl scale command to scale the replicas
to 5. kubectl scale --replicas=5 -f app-deployment.yaml. is not right.
While the outcome is the same, this approach doesn't update the change in the desired
state configuration (YAML file). If you were to make some changes in your app-
deployment.yaml and apply it, the update would scale back the replicas to 2. This is
undesirable.
Ref: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#scaling-a-
deployment

Disregard the YAML file. Enable autoscaling on the deployment to trigger on CPU usage and set max pods to 5. kubectl autoscale myapp --max=5 --cpu-percent=80. is not right.
While the outcome is the same, this approach doesn't update the change in the desired
state configuration (YAML file). If you were to make some changes in your app-
deployment.yaml and apply it, the update would scale back the replicas to 2. This is
undesirable.
Ref: https://kubernetes.io/blog/2016/07/autoscaling-in-kubernetes/

Modify the current configuration of the deployment by using kubectl edit to open the YAML file of the current configuration, modify and save the configuration. kubectl edit deployment/app-deployment -o yaml --save-config. is not right.
Like the above, the outcome is the same. This is equivalent to first getting the resource,
editing it in a text editor, and then applying the resource with the updated version. This
approach doesn't update the replicas change in our local YAML file. If you were to make
some changes in your local app-deployment.yaml and apply it, the update would scale
back the replicas to 2. This is undesirable.
Ref: https://kubernetes.io/docs/concepts/cluster-administration/manage-
deployment/#in-place-updates-of-resources

Edit the number of replicas in the YAML file and rerun the kubectl apply.
kubectl apply -f app-deployment.yaml. is the right answer.
This is the only approach that guarantees that you use desired state configuration. By
updating the YAML file to have 5 replicas and applying it using kubectl apply, you are
preserving the intended state of Kubernetes cluster in the YAML file.
Ref: https://kubernetes.io/docs/concepts/cluster-administration/manage-
deployment/#in-place-updates-of-resources
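In practice this is a one-line edit followed by a re-apply, for example:

# In app-deployment.yaml, change the Deployment spec to read:
#   spec:
#     replicas: 5
$ kubectl apply -f app-deployment.yaml
$ kubectl get deployment app-deployment   # confirm 5/5 replicas become available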

Question 12: Correct


You want to list all the compute instances in zones us-central1-b and europe-
west1-d. Which of the commands below should you run to retrieve this
information?


gcloud compute instances get --filter="zone:( us-central1-b europe-west1-d )"

gcloud compute instances get --filter="zone:( us-central1-b )" and gcloud compute instances list --filter="zone:( europe-west1-d )" and combine the results.

gcloud compute instances list --filter="zone:( us-central1-b europe-west1-d )"

(Correct)

gcloud compute instances list --filter="zone:( us-central1-b )" and gcloud compute instances list --filter="zone:( europe-west1-d )" and combine the results.

Explanation
gcloud compute instances get --filter="zone:( us-central1-b europe-west1-d
)". is not right.
gcloud compute instances command does not support get action.
Ref: https://cloud.google.com/sdk/gcloud/reference/compute/instances

gcloud compute instances get --filter="zone:( us-central1-b )" and gcloud compute instances list --filter="zone:( europe-west1-d )" and combine the results. is not right.
gcloud compute instances command does not support get action.
Ref: https://cloud.google.com/sdk/gcloud/reference/compute/instances

gcloud compute instances list --filter="zone:( us-central1-b )" and gcloud compute instances list --filter="zone:( europe-west1-d )" and combine the results. is not right.
The first command retrieves compute instances from us-central1-b and the second
command retrieves compute instances from europe-west1-d. The output from the two
statements can be combined to create a full list of instances from us-central1-b and
europe-west1-d, however, this is not efficient as it is a manual activity. Moreover, gcloud
already provides the ability to list and filter on multiple zones in a single command.
Ref: https://cloud.google.com/sdk/gcloud/reference/compute/instances/list
gcloud compute instances list --filter="zone:( us-central1-b europe-west1-d
)". is the right answer.
gcloud compute instances list - lists Google Compute Engine instances. The output
includes internal as well as external IP addresses. The filter expression --filter="zone:( us-
central1-b europe-west1-d )" is used to filter instances from zones us-central1-b and
europe-west1-d.
Ref: https://cloud.google.com/sdk/gcloud/reference/compute/instances/list
Here's a sample output of the command.

$ gcloud compute instances list
NAME                                      ZONE           MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP     STATUS
gke-cluster-1-default-pool-8c599c87-16g9  us-central1-a  n1-standard-1               10.128.0.8   35.184.212.227  RUNNING
gke-cluster-1-default-pool-8c599c87-36xh  us-central1-b  n1-standard-1               10.129.0.2   34.68.254.220   RUNNING
gke-cluster-1-default-pool-8c599c87-lprq  us-central1-c  n1-standard-1               10.130.0.13  35.224.96.151   RUNNING

$ gcloud compute instances list --filter="zone:( us-central1-b europe-west1-d )"
NAME                                      ZONE           MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP    STATUS
gke-cluster-1-default-pool-8c599c87-36xh  us-central1-b  n1-standard-1               10.129.0.2   34.68.254.220  RUNNING

Question 13: Correct


You have a web application deployed as a managed instance group. You noticed
some of the compute instances are running low on memory. You suspect this is
due to JVM memory leak and you want to restart the compute instances to reclaim
the leaked memory. Your web application is currently serving live web traffic. You
want to ensure that the available capacity does not go below 80% at any time
during the restarts and you want to do this at the earliest. What would you do?

Stop instances in the managed instance group (MIG) one at a time and rely on
autohealing to bring them back up.

Perform a rolling-action replace with max-unavailable set to 20%.


Perform a rolling-action restart with max-unavailable set to 20%.

(Correct)

Perform a rolling-action reboot with max-surge set to 20%.

Explanation
Perform a rolling-action reboot with max-surge set to 20%. is not right.
reboot is not a supported action for rolling updates. The supported actions are replace,
restart, start-update and stop-proactive-update.
Ref: https://cloud.google.com/sdk/gcloud/reference/beta/compute/instance-
groups/managed/rolling-action

Perform a rolling-action replace with max-unavailable set to 20%. is not right.


Performing a rolling-action replace replaces instances in a managed instance group. While this resolves the JVM memory leak issue, recreating the instances is a little drastic when the same result can be achieved with the simpler restart action. One of our requirements is to "do this at the earliest", but recreating instances might take a lot of time depending on the number of instances and startup scripts; certainly more time than the restart action.
Ref: https://cloud.google.com/sdk/gcloud/reference/beta/compute/instance-
groups/managed/rolling-action

Stop instances in the managed instance group (MIG) one at a time and rely on
autohealing to bring them back up. is not right.
While this would result in the same eventual outcome, it is manual, error-prone and
time-consuming. One of our requirements is to "do this at the earliest" but stopping
instances manually is time-consuming and could take a lot of time depending on the
number of instances in the MIG. Also, relying on autohealing health checks to detect the
failure and spin up the instance adds to the delay.

Perform a rolling-action restart with max-unavailable set to 20%. is the right answer.
This option achieves the outcome in the most optimal manner. The restart action
restarts instances in a managed instance group. By performing a rolling restart with
max-unavailable set to 20%, the rolling update restarts instances while ensuring there is
at least 80% available capacity. The rolling update carries on restarting all the remaining
instances until all instances in the MIG have been restarted.
Ref: https://cloud.google.com/sdk/gcloud/reference/alpha/compute/instance-
groups/managed/rolling-action/restart
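For example, assuming a hypothetical managed instance group name and zone:

$ gcloud compute instance-groups managed rolling-action restart my-mig \
    --max-unavailable=20% --zone=us-central1-a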

Question 14: Correct


You are designing an application that lets users upload and share photos. You
expect your application to grow really fast and you are targeting worldwide
audience. You want to delete uploaded photos after 30 days. You want to
minimize costs while ensuring your application is highly available. Which GCP
storage solution should you choose?

Cloud Filestore.

Cloud Datastore database.

Persistent SSD on VM instances.

Multiregional Cloud Storage bucket.

(Correct)

Explanation
Cloud Datastore database. is not right.
Cloud Datastore is a NoSQL document database built for automatic scaling, high
performance, and ease of application development. We want to store objects/files and
Cloud Datastore is not a suitable storage option for such data.
Ref: https://cloud.google.com/datastore/docs/concepts/overview

Cloud Filestore. is not right.


Cloud Filestore is a managed file storage service based on NFSv3 protocol. While Cloud
Filestore can be used to store images, Cloud Filestore is a zonal service and can not
scale easily to support a worldwide audience. Also, Cloud Filestore costs a lot (10 times)
more than some of the storage classes offered by Google Cloud Storage.
Ref: https://cloud.google.com/filestore
Ref: https://cloud.google.com/storage/pricing

Persistent SSD on VM instances. is not right.


Persistent SSD is a regional service and doesn't automatically scale to other regions to
support a worldwide user base. Moreover, Persistent SSD disks are very expensive. A
regional persistent SSD costs $0.34 per GB per month. In comparison, Google Cloud
Storage offers several storage classes that are significantly cheaper.
Ref: https://cloud.google.com/persistent-disk
Ref: https://cloud.google.com/filestore/pricing

Multiregional Cloud Storage bucket. is the right answer.


Cloud Storage allows world-wide storage and retrieval of any amount of data at any
time. We don't need to set up auto-scaling ourselves. Cloud Storage autoscaling is
managed by GCP. Cloud Storage is an object store so it is suitable for storing photos.
Cloud Storage allows world-wide storage and retrieval, so it caters well to our worldwide
audience. Cloud Storage provides lifecycle rules that can be configured to
automatically delete objects older than 30 days. This also fits our requirements. Finally,
Google Cloud Storage offers several storage classes such as Nearline Storage ($0.01 per
GB per Month) Coldline Storage ($0.007 per GB per Month) and Archive Storage ($0.004
per GB per month) which are significantly cheaper than any of the options above.
Ref: https://cloud.google.com/storage/docs
Ref: https://cloud.google.com/storage/pricing
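A minimal sketch of the bucket setup with a hypothetical bucket name, including a lifecycle rule that deletes objects after 30 days, could be:

$ gsutil mb -l us gs://my-photos-bucket   # "us" is a multi-region location
$ cat > lifecycle.json <<EOF
{"rule": [{"action": {"type": "Delete"}, "condition": {"age": 30}}]}
EOF
$ gsutil lifecycle set lifecycle.json gs://my-photos-bucket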

Question 15: Correct


An engineer from your team accidentally deployed several new versions of NodeJS
application on Google App Engine Standard. You are concerned the new versions
are serving traffic. You have been asked to produce a list of all the versions of the
application that are receiving traffic as well as the percent traffic split between them.
What should you do?

gcloud app versions list --show-traffic

gcloud app versions list --hide-no-traffic

(Correct)

gcloud app versions list --traffic

gcloud app versions list

Explanation
gcloud app versions list. is not right
This command lists all the versions of all services that are currently deployed to the App
Engine server. While this list includes all versions that are receiving traffic, it also
includes versions that are not receiving traffic.
Ref: https://cloud.google.com/sdk/gcloud/reference/app/versions/list

gcloud app versions list --traffic. is not right


gcloud app versions list command does not support --traffic flag.
Ref: https://cloud.google.com/sdk/gcloud/reference/app/versions/list

gcloud app versions list --show-traffic. is not right


gcloud app versions list command does not support --show-traffic flag.
Ref: https://cloud.google.com/sdk/gcloud/reference/app/versions/list

gcloud app versions list --hide-no-traffic. is the right answer.


This command correctly lists just the versions that are receiving traffic by hiding versions
that do not receive traffic. This is the only command that fits our requirements.
Ref: https://cloud.google.com/sdk/gcloud/reference/app/versions/list

Question 16: Correct


Your networks team has set up the Google Compute Engine network shown below. In addition, firewall rules in the VPC network have been configured to allow egress to 0.0.0.0/0.
[Network diagram: subnet-a has Private Google Access enabled and subnet-b has it disabled; VM A1 (subnet-a) and VM B1 (subnet-b) have only internal IP addresses, while VM A2 (subnet-a) and VM B2 (subnet-b) also have external IP addresses.]
Which instances have access to Google APIs and Services such as Google Cloud
Storage?

VM A1, VM A2, VM B1, VM B2


VM A1, VM A2, VM B2

(Correct)

VM A1, VM A2

VM A1, VM A2, VM B1

Explanation

VM A1 can access Google APIs and services, including Cloud Storage because its
network interface is located in subnet-a, which has Private Google Access enabled.
Private Google Access applies to the instance because it only has an internal IP address.

VM B1 cannot access Google APIs and services because it only has an internal IP
address and Private Google Access is disabled for subnet-b.

VM A2 and VM B2 can both access Google APIs and services, including Cloud Storage,
because they each have external IP addresses. Private Google Access has no effect on
whether or not these instances can access Google APIs and services because both have
external IP addresses.

So the correct answer is VM A1, VM A2, VM B2

Ref: https://cloud.google.com/vpc/docs/private-access-options#example
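For completeness, if VM B1 also needed access to Google APIs without an external IP, Private Google Access could be enabled on its subnet, for example (the region shown is a placeholder):

$ gcloud compute networks subnets update subnet-b --region=us-east1 --enable-private-ip-google-access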

Question 17: Incorrect


You want to list all the internal and external IP addresses of all compute instances.
Which of the commands below should you run to retrieve this information?

gcloud compute networks list

gcloud compute instances list


(Correct)

gcloud compute networks list-ip

gcloud compute instances list-ip

(Incorrect)

Explanation
gcloud compute instances list-ip. is not right.
"gcloud compute instances" doesn't support the action list-ip.
Ref: https://cloud.google.com/sdk/gcloud/reference/compute/instances/list

gcloud compute networks list-ip. is not right.


"gcloud compute networks" doesn't support the action list-ip.
Ref: https://cloud.google.com/sdk/gcloud/reference/compute/networks/list

gcloud compute networks list. is not right.


"gcloud compute networks list" doesn't list the IP addresses. It is used for listing Google
Compute Engine networks (i.e. VPCs)
Ref: https://cloud.google.com/sdk/gcloud/reference/compute/networks/list
Here's a sample output of the command.

$ gcloud compute networks list


NAME SUBNET_MODE BGP_ROUTING_MODE IPV4_RANGE GATEWAY_IPV4
default AUTO REGIONAL
test-vpc CUSTOM REGIONAL

gcloud compute instances list. is the right answer


gcloud compute instances list - lists Google Compute Engine instances. The output
includes internal as well as external IP addresses.
Ref: https://cloud.google.com/sdk/gcloud/reference/compute/instances/list
Here's a sample output of the command.
$ gcloud compute instances list
NAME                                      ZONE           MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP     STATUS
gke-cluster-1-default-pool-8c599c87-16g9  us-central1-a  n1-standard-1               10.128.0.8   35.184.212.227  RUNNING
gke-cluster-1-default-pool-8c599c87-36xh  us-central1-a  n1-standard-1               10.128.0.6   34.68.254.220   RUNNING
gke-cluster-1-default-pool-8c599c87-lprq  us-central1-a  n1-standard-1               10.128.0.7   35.224.96.151   RUNNING

Question 18: Correct


Your company stores sensitive PII data in a cloud storage bucket. The objects are
currently encrypted by Google-managed keys. Your compliance department has
asked you to ensure all current and future objects in this bucket are encrypted by
customer managed encryption keys. You want to minimize effort. What should
you do?

1. In the bucket advanced settings, select Customer-supplied key and then select a
Cloud KMS encryption key.

2. Delete all existing objects and upload them again so they use the new customer-
supplied key for encryption.

1. In the bucket advanced settings, select Customer-managed key and then select a
Cloud KMS encryption key.

2. Rewrite all existing objects using gsutil rewrite to encrypt them with the new
Customer-managed key.

(Correct)

1. In the bucket advanced settings, select Customer-managed key and then select a
Cloud KMS encryption key.

2. Existing objects encrypted by Google-managed keys can still be decrypted by the new
Customer-managed key.


1. Rewrite all existing objects using gsutil rewrite to encrypt them with the new
Customer-managed key.

2. In the bucket advanced settings, select Customer-managed key and then select a
Cloud KMS encryption key.

Explanation
1. In the bucket advanced settings, select Customer-managed key and then
select a Cloud KMS encryption key.
2. Existing objects encrypted by Google-managed keys can still be decrypted
by the new Customer-managed key. is not right.
While changing the bucket encryption to use the Customer-managed key ensures all
new objects use this key, existing objects are still encrypted by the Google-managed
key. This doesn't satisfy our compliance requirements. Moreover, the customer
managed key can't decrypt objects created by Google-managed keys.
Ref: https://cloud.google.com/storage/docs/encryption/using-customer-managed-
keys#add-default-key

1. In the bucket advanced settings, select customer-supplied key and then select a Cloud KMS encryption key.
2. Delete all existing objects and upload them again so they use the new customer-supplied key for encryption. is not right.
The customer-supplied key is not an option when selecting the encryption method in
the console. Moreover, we want to use customer-managed encryption keys and not
customer-supplied encryption keys. This does not fit our requirements.

1. Rewrite all existing objects using gsutil rewrite to encrypt them with
the new Customer-managed key.
2. In the bucket advanced settings, select Customer-managed key and then
select a Cloud KMS encryption key. is not right.
While changing the bucket encryption to use the Customer-managed key ensures all
new objects use this key, rewriting existing objects before changing the bucket
encryption would result in the objects being encrypted by the encryption method in use
at that point - which is still Google-managed.

1. In the bucket advanced settings, select Customer-managed key and then select a Cloud KMS encryption key.
2. Rewrite all existing objects using gsutil rewrite to encrypt them with the new Customer-managed key. is the right answer.
Changing the bucket encryption to use the Customer-managed key ensures all new
objects use this key. Now that the bucket encryption is changed to use the Customer-managed key, rewriting all existing objects using gsutil rewrite results in the objects being encrypted by the new Customer-managed key. This is the only option that satisfies our
requirements.
Ref: https://cloud.google.com/storage/docs/encryption/using-customer-managed-
keys#add-default-key
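The same two steps can also be performed from the command line; the bucket and key resource below are placeholders.

# Set the bucket's default customer-managed (Cloud KMS) key
$ gsutil kms encryption -k projects/my-project/locations/us/keyRings/my-kr/cryptoKeys/my-key gs://my-pii-bucket
# Rewrite existing objects so they are re-encrypted with that key
$ gsutil -o "GSUtil:encryption_key=projects/my-project/locations/us/keyRings/my-kr/cryptoKeys/my-key" rewrite -k gs://my-pii-bucket/**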

Question 19: Correct


You developed a web application that lets users upload and share images. You
deployed this application in Google Compute Engine and you have configured
Stackdriver Logging. Your application sometimes times out while uploading large
images, and your application generates relevant error log entries which are
ingested to Stackdriver Logging. You would now like to create alerts based on
these metrics. You intend to add more compute resources manually when the
number of failures exceeds a threshold. What should you do in order to alert
based on these metrics with minimal effort?

In Stackdriver logging, create a new logging metric with the required filters, edit
the application code to set the metric value when needed, and create an alert in
Stackdriver based on the new metric.

In Stackdriver Logging, create a custom monitoring metric from log data and
create an alert in Stackdriver based on the new metric.

(Correct)

Add the Stackdriver monitoring and logging agent to the instances running the
code.

Create a custom monitoring metric in code, edit the application code to set the
metric value when needed, create an alert in Stackdriver based on the new metric.
Explanation
In Stackdriver logging, create a new logging metric with the required
filters, edit the application code to set the metric value when needed, and
create an alert in Stackdriver based on the new metric. is not right.
You don't need to edit the application code to send the metric values. The application
already pushes error logs whenever the application times out. Since you already have
the required entries in the Stackdriver logs, you don't need to edit the application code
to send the metric values. You just need to create metrics from log data.
Ref: https://cloud.google.com/logging

Create a custom monitoring metric in code, edit the application code to set
the metric value when needed, create an alert in Stackdriver based on the
new metric. is not right.
You don't create a custom monitoring metric in code. Stackdriver Logging allows you to
easily create metrics from log data. Since the application already pushes error logs to
Stackdriver Logging, we just need to create metrics from log data in Stackdriver
Logging.
Ref: https://cloud.google.com/logging

Add the Stackdriver monitoring and logging agent to the instances running
the code. is not right.
The Stackdriver Monitoring agent gathers system and application metrics from your VM
instances and sends them to Monitoring. In order to make use of this approach, you
need application metrics but our application doesn't generate metrics. It just logs errors
whenever the upload times out and these are then ingested to Stackdriver logging. We
can update our application to enable custom metrics for these scenarios, but that is a lot
more work than creating metrics from log data in Stackdriver Logging
Ref: https://cloud.google.com/logging

In Stackdriver Logging, create a custom monitoring metric from log data and
create an alert in Stackdriver based on the new metric. is the right answer.
Our application adds entries to error logs whenever the application times out during
image upload and these logs are ingested to Stackdriver Logging. Since we already have
the required data in logs, we just need to create metrics from this log data in Stackdriver
Logging. And we can then set up an alert based on this metric. We can trigger an alert if
the number of occurrences of the relevant error message is greater than a predefined
value. Based on the alert, you can manually add more compute resources.
Ref: https://cloud.google.com/logging
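For instance, a logs-based metric could be created as below; the filter text is only an example of what the application's timeout error entries might look like, and an alerting policy with a threshold condition would then be created on this metric in Stackdriver Monitoring.

$ gcloud logging metrics create image_upload_timeouts \
    --description="Count of image upload timeout errors" \
    --log-filter='resource.type="gce_instance" AND severity>=ERROR AND textPayload:"upload timed out"'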
Question 20: Correct
You created a kubernetes deployment by running kubectl run nginx --image=nginx --replicas=1. After a few days, you decided you no longer want this deployment. You identified the pod and deleted it by running kubectl delete pod. You noticed the pod got recreated.

$ kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx-84748895c4-nqqmt   1/1     Running   0          9m41s
$ kubectl delete pod nginx-84748895c4-nqqmt
pod "nginx-84748895c4-nqqmt" deleted
$ kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx-84748895c4-k6bzl   1/1     Running   0          25s

What should you do to delete the deployment and avoid the pod getting recreated?

kubectl delete deployment nginx

(Correct)

kubectl delete --deployment=nginx

kubectl delete pod nginx-84748895c4-k6bzl --no-restart

kubectl delete nginx

Explanation
kubectl delete pod nginx-84748895c4-k6bzl --no-restart. is not right.
kubectl delete pod command does not support the flag --no-restart. The command fails
to execute due to the presence of an invalid flag.
$ kubectl delete pod nginx-84748895c4-k6bzl --no-restart
Error: unknown flag: --no-restart

Ref: https://kubernetes.io/docs/reference/kubectl/cheatsheet/#deleting-resources

kubectl delete --deployment=nginx. is not right.


kubectl delete command does not support the parameter --deployment. The command
fails to execute due to the presence of an invalid parameter.
$ kubectl delete --deployment=nginx
Error: unknown flag: --deployment

Ref: https://kubernetes.io/docs/reference/kubectl/cheatsheet/#deleting-resources

kubectl delete nginx. is not right.


We haven't told the kubectl delete command what type of resource to delete - whether a
pod, a service or a deployment - and which named resource to act on. The command syntax
is wrong and it fails to execute.

$ kubectl delete nginx
error: resource(s) were provided, but no name, label selector, or --all flag specified

Ref: https://kubernetes.io/docs/reference/kubectl/cheatsheet/#deleting-resources

kubectl delete deployment nginx. is the right answer.


This command correctly deletes the deployment. Pods are managed by kubernetes
workloads (deployments). When a pod is deleted, the deployment detects the pod is
unavailable and brings up another pod to maintain the replica count. The only way to
delete the workload is by deleting the deployment itself using the kubectl delete
deployment command.

$ kubectl delete deployment nginx


deployment.apps "nginx" deleted

Ref: https://kubernetes.io/docs/reference/kubectl/cheatsheet/#deleting-resources
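
The recreation behaviour is easier to see by listing the objects the deployment manages:
deleting a pod leaves the deployment and its ReplicaSet in place, and the ReplicaSet
immediately starts a replacement pod to restore the desired replica count. A quick,
illustrative way to inspect this chain (the run=nginx label selector assumes the default
label applied by the old kubectl run deployment syntax used in the question):

# Show the deployment, the ReplicaSet it manages, and the pods the ReplicaSet manages
$ kubectl get deployment,replicaset,pods -l run=nginx

# Deleting the deployment garbage-collects its ReplicaSet and pods along with it
$ kubectl delete deployment nginx
$ kubectl get pods -l run=nginx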
Question 21: Incorrect
Your company recently migrated all infrastructure to Google Cloud Platform (GCP) and
you want to use Google Cloud Build to build all container images. You want to store the
build logs in a specific Google Cloud Storage bucket. You also have a requirement to
push the images to Google Container Registry. You wrote a cloud build YAML
configuration file with the following contents.
1. steps:
2. - name: 'gcr.io/cloud-builders/docker'
3. args: ['build', '-t', 'gcr.io/[PROJECT_ID]/[IMAGE_NAME]', '.']
4. images: ['gcr.io/[PROJECT_ID]/[IMAGE_NAME]']

How should you execute cloud build to satisfy these requirements?

Execute gcloud builds push --config=[CONFIG_FILE_PATH] [SOURCE]

Execute gcloud builds submit --config=[CONFIG_FILE_PATH] [SOURCE]

Execute gcloud builds run --config=[CONFIG_FILE_PATH] --gcs-log-dir=[GCS_LOG_DIR] [SOURCE]

(Incorrect)

Execute gcloud builds submit --config=[CONFIG_FILE_PATH] --gcs-log-dir=[GCS_LOG_DIR] [SOURCE]

(Correct)

Explanation
Execute gcloud builds push --config=[CONFIG_FILE_PATH] [SOURCE]. is not right.
gcloud builds command does not support push operation. The correct operation to
build images and push them to gcr is submit.
Ref: https://cloud.google.com/sdk/gcloud/reference/builds/submit

Execute gcloud builds run --config=[CONFIG_FILE_PATH] --gcs-log-dir=[GCS_LOG_DIR] [SOURCE]. is not right.
gcloud builds command does not support run operation. The correct operation to build
images and push them to gcr is submit.
Ref: https://cloud.google.com/sdk/gcloud/reference/builds/submit

Execute gcloud builds submit --config=[CONFIG_FILE_PATH] [SOURCE]. is not right.
This command correctly builds the container image and pushes the image to GCR
(Google Container Registry) but doesn’t upload the build logs to a specific GCS bucket.
If --gcs-log-dir is not set, gs://[PROJECT_NUMBER].cloudbuild-logs.googleusercontent.com/ will be created and used.
Ref: https://cloud.google.com/sdk/gcloud/reference/builds/submit
Ref: https://cloud.google.com/cloud-build/docs/building/build-containers

Execute gcloud builds submit --config=[CONFIG_FILE_PATH] --gcs-log-dir=[GCS_LOG_DIR] [SOURCE]. is the right answer.
This command correctly builds the container image, pushes the image to GCR (Google
Container Registry) and uploads the build logs to Google Cloud Storage.

--config flag specifies the YAML or JSON file to use as the build configuration file.
--gcs-log-dir specifies the directory in Google Cloud Storage to hold build logs.

[SOURCE] is the location of the source to build. The location can be a directory on a
local disk or a gzipped archive file (.tar.gz) in Google Cloud Storage.

Ref: https://cloud.google.com/sdk/gcloud/reference/builds/submit
Ref: https://cloud.google.com/cloud-build/docs/building/build-containers
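
For illustration, a concrete invocation could look like the following; the config file
name, log bucket and source directory are placeholders:

$ gcloud builds submit --config=cloudbuild.yaml \
    --gcs-log-dir=gs://my-build-logs-bucket/logs .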

Question 22: Correct


You created a compute instance by running gcloud compute instances create instance1.
You intended to create the instance in project gcp-ace-proj-266520 but the instance got
created in a different project. Your cloud shell gcloud configuration is as shown.
1. $ gcloud config list
2.
3. [component_manager]
4. disable_update_check = True
5. [compute]
6. gce_metadata_read_timeout_sec = 5
7. zone = europe-west2-a
8. [core]
9. account = gcp-ace-lab-user@gmail.com
10. disable_usage_reporting = False
11. project = gcp-ace-lab-266520
12. [metrics]
13. environment = devshell

What should you do to delete the instance that was created in the wrong project and
recreate it in gcp-ace-proj-266520 project?

1. gcloud compute instances delete instance1

2. gcloud config set project gcp-ace-proj-266520

3. gcloud compute instances create instance1

(Correct)

1. gcloud config set project gcp-ace-proj-266520


2. gcloud compute instances recreate instance1 --previous-project gcp-ace-lab-266520

1. gcloud compute instances delete instance1

2. gcloud compute instances create instance1

1. gcloud compute instances delete instance1

2. gcloud config set compute/project gcp-ace-proj-266520

3. gcloud compute instances create instance1

Explanation
1. gcloud compute instances delete instance1
2. gcloud compute instances create instance1. is not right.
The default core/project property is set to gcp-ace-lab-266520 in our current
configuration so the instance would have been created in this project. Running the first
command to delete the instance correctly deletes it from this project but we haven't
modified the core/project property before executing the second command so the
instance is recreated in the same project which is not what we want.
Ref: https://cloud.google.com/sdk/gcloud/reference/compute/instances/create
Ref: https://cloud.google.com/sdk/gcloud/reference/compute/instances/delete

1. gcloud config set project gcp-ace-proj-266520
2. gcloud compute instances recreate instance1 --previous-project gcp-ace-lab-266520. is not right.
gcloud compute instances command doesn't support recreate action. It supports
create/delete which is what we are supposed to use for this requirement.
Ref: https://cloud.google.com/sdk/gcloud/reference/compute/instances

1. gcloud compute instances delete instance1
2. gcloud config set compute/project gcp-ace-proj-266520
3. gcloud compute instances create instance1. is not right.
The approach is right but the syntax is wrong. gcloud config does not have a
compute/project property. The project property is part of the core/ section as seen in
the output of gcloud configuration list in the question. In this scenario, we are trying to
set compute/project property that doesn't exist in the compute section so the command
fails.
Ref: https://cloud.google.com/sdk/gcloud/reference/config/set

1. gcloud compute instances delete instance1
2. gcloud config set project gcp-ace-proj-266520
3. gcloud compute instances create instance1. is the right answer.
This sequence of commands correctly deletes the instance from gcp-ace-lab-266520
which is the default project in the active gcloud configuration, then modifies the current
configuration to set the default project to gcp-ace-proj-266520, and finally creates the
instance in the project gcp-ace-proj-266520 which is the default project in active gcloud
configuration at the time of running the command. This produces the intended outcome
of deleting the instance from the gcp-ace-lab-266520 project and recreating it in
gcp-ace-proj-266520.
Ref: https://cloud.google.com/sdk/gcloud/reference/config/set
Ref: https://cloud.google.com/sdk/gcloud/reference/compute/instances/create
Ref: https://cloud.google.com/sdk/gcloud/reference/compute/instances/delete
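
As a hedged sketch of the full sequence (the zone shown is the configured default, made
explicit here, and gcloud config get-value is used only to verify the switch):

$ gcloud compute instances delete instance1 --zone=europe-west2-a
$ gcloud config set project gcp-ace-proj-266520
$ gcloud config get-value project
gcp-ace-proj-266520
$ gcloud compute instances create instance1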

Question 23: Incorrect


You developed an application to serve production users and you plan to use Cloud
SQL to host user state data which is very critical for the application flow. You want
to protect your user state data from zone failures. What should you do?

Create a Read replica in the same region but in a different zone.

Configure High Availability (HA) for Cloud SQL and Create a Failover replica in the
same region but in a different zone.

(Correct)

Configure High Availability (HA) for Cloud SQL and Create a Failover replica in a
different region.

(Incorrect)

Create a Read replica in a different region.

Explanation
Create a Read replica in the same region but in a different zone. is not right.
Read replicas do not provide failover capability. To provide failover capability, you need
to configure Cloud SQL Instance for High Availability.
Ref: https://cloud.google.com/sql/docs/mysql/replication

Create a Read replica in a different region. is not right.


Read replicas do not provide failover capability. To provide failover capability, you need
to configure Cloud SQL Instance for High Availability.
Ref: https://cloud.google.com/sql/docs/mysql/replication

Configure High Availability (HA) for Cloud SQL and Create a Failover replica
in a different region. is not right.
A Cloud SQL instance configured for HA is called a regional instance because its
primary and secondary instances are in the same region. They are located in different
zones but within the same region. It is not possible to create a Failover replica in a
different region.
Ref: https://cloud.google.com/sql/docs/mysql/high-availability

Configure High Availability (HA) for Cloud SQL and Create a Failover replica
in the same region but in a different zone. is the right answer.
If a HA-configured instance becomes unresponsive, Cloud SQL automatically switches to
serving data from the standby instance. The HA configuration provides data
redundancy. A Cloud SQL instance configured for HA has instances in the primary zone
(Master node) and secondary zone (standby/failover node) within the configured region.
Through synchronous replication to each zone's persistent disk, all writes made to the
primary instance are also made to the standby instance. If the primary goes down, the
standby/failover node takes over and your data continues to be available to client
applications.
Ref: https://cloud.google.com/sql/docs/mysql/high-availability
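
For reference, high availability can be enabled when creating the instance (or later with
gcloud sql instances patch) by setting the availability type to REGIONAL. A minimal sketch;
the instance name, database version, region and tier below are placeholders:

$ gcloud sql instances create prod-user-state \
    --database-version=MYSQL_8_0 \
    --region=europe-west2 \
    --availability-type=REGIONAL \
    --tier=db-n1-standard-2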

Question 24: Correct


You want to find a list of regions and the prebuilt images offered by Google
Compute Engine. Which commands should you execute to retrieve this information?

1. gcloud compute regions list

2. gcloud images list

1. gcloud compute regions list

2. gcloud compute images list

(Correct)

1. gcloud regions list

2. gcloud compute images list

1. gcloud regions list

2. gcloud images list

Explanation
1. gcloud regions list.
2. gcloud images list. is not right.
The correct command to list compute regions is gcloud compute regions list.
Ref: https://cloud.google.com/sdk/gcloud/reference/compute/regions/list
The correct command to list compute images is gcloud compute images list.
Ref: https://cloud.google.com/sdk/gcloud/reference/compute/images/list

1. gcloud compute regions list
2. gcloud images list. is not right.
The correct command to list compute images is gcloud compute images list.
Ref: https://cloud.google.com/sdk/gcloud/reference/compute/images/list

1. gcloud regions list
2. gcloud compute images list. is not right.
The correct command to list compute regions is gcloud compute regions list.
Ref: https://cloud.google.com/sdk/gcloud/reference/compute/regions/list

1. gcloud compute regions list
2. gcloud compute images list. is the right answer.
Both commands correctly retrieve the regions and prebuilt images offered by Google
Compute Engine.
Ref: https://cloud.google.com/sdk/gcloud/reference/compute/regions/list
Ref: https://cloud.google.com/sdk/gcloud/reference/compute/images/list

Question 25: Correct


You want to create a new role and grant it to the SME team. The new role should
provide your SME team BigQuery Job User and Cloud Bigtable User roles on all
projects in the organization. You want to minimize operational overhead. You
want to follow Google recommended practices. How should you create the new
role?

In GCP Console under IAM Roles, select both roles and combine them into a new
custom role. Grant the role to the SME team group at the organization level.

(Correct)

In GCP Console under IAM Roles, select both roles and combine them into a new
custom role. Grant the role to the SME team group at project. Repeat this step for
each project.

In GCP Console under IAM Roles, select both roles and combine them into a new
custom role. Grant the role to the SME team group at project. Use gcloud iam
promote-role to promote the role to all other projects and grant the role in each
project to the SME team group.

Execute command gcloud iam combineroles --global to combine the 2 roles into a
new custom role and grant them globally to SME team group.
Explanation
We want to create a new role and grant it to a team. Since you want to minimize
operational overhead, we need to grant it to a group - so that new users who join the
team just need to be added to the group and they inherit all the permissions. Also, this
team needs to have the role for all projects in the organization. And since we want to
minimize the operational overhead, we need to grant it at the organization level so that
all current projects, as well as future projects, have the role granted to them.

In GCP Console under IAM Roles, select both roles and combine them into a
new custom role. Grant the role to the SME team group at project. Repeat
this step for each project. is not right.
Repeating the step for all projects is a manual, error-prone and time-consuming task.
Also, if any projects were to be created in the future, we have to repeat the same
process again. This increases operational overhead.

In GCP Console under IAM Roles, select both roles and combine them into a
new custom role. Grant the role to the SME team group at project. Use gcloud
iam promote-role to promote the role to all other projects and grant the
role in each project to the SME team group. is not right.
Repeating the step for all projects is a manual, error-prone and time-consuming task.
Also, if any projects were to be created in the future, we have to repeat the same
process again. This increases operational overhead.

Execute command gcloud iam combine-roles --global to combine the 2 roles into a new
custom role and grant them globally to all. is not right.
There are several issues with this. gcloud iam command doesn't support the action
combine-roles. Secondly, we don't want to grant the roles globally. We want to grant
them to the SME team and no one else.

In GCP Console under IAM Roles, select both roles and combine them into a
new custom role. Grant the role to the SME team group at the organization
level. is the right answer.
This correctly creates the role and assigns the role to the group at the organization.
When any new users join the team, the only additional task is to add them to the group.
Also, when a new project is created under the organization, no additional human
intervention is needed. Since the role is granted at the organization level, it
automatically is granted to all the current and future projects belonging to the
organization.
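
As an illustrative command-line equivalent (the organization ID, group address, role ID
and the abbreviated permission list are all hypothetical):

# Create a custom role at the organization level that combines the permissions of
# roles/bigquery.jobUser and roles/bigtable.user (only a few permissions shown here)
$ gcloud iam roles create smeCombinedRole --organization=123456789012 \
    --title="SME BigQuery Job User + Bigtable User" \
    --permissions=bigquery.jobs.create,bigtable.tables.readRows,bigtable.tables.mutateRows

# Grant the custom role to the SME team group on the organization
$ gcloud organizations add-iam-policy-binding 123456789012 \
    --member=group:sme-team@example.com \
    --role=organizations/123456789012/roles/smeCombinedRole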
Question 26: Incorrect
Your company stores customer PII data in Cloud Storage buckets. A subset of this
data is regularly imported into a BigQuery dataset to carry out analytics. You want
to make sure the access to this bucket is strictly controlled. Your analytics team
needs read access on the bucket so that they can import data in BigQuery. Your
operations team needs read/write access to both the bucket and BigQuery dataset
to add Customer PII data of new customers on an ongoing basis. Your Data
Vigilance officers need Administrator access to the Storage bucket and BigQuery
dataset. You want to follow Google recommended practices. What should you do?

At the Organization level, add your Data Vigilance officers user accounts to the
Owner role, add your operations team user accounts to the Editor role, and add
your analytics team user accounts to the Viewer role.

Use the appropriate predefined IAM roles for each of the access levels needed for
Cloud Storage and BigQuery. Add your users to those roles for each of the
services.

(Correct)

Create 3 custom IAM roles with appropriate permissions for the access levels
needed for Cloud Storage and BigQuery. Add your users to the appropriate roles.

At the Project level, add your Data Vigilance officers user accounts to the Owner
role, add your operations team user accounts to the Editor role, and add your
analytics team user accounts to the Viewer role.

(Incorrect)

Explanation
At the Organization level, add your Data Vigilance officers user accounts to
the Owner role, add your operations team user accounts to the Editor role,
and add your analytics team user accounts to the Viewer role. is not right.
Google recommends we apply the security principle of least privilege, where we grant
only necessary permissions to access specific resources.
Ref: https://cloud.google.com/iam/docs/overview
Providing these primitive roles at the organization levels grants them permissions on all
resources in all projects under the organization which violates the security principle of
least privilege.
Ref: https://cloud.google.com/iam/docs/understanding-roles

At the Project level, add your Data Vigilance officers user accounts to the
Owner role, add your operations team user accounts to the Editor role, and
add your analytics team user accounts to the Viewer role. is not right.
Google recommends we apply the security principle of least privilege, where we grant
only necessary permissions to access specific resources.
Ref: https://cloud.google.com/iam/docs/overview
Providing these primitive roles at the project level grants them permissions on all
resources in the project which violates the security principle of least privilege.
Ref: https://cloud.google.com/iam/docs/understanding-roles

Create 3 custom IAM roles with appropriate permissions for the access levels
needed for Cloud Storage and BigQuery. Add your users to the appropriate
roles. is not right.
While this has the intended outcome, it is not very efficient particularly when there are
predefined roles that can be used. Secondly, if Google adds/modifies permissions for
these services in the future, we would have to update our roles to reflect the
modifications. This results in operational overhead and increases costs.
Ref: https://cloud.google.com/storage/docs/access-control/iam-roles#primitive-roles-
intrinsic
Ref: https://cloud.google.com/bigquery/docs/access-control

Use the appropriate predefined IAM roles for each of the access levels
needed for Cloud Storage and BigQuery. Add your users to those roles for
each of the services. is the right answer.
For the Cloud Storage service, Google provides predefined roles such as
roles/storage.objectViewer, roles/storage.objectAdmin, and roles/storage.admin that match
the read, read/write and administrator access levels we need. Similarly, Google provides
the roles roles/bigquery.dataViewer, roles/bigquery.dataEditor, and roles/bigquery.admin
that match the access levels we need on the BigQuery dataset. We can assign these
predefined IAM roles to the respective users. Should Google add/modify permissions for
these services in the future, we don't need to modify the roles above as Google does
this for us; and this helps future proof our solution.
Ref: https://cloud.google.com/storage/docs/access-control/iam-roles#primitive-roles-
intrinsic
Ref: https://cloud.google.com/bigquery/docs/access-control
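
As a hedged illustration (the group addresses, bucket name and project ID are
placeholders), the predefined roles could be granted like this:

# Read-only access on the bucket for the analytics team
$ gsutil iam ch group:analytics-team@example.com:roles/storage.objectViewer gs://customer-pii-bucket

# Read/write access on the bucket for the operations team
$ gsutil iam ch group:ops-team@example.com:roles/storage.objectAdmin gs://customer-pii-bucket

# BigQuery read/write access for the operations team, granted at the project level
$ gcloud projects add-iam-policy-binding my-analytics-project \
    --member=group:ops-team@example.com --role=roles/bigquery.dataEditor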

Question 27: Correct


In Cloud Shell, your active gcloud configuration is as shown below.
1. $ gcloud config list
2. [component_manager]
3. disable_update_check = True
4. [compute]
5. gce_metadata_read_timeout_sec = 5
6. zone = europe-west2-a
7. [core]
8. account = gcp-ace-lab-user@gmail.com
9. disable_usage_reporting = False
10. project = gcp-ace-lab-266520
11. [metrics]
12. environment = devshell

You want to create two compute instances - one in europe-west2-a and another in
europe-west2-b. What should you do? (Select 2)

gcloud compute instances create instance1

gcloud configuration set compute/zone europe-west2-b

gcloud compute instances create instance2

gcloud compute instances create instance1

gcloud config set compute/zone europe-west2-b

gcloud compute instances create instance2

(Correct)

gcloud compute instances create instance1

gcloud config set zone europe-west2-b


gcloud compute instances create instance2

gcloud compute instances create instance1

gcloud compute instances create instance2

gcloud compute instances create instance1

gcloud compute instances create instance2 --zone=europe-west2-b

(Correct)

Explanation
1. gcloud compute instances create instance1
2. gcloud compute instances create instance2. is not right.
The default compute/zone property is set to europe-west2-a in the current gcloud
configuration. Executing the two commands above would create two compute instances
in the default zone i.e. europe-west2-a which doesn't satisfy our requirement.
Ref: https://cloud.google.com/sdk/gcloud/reference/compute/instances/create

1. gcloud compute instances create instance1
2. gcloud config set zone europe-west2-b
3. gcloud compute instances create instance2. is not right.
The approach is right but the syntax is wrong. gcloud config does not have a core/zone
property. The syntax for this command is gcloud config set SECTION/PROPERTY VALUE.
If SECTION is missing, SECTION is defaulted to core. We are effectively trying to run
gcloud config set core/zone europe-west2-b but the core section doesn't have a
property called zone, so this command fails.
Ref: https://cloud.google.com/sdk/gcloud/reference/config/set

1. gcloud compute instances create instance1
2. gcloud configuration set compute/zone europe-west2-b
3. gcloud compute instances create instance2. is not right.
Like above, the approach is right but the syntax is wrong. You want to set the default
compute/zone property in gcloud configuration to europe-west2-b but it needs to be
done via the command gcloud config set and not gcloud configuration set.
Ref: https://cloud.google.com/sdk/gcloud/reference/config/set

1. gcloud compute instances create instance1
2. gcloud config set compute/zone europe-west2-b
3. gcloud compute instances create instance2. is the right answer.
The default compute/zone property is europe-west2-a in the current gcloud
configuration so executing the first gcloud compute instances create command creates
the instance in europe-west2-a zone. Next, executing the gcloud config set
compute/zone europe-west2-b changes the default compute/zone property in default
configuration to europe-west2-b. Executing the second gcloud compute instances
create command creates a compute instance in europe-west2-b which is what we want.
Ref: https://cloud.google.com/sdk/gcloud/reference/config/set
Ref: https://cloud.google.com/sdk/gcloud/reference/compute/instances/create

1. gcloud compute instances create instance1
2. gcloud compute instances create instance2 --zone=europe-west2-b. is the right answer.
The default compute/zone property is europe-west2-a in the current gcloud
configuration so executing the first gcloud compute instances create command creates
the instance in europe-west2-a zone. Next, executing the second gcloud compute
instances create command with --zone property creates a compute instance in provided
zone i.e. europe-west2-b instead of using the default zone from the current active
configuration.
Ref: https://cloud.google.com/sdk/gcloud/reference/compute/instances/create
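
To confirm where the instances ended up, the instance list can be filtered by zone; a
quick, illustrative check:

$ gcloud compute instances list --zones=europe-west2-a,europe-west2-b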

Question 28: Correct


You have two workloads on GKE (Google Kubernetes Engine) - create-order and
dispatch-order. create-order handles creation of customer orders; and dispatch-
order handles dispatching orders to your shipping partner. Both create-order and
dispatch-order workloads have cluster autoscaling enabled. The create-order
deployment needs to access (i.e. invoke web service of) dispatch-order
deployment. dispatch-order deployment cannot be exposed publicly. How should
you define the services?

Create a Service of type LoadBalancer for dispatch-order and an Ingress Resource for that
Service. Have create-order use the Ingress IP address.

Create a Service of type LoadBalancer for dispatch-order. Have create-order use the
Service IP address.

Create a Service of type NodePort for dispatch-order and an Ingress Resource for
that Service. Have create-order use the Ingress IP address.

Create a Service of type ClusterIP for dispatch-order. Have create-order use the
Service IP address.

(Correct)

Explanation
Create a Service of type LoadBalancer for dispatch-order. Have create-order
use the Service IP address. is not right.
When you create a Service of type LoadBalancer, the Google Cloud controller configures
a network load balancer that is publicly available. Since we don't want our service to be
publicly available, we shouldn't create a Service of type LoadBalancer
Ref: https://cloud.google.com/kubernetes-engine/docs/how-to/exposing-apps

Create a Service of type LoadBalancer for dispatch-order and an Ingress Resource for that
Service. Have create-order use the Ingress IP address. is not right.
When you create a Service of type LoadBalancer, the Google Cloud controller configures
a network load balancer that is publicly available. Since we don't want our service to be
publicly available, we shouldn't create a Service of type LoadBalancer
Ref: https://cloud.google.com/kubernetes-engine/docs/how-to/exposing-apps

Create a Service of type NodePort for dispatch-order and an Ingress Resource for that
Service. Have create-order use the Ingress IP address. is not right.
A NodePort Service exposes the Service on each node's IP at a static port (the NodePort).
If the nodes have public connectivity, dispatch-order can be reached from outside the
cluster on that port, which is undesirable. An Ingress additionally fronts the Service
with an externally reachable load balancer, which again exposes dispatch-order publicly.
For purely internal, intra-cluster access, a NodePort Service offers nothing over a
ClusterIP Service.
Ref: https://cloud.google.com/kubernetes-engine/docs/how-to/exposing-apps

Create a Service of type ClusterIP for dispatch-order. Have create-order use the Service
IP address. is the right answer.
ClusterIP exposes the Service on a cluster-internal IP that is only reachable within the
cluster. This satisfies our requirement that dispatch-order shouldn't be publicly
accessible. create-order which is also located in the same GKE cluster can now access
the ClusterIP of the service to reach dispatch-order.
Ref: https://kubernetes.io/docs/concepts/services-networking/service/
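
A minimal sketch of exposing the deployment internally (the deployment name comes from
the question; the ports are hypothetical), followed by how create-order could address it
using the cluster-internal DNS name:

# ClusterIP is the default Service type, shown explicitly here
$ kubectl expose deployment dispatch-order --type=ClusterIP --port=80 --target-port=8080

# create-order can then call the Service via its stable in-cluster DNS name, e.g.
# http://dispatch-order.default.svc.cluster.local/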

Question 29: Correct


You developed an application that reads objects from a cloud storage bucket. You
followed GCP documentation and created a service account with just the
permissions to read objects from the cloud storage bucket. However, when your
application uses this service account, it fails to read objects from the bucket. You
suspect this might be an issue with the permissions assigned to the service
account. You would like to authenticate a gsutil session with the service account
credentials, reproduce the issue yourself and identify the root cause. How can you
authenticate gsutil with service account credentials?

Create JSON keys for the service account and execute gcloud authenticate
activate-service-account --key-file [KEY_FILE]

Create JSON keys for the service account and execute gcloud auth service-account
--key-file [KEY_FILE]

Create JSON keys for the service account and execute gcloud authenticate service-
account --key-file [KEY_FILE]

Create JSON keys for the service account and execute gcloud auth activate-
service-account --key-file [KEY_FILE]
(Correct)

Explanation
Create JSON keys for the service account and execute gcloud authenticate
activate-service-account --key-file [KEY_FILE]. is not right.
gcloud doesn't support using "authenticate" to grant/revoke credentials for Cloud SDK.
The correct service is "auth".
Ref: https://cloud.google.com/sdk/gcloud/reference/auth

Create JSON keys for the service account and execute gcloud authenticate
service-account --key-file [KEY_FILE]. is not right.
gcloud doesn't support using "authenticate" to grant/revoke credentials for Cloud SDK.
The correct service is "auth".
Ref: https://cloud.google.com/sdk/gcloud/reference/auth

Create JSON keys for the service account and execute gcloud auth service-
account --key-file [KEY_FILE]. is not right.
gcloud auth does not support service-account action. The correct action to authenticate
a service account is activate-service-account.
Ref: https://cloud.google.com/sdk/gcloud/reference/auth/activate-service-account

Create JSON keys for the service account and execute gcloud auth activate-
service-account --key-file [KEY_FILE]. is the right answer.
This command correctly authenticates access to Google Cloud Platform with a service
account using its JSON key file. To allow gcloud (and other tools in Cloud SDK) to use
service account credentials to make requests, use this command to import these
credentials from a file that contains a private authorization key, and activate them for
use in gcloud
Ref: https://cloud.google.com/sdk/gcloud/reference/auth/activate-service-account
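
An illustrative sequence (the service account, project and bucket names are placeholders):

# Create a key for the service account and authenticate gcloud/gsutil with it
$ gcloud iam service-accounts keys create /tmp/reader-key.json \
    --iam-account=bucket-reader@my-project.iam.gserviceaccount.com
$ gcloud auth activate-service-account --key-file=/tmp/reader-key.json

# Reproduce the application's read as the service account
$ gsutil ls gs://my-app-objects
$ gsutil cp gs://my-app-objects/sample-object.txt .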

Question 30: Correct


You deployed a number of services to Google App Engine Standard. The services
are designed as microservices with several interdependencies between them. Most
services have few version upgrades but some key services have over 20 version
upgrades. You identified an issue with the service pt-createOrder and deployed a
new version v3 for this service. You are confident this works and want this new
version to receive all traffic for the service. You want to minimize effort and
ensure availability of service. What should you do?


Execute gcloud app versions stop v2 --service="pt-createOrder" and gcloud app
versions start v3 --service="pt-createOrder"

Execute gcloud app versions migrate v3 --service="pt-createOrder"

(Correct)

Execute gcloud app versions migrate v3

Execute gcloud app versions stop v2 and gcloud app versions start v3

Explanation
Execute gcloud app versions migrate v3. is not right.
gcloud app versions migrate v3 migrates all services to version v3. In our scenario, we
have multiple services with each service potentially being on a different version. We
don't want to migrate all services to v3, instead, we only want to migrate the pt-
createOrder service to v3.
Ref: https://cloud.google.com/sdk/gcloud/reference/app/versions/migrate

Execute gcloud app versions stop v2 --service="pt-createOrder" and gcloud app versions
start v3 --service="pt-createOrder". is not right.
Stopping version v2 and starting version v3 for pt-createOrder service would result in v3
receiving all traffic for pt-createOrder. While this is the intended outcome, stopping
version v2 before starting version v3 results in service being unavailable until v3 is ready
to receive traffic. As we want to "ensure availability", this option is not suitable.
Ref: https://cloud.google.com/sdk/gcloud/reference/app/versions/migrate

Execute gcloud app versions stop v2 and gcloud app versions start v3. is not
right.
Stopping version v2 and starting version v3 would result in migrating all services to
version v3 which is undesirable. We don't want to migrate all services to v3, instead, we
only want to migrate the pt-createOrder service to v3. Moreover, stopping version v2
before starting version v3 results in service being unavailable until v3 is ready to receive
traffic. As we want to "ensure availability", this option is not suitable.
Ref: https://cloud.google.com/sdk/gcloud/reference/app/versions/migrate
Execute gcloud app versions migrate v3 --service="pt-createOrder". is the
right answer.
This command correctly migrates the service pt-createOrder to use version 3 and
produces the intended outcome while minimizing effort and ensuring the availability of
service.
Ref: https://cloud.google.com/sdk/gcloud/reference/app/versions/migrate
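
For completeness, the per-version traffic split can be verified after the migration; an
illustrative check:

$ gcloud app versions migrate v3 --service="pt-createOrder"
$ gcloud app versions list --service="pt-createOrder"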

Question 31: Correct


You have three gcloud configurations - one for each of development, test and
production projects. You want to list all the configurations and switch to a new
configuration. With the fewest steps possible, what's the fastest way to switch to
the correct configuration?

1. To list configurations - gcloud configurations list

2. To activate a configuration - gcloud configurations activate

1. To list configurations - gcloud configurations list

2. To activate a configuration - gcloud config activate

1. To list configurations - gcloud config list

2. To activate a configuration - gcloud config activate.

1. To list configurations - gcloud config configurations list

2. To activate a configuration - gcloud config configurations activate.

(Correct)

Explanation

1. To list configurations - gcloud configurations list
2. To activate a configuration - gcloud configurations activate. is not right.
gcloud configurations list does not list configurations. To list existing configurations, you
need to execute gcloud config configurations list.
Ref: https://cloud.google.com/sdk/gcloud/reference/config/configurations/list
gcloud configurations activate does not activate a named configuration. To activate a
configuration, you need to execute gcloud config configurations activate.
Ref: https://cloud.google.com/sdk/gcloud/reference/config/configurations/activate

1. To list configurations - gcloud config list
2. To activate a configuration - gcloud config activate. is not right.
gcloud config list does not list configurations. It lists the properties of the existing
configuration. To list existing configurations, you need to execute gcloud config
configurations list.
Ref: https://cloud.google.com/sdk/gcloud/reference/config/configurations/list
gcloud config activate does not activate a named configuration. To activate a
configuration, you need to execute gcloud config configurations activate.
Ref: https://cloud.google.com/sdk/gcloud/reference/config/configurations/activate

1. To list configurations - gcloud configurations list
2. To activate a configuration - gcloud config activate. is not right.
gcloud configurations list does not list configurations. To list existing configurations, you
need to execute gcloud config configurations list.
Ref: https://cloud.google.com/sdk/gcloud/reference/config/configurations/list
gcloud config activate does not activate a named configuration. To activate a
configuration, you need to execute gcloud config configurations activate.
Ref: https://cloud.google.com/sdk/gcloud/reference/config/configurations/activate

1. To list configurations - gcloud config configurations list
2. To activate a configuration - gcloud config configurations activate. is the right answer.
The two commands together achieve the intended outcome. gcloud config
configurations list - lists existing named configurations and gcloud config configurations
activate - activates an existing named configuration
Ref: https://cloud.google.com/sdk/gcloud/reference/config/configurations/list
Ref: https://cloud.google.com/sdk/gcloud/reference/config/configurations/activate
See an example below

$ gcloud config configurations list
NAME                IS_ACTIVE  ACCOUNT  PROJECT           DEFAULT_ZONE  DEFAULT_REGION
dev-configuration   False               gcp-ace-lab-dev
prod-configuration  False               gcp-ace-lab-prod
test-configuration  True                gcp-ace-lab-test

$ gcloud config configurations activate prod-configuration
Activated [prod-configuration].

$ gcloud config configurations list
NAME                IS_ACTIVE  ACCOUNT  PROJECT           DEFAULT_ZONE  DEFAULT_REGION
dev-configuration   False               gcp-ace-lab-dev
prod-configuration  True                gcp-ace-lab-prod
test-configuration  False               gcp-ace-lab-test

Question 32: Correct


You have two compute instances in the same VPC but in different regions. You can
SSH from one instance to another instance using their external IP address but not
their internal IP address. What could be the reason for SSH failing on internal IP
address?

The compute instances are not using the right cross region SSH IAM permissions

The combination of compute instance network tags and VPC firewall rules allow
SSH from 0.0.0.0 but denies SSH from the VPC subnets IP range.

(Correct)

The internal IP address is disabled.

The compute instances have a static IP for their internal IP.

Explanation
The compute instances have a static IP for their internal IP. is not right.
Static internal IPs shouldn't be a reason for failed SSH connections. With all networking
set up correctly, SSH works fine on Static internal IPs.
Ref: https://cloud.google.com/compute/docs/ip-addresses#networkaddresses

The internal IP address is disabled. is not right.


Every compute instance has one or more internal IP addresses so this option is not
correct.

The compute instances are not using the right cross-region SSH IAM
permissions. is not right.
There is no such thing as cross region SSH IAM permissions.

The combination of compute instance network tags and VPC firewall rules
allow SSH from 0.0.0.0 but denies SSH from the VPC subnets IP range. is the
right answer.
The combination of compute instance network tags and VPC firewall rules can certainly
result in SSH traffic being allowed on the external IP range but disabled from subnets IP
range. The firewall rule can be configured to allow SSH traffic from 0.0.0.0/0 but deny
traffic from the VPC range e.g. 10.0.0.0/8. In this case, all SSH traffic from within the VPC
is denied but external SSH traffic (i.e. on external IP) is allowed.
Ref: https://cloud.google.com/vpc/docs/using-firewalls
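
To allow SSH on internal IP addresses, a firewall rule permitting TCP port 22 from the
VPC's internal ranges is needed. A hedged sketch; the network name and source CIDR are
placeholders:

$ gcloud compute firewall-rules create allow-internal-ssh \
    --network=my-vpc \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:22 \
    --source-ranges=10.0.0.0/8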

Question 33: Correct


Your company plans to store sensitive PII data in a cloud storage bucket. Your
compliance department has asked you to ensure the objects in this bucket are
encrypted by customer managed encryption keys. What should you do?

In the bucket advanced settings, select Customer-supplied key and then select a
Cloud KMS encryption key.

In the bucket advanced settings, select Google-managed key and then select a
Cloud KMS encryption key.


Recreate the bucket to use a Customer-managed key. Encryption can only be
specified at the time of bucket creation.

In the bucket advanced settings, select Customer-managed key and then select a
Cloud KMS encryption key.

(Correct)

Explanation
In the bucket advanced settings, select Customer-supplied key and then
select a Cloud KMS encryption key. is not right.
Customer-Supplied key is not an option when selecting the encryption method in the
console. Moreover, we want to use customer managed encryption keys and not
customer supplied encryption keys. This does not fit our requirements.

In the bucket advanced settings, select Google-managed key and then select a
Cloud KMS encryption key. is not right.
While Google-managed key is an option when selecting the encryption method in
console, we want to use customer managed encryption keys and not Google Managed
encryption keys. This does not fit our requirements.

Recreate the bucket to use a Customer-managed key. Encryption can only be specified at
the time of bucket creation. is not right.
Bucket encryption can be changed at any time. The bucket doesn't have to be recreated
to change encryption.
Ref: https://cloud.google.com/storage/docs/encryption/using-customer-managed-
keys#add-default-key

In the bucket advanced settings, select Customer-managed key and then select
a Cloud KMS encryption key. is the right answer.
This option correctly selects Customer-managed key as the encryption type and then the
Cloud KMS key to use, which satisfies our requirement.
Ref: https://cloud.google.com/storage/docs/encryption/using-customer-managed-
keys#add-default-key
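
Equivalently, a default customer-managed key can be set on an existing bucket from the
command line; the key resource name and bucket below are placeholders:

$ gsutil kms encryption \
    -k projects/my-project/locations/europe-west2/keyRings/pii-keyring/cryptoKeys/pii-key \
    gs://customer-pii-bucket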
Question 34: Correct
Users of your application are complaining of slowness when loading the
application. You realize the slowness is because the App Engine deployment
serving the application is deployed in us-central where as all users of this
application are closest to europe-west3. You want to change the region of the App
Engine application to europe-west3 to minimize latency. What's the best way to
change the App Engine region?

Contact Google Cloud Support and request the change.

From the console, under the App Engine page, click edit, and change the region
drop-down.

Use the gcloud app region set command and supply the name of the new region.

Create a new project and create an App Engine instance in europe-west3.

(Correct)

Explanation
Use the gcloud app region set command and supply the name of the new
region. is not right.
gcloud app region command does not provide a set action. The only action gcloud app
region command currently supports is list which lists the availability of flex and standard
environments for each region.
Ref: https://cloud.google.com/sdk/gcloud/reference/app/regions/list

Contact Google Cloud Support and request the change. is not right.
Unfortunately, Google Cloud Support isn't of much use here as they would not be able
to change the region of an App Engine Deployment. App engine is a regional service,
which means the infrastructure that runs your app(s) is located in a specific region and is
managed by Google to be redundantly available across all the zones within that region.
Once an app engine deployment is created in a region, it can't be changed.
Ref: https://cloud.google.com/appengine/docs/locations

From the console, under the App Engine page, click edit, and change the region
drop-down. is not right.
The settings mentioned in this option aren't available in the App Engine dashboard. App
Engine is a regional service. Once an App Engine deployment is created in a region, it
can't be changed; the Region field on the App Engine settings page is greyed out.

Create a new project and create an App Engine instance in europe-west3. is the right answer.

App engine is a regional service, which means the infrastructure that runs your app(s) is
located in a specific region and is managed by Google to be redundantly available
across all the zones within that region. Once an app engine deployment is created in a
region, it can't be changed. The only way is to create a new project and create an App
Engine instance in europe-west3, send all user traffic to this instance and delete the app
engine instance in us-central.

Ref: https://cloud.google.com/appengine/docs/locations
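
A hedged sketch of the recreation in a new project (the project ID and app config file
are placeholders):

$ gcloud projects create my-app-europe
$ gcloud app create --project=my-app-europe --region=europe-west3
$ gcloud app deploy app.yaml --project=my-app-europe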
Question 35: Correct
You want to ingest and analyze large volumes of stream data from sensors in real
time, matching the high speeds of IoT data to track normal and abnormal
behavior. You want to run it through a data processing pipeline and store the
results. Finally, you want to enable customers to build dashboards and drive
analytics on their data in real time. What services should you use for this task?

Cloud Pub/Sub, Cloud Dataflow, Cloud Dataprep

Stackdriver, Cloud Dataflow, BigQuery

Cloud Pub/Sub, Cloud Dataflow, BigQuery

(Correct)

Cloud Pub/Sub, Cloud Dataflow, Cloud Dataproc

Explanation
You want to ingest large volumes of streaming data at high speeds. So you need to use
Cloud Pub/Sub. Cloud Pub/Sub provides a simple and reliable staging location for your
event data on its journey towards processing, storage, and analysis. Cloud Pub/Sub is
serverless and you can ingest events at any scale.
Ref: https://cloud.google.com/pubsub

Next, you want to analyze this data. Cloud Dataflow is a fully managed streaming
analytics service that minimizes latency, processing time, and cost through autoscaling
and batch processing. Dataflow enables fast, simplified streaming data pipeline
development with lower data latency.
Ref: https://cloud.google.com/dataflow

Next, you want to store these results. BigQuery is an ideal place to store these results as
BigQuery supports the querying of streaming data in real-time. This assists in real-time
predictive analytics.
Ref: https://cloud.google.com/bigquery
Therefore the correct answer is Cloud Pub/Sub, Cloud Dataflow, BigQuery.

Here’s more information from Google docs about the Stream analytics use case. Google
recommends we use Dataflow along with Pub/Sub and BigQuery.
https://cloud.google.com/dataflow#section-6
Google’s stream analytics makes data more organized, useful, and accessible from the
instant it’s generated. Built on Dataflow along with Pub/Sub and BigQuery, our
streaming solution provisions the resources you need to ingest, process, and analyze
fluctuating volumes of real-time data for real-time business insights. This abstracted
provisioning reduces complexity and makes stream analytics accessible to both data
analysts and data engineers.

and
https://cloud.google.com/solutions/stream-analytics
Ingest, process, and analyze event streams in real time. Stream analytics from Google
Cloud makes data more organized, useful, and accessible from the instant it’s generated.
Built on the autoscaling infrastructure of Pub/Sub, Dataflow, and BigQuery, our
streaming solution provisions the resources you need to ingest, process, and analyze
fluctuating volumes of real-time data for real-time business insights.
Question 36: Incorrect
You want to deploy a python application to an autoscaled managed instance
group on Compute Engine. You want to use GCP deployment manager to do this.
What is the fastest way to get the application onto the instances without
introducing undue complexity?

Include a startup script to bootstrap the python application when creating instance
template by running gcloud compute instance-templates create app-template
--metadata-from-file startup-script=/scripts/install_app.sh

(Correct)

Include a startup script to bootstrap the python application when creating instance
template by running gcloud compute instance-templates create app-template
--metadata-from-file startup-script-url=/scripts/install_app.sh

Once the instance starts up, connect over SSH and install the application.

(Incorrect)

Include a startup script to bootstrap the python application when creating instance
template by running gcloud compute instance-templates create app-template
--startup-script=/scripts/install_app.sh

Explanation
Include a startup script to bootstrap the python application when creating instance
template by running gcloud compute instance-templates create app-template
--startup-script=/scripts/install_app.sh. is not right.
gcloud compute instance-templates create command does not accept a flag called --
startup-script. While creating compute engine images, the startup script can be
provided through a special metadata key called startup-script which specifies a script
that will be executed by the instances once they start running. For convenience, --
metadata-from-file can be used to pull the value from a file.
Ref: https://cloud.google.com/sdk/gcloud/reference/compute/instance-
templates/create

Include a startup script to bootstrap the python application when creating instance
template by running gcloud compute instance-templates create app-template
--metadata-from-file startup-script-url=/scripts/install_app.sh. is not right.
startup-script-url is to be used when contents of the script need to be pulled from a
publicly-accessible location on the web. But in this scenario, we are passing the location
of the script on the filesystem which doesn't work and the command errors out.

$ gcloud compute instance-templates create app-template --metadata-from-file startup-script-url=/scripts/install_app.sh
ERROR: (gcloud.compute.instance-templates.create) Unable to read file [/scripts/install_app.sh]: [Errno 2] No such file or directory: '/scripts/install_app.sh'

Once the instance starts up, connect over SSH and install the application. is
not right.
The managed instances group has auto-scaling enabled. If we are to connect over SSH
and install the application, we have to repeat this task on all current instances and on
future instances the autoscaler adds to the group. This process is manual, error-prone,
time consuming and should be avoided.

Include a startup script to bootstrap the python application when creating instance
template by running gcloud compute instance-templates create app-template
--metadata-from-file startup-script=/scripts/install_app.sh. is the right answer.
This command correctly provides the startup script using the flag metadata-from-file
and providing a valid startup-script value. When creating compute engine images, the
startup script can be provided through a special metadata key called startup-script
which specifies a script that will be executed by the instances once they start running.
For convenience, --metadata-from-file can be used to pull the value from a file.
Ref: https://cloud.google.com/sdk/gcloud/reference/compute/instance-
templates/create
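
A hedged end-to-end sketch, from the instance template through an autoscaled managed
instance group (the names, zone and replica limits are placeholders):

$ gcloud compute instance-templates create app-template \
    --metadata-from-file startup-script=/scripts/install_app.sh
$ gcloud compute instance-groups managed create app-mig \
    --template=app-template --size=1 --zone=europe-west2-a
$ gcloud compute instance-groups managed set-autoscaling app-mig \
    --zone=europe-west2-a --min-num-replicas=1 --max-num-replicas=10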

Question 37: Correct


Your company has migrated most of the data center VMs to Google Compute
Engine. The remaining VMs in the data center host legacy applications that are due
to be decommissioned soon and your company has decided to retain them in the
datacenter. Due to a change in business operational model, you need to introduce
changes to one of the legacy applications to read files from Google Cloud Storage.
However, your datacenter does not have access to the internet and your company
doesn't want to invest in setting up internet access as the datacenter is due to be
turned off soon. Your datacenter has a partner interconnect to GCP. You wish to
route traffic from your datacenter to Google Storage through partner
interconnect. What should you do?

1. In on-premises DNS configuration, map storage.cloud.google.com to
restricted.googleapis.com, which resolves to 199.36.153.4/30.

2. Configure Cloud Router to advertise the 199.36.153.4/30 IP address range through the
Cloud VPN tunnel.

3. Create a Cloud DNS managed public zone for storage.cloud.google.com that maps to
199.36.153.4/30 and authorize the zone for use by the VPC network.

1. In on-premises DNS configuration, map storage.cloud.google.com to
restricted.googleapis.com, which resolves to 199.36.153.4/30.

2. Configure Cloud Router to advertise the 199.36.153.4/30 IP address range through the
Cloud VPN tunnel.

3. Add a custom static route to the VPC network to direct traffic with the destination
199.36.153.4/30 to the default internet gateway.

4. Create a Cloud DNS managed private zone for storage.cloud.google.com that maps to
199.36.153.4/30 and authorize the zone for use by the VPC network.

1. In on-premises DNS configuration, map *.googleapis.com to restricted.googleapis.com,
which resolves to 199.36.153.4/30.

2. Configure Cloud Router to advertise the 199.36.153.4/30 IP address range through the
Cloud VPN tunnel.

3. Create a Cloud DNS managed public zone for *.googleapis.com that maps to
199.36.153.4/30 and authorize the zone for use by the VPC network.

1. In on-premises DNS configuration, map *.googleapis.com to restricted.googleapis.com,
which resolves to 199.36.153.4/30.

2. Configure Cloud Router to advertise the 199.36.153.4/30 IP address range through the
Cloud VPN tunnel.

3. Add a custom static route to the VPC network to direct traffic with the destination
199.36.153.4/30 to the default internet gateway.

4. Create a Cloud DNS managed private zone for *.googleapis.com that maps to
199.36.153.4/30 and authorize the zone for use by the VPC network.

(Correct)

Explanation
While Google APIs are accessible on *.googleapis.com, to restrict Private Google Access
within a service perimeter to only VPC Service Controls supported Google APIs and
services, hosts must send their requests to the restricted.googleapis.com domain name
instead of *.googleapis.com. The restricted.googleapis.com domain resolves to a VIP
(virtual IP address) range 199.36.153.4/30. This IP address range is not announced to the
Internet. If you require access to other Google APIs and services that aren't supported
by VPC Service Controls, you can use 199.36.153.8/30 (private.googleapis.com).
However, we recommend that you use restricted.googleapis.com, which integrates with
VPC Service Controls and mitigates data exfiltration risks. In either case, VPC Service
Controls service perimeters are always enforced on APIs and services that support VPC
Service Controls.
Ref: https://cloud.google.com/vpc-service-controls/docs/set-up-private-connectivity

This rules out the two options that map storage.cloud.google.com to
restricted.googleapis.com.

The main differences between the remaining two options are

Static route in the VPC network.

Public/Private zone.

According to Google’s guide on setting up private connectivity, in order to configure a
route to restricted.googleapis.com within the VPC, we need to create a static route
whose destination is 199.36.153.4/30 and whose next hop is the default Internet
gateway.

So, the right answer is:

1. In on-premises DNS configuration, map *.googleapis.com to
restricted.googleapis.com, which resolves to 199.36.153.4/30.
2. Configure Cloud Router to advertise the 199.36.153.4/30 IP address range
through the Cloud VPN tunnel.
3. Add a custom static route to the VPC network to direct traffic with the
destination 199.36.153.4/30 to the default internet gateway.
4. Create a Cloud DNS managed private zone for *.googleapis.com that maps
to 199.36.153.4/30 and authorize the zone for use by the VPC network.

Here’s more information about how to set up private connectivity to Google’s services
through VPC.

Ref: https://cloud.google.com/vpc/docs/private-access-options#private-vips
In the following example, the on-premises network is connected to a VPC network
through a Cloud VPN tunnel. Traffic from on-premises hosts to Google APIs travels
through the tunnel to the VPC network. After traffic reaches the VPC network, it is sent
through a route that uses the default internet gateway as its next hop. The next hop
allows traffic to leave the VPC network and be delivered to restricted.googleapis.com
(199.36.153.4/30).

The on-premises DNS configuration maps *.googleapis.com requests to
restricted.googleapis.com, which resolves to 199.36.153.4/30.

Cloud Router has been configured to advertise the 199.36.153.4/30 IP address range
through the Cloud VPN tunnel by using a custom route advertisement. Traffic going to
Google APIs is routed through the tunnel to the VPC network.

A custom static route was added to the VPC network that directs traffic with the
destination 199.36.153.4/30 to the default internet gateway (as the next hop). Google
then routes traffic to the appropriate API or service.

If you created a Cloud DNS managed private zone for *.googleapis.com that maps to
199.36.153.4/30 and have authorized that zone for use by your VPC network, requests to
anything in the googleapis.com domain are sent to the IP addresses that are used by
restricted.googleapis.com
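
A hedged sketch of the GCP-side pieces (the VPC name, zone name and record TTLs are
placeholders; the on-premises DNS mapping and the Cloud Router advertisement are
configured outside these commands):

# Private DNS zone that sends all googleapis.com names to the restricted VIP range
$ gcloud dns managed-zones create restricted-apis \
    --description="Private zone mapping googleapis.com to restricted.googleapis.com" \
    --dns-name=googleapis.com. --visibility=private --networks=my-vpc
$ gcloud dns record-sets create restricted.googleapis.com. --zone=restricted-apis \
    --type=A --ttl=300 --rrdatas=199.36.153.4,199.36.153.5,199.36.153.6,199.36.153.7
$ gcloud dns record-sets create "*.googleapis.com." --zone=restricted-apis \
    --type=CNAME --ttl=300 --rrdatas=restricted.googleapis.com.

# Static route so traffic to the VIP range egresses via the default internet gateway
$ gcloud compute routes create restricted-apis-route --network=my-vpc \
    --destination-range=199.36.153.4/30 --next-hop-gateway=default-internet-gateway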

Question 38: Correct


You have a number of applications that have bursty workloads and are heavily
dependent on topics to decouple publishing systems from consuming systems.
Your company would like to go serverless to enable developers to focus on writing
code without worrying about infrastructure. Your solution architect has already
identified Cloud Pub/Sub as a suitable alternative for decoupling systems. You
have been asked to identify a suitable GCP Serverless service that is easy to use
with Cloud Pub/Sub. You want the ability to scale down to zero when there is no
traffic in order to minimize costs. You want to follow Google recommended
practices. What should you suggest?

Cloud Run

Cloud Functions
(Correct)

Cloud Run for Anthos

App Engine Standard

Explanation
GCP serverless compute portfolio includes 4 services, which are all listed in the answer
options. Our requirements are to identify a GCP serverless service that

Lets us scale down to 0

Integrates with Cloud Pub/Sub seamlessly

Cloud Run for Anthos. is not right.
Among the four options, App Engine Standard, Cloud Functions and Cloud Run can all scale
down to zero. Cloud Run for Anthos can scale its pods down to zero, but the number of
nodes in the cluster cannot scale to zero, so those nodes are billed even in the absence
of requests. This rules out Cloud Run for Anthos.

App Engine Standard. is not right.
App Engine Standard doesn’t offer an out-of-the-box integration with Cloud Pub/Sub. We
can use the Cloud Client Library to send and receive Pub/Sub messages, as described in
the reference below, but the key point is the absence of out-of-the-box integration with
Cloud Pub/Sub, so this rules out App Engine Standard.
Ref: https://cloud.google.com/appengine/docs/standard/nodejs/writing-and-responding-to-pub-sub-messages

Cloud Run. is not right.

Cloud Run is an excellent product and integrates with Cloud Pub/Sub for several use
cases. For example, every time a new .csv file is created inside a Cloud Storage bucket,
an event is fired and delivered via a Pub/Sub subscription to a Cloud Run service. The
Cloud Run service extracts data from the file and stores it as structured data in a
BigQuery table.
Ref: https://cloud.google.com/run#section-7
At the same time, we want to follow Google recommended practices, and Google doesn't
list integration with Cloud Pub/Sub as a key feature of Cloud Run. Instead, Google says:
“If you’re building a simple API (a small set of functions to be accessed via HTTP or
Cloud Pub/Sub), we recommend using Cloud Functions.”

Cloud Functions. is the right answer.

Cloud Functions is Google Cloud’s event-driven serverless compute platform that lets
you run your code locally or in the cloud without having to provision servers. Cloud
Functions scales up or down, so you pay only for the compute resources you use. Cloud
Functions has excellent integration with Cloud Pub/Sub, lets you scale down to zero,
and is recommended by Google as the ideal serverless platform to use when dependent
on Cloud Pub/Sub.
“If you’re building a simple API (a small set of functions to be accessed via HTTP or
Cloud Pub/Sub), we recommend using Cloud Functions.”
Ref: https://cloud.google.com/serverless-options
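
As a quick illustration of how little infrastructure work this requires, a Pub/Sub-triggered function can be deployed with a single command. This is only a sketch; the function name, topic, region, runtime, and entry point below are placeholder values, not from the question.

# Deploy a function that is invoked for every message published to my-topic
gcloud functions deploy pubsub-consumer --runtime=python39 --trigger-topic=my-topic --entry-point=handle_message --region=us-central1

The function scales out as messages arrive on the topic and scales back to zero when the topic is idle, so there is no compute charge while there is no traffic.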

Question 39: Incorrect


You deployed your application to the default node pool on a GKE cluster and you want
to configure cluster autoscaling for this GKE cluster. For your application to be
profitable, you must limit the number of Kubernetes nodes to 10. You want to
start small, scale up as traffic increases, and scale down when the traffic goes
down. What should you do?

Create a new GKE cluster by running the command gcloud container clusters
create [CLUSTER_NAME] --enable-autoscaling --min-nodes=1 --max-nodes=10.
Redeploy your application

Update existing GKE cluster to enable autoscaling by running the command


gcloud container clusters update [CLUSTER_NAME] --enable-autoscaling --min-
nodes=1 --max-nodes=10

(Correct)

Set up a Stackdriver alert to detect slowness in the application. When the alert is
triggered, increase nodes in the cluster by running the command gcloud container
clusters resize CLUSTER_Name --size <new size>.
(Incorrect)

To enable autoscaling, add a tag to the instances in the cluster by running the
command gcloud compute instances add-tags [INSTANCE] --tags=enable-
autoscaling,min-nodes=1,max-nodes=10

Explanation
Set up a Stackdriver alert to detect slowness in the application. When the
alert is triggered, increase nodes in the cluster by running the command
gcloud container clusters resize CLUSTER_Name --size {new size}. is not right.
The gcloud container clusters resize command resizes an existing cluster for running
containers. While it is possible to manually increase the number of nodes in the
cluster by running this command, the scale-up is not automatic; it is a manual process.
There is also no scale-down, so it doesn't fit our requirement of "scale up as traffic
increases and scale down when the traffic goes down".
Ref: https://cloud.google.com/sdk/gcloud/reference/container/clusters/resize

To enable autoscaling, add a tag to the instances in the cluster by running
the command gcloud compute instances add-tags [INSTANCE] --tags=enable-
autoscaling,min-nodes=1,max-nodes=10. is not right.
Autoscaling cannot be enabled on a GKE cluster by adding tags to compute instances.
Autoscaling can be enabled at the time of creating the cluster, and can also be enabled
for existing clusters, by running the gcloud container clusters create or update commands.
Ref: https://cloud.google.com/sdk/gcloud/reference/container/clusters/create
Ref: https://cloud.google.com/sdk/gcloud/reference/container/clusters/update

Create a new GKE cluster by running the command gcloud container clusters
create [CLUSTER_NAME] --enable-autoscaling --min-nodes=1 --max-nodes=10.
Redeploy your application. is not right.
The command gcloud container clusters create creates a GKE cluster; the flag
--enable-autoscaling enables autoscaling, and the parameters --min-nodes=1
--max-nodes=10 define the minimum and maximum number of nodes in the node pool.
However, we want to configure cluster autoscaling for the existing GKE cluster, not
create a new GKE cluster.
Ref: https://cloud.google.com/sdk/gcloud/reference/container/clusters/create
Update existing GKE cluster to enable autoscaling by running the command
gcloud container clusters update [CLUSTER_NAME] --enable-autoscaling --min-
nodes=1 --max-nodes=10. is the right answer.
The command gcloud container clusters update updates an existing GKE cluster. The
flag --enable-autoscaling enables autoscaling, and the parameters --min-nodes=1
--max-nodes=10 define the minimum and maximum number of nodes in the node pool.
This enables cluster autoscaling, which automatically scales the node pool up and down
between 1 and 10 nodes.
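
For example, assuming a zonal cluster named my-cluster in us-central1-a with the default node pool (all placeholder values), the update could look like this:

# Enable cluster autoscaling on the existing default node pool, capped at 10 nodes
gcloud container clusters update my-cluster --zone=us-central1-a --node-pool=default-pool --enable-autoscaling --min-nodes=1 --max-nodes=10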

Question 40: Correct


Your team uses Splunk for centralized logging and you have a number of reports
and dashboards based on the logs in Splunk. You want to install the Splunk forwarder
on all nodes of your new autoscaled Kubernetes Engine cluster. The Splunk
forwarder forwards the logs to a centralized Splunk server. What is the best way
to install the Splunk forwarder on all nodes in the cluster? You want to minimize
operational overhead.

SSH to each node and run a script to install the forwarder agent.

Include the forwarder agent in a DaemonSet deployment.

(Correct)

Include the forwarder agent in a StatefulSet deployment.

Use Deployment Manager to orchestrate the deployment of forwarder agents on all nodes.

Explanation
SSH to each node and run a script to install the forwarder agent. is not right.
While this can be done, this approach does not scale. Every time cluster autoscaling
adds a new node, we have to SSH to the instance and run the script, which is manual,
error-prone, and adds operational overhead. We need a solution that automates this task.

Include the forwarder agent in a StatefulSet deployment. is not right.

In GKE, StatefulSets represent a set of Pods with unique, persistent identities and stable
hostnames that GKE maintains regardless of where they are scheduled. The main
purpose of StatefulSets is to set up persistent storage for pods that are deployed across
multiple zones. StatefulSets do not follow a one-Pod-per-node model, so they are not
suitable for installing the forwarder agent on every node.
Ref: https://cloud.google.com/kubernetes-engine/docs/concepts/statefulset

Use Deployment Manager to orchestrate the deployment of forwarder agents on
all nodes. is not right.
You can use Deployment Manager to create a number of GCP resources, including a
GKE cluster, but you cannot use it to create Kubernetes workloads such as DaemonSets
or to apply Kubernetes configuration files.
Ref: https://cloud.google.com/deployment-manager/docs/fundamentals

Include the forwarder agent in a DaemonSet deployment. is the right answer.

In GKE, DaemonSets manage groups of replicated Pods and adhere to a one-Pod-per-node
model, either across the entire cluster or a subset of nodes. As you add nodes to a
node pool, DaemonSets automatically add Pods to the new nodes. So by configuring
the pod to use the Splunk forwarder agent image, with some minimal configuration (e.g.
identifying which logs need to be forwarded), you can automate the installation and
configuration of the Splunk forwarder agent on each GKE cluster node.
Ref: https://cloud.google.com/kubernetes-engine/docs/concepts/daemonset
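
For illustration, a minimal DaemonSet sketch is shown below. The resource name, labels, image tag, and the /var/log host path are assumptions and would need to be adapted to your actual Splunk forwarder configuration.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: splunk-forwarder
spec:
  selector:
    matchLabels:
      app: splunk-forwarder
  template:
    metadata:
      labels:
        app: splunk-forwarder
    spec:
      containers:
      - name: splunk-forwarder
        image: splunk/universalforwarder:latest   # placeholder image and tag
        volumeMounts:
        - name: varlog
          mountPath: /var/log   # mount the node's logs so the forwarder can read them
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log

Applying the manifest with kubectl apply -f splunk-forwarder.yaml schedules one forwarder pod per node, and cluster autoscaling is handled automatically: every new node gets a forwarder pod without any manual work.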

Question 41: Correct


You created a cluster.yaml file containing
1. resources:
2. - name: cluster
3. type: container.v1.cluster
4. properties:
5. zone: europe-west1-b
6. cluster:
7. description: "My GCP ACE cluster"
8. initialNodeCount: 2

You want to use Cloud Deployment Manager to create this cluster in GKE. What should
you do?


gcloud deployment-manager deployments create my-gcp-ace-cluster --config
cluster.yaml

(Correct)

gcloud deployment-manager deployments apply my-gcp-ace-cluster --type
container.v1.cluster --config cluster.yaml

gcloud deployment-manager deployments apply my-gcp-ace-cluster --config
cluster.yaml

gcloud deployment-manager deployments create my-gcp-ace-cluster --type
container.v1.cluster --config cluster.yaml

Explanation
gcloud deployment-manager deployments apply my-gcp-ace-cluster --config
cluster.yaml. is not right.
gcloud deployment-manager deployments doesn't support an apply action. With Google
Cloud in general, the action for creating a resource is create and the action for
retrieving resources is list; with Kubernetes resources, the corresponding actions are
apply and get respectively.
Ref: https://cloud.google.com/sdk/gcloud/reference/deployment-manager/deployments/create

gcloud deployment-manager deployments apply my-gcp-ace-cluster --type
container.v1.cluster --config cluster.yaml. is not right.
gcloud deployment-manager deployments doesn't support an apply action. With Google
Cloud in general, the action for creating a resource is create and the action for
retrieving resources is list; with Kubernetes resources, the corresponding actions are
apply and get respectively.
Ref: https://cloud.google.com/sdk/gcloud/reference/deployment-manager/deployments/create

gcloud deployment-manager deployments create my-gcp-ace-cluster --type
container.v1.cluster --config cluster.yaml. is not right.
gcloud deployment-manager deployments create creates deployments based on the
configuration file (infrastructure as code). It does not accept a --type parameter and
fails when one is supplied.
Ref: https://cloud.google.com/sdk/gcloud/reference/deployment-manager/deployments/create

gcloud deployment-manager deployments create my-gcp-ace-cluster --config
cluster.yaml. is the right answer.
gcloud deployment-manager deployments create creates deployments based on the
configuration file (infrastructure as code). All the configuration related to the artifacts is
in the configuration file. This command correctly creates a cluster based on the provided
cluster.yaml configuration file.
Ref: https://cloud.google.com/sdk/gcloud/reference/deployment-manager/deployments/create
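
If you want to review what Deployment Manager will create before it provisions anything, a preview-then-commit flow is also possible. This is only a sketch; the deployment name is the one from the question, and the flow reflects standard gcloud deployment-manager behaviour as we understand it.

# Preview the deployment without creating any resources yet
gcloud deployment-manager deployments create my-gcp-ace-cluster --config cluster.yaml --preview
# Commit the previewed deployment (update with no new config applies the preview)
gcloud deployment-manager deployments update my-gcp-ace-cluster
# Inspect the resources managed by the deployment
gcloud deployment-manager deployments describe my-gcp-ace-cluster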

Question 42: Correct


You have a number of compute instances belonging to an unmanaged instance
group. You need to SSH to one of the Compute Engine instances to run an ad hoc
script. You've already authenticated gcloud; however, you don't have an SSH key
deployed yet. In the fewest steps possible, what's the easiest way to SSH to the
instance?

Use the gcloud compute ssh command.

(Correct)

Create a key with the ssh-keygen command. Upload the key to the instance. Run
gcloud compute instances list to get the IP address of the instance, then use the
ssh command.

Run gcloud compute instances list to get the IP address of the instance, then use
the ssh command.

Create a key with the ssh-keygen command. Then use the gcloud compute ssh
command.
Explanation
Create a key with the ssh-keygen command. Upload the key to the instance.
Run gcloud compute instances list to get the IP address of the instance,
then use the ssh command. is not right.
This approach certainly works: you can create a key pair with ssh-keygen, update the
instance metadata with the public key, and SSH to the instance. But it is not the easiest
way to SSH to the instance with the fewest possible steps; as the correct answer shows,
another option achieves the same with less effort. You can find more information about
this approach here:
https://cloud.google.com/compute/docs/instances/adding-removing-ssh-keys#block-project-keys

Create a key with the ssh-keygen command. Then use the gcloud compute ssh
command. is not right.
This works, but creating the key manually is more work than the correct answer requires.
gcloud compute ssh ensures that the user's public SSH key is present in the project's
metadata. If the user does not have a public SSH key, one is generated using ssh-keygen
and added to the project's metadata automatically.

Run gcloud compute instances list to get the IP address of the instance,
then use the ssh command. is not right.
We can get the IP of the instance by executing gcloud compute instances list, but
unless an SSH key is generated and added to the project metadata, you would not be
able to SSH to the instance. User access to a Linux instance through third-party tools is
determined by which public SSH keys are available to the instance. You can control the
public SSH keys that are available to a Linux instance by editing metadata, which is
where your public SSH keys and related information are stored.
Ref: https://cloud.google.com/compute/docs/instances/adding-removing-ssh-keys#block-project-keys

Use the gcloud compute ssh command. is the right answer.

gcloud compute ssh ensures that the user's public SSH key is present in the project's
metadata. If the user does not have a public SSH key, one is generated using ssh-keygen
and added to the project's metadata. This is similar to the other option where we copy
the key explicitly to the project's metadata, but here it is done automatically for us. There
are also security benefits with this approach. When we use gcloud compute ssh to
connect to Linux instances, we add a layer of security by storing host keys as guest
attributes. Storing SSH host keys as guest attributes improves the security of your
connections by helping to protect against vulnerabilities such as man-in-the-middle
(MITM) attacks. On the initial boot of a VM instance, if guest attributes are enabled,
Compute Engine stores your generated host keys as guest attributes. Compute Engine
then uses these host keys that were stored during the initial boot to verify all
subsequent connections to the VM instance.
Ref: https://cloud.google.com/compute/docs/instances/connecting-to-instance
Ref: https://cloud.google.com/sdk/gcloud/reference/compute/ssh
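
A minimal sketch of the flow (the instance name, zone, and script name are placeholders):

# Find the instance, then connect; gcloud generates and propagates an SSH key if needed
gcloud compute instances list
gcloud compute ssh instance-1 --zone=us-central1-a
# Or run the ad hoc script in one step instead of an interactive session
gcloud compute ssh instance-1 --zone=us-central1-a --command="bash ./run-adhoc-script.sh"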

Question 43: Correct


Your company owns a web application that lets users post travel stories. You
began noticing errors in the logs for a specific deployment. The deployment is
responsible for translating a post from one language to another. You've narrowed
the issue down to a specific container named "msg-translator-22" that is throwing
the errors. You are unable to reproduce the error in any other environment, and
none of the other containers serving the deployment have this issue. You would
like to connect to this container to figure out the root cause. What steps would
allow you to run commands against msg-translator-22?

Use the kubectl run msg-translator-22 /bin/ bash command to run a shell on that
container.

Use the kubectl exec -it msg-translator-22 -- /bin/bash command to run a shell on
that container.

(Correct)

Use the kubectl run command to run a shell on that container.

Use the kubectl exec -it -- /bin/bash command to run a shell on that container.

Explanation
Use the kubectl run command to run a shell on that container. is not right.
kubectl run creates and runs a deployment. It creates a deployment or a job to manage
the created container(s). It is not possible to use kubectl run to connect to an existing
container.
https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#run

Use the kubectl run msg-translator-22 /bin/ bash command to run a shell on
that container. is not right.
kubectl run creates and runs a deployment. It creates a deployment or a job to manage
the created container(s). It is not possible to use kubectl run to connect to an existing
container.
https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#run

Use the kubectl exec -it -- /bin/bash command to run a shell on that
container. is not right.
While kubectl exec is used to execute a command in a container, the command above
doesn't work because it does not include the identifier of the container.
Ref: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#exec

Use the kubectl exec -it msg-translator-22 -- /bin/bash command to run a
shell on that container. is the right answer.
kubectl exec is used to execute a command in a container. We pass the name
msg-translator-22 so kubectl exec knows which container to connect to, and we pass
the command /bin/bash so it starts a shell on the container, where we can then run
custom commands and identify the root cause of the issue.
Ref: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#exec
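
A short sketch of the troubleshooting flow; the -c flag and the container name shown with it are hypothetical and only needed if the pod runs more than one container:

# Confirm the pod is running and note its exact name
kubectl get pods
# Open an interactive shell inside it
kubectl exec -it msg-translator-22 -- /bin/bash
# For a multi-container pod, target a specific container explicitly
kubectl exec -it msg-translator-22 -c msg-translator -- /bin/bash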

Question 44: Correct


Your company has chosen to go serverless to enable developers to focus on
writing code without worrying about infrastructure. You have been asked to
identify a GCP Serverless service that does not limit your developers to specific
runtimes. In addition, some of the applications need WebSocket support. What
should you suggest?

App Engine Standard

Cloud Functions


Cloud Run

Cloud Run for Anthos

(Correct)

Explanation
App Engine Standard. is not right.
Google App Engine Standard offers a limited number of runtimes (Java, Node.js,
Python, Go, PHP, and Ruby) and does not support WebSockets.
Ref: https://cloud.google.com/appengine/docs/standard

Cloud Functions. is not right.

Like App Engine Standard, Cloud Functions offers a limited number of runtimes
(Node.js, Python, Go, and Java) and does not support WebSockets.
Ref: https://cloud.google.com/blog/products/application-development/your-favorite-runtimes-now-generally-available-on-cloud-functions

Cloud Run. is not right.

Cloud Run lets you run stateless containers in a fully managed environment. As this is
container-based, we are not limited to specific runtimes; developers can write code
using their favorite languages (Go, Python, Java, C#, PHP, Ruby, Node.js, Shell, and
others). However, Cloud Run does not support WebSockets.
Ref: https://cloud.google.com/run

Cloud Run for Anthos. is the right answer.

Cloud Run for Anthos leverages Kubernetes and serverless together, using Cloud Run
integrated with Anthos. As this is container-based, we are not limited to specific
runtimes; developers can write code using their favorite languages (Go, Python, Java,
C#, PHP, Ruby, Node.js, Shell, and others). Cloud Run for Anthos is the only serverless
GCP offering that supports WebSockets.
https://cloud.google.com/serverless-options

Question 45: Correct


Your company runs a very successful web platform and has accumulated 3
petabytes of customer activity data in a sharded MySQL database located in your
datacenter. Due to storage limitations in your on-premises datacenter, your
company has decided to move this data to GCP. The data must be available all
through the day. Your business analysts, who have experience of using a SQL
interface, have asked for a seamless transition. How should you store the data so
that availability is ensured while optimizing the ease of analysis for the business
analysts?

Import data into Google BigQuery.

(Correct)

Import data into Google Cloud Datastore.

Import data into Google Cloud SQL.

Import flat files into Google Cloud Storage.

Explanation
Import data into Google Cloud SQL. is not right.
Cloud SQL is a fully managed relational database service. It supports MySQL, so
migrating the data from your data center to the cloud would be straightforward, but
Cloud SQL cannot handle petabyte-scale data: the current second-generation instances
limit storage to approximately 30 TB.
Ref: https://cloud.google.com/sql#overview
Ref: https://cloud.google.com/sql/docs/quotas

Import flat files into Google Cloud Storage. is not right.

Cloud Storage is a service for storing objects in Google Cloud. You store objects in
containers called buckets. You could export the MySQL data into files and import them
into Cloud Storage, but Cloud Storage doesn't offer a SQL interface for running queries
or reports.
Ref: https://cloud.google.com/storage/docs/introduction

Import data into Google Cloud Datastore. is not right.

Your business analysts are already familiar with a SQL interface, so we need a service
that supports SQL. However, Cloud Datastore is a NoSQL document database. Cloud
Datastore doesn't support SQL (it supports GQL, which is similar to SQL but not
identical).
Ref: https://cloud.google.com/datastore/docs/reference/gql_reference
Ref: https://cloud.google.com/datastore/docs/concepts/overview

Import data into Google BigQuery. is the right answer.

BigQuery is a serverless, highly scalable, and cost-effective petabyte-scale cloud data
warehouse that offers blazing-fast query speeds with zero operational overhead.
BigQuery supports a standard SQL dialect that is ANSI:2011 compliant, which reduces
the impact on your analysts and enables a seamless transition.
Ref: https://cloud.google.com/bigquery
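
For illustration, one common migration path is to export the MySQL shards to CSV files in Cloud Storage and load them into BigQuery with the bq CLI. This is only a sketch; the dataset, table, and bucket names are placeholders.

# Create a dataset, then load the exported CSV files, letting BigQuery detect the schema
bq mk customer_data
bq load --source_format=CSV --autodetect customer_data.activity gs://my-export-bucket/activity-*.csv

Analysts can then query the table with standard SQL from the BigQuery console or the bq query command.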

Question 46: Incorrect


You created a Kubernetes deployment by running kubectl run nginx --
image=nginx --labels="app=prod". Your Kubernetes cluster is also used by a
number of other deployments. How can you find the identifier of the pods for this
nginx deployment?

gcloud list gke-deployments --filter={ pod }

(Incorrect)

kubectl get deployments --output=pods

kubectl get pods -l "app=prod"

(Correct)

gcloud get pods --selector="app=prod"

Explanation
gcloud get pods --selector="app=prod". is not right.
You cannot retrieve pods from the Kubernetes cluster by using gcloud. You can list
pods by using the Kubernetes CLI: kubectl get pods.
Ref: https://kubernetes.io/docs/tasks/access-application-cluster/list-all-running-container-images/

gcloud list gke-deployments --filter={ pod }. is not right.

You cannot retrieve pods from the Kubernetes cluster by using gcloud. You can list
pods by using the Kubernetes CLI: kubectl get pods.
Ref: https://kubernetes.io/docs/tasks/access-application-cluster/list-all-running-container-images/

kubectl get deployments --output=pods. is not right.

You cannot list pods by listing Kubernetes deployments. You can list pods by using the
Kubernetes CLI: kubectl get pods.
Ref: https://kubernetes.io/docs/tasks/access-application-cluster/list-all-running-container-images/

kubectl get pods -l "app=prod". is the right answer.


This command correctly lists pods that have the label app=prod. When creating the
deployment, we used the label app=prod so listing pods that have this label retrieve the
pods belonging to nginx deployments. You can list pods by using Kubernetes CLI -
kubectl get pods.
Ref: https://kubernetes.io/docs/tasks/access-application-cluster/list-all-running-
container-images/
Ref: https://kubernetes.io/docs/tasks/access-application-cluster/list-all-running-
container-images/#list-containers-filtering-by-pod-label
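
A small sketch of the label filter, with a couple of output options that make the pod identifiers easier to read (all standard kubectl flags, nothing question-specific):

# List pods carrying the app=prod label
kubectl get pods -l app=prod
# Print just the pod names
kubectl get pods -l app=prod -o name
# Include extra detail such as node and pod IP
kubectl get pods -l app=prod -o wide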

Question 47: Correct


You have two Kubernetes resource configuration files.
1. deployment.yaml - creates a deployment
2. service.yaml - sets up a LoadBalancer service to expose the pods.

You don't have a GKE cluster in the development project and you need to provision one.
Which of the commands fail with an error in Cloud Shell when you attempt to
create a GKE cluster and deploy the YAML configuration files to create a deployment
and service? (Select two)

1. gcloud container clusters create cluster-1 --zone=us-central1-a

2. gcloud container clusters get-credentials cluster-1 --zone=us-central1-a


3. kubectl apply -f [deployment.yaml,service.yaml]

(Correct)

1. gcloud container clusters create cluster-1 --zone=us-central1-a

2. gcloud container clusters get-credentials cluster-1 --zone=us-central1-a

3. kubectl apply -f deployment.yaml

4. kubectl apply -f service.yaml

1. gcloud config set compute/zone us-central1-a

2. gcloud container clusters create cluster-1

3. gcloud container clusters get-credentials cluster-1 --zone=us-central1-a

4. kubectl apply -f deployment.yaml

5. kubectl apply -f service.yaml

1. gcloud container clusters create cluster-1 --zone=us-central1-a

2. gcloud container clusters get-credentials cluster-1 --zone=us-central1-a

3. kubectl apply -f deployment.yaml,service.yaml

1. gcloud container clusters create cluster-1 --zone=us-central1-a

2. gcloud container clusters get-credentials cluster-1 --zone=us-central1-a

3. kubectl apply -f deployment.yaml&&service.yaml

(Correct)
Explanation
1. gcloud container clusters create cluster-1 --zone=us-central1-a
2. gcloud container clusters get-credentials cluster-1 --zone=us-central1-a
3. kubectl apply -f deployment.yaml
4. kubectl apply -f service.yaml. is not right (i.e. the commands execute
successfully)
You create a cluster by running the gcloud container clusters create command. You then
fetch credentials for a running cluster by running the gcloud container clusters
get-credentials command. Finally, you apply the Kubernetes resource configuration by
running kubectl apply -f.
Ref: https://cloud.google.com/sdk/gcloud/reference/container/clusters/create
Ref: https://cloud.google.com/sdk/gcloud/reference/container/clusters/get-credentials
Ref: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply

1. gcloud container clusters create cluster-1 --zone=us-central1-a
2. gcloud container clusters get-credentials cluster-1 --zone=us-central1-a
3. kubectl apply -f deployment.yaml,service.yaml. is not right (i.e. the commands
execute successfully)
As above, the only difference is that both configurations are applied in the same
statement. With kubectl apply, you can apply the configuration from a single file,
multiple files, or even an entire directory.
Ref: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply

1. gcloud config set compute/zone us-central1-a
2. gcloud container clusters create cluster-1
3. gcloud container clusters get-credentials cluster-1 --zone=us-central1-a
4. kubectl apply -f deployment.yaml
5. kubectl apply -f service.yaml. is not right (i.e. the commands execute
successfully)
As above, the only difference is in how the compute zone is set. In this scenario, you set
us-central1-a as the default zone, so when you don't pass a zone to the gcloud container
clusters create command, it uses the default zone, which is us-central1-a.
Ref: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply

1. gcloud container clusters create cluster-1 --zone=us-central1-a
2. gcloud container clusters get-credentials cluster-1 --zone=us-central1-a
3. kubectl apply -f [deployment.yaml,service.yaml]. is the right answer (i.e. the
commands fail)
kubectl apply can apply the configuration from a single file, multiple files, or even an
entire directory. When applying configuration from multiple files, the file names need to
be separated by a comma. In this scenario, the filenames are passed as a bracketed list,
which kubectl treats literally, so it looks for the files "[deployment.yaml" and
"service.yaml]", which it doesn't find.
Ref: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply

1. gcloud container clusters create cluster-1 --zone=us-central1-a
2. gcloud container clusters get-credentials cluster-1 --zone=us-central1-a
3. kubectl apply -f deployment.yaml&&service.yaml. is the right answer (i.e. the
commands fail)
kubectl apply can apply the configuration from a single file, multiple files, or even an
entire directory; when applying configuration from multiple files, the file names need to
be separated by a comma. Here, however, the shell interprets the unquoted && as a
command separator: it runs kubectl apply -f deployment.yaml and then tries to execute
service.yaml as a command, which fails with an error, so the service is never created.
Ref: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
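
For reference, here are a few forms of kubectl apply that do handle multiple files; the manifests/ directory is a placeholder:

# Comma-separated file list
kubectl apply -f deployment.yaml,service.yaml
# Repeating the -f flag
kubectl apply -f deployment.yaml -f service.yaml
# Applying every manifest in a directory
kubectl apply -f ./manifests/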

Question 48: Correct


You ran the following commands to create two compute instances.
1. gcloud compute instances create instance1
2. gcloud compute instances create instance2

Both compute instances were created in the europe-west2-a zone, but you want to create
them in other zones. Your active gcloud configuration is as shown below.
1. $ gcloud config list
2.
3. [component_manager]
4. disable_update_check = True
5. [compute]
6. gce_metadata_read_timeout_sec = 5
7. zone = europe-west2-a
8. [core]
9. account = gcp-ace-lab-user@gmail.com
10. disable_usage_reporting = False
11. project = gcp-ace-lab-266520
12. [metrics]
13. environment = devshell

You want to modify the gcloud configuration such that you are prompted for a zone
when you execute the create instance commands above. What should you do?


gcloud config set zone ""

gcloud config unset zone

gcloud config set compute/zone ""

gcloud config unset compute/zone

(Correct)

Explanation
gcloud config unset zone. is not right.
gcloud config does not have a core/zone property. The syntax for this command is
gcloud config unset SECTION/PROPERTY. If SECTION is missing from the command, it
defaults to core. We are effectively trying to run gcloud config unset core/zone, but the
core section doesn't have a property called zone, so this command fails.
$ gcloud config unset zone
ERROR: (gcloud.config.unset) Section [core] has no property [zone].

Ref: https://cloud.google.com/sdk/gcloud/reference/config/unset

gcloud config set zone "". is not right.

gcloud config does not have a core/zone property. The syntax for this command is
gcloud config set SECTION/PROPERTY VALUE. If SECTION is missing, it defaults to core.
We are effectively trying to run gcloud config set core/zone "", but the core section
doesn't have a property called zone, so this command fails.

$ gcloud config set zone ""
ERROR: (gcloud.config.set) Section [core] has no property [zone].

Ref: https://cloud.google.com/sdk/gcloud/reference/config/set

gcloud config set compute/zone "". is not right.

This command uses the correct syntax, but it doesn't unset the compute/zone property;
instead, it sets it to "" in the gcloud configuration. When the gcloud compute instances
create command runs, it picks up the zone value from this configuration property, which
is "", and attempts to create an instance in zone "", which fails because that zone doesn't
exist. gcloud doesn't treat a "" zone as an unset value; the zone must be explicitly unset
if it is to be removed from the configuration.

$ gcloud config set compute/zone ""
$ gcloud compute instances create instance1
Zone: Expected type (<type 'int'>, <type 'long'>) for field id, found projects/compute-challenge-lab-266520/zones/ (type <type 'unicode'>)

Ref: https://cloud.google.com/sdk/gcloud/reference/config/set

gcloud config unset compute/zone. is the right answer.

This command uses the correct syntax and correctly unsets the zone in the gcloud
configuration. The next time the gcloud compute instances create command runs, it
knows there is no default zone defined in the gcloud configuration and therefore
prompts for a zone before the instance can be created.
Ref: https://cloud.google.com/sdk/gcloud/reference/config/unset
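
A quick sketch for verifying the change; after the unset, the [compute] section of the configuration should no longer show a zone entry:

# Remove the default zone, then confirm it is gone from the active configuration
gcloud config unset compute/zone
gcloud config list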

Question 49: Correct


You have files in a Cloud Storage bucket that you need to share with your
suppliers. You want to restrict the time that the files are available to your suppliers
to 1 hour. You want to follow Google recommended practices. What should you
do?

Create a service account with just the permissions to access files in the bucket. Create a
JSON key for the service account. Execute the command gsutil signurl -m 1h {JSON Key
File} gs://{bucket}/*.

Create a JSON key for the Default Compute Engine Service Account. Execute the
command gsutil signurl -t 60m {JSON Key File} gs://{bucket}/. .


Create a service account with just the permissions to access files in the bucket. Create a
JSON key for the service account. Execute the command gsutil signurl -d 1h {JSON Key
File} gs://{bucket}/**.

(Correct)

Create a service account with just the permissions to access files in the bucket. Create a
JSON key for the service account. Execute the command gsutil signurl -p 60m {JSON Key
File} gs://{bucket}/.
Explanation
Create a JSON key for the Default Compute Engine Service Account. Execute
the command gsutil signurl -t 60m {JSON Key File} gs://{bucket}/*.* is not
right.
gsutil signurl does not support a -t flag. Executing the command with the -t flag fails as
shown.
$ gsutil signurl -t 60m keys.json gs://gcp-ace-lab-255520/*.*
CommandException: Incorrect option(s) specified. Usage:

Ref: https://cloud.google.com/storage/docs/gsutil/commands/signurl
Also, using the default Compute Engine service account violates the principle of least
privilege. The recommended approach is to create a service account with just the
permissions needed, and create JSON keys for this service account to use with the gsutil
signurl command.

Create a service account with just the permissions to access files in the
bucket. Create a JSON key for the service account. Execute the command
gsutil signurl -p 60m {JSON Key File} gs://{bucket}/. is not right.
With gsutil signurl, -p is used to specify the keystore password instead of prompting for
it; it cannot be used to pass a time value. Executing the command with the -p flag fails
as shown.

$ gsutil signurl -p 60m keys.json gs://gcp-ace-lab-255520/*.*
TypeError: Last argument must be a byte string or a callable.

Ref: https://cloud.google.com/storage/docs/gsutil/commands/signurl

Create a service account with just the permissions to access files in the
bucket. Create a JSON key for the service account. Execute the command
gsutil signurl -m 1h {JSON Key File} gs://{bucket}/*. is not right.
With gsutil signurl, -m is used to specify the HTTP method (e.g. GET or PUT); it cannot
be used to pass a time value. Executing the command with the -m flag fails as shown.

$ gsutil signurl -m 1h keys.json gs://gcp-ace-lab-255520/*.*
CommandException: HTTP method must be one of[GET|HEAD|PUT|DELETE|RESUMABLE]

Ref: https://cloud.google.com/storage/docs/gsutil/commands/signurl

Create a service account with just the permissions to access files in the
bucket. Create a JSON key for the service account. Execute the command
gsutil signurl -d 1h {JSON Key File} gs://{bucket}/**. is the right answer.
This command correctly specifies the duration that the signed URL should be valid for
by using the -d flag. The default is 1 hour, so omitting the -d flag would have also
resulted in the same outcome. Times may be specified with no suffix (default hours), or
with s = seconds, m = minutes, h = hours, d = days. The maximum duration allowed is 7d.
Ref: https://cloud.google.com/storage/docs/gsutil/commands/signurl
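
Putting the recommended practice together, a minimal end-to-end sketch could look like the following; the project, bucket, and service account names are placeholders:

# Service account with read-only access to objects in the bucket
gcloud iam service-accounts create file-sharer
gsutil iam ch serviceAccount:file-sharer@my-project.iam.gserviceaccount.com:objectViewer gs://my-bucket
# JSON key used to sign the URLs
gcloud iam service-accounts keys create key.json --iam-account=file-sharer@my-project.iam.gserviceaccount.com
# Signed URLs for all objects in the bucket, valid for 1 hour
gsutil signurl -d 1h key.json gs://my-bucket/**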

Question 50: Correct


You plan to deploy an application on an autoscaled managed instance group. The
application uses a Tomcat server and runs on port 8080. You want to access the
application on https://www.example.com. You want to follow Google
recommended practices. What services would you use?

Google Domains, Cloud DNS, HTTP(S) Load Balancer

(Correct)

Google Domains, Cloud DNS private zone, HTTP(S) Load Balancer

Google Domains, Cloud DNS private zone, SSL Proxy Load Balancer

Google DNS, Google CDN, SSL Proxy Load Balancer


Explanation
To serve traffic on https://www.example.com, we have to first own the domain
example.com. We can use the Google Domains service to register a domain.
Ref: https://domains.google/

Once we own the example.com domain, we need to create a zone for www.example.com.
We can use Cloud DNS, a scalable, reliable, and managed authoritative Domain Name
System (DNS) service, to create the DNS zone.
Ref: https://cloud.google.com/dns

Once the www.example.com zone is set up, we need to create a DNS (A) record to point
to the public IP of the Load Balancer. This is also carried out in Cloud DNS.

Finally, we need a load balancer to front the autoscaled managed instance group.
Google recommends we use HTTP(S) Load Balancer for this requirement as "SSL Proxy
Load Balancing is intended for non-HTTP(S) traffic. For HTTP(S) traffic, we recommend
that you use HTTP(S) Load Balancing."
Ref: https://cloud.google.com/load-balancing/docs/ssl

So Google Domains, Cloud DNS, HTTP(S) Load Balancer is the right answer.
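
For illustration, the Cloud DNS part can be sketched with two commands; the zone name and the load balancer IP below are placeholders, and the domain registration (Google Domains) and the HTTP(S) load balancer itself are set up separately:

# Public managed zone for the domain
gcloud dns managed-zones create example-zone --dns-name="example.com." --description="Zone for example.com"
# A record pointing www.example.com at the load balancer's external IP
gcloud dns record-sets create www.example.com. --zone=example-zone --type=A --ttl=300 --rrdatas=203.0.113.10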
