Week 3 GCP Lecture Notes


Google Cloud Computing Foundation Course

Jimmy Iran
SMB Growth Program Manager
Google Cloud

Lecture-15
Cloud Console Mobile App

The cloud console mobile app provides another way for you to manage services running on GCP
directly from your mobile device. It’s a convenient resource that doesn’t cost anything extra.

(Refer Slide Time: 00:13)

The cloud console mobile app is available for iOS and Android and offers many capabilities. It
allows you to stay connected to the cloud and check billing, status, and critical issues. To see the
health of your service at a glance, you can create a custom dashboard showing key metrics such as CPU usage, network usage, requests per second, server errors, and more. You can take action to address issues directly from your device, such as rolling back a bad release, stopping or restarting a virtual machine, searching logs, or even connecting to a virtual machine via SSH.

The monitoring functionality allows you to view and respond to incidents, errors, and logging. If you need to, you can even access Cloud Shell to perform any gcloud operation.

Google Cloud Computing Foundation Course
Jimmy Iran
SMB Growth Program Manager
Google Cloud

Lecture-16
Quiz

(Refer Slide Time: 00:05)

You have reached the end of the module. Complete the short quiz to test your understanding.
True or False: All GCP resources must be associated with a project. The answer is true. Associating all resources with a project helps with billing and isolation. Which of the following is a command-line tool that is part of the Cloud SDK? C is the correct answer. The gsutil command-line tool is used to work with Cloud Storage.

What command would you use to set up the default configuration of the Cloud SDK? The gcloud init command is used to set up the user, the default project, and a default region and zone for the SDK.
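
As an illustration of these answers, here is a minimal command-line sketch; the bucket name is a placeholder, not something from the lecture.

# Initialize the Cloud SDK: sign in, choose a default project, and
# optionally set a default region and zone.
gcloud init

# Review the configuration that gcloud init produced.
gcloud config list

# Use gsutil to work with Cloud Storage, for example listing a bucket.
gsutil ls gs://example-bucket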

Google Cloud Computing Foundation Course
Sowmya Kannan
Google Cloud

Lecture-17
Module introduction

Hi, I am Sowmya. Welcome to the module Use GCP to Build Your Apps. In this module, you will focus on leveraging GCP resources and serverless managed services to build applications. So far in this course, you have learned what GCP is and why you should have a solid platform in place before beginning a move to GCP. Now in this module, you will learn how to build apps directly in GCP.

(Refer Slide Time: 00:37)

The main objective of this module is to discover the different compute options in GCP. To
achieve this goal, you will need to meet the following learning objectives: explore the role of compute options in the cloud, describe how to build and manage virtual machines, explain how to build elastic applications using auto-scaling, and explore platform-as-a-service options by leveraging App Engine.

(Refer Slide Time: 01:10)

You will also be able to discuss how to build event-driven services utilizing Cloud Functions, and explain how to containerize and orchestrate applications with Google Kubernetes Engine, also referred to as GKE.

(Refer Slide Time: 01:28)

This agenda shows the topics that make up this module. You will start by learning about
compute options in the cloud. You will then move on to finding out how to build and deploy apps using Compute Engine, and how to create a virtual machine by completing a hands-on lab.
You will then discover how to configure elastic apps with auto-scaling and explore how App
Engine can run your applications without having you manage the infrastructure.

The second lab of the module will allow you to create a small App Engine application that displays a short message. You will then move on to finding out about event-driven programs with Cloud Functions before completing another lab where you will create, deploy, and test a cloud function using the Google Cloud Shell command line. You will finish the module by learning about containerizing and orchestrating apps with Google Kubernetes Engine before ending with a short quiz and a recap of the key learning points from the module.

Google Cloud Computing Foundation Course
Sowmya Kannan
Google Cloud

Lecture-18
Compute Options in the Cloud

(Refer Slide Time: 00:06)

Let’s begin by learning about compute options in the cloud. GCP offers a variety of compute services spanning different usage options. For general workloads that require dedicated resources for applications, Compute Engine is a good option. If you are looking for a platform as a service, App Engine is a good option. Cloud Functions offers a serverless option for triggering code to run based on some kind of event.

And to run containers on a managed Kubernetes platform, you can leverage Google Kubernetes Engine. You will find out more about each of these compute services during this module.

Google Cloud Computing Foundation Course
Sowmya Kannan
Google Cloud

Lecture-19
Exploring IaaS with Compute Engine

(Refer Slide Time: 00:07)

Next, you will discover how to build and deploy applications with Compute Engine. Compute
Engine delivers virtual machines running in Google's innovative data centers and worldwide
fiber network. Compute Engine is ideal if you need complete control over the virtual machine infrastructure, need to make changes to the kernel, such as providing your own network or graphics drivers to squeeze out the last drop of performance, or need to run a software package that can’t easily be containerized or have existing VM images to move to the cloud.
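
For illustration only, here is a minimal sketch of creating such a virtual machine with the gcloud command line; the instance name, zone, machine type, and image are placeholder values, not ones from the lecture.

# Create a basic Compute Engine VM from a public Debian image.
gcloud compute instances create example-vm \
    --zone=us-central1-a \
    --machine-type=e2-medium \
    --image-family=debian-12 \
    --image-project=debian-cloud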

(Refer Slide Time: 00:43)

Compute Engine is a type of infrastructure as a service. It delivers scalable, high-performance virtual machines that run on Google's infrastructure. Compute Engine VMs boot quickly, come with persistent disk storage, and deliver consistent performance. You can run any computing workload on Compute Engine, such as web server hosting, application hosting, and application backends. Virtual servers are available in many configurations, including predefined sizes.

Alternatively, there is the option to create custom machine types optimized for specific needs.
Compute Engine also allows users to run their choice of operating system. And while Compute Engine allows users to run thousands of virtual CPUs in a system that has been designed to be fast and to offer strong performance consistency, there is no upfront investment required. The purpose of custom virtual machines is to ensure you can create virtual servers with just enough resources to work for your application.

For example, you may want to run your application on a virtual machine, but none of the predefined configurations fit the resource footprint you require, your application needs to run on a specific CPU architecture, or GPUs are required to run your application. Custom virtual machines allow you to create a perfect fit for your applications.

(Refer Slide Time: 02:28)

To meet your workload requirements, there are different machine type options that you can consider: for example, a higher proportion of memory to CPU, a higher proportion of CPU to memory, or a blend of both through Google's standard configuration. Compute Engine offers predefined machine types that you can use when you create an instance. A predefined machine type has a preset number of virtual CPUs, or vCPUs, and a set amount of memory, and is charged at a set price.

You can choose from general-purpose machine types, memory-optimized machine types, and compute-optimized machine types. Predefined virtual machine configurations range from micro instances of 2 vCPUs and 8 gigabytes of memory to memory-optimized instances with up to 160 vCPUs and 3.75 terabytes of memory.
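
If you want to see these presets for yourself, a quick sketch using the gcloud command line follows; the zone and machine type names are placeholder values.

# List the predefined machine types offered in one zone.
gcloud compute machine-types list --filter="zone:us-central1-a"

# Inspect a single predefined machine type to see its preset vCPU count and memory.
gcloud compute machine-types describe n1-standard-4 --zone=us-central1-a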

(Refer Slide Time: 03:42)

Compute Engine also allows you to create virtual machines with the vCPU and memory that
meet workload requirements. This has performance benefits and also reduces cost significantly.
One option is to select from predefined configurations. A general-purpose configuration provides
a balance between performance and memory, or you can optimize for memory or for performance. You can create a machine type with as little as one vCPU and up to 80 vCPUs, or any even number of vCPUs in between.

You can configure up to 8 gigabytes of memory per vCPU. Alternatively, if none of the predefined virtual machines fit your needs, you have the option to create a custom virtual machine. When you create a custom virtual machine, you can choose the number of vCPUs, the amount of memory required, the CPU architecture to leverage, and the option of using GPUs.
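
As a rough sketch of how a custom machine type might be requested from the gcloud command line (the instance name, zone, and sizes are placeholder values):

# Create a VM with a custom machine type of 4 vCPUs and 20 GB of memory.
gcloud compute instances create example-custom-vm \
    --zone=us-central1-a \
    --custom-cpu=4 \
    --custom-memory=20GB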

(Refer Slide Time: 04:54)

Network storage up to 64 terabytes in size can be attached to VMs as persistent disks. Persistent disks are the most common storage option due to their price, performance, and durability, and can be created in HDD or SSD formats. If a VM instance is terminated, its persistent disk retains data and can be attached to another instance. You can also take snapshots of your persistent disk and create new persistent disks from that snapshot.
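
A minimal sketch of that disk and snapshot workflow with the gcloud command line follows; the disk, snapshot, and instance names, the size, and the zone are placeholder values.

# Create a persistent disk, snapshot it, and restore a new disk from the snapshot.
gcloud compute disks create example-disk --size=200GB --zone=us-central1-a
gcloud compute disks snapshot example-disk --snapshot-names=example-snapshot --zone=us-central1-a
gcloud compute disks create example-restored-disk --source-snapshot=example-snapshot --zone=us-central1-a

# Attach the restored disk to an existing VM.
gcloud compute instances attach-disk example-vm --disk=example-restored-disk --zone=us-central1-a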

Compute Engine offers always-encrypted local SSD block storage. Unlike standard persistent disks, local SSDs are physically attached to the server hosting the VM instance, offering very high input-output operations per second and very low latency compared to persistent disks. Predefined local SSD sizes up to 3 terabytes are available for any VM with at least one vCPU.

By default, most Compute Engine-provided Linux images will automatically run an optimization script that configures the instance for peak local SSD performance.
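
As an illustrative sketch, a local SSD can be requested at instance creation time from the gcloud command line; the instance name, zone, and machine type are placeholder values.

# Create a VM with one local SSD attached over the NVMe interface.
gcloud compute instances create example-ssd-vm \
    --zone=us-central1-a \
    --machine-type=n1-standard-4 \
    --local-ssd=interface=nvme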

Standard persistent disk performance scales linearly up to the VM performance limits. A vCPU count of 4 or more for your instance doesn’t limit the performance of standard persistent disks. A vCPU count of less than 4 for an instance reduces the write limit for input-output operations per second, or IOPS, because network egress limits are proportional to the vCPU count. The write limit also depends on the size of the input/outputs, or IOs.

For example, 16 kilobyte IOs consume more bandwidth than 8 kilobyte IOs at the same IOPS
level. Standard persistent disk IOPS and throughput performance increase linearly with the size of the disk until they reach set per-instance limits. The IOPS performance of SSD persistent disks depends on the number of vCPUs in the instance in addition to disk size. Lower-core VMs have lower write IOPS and throughput limits due to the network egress limitations on write throughput.

SSD persistent disk performance scales linearly until it reaches either the limits of the volume or the limits of each Compute Engine instance. SSD read bandwidth and IOPS consistency near the maximum limits largely depend on network ingress utilization. Some variability is to be expected, especially for 16 kilobyte IOs near the maximum IOPS limits.

(Refer Slide Time: 08:18)

Networks connect Compute Engine instances to each other and to the internet. Networks in the cloud have a lot of similarities with physical networks. You can segment networks, use firewall rules to restrict access to instances, and create static routes to forward traffic to specific destinations. You can scale up applications on Compute Engine from zero to full throttle with Cloud Load Balancing. Distribute your load-balanced compute resources in single or multiple regions, close to users, and to meet your high-availability requirements.

Subnetworks segment your cloud network IP space. Subnetwork prefixes can be automatically allocated, or you can create a custom topology. Subnetworks and Cloud Load Balancing are both discussed in the module It Helps to Network. When you build a Compute Engine instance, you use a virtual network adapter, which is part of the instance, to connect the virtual machine to a network, much in the same way you would connect a physical server to a network. For Compute Engine, you can have up to 8 virtual network adapters.
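
A minimal sketch of these networking pieces from the gcloud command line follows; the network, subnetwork, and rule names, the region, and the IP range are placeholder values.

# Create a custom-mode VPC network, add a subnetwork, and allow SSH with a firewall rule.
gcloud compute networks create example-network --subnet-mode=custom
gcloud compute networks subnets create example-subnet \
    --network=example-network --region=us-central1 --range=10.0.0.0/24
gcloud compute firewall-rules create example-allow-ssh \
    --network=example-network --allow=tcp:22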

(Refer Slide Time: 09:49)

All virtual machines are charged for one minute at boot time, which is the minimum charge for a VM. After that, per-second pricing begins, meaning that you only pay for the compute time used.
Google offers sustained use discounts which automatically provide discounted prices for long-
running workloads without the need for signup fees or any upfront commitment. Predefined
machine types are discounted based on the percent of monthly use.

Custom machine types, in contrast, are discounted based on a percentage of total use. The GCP pricing calculator is a great way to see pricing estimates based on the different configuration options that are available for instances, sole-tenant nodes, persistent disks, load balancing, and Cloud TPUs.
