FSD Week 3


FULL STACK DEVELOPMENT - WEEK 3

What is DevOps?
A DevOps team includes developers and IT operations working collaboratively
throughout the product life cycle, in order to increase the speed and quality of software
deployment. It’s a new way of working, a cultural shift, that has significant implications for
teams and the organizations they work for. DevOps is an evolving philosophy and framework
that encourages faster, better application development and faster release of new or revised
software features or products to customers.

This closer relationship between “Dev” and “Ops” permeates every phase of the
DevOps lifecycle: from initial software planning to code, build, test, and release phases and
on to deployment, operations, and ongoing monitoring. This relationship propels a
continuous customer feedback loop of further improvement, development, testing, and
deployment. One result of these efforts can be the more rapid, continual release of necessary
feature changes or additions.

● Plan. This phase helps define business value and requirements. Sample tools
include Jira or Git to help track known issues and perform project management.

● Code. This phase involves software design and the creation of software code.
Sample tools include GitHub, GitLab, Bitbucket, or Stash.
● Build. In this phase, you manage software builds and versions, and use
automated tools to help compile and package code for future release to
production. You use source code repositories or package repositories that also
“package” infrastructure needed for product release. Sample tools include
Docker, Ansible, Puppet, Chef, Gradle, Maven, or JFrog Artifactory.
● Test. This phase involves continuous testing (manual or automated) to ensure
optimal code quality. Sample tools include JUnit, Codeception, Selenium, Vagrant,
TestNG, or BlazeMeter.
● Deploy. This phase can include tools that help manage, coordinate, schedule, and
automate product releases into production. Sample tools include Puppet, Chef,
Ansible, Jenkins, Kubernetes, OpenShift, OpenStack, Docker, or Jira.
● Operate. This phase manages software during production. Sample tools include
Ansible, Puppet, PowerShell, Chef, Salt, or Otter.
● Monitor. This phase involves identifying and collecting information about issues
from a specific software release in production. Sample tools include New Relic,
Datadog, Grafana, Wireshark, Splunk, Nagios, or Slack.

DEVOPS ENGINEER PRACTICES


The 7 key practices of DevOps are:
1. Configuration Management
2. Continuous Integration
3. Automated Testing
4. Infrastructure as Code
5. Continuous Delivery
6. Continuous Deployment
7. Continuous Monitoring

Configuration Management
Configuration management is a systems engineering process for establishing
consistency of a product’s attributes throughout its life. In the technology world,
configuration management is an IT management process that tracks individual configuration
items of an IT system. IT systems are composed of IT assets that vary in granularity.
An IT asset may represent a piece of software, or a server, or a cluster of servers. The
following focuses on configuration management as it directly applies to IT software assets
and software asset CI/CD.
Software configuration management is a systems engineering process that tracks and
monitors changes to a software system's configuration metadata. In software development,
configuration management is commonly used alongside version control and CI/CD

infrastructure. This section focuses on its modern application and use in agile CI/CD
software environments.

Configuration management (CM) is the practice of controlling and managing changes
to software using version control in a standard and repeatable way. The practice has two
components: a version control tool and a standard code repository management strategy
(which defines the process for branching, merging, etc.).
● The current target for the version control tool is Git
● The current target for the code repository management strategy is one of two
workflows, depending on the needs of the team: the Feature branch workflow or the
Gitflow workflow
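As a rough sketch of the feature branch workflow mentioned above (the branch and file
names here are hypothetical):

# Create an isolated branch for the new piece of work (names are hypothetical)
git checkout -b feature/login main
# Edit files, then stage and commit them on the feature branch
git add login.py
git commit -m "Add login form"
# Publish the branch so it can be reviewed and merged
git push -u origin feature/login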

Continuous Integration
Continuous integration (CI) is the practice of automating the integration of code
changes from multiple contributors into a single software project. It’s a primary DevOps best
practice, allowing developers to frequently merge code changes into a central repository
where builds and tests then run. Automated tools are used to assert the new code’s
correctness before integration.

Continuous integration (CI) is the practice that requires developers to integrate code
into a shared repository often and obtain rapid feedback on its success during active
development.

This is done as developers finish a specific piece of code and it has successfully passed
unit testing. CI also means creating a build in a tool like Bamboo, Jenkins, or GitLab that
runs after developer check-in, runs any tests you have that can run on that build (unit and
integration, for example), and provides feedback to the development team on whether it
succeeded or failed. The
end goal is to create small workable chunks of code that are validated and integrated back
into the centralized code repository as frequently as possible. As such, CI is the foundation
for both continuous delivery and continuous deployment DevOps practices.
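As an illustration, the commands a CI build in Jenkins, Bamboo, or GitLab might run on
every check-in could look roughly like the following; the Maven build and test steps are
assumptions, since the actual commands depend on the project:

# Fetch a clean copy of the code that triggered the build
git clone <repo url> build-workspace
cd build-workspace
# Compile the project (Maven is used here purely as an example)
mvn compile
# Run the unit and integration test suites and report pass/fail to the team
mvn test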

Automated Testing
Test automation is the practice of automatically reviewing and validating a software
product, such as a web application, to make sure it meets predefined quality standards for
code style, functionality (business logic), and user experience. Testing practices typically
involve the following stages:
● Unit testing: validates individual units of code, such as a function, so it works as
expected
● Integration testing: ensures several pieces of code can work together without
unintended consequences
● End-to-end testing: validates that the application meets the user’s expectations
● Exploratory testing: takes an unstructured approach to reviewing numerous areas of
an application from the user perspective, to uncover functional or visual issues
The different types of testing are often visualized as a pyramid. As you climb up the pyramid,
the number of tests in each type decreases, and the cost of creating and running tests
increases.

Infrastructure as Code
Infrastructure as Code (IaC) is the managing and provisioning of infrastructure
through code instead of through manual processes. With IaC, configuration files are created
that contain your infrastructure specifications, which makes it easier to edit and distribute
configurations. It also ensures that you provision the same environment every time. By
codifying and documenting your configuration specifications, IaC aids configuration
management and helps you to avoid undocumented, ad-hoc configuration changes.
IaC is used to define code that, when executed, can stand up an entire physical or
virtual environment, including computing and networking infrastructure. It is a type of IT
infrastructure that operations teams can automatically manage and provision through code,
rather than using a manual process. An example of using IaC would be to use Terraform to
rapidly stand up nodes in a cloud environment, and then have the ability to destroy and
rebuild the environment consistently each time. Doing so gives the user the ability to version
control their infrastructure and to recover more quickly from infrastructure outages.
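For example, a typical Terraform workflow uses a handful of standard CLI commands; this
sketch assumes the .tf configuration files describing the environment already exist:

# Download providers and initialize the working directory
terraform init
# Preview the changes that would be made to the environment
terraform plan
# Create (or update) the infrastructure described in the configuration
terraform apply
# Tear the environment down so it can be rebuilt consistently later
terraform destroy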

Continuous Delivery
The practice of making every change to source code ready for a production release as
soon as automated testing validates it. This includes automatically building, testing and
deploying. An approach to code approval and delivery approval needs to be in place to
ensure that the code can be deployed in an automated fashion with appropriate pauses for
approval depending on the specific needs of a program. This also implies the same process
for the lower environments, like QA, UAT, etc.

Continuous Deployment
Continuous Deployment is the practice that strives to automate production
deployment end to end. In order for this practice to be implemented, a team needs to have
extremely high confidence in their automated tests. The ultimate goal is that as long as the
build has passed all automated tests, the code will be deployed. However, manual steps in
the deployment process can be maintained if necessary.
For example, a team can determine what type of changes can be deployed to
production in a completely automated fashion, while other types of changes may maintain a
manual approval step. Such a hybrid approach is a good way to begin to adopt this practice.


Continuous Monitoring
DevOps monitoring entails overseeing the entire development process across planning,
development, integration and testing, deployment, and operations. It involves a complete
and real-time view of the status of applications, services, and infrastructure in the
production environment. Features such as real-time streaming, historical replay, and
visualizations are critical components of application and service monitoring. Continuous
monitoring is the practice of proactively monitoring, alerting, and taking action in key areas
to give teams visibility into the health of the application in the production environment.
Monitoring these key areas makes teams aware of the impact of every deployment and
reduces the time between issue identification and resolution.

Configuration Management
What is a Version Control System?
Version control systems allow multiple developers, designers, and team members to
work together on the same project. It helps them work smarter and faster! A version control
system is critical to ensure everyone has access to the latest code and modifications are
tracked. As development becomes increasingly complex and teams grow, there's a bigger need
to manage multiple versions and components of entire products.
The responsibility of the Version control system is to keep all the team members on
the same page. It makes sure that everyone on the team is working on the latest version of
the file and, most importantly, makes sure that all these people can work simultaneously on
the same project.
Let's try to understand the process with the help of the following diagram: there are three
workstations, or three different developers at three different locations, and one repository
acting as a server. The workstations use that repository either to commit or to update their
work.

There may be a large number of workstations using a single server repository. Each
workstation will have its working copy, and all these workstations will be saving their source
codes into a particular server repository.
This makes it easy for any developer to access the task being done using the
repository. If any specific developer's system breaks down, then the work won't stop, as
there will be a copy of the source code in the central repository.

Why do we need a Version Control System?


Any multinational company may face several challenges, like collaboration among
employees, storing several versions of files, and backing up data. All these
challenges must be resolved for a company to be successful. This is where a Version
Control System comes into the picture.

Collaboration
When many people at different locations work on the same project, they need a reliable
way to communicate and share their changes with each other.
Storing Versions
A project goes through several versions as it is completed; keeping all those versions in a
single place is a considerable challenge.

Restoring Previous Versions

Sometimes, there is a need to go back to earlier versions to find the root cause of a bug.

Figuring Out What Happened

It is critical to know what changes were made to previous versions of the source code, and
where exactly in a file those changes were made.

Backup
If the user's system or disk breaks down and there is no backup, then all their effort goes
in vain.

Fundamentals of Git
Git is the best choice for most software teams today. While every team is different and
should do their own analysis, here are the main reasons why version control with Git is
preferred over alternatives:

Git is good
Git has the functionality, performance, security and flexibility that most teams and individual
developers need. In side-by-side comparisons with most other alternatives, many teams
find that Git compares very favorably.

Git is a de facto standard


Git is the most broadly adopted tool of its kind, which by itself makes Git attractive.
At Atlassian, nearly all of our project source code is managed in Git.

Git is a quality open source project

Git is a very well supported open source project with over a decade of solid
stewardship. The project maintainers have shown balanced judgment and a mature
approach to meeting the long term needs of its users with regular releases that improve
usability and functionality. The quality of the open source software is easily scrutinized and
countless businesses rely heavily on that quality.

Also refer to these links:

Why use Git for your organization: https://www.atlassian.com/git/tutorials/why-git

Install Git on Windows: https://www.atlassian.com/git/tutorials/install-git

Complete installation of Git for Windows: https://phoenixnap.com/kb/how-to-install-git-windows

Git for Windows stand-alone installer


1. Download the latest Git for Windows installer : https://gitforwindows.org/.
2. When you've successfully started the installer, you should see the Git Setup wizard
screen. Follow the Next and Finish prompts to complete the installation. The default
options are pretty sensible for most users.
3. Open a Command Prompt (or Git Bash if during installation you elected not to use Git
from the Windows Command Prompt).
4. Run the following commands to configure your Git username and email using the
following commands, replacing Emma's name with your own. These details will be
associated with any commits that you create:
$ git config --global user.name "Emma Paris"
$ git config --global user.email "eparis@atlassian.com"

BASIC LOCAL GIT OPERATIONS


Remember that each file in your working directory can be in one of two states:
tracked or untracked. Tracked files are files that were in the last snapshot; they can be
unmodified, modified, or staged. Untracked files are everything else – any files in your
working directory that were not in your last snapshot and are not in your staging area. When
you first clone a repository, all of your files will be tracked and unmodified because you just
checked them out and haven’t edited anything.

As you edit files, Git sees them as modified, because you’ve changed them since your
last commit. You stage these modified files and then commit all your staged changes, and the
cycle repeats.
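You can check which state each file is in at any time with git status. An abridged sketch of
typical output (the file names here are hypothetical):

$ git status
On branch main
Changes not staged for commit:
modified: hello.py
Untracked files:
notes.txt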


REFERENCE LINKS:
https://www.youtube.com/watch?v=PSJ63LULKHA

https://www.youtube.com/watch?v=8JJ101D3knE

https://www.youtube.com/watch?v=b5oQZdzA37I

What is a Git repository?


A Git repository is a virtual storage of your project. It allows you to save versions of
your code, which you can access when needed.

Initializing a new repository: git init


To create a new repo, you'll use the git init command. git init is a one-time command
you use during the initial setup of a new repo. Executing this command will create a new .git
subdirectory in your current working directory. This will also create a new main branch.

Versioning an existing project with a new git repository


This example assumes you already have an existing project folder that you would like
to create a repo within. You'll first cd to the root project folder and then execute the git init
command.
cd /path/to/your/existing/code
git init
Pointing git init to an existing project directory will execute the same initialization setup as
mentioned above, but scoped to that project directory.
git init <project directory>

REFERENCE LINK: https://www.atlassian.com/git/tutorials/setting-up-a-repository/git-init

Cloning an existing repository: git clone
If a project has already been set up in a central repository, the clone command is the
most common way for users to obtain a local development clone. Like git init, cloning is
generally a one-time operation. Once a developer has obtained a working copy, all version
control operations are managed through their local repository.
git clone <repo url>

git clone is used to create a copy or clone of remote repositories. You pass git clone a
repository URL. Git supports a few different network protocols and corresponding URL
formats. In this example, we'll be using the Git SSH protocol. Git SSH URLs follow a template
of: git@HOSTNAME:USERNAME/REPONAME.git

An example Git SSH URL would be: git@bitbucket.org:rhyolight/javascript-data-store.git


where the template values match:

HOSTNAME: bitbucket.org
USERNAME: rhyolight
REPONAME: javascript-data-store
When executed, the latest version of the remote repo files on the main branch will be
pulled down and added to a new folder. The new folder will be named after the REPONAME
in this case javascript-data-store. The folder will contain the full history of the remote
repository and a newly created main branch.
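Putting it together, cloning the example repository above would look like this (note that the
URL is the tutorial's example, not a repository you are expected to have access to):

git clone git@bitbucket.org:rhyolight/javascript-data-store.git
cd javascript-data-store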

REFERENCE LINK: https://www.atlassian.com/git/tutorials/setting-up-a-repository/git-clone

Staging and Committing Changes

git add
The git add command adds a change in the working directory to the staging area. It
tells Git that you want to include updates to a particular file in the next commit. However, git
add doesn't really affect the repository in any significant way—changes are not actually
recorded until you run git commit.

In conjunction with these commands, you'll also need git status to view the state of
the working directory and the staging area.

How it works
The git add and git commit commands compose the fundamental Git workflow. These
are the two commands that every Git user needs to understand, regardless of their team’s
collaboration model. They are the means to record versions of a project into the repository’s
history.

Developing a project revolves around the basic edit/stage/commit pattern. First, you
edit your files in the working directory. When you’re ready to save a copy of the current state
of the project, you stage changes with git add. After you’re happy with the staged snapshot,
you commit it to the project history with git commit. The git reset command is used to undo
a commit or staged snapshot.

In addition to git add and git commit, a third command git push is essential for a
complete collaborative Git workflow. git push is utilized to send the committed changes to
remote repositories for collaboration. This enables other team members to access a set of
saved changes.

The staging area


The primary function of the git add command is to promote pending changes in the
working directory to the Git staging area. The staging area is one of Git's more unique
features, and it can take some time to wrap your head around if you’re coming from an SVN
(or even a Mercurial) background. It helps to think of it as a buffer between the working
directory and the project history. The staging area is considered one of the "three trees" of
Git, along with the working directory and the commit history.

Instead of committing all of the changes you've made since the last commit, the stage
lets you group related changes into highly focused snapshots before actually committing
them to the project history. This means you can make all sorts of edits to unrelated files,
then go back and split them up into logical commits by adding related changes to the stage
and committing them piece-by-piece. As in any revision control system, it’s important to
create atomic commits so that it’s easy to track down bugs and revert changes with minimal
impact on the rest of the project.

Common options
git add <file>
Stage all changes in <file> for the next commit.
git add <directory>
Stage all changes in <directory> for the next commit.
git add -p

Begin an interactive staging session that lets you choose portions of a file to add to
the next commit. This will present you with a chunk of changes and prompt you for a
command. Use y to stage the chunk, n to ignore the chunk, s to split it into smaller chunks, e
to manually edit the chunk, and q to exit.

Examples
When you’re starting a new project, git add serves the same function as svn import.
To create an initial commit of the current directory, use the following two commands:
git add .
git commit
Once you’ve got your project up-and-running, new files can be added by passing the path to
git add:
git add hello.py
git commit
The above commands can also be used to record changes to existing files. Again, Git doesn’t
differentiate between staging changes in new files vs. changes in files that have already been
added to the repository.

Git commit
The git commit command captures a snapshot of the project's currently staged
changes. Committed snapshots can be thought of as “safe” versions of a project—Git will
never change them unless you explicitly ask it to. Prior to the execution of git commit, the
git add command is used to promote or 'stage' changes to the project that will be stored in a
commit. These two commands, git commit and git add, are among the most frequently used.

How it works
At a high-level, Git can be thought of as a timeline management utility. Commits are
the core building block units of a Git project timeline. Commits can be thought of as snapshots
or milestones along the timeline of a Git project. Commits are created with the git commit
command to capture the state of a project at that point in time. Git Snapshots are always
committed to the local repository. This is fundamentally different from SVN, wherein the
working copy is committed to the central repository. In contrast, Git doesn’t force you to
interact with the central repository until you’re ready. Just as the staging area is a buffer
between the working directory and the project history, each developer’s local repository is
a buffer between their contributions and the central repository.

Common options
git commit

Commit the staged snapshot. This will launch a text editor prompting you for a commit
message. After you’ve entered a message, save the file and close the editor to create the actual
commit.
git commit -a
Commit a snapshot of all changes in the working directory. This only includes modifications
to tracked files (those that have been added with git add at some point in their history).
git commit -m "commit message"
A shortcut command that immediately creates a commit with a passed commit message. By
default, git commit will open up the locally configured text editor, and prompt for a commit
message to be entered. Passing the -m option will forgo the text editor prompt in-favor of an
inline message.
git commit -a -m "commit message"
A power user shortcut command that combines the -a and -m options. This combination
immediately creates a commit of all the staged changes and takes an inline commit message.
git commit --amend
This option adds another level of functionality to the commit command. Passing this option
will modify the last commit. Instead of creating a new commit, staged changes will be added
to the previous commit. This command will open up the system's configured text editor and
prompt to change the previously specified commit message.

Examples

Saving changes with a commit


The following example assumes you’ve edited some content in a file called hello.py on the
current branch, and are ready to commit it to the project history. First, you need to stage the
file with git add, then you can commit the staged snapshot.

git add hello.py


This command will add hello.py to the Git staging area. We can examine the result of this
action by using the git status command.

git status
On branch main
Changes to be committed:
(use "git reset HEAD <file>..." to unstage)
new file: hello.py
The green output new file: hello.py indicates that hello.py will be saved with the next commit.

From there, the commit is created by executing:
git commit
This will open a text editor (customizable via git config) asking for a commit log message,
along with a list of what’s being committed:

# Please enter the commit message for your changes. Lines starting
# with '#' will be ignored, and an empty message aborts the commit.
# On branch main
# Changes to be committed:
# (use "git reset HEAD ..." to unstage)
#
#modified: hello.py
Git doesn't require commit messages to follow any specific formatting constraints,
but the canonical format is to summarize the entire commit on the first line in less than 50
characters, leave a blank line, then a detailed explanation of what’s been changed. For
example:

Change the message displayed by hello.py

- Update the sayHello() function to output the user's name
- Change the sayGoodbye() function to a friendlier message
It is a common practice to use the first line of the commit message as a subject line, similar
to an email. The rest of the log message is considered the body and used to communicate
details of the commit change set. Note that many developers also like to use the present tense
in their commit messages. This makes them read more like actions on the repository, which
makes many of the history-rewriting operations more intuitive.

Viewing the Commit History


After you have created several commits, or if you have cloned a repository with an existing
commit history, you’ll probably want to look back to see what has happened. The most basic
and powerful tool to do this is the git log command.

Associated git command


If you’re running Git from the command line, the equivalent command is git log <filename>.
For example, to find history information about a README.md file in the local directory, run
the following command:
git log README.md

Git displays output similar to the following, which includes the commit time and time zone:

commit 0e62ed6d9f39fa9bedf7efc6edd628b137fa781a
Author: Mike Jang <mjang@gitlab.com>
Date: Tue Nov 26 21:44:53 2019 +0000

    Deemphasize GDK as a doc build tool

commit 418879420b1e3a4662067bd07b64bb6988654697
Author: Marcin Sedlak-Jakubowski <msedlakjakubowski@gitlab.com>
Date: Mon Nov 4 19:58:27 2019 +0100

    Fix typo

commit 21cc1fef11349417ed515557748369cfb235fc81
Author: Jacques Erasmus <jerasmus@gitlab.com>
Date: Mon Oct 14 22:13:40 2019 +0000

    Add support for modern JS

    Added rollup to the project

commit 2f5e895aebfa5678e51db303b97de56c51e3cebe
Author: Achilleas Pipinellis <axil@gitlab.com>
Date: Fri Sep 13 14:03:01 2019 +0000

    Remove gitlab-foss Git URLs as we don't need them anymore
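git log also accepts options that change its output; a few commonly used, standard flags:

# Condense each commit to a single line
git log --oneline
# Draw an ASCII graph of branch and merge history
git log --graph --oneline --decorate
# Show only the commits that touch a particular file
git log README.md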

REFERENCE LINK: https://git-scm.com/book/en/v2/Git-Basics-Viewing-the-Commit-History

Undoing public changes


When working on a team with remote repositories, extra consideration needs to be
made when undoing changes. Git reset should generally be considered a 'local' undo method.
A reset should be used when undoing changes to a private branch. This safely isolates the
removal of commits from other branches that may be in use by other developers.

It's important to remember that there is more than one way to 'undo' in a Git project.
Most of the discussion on this page touched on deeper topics that are more thoroughly
explained on pages specific to the relevant Git commands. The most commonly used 'undo'
tools are git checkout, git revert, and git reset. Some key points to remember are:
● Once changes have been committed they are generally permanent
● Use git checkout to move around and review the commit history
● git revert is the best tool for undoing shared public changes
● git reset is best used for undoing local private changes


In addition to the primary undo commands, we took a look at other Git utilities: git log
for finding lost commits, git clean for undoing uncommitted changes, and git add for
modifying the staging index.
Each of these commands has its own in-depth documentation. To learn more about a specific
command mentioned here, visit the corresponding links.
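As a minimal illustration of the last two bullet points above (the commit hash is hypothetical):

# Undo a shared, public commit by creating a new commit that reverses it
git revert a1b2c3d
# Undo local, private work by moving the branch tip back one commit
git reset --hard HEAD~1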

GIT BRANCHING

Git branches are effectively a pointer to a snapshot of your changes. When you want
to add a new feature or fix a bug—no matter how big or how small—you spawn a new branch
to encapsulate your changes. This makes it harder for unstable code to get merged into the
main code base, and it gives you the chance to clean up your feature's history before merging
it into the main branch.

The diagram above visualizes a repository with two isolated lines of development,
one for a little feature, and one for a longer-running feature. By developing them in branches,
it’s not only possible to work on both of them in parallel, but it also keeps the main branch
free from questionable code.

How it works
A branch represents an independent line of development. Branches serve as an
abstraction for the edit/stage/commit process. You can think of them as a way to request a
brand new working directory, staging area, and project history. New commits are recorded
in the history for the current branch, which results in a fork in the history of the project.

The git branch command lets you create, list, rename, and delete branches. It doesn’t
let you switch between branches or put a forked history back together again. For this reason,
git branch is tightly integrated with the git checkout and git merge commands.

Common Options
git branch
List all of the branches in your repository. This is synonymous with git branch --list.

git branch <branch>


Create a new branch called <branch>. This does not check out the new branch.

git branch -d <branch>


Delete the specified branch. This is a “safe” operation in that Git prevents you from deleting
the branch if it has unmerged changes.

git branch -D <branch>


Force delete the specified branch, even if it has unmerged changes. This is the command to
use if you want to permanently throw away all of the commits associated with a particular
line of development.

git branch -m <branch>


Rename the current branch to <branch>.

git branch -a
List all remote branches.

Creating Branches
It's important to understand that branches are just pointers to commits. When you create a
branch, all Git needs to do is create a new pointer, it doesn’t change the repository in any
other way. If you start with a repository that looks like this:

(diagram: a repository without any branches)


Then, you create a branch using the following command:

git branch crazy-experiment


The repository history remains unchanged.

Note that this only creates the new branch. To start adding commits to it, you need to select
it with git checkout, and then use the standard git add and git commit commands.
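Putting that together (a sketch; the commit message is arbitrary):

# Switch to the new branch, then work on it as usual
git checkout crazy-experiment
# ...edit some files...
git add <file>
git commit -m "Begin the experiment"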

Deleting Branches
Once you’ve finished working on a branch and have merged it into the main code base, you’re
free to delete the branch without losing any history:

git branch -d crazy-experiment


However, if the branch hasn’t been merged, the above command will output an error
message:

error: The branch 'crazy-experiment' is not fully merged. If you are sure you want to delete
it, run 'git branch -D crazy-experiment'.
This protects you from losing access to that entire line of development. If you really want to
delete the branch (e.g., it’s a failed experiment), you can use the capital -D flag:

git branch -D crazy-experiment


This deletes the branch regardless of its status and without warnings, so use it judiciously.

The previous commands will delete a local copy of a branch. The branch may still exist in
remote repos. To delete a remote branch execute the following.

git push origin --delete crazy-experiment


Or
git push origin :crazy-experiment
This will push a delete signal to the remote origin repository that triggers a delete of the
remote crazy-experiment branch.

Usage: Existing branches


Assuming the repo you're working in contains pre-existing branches, you can switch
between these branches using git checkout. To find out what branches are available and what
the current branch name is, execute git branch.
$> git branch
main
another_branch
feature_inprogress_branch
$> git checkout feature_inprogress_branch
The above example demonstrates how to view a list of available branches by executing the
git branch command, and switch to a specified branch, in this case, the
feature_inprogress_branch.

Switching Branches
Switching branches is a straightforward operation. Executing the following will point
HEAD to the tip of <branchname>.

git checkout <branchname>


Git tracks a history of checkout operations in the reflog. You can execute git reflog to view
the history.

Merging Local Branches Together


Merging is Git's way of putting a forked history back together again. The git merge
command lets you take the independent lines of development created by git branch and
integrate them into a single branch.

How it works
Git merge will combine multiple sequences of commits into one unified history. In the
most frequent use cases, git merge is used to combine two branches. The following examples
in this document will focus on this branch merging pattern. In these scenarios, git merge
takes two commit pointers, usually the branch tips, and will find a common base commit
between them. Once Git finds a common base commit it will create a new "merge commit"
that combines the changes of each queued merge commit sequence.

Say we have a new branch feature that is based off the main branch. We now want to
merge this feature branch into main.

Invoking git merge feature will merge the specified branch, feature, into the current
branch, which we'll assume is main. Git will determine the merge algorithm automatically
(discussed below).

Merge commits are unique among commits in that they have two parent commits. When
creating a merge commit, Git will attempt to automagically merge the separate histories for
you. If Git encounters a piece of data that is changed in both histories, it will be unable to
combine them automatically.
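When that happens, Git pauses the merge and asks you to resolve the conflict by hand; a
typical sequence looks roughly like this (the conflicting file name is hypothetical):

git merge feature
# CONFLICT (content): Merge conflict in hello.py
# Edit hello.py to resolve the conflicting changes, then:
git add hello.py
git commit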

Confirm the receiving branch


Execute git status to ensure that HEAD is pointing to the correct merge-receiving
branch. If needed, execute git checkout to switch to the receiving branch. In our case we will
execute git checkout main.

Merging
Once the previously discussed "preparing to merge" steps have been taken, a merge
can be initiated by executing git merge <branch>, where <branch> is the name of the branch
that will be merged into the receiving branch.

Our first example demonstrates a fast-forward merge. The code below creates a new
branch, adds two commits to it, then integrates it into the main line with a fast-forward
merge.

# Start a new feature


git checkout -b new-feature main
# Edit some files
git add <file>
git commit -m "Start a feature"
# Edit some files
git add <file>
git commit -m "Finish a feature"
# Merge in the new-feature branch
git checkout main
git merge new-feature
git branch -d new-feature
This is a common workflow for short-lived topic branches that are used more as an
isolated development than an organizational tool for longer-running features.
Also note that Git should not complain about the git branch -d, since new-feature is now
accessible from the main branch. In the event that you require a merge commit during a
fast-forward merge for record-keeping purposes, you can execute git merge with the --no-ff
option.

git merge --no-ff <branch>


This command merges the specified branch into the current branch, but always generates a
merge commit (even if it was a fast-forward merge). This is useful for documenting all
merges that occur in your repository.

What is GitHub?
GitHub is a Git repository hosting service that provides a web-based graphical
interface. It is the world’s largest coding community. Putting code or a project on GitHub
brings it increased, widespread exposure. Programmers can find source code in many
different languages and use the command-line tool, Git, to make and keep track of
changes.
GitHub helps every team member work together on a project from any location while
facilitating collaboration. You can also review previous versions created at an earlier point
in time.

Benefits of GitHub
GitHub can be separated into "Git" and "Hub". The GitHub service includes access
controls as well as collaboration features like task management, repository hosting, and
team management. The key benefits of GitHub are as follows.
● It is easy to contribute to open source projects via GitHub.
● It helps you create excellent documentation.
● You can attract recruiters by showing off your work. If you have a profile on GitHub,
you will have a higher chance of being recruited.
● It allows your work to get out there in front of the public.
● You can track changes in your code across versions.

Distributed Git
DGit is short for “Distributed Git.” As many readers already know, Git itself is
distributed—any copy of a Git repository contains every file, branch, and commit in the
project’s entire history. DGit uses this property of Git to keep three copies of every
repository, on three different servers. The design of DGit keeps repositories fully available
without interruption even if one of those servers goes down. Even in the extreme case that
two copies of a repository become unavailable at the same time, the repository remains
readable; i.e., fetches, clones, and most of the web UI continue to work.

DGit performs replication at the application layer, rather than at the disk layer. Think
of the replicas as three loosely-coupled Git repositories kept in sync via Git protocols, rather
than identical disk images full of repositories. This design gives great flexibility to decide
where to store the replicas of a repository and which replica to use for read operations.

GitHub Repository Creation


1. In the upper-right corner of any page, use the + drop-down menu, and select New
repository.

2. Type a short, memorable name for your repository. For example, "hello-world".

3. Optionally, add a description of your repository. For example, "My first repository on
GitHub."

4. Choose a repository visibility. For more information, see "About repositories."

5. Select Initialize this repository with a README.

6. Click Create repository.

Push to repositories
git push -u -f origin main
The -u (or --set-upstream) flag sets the remote origin as the upstream reference. This allows
you to later perform git push and git pull commands without having to specify an origin since
we always want GitHub in this case.
The -f (or --force) flag stands for force. This will automatically overwrite everything
in the remote directory. We’re using it here to overwrite the default README that GitHub
automatically initialized.

All together
git init
git add -A
git commit -m 'Added my project'
git remote add origin git@github.com:sammy/my-new-project.git
git push -u -f origin main

Versioning in Github
Lately I've been doing a lot of thinking around versioning in repositories. For all the
convenience and ubiquity of package.json, it does sometimes misrepresent the code that is
contained within a repository. For example, suppose I start out my project at v0.1.0 and
that's what's in my package.json file in my master branch. Then someone submits a pull
request that I merge in - the version number hasn't changed even though the repository now
no longer represents v0.1.0. The repository is actually now in an intermediate state, in
between v0.1.0 and the next official release.

To deal with that, I started changing the package.json version only long enough to
push a new release, and then I would change it to a dev version representing the next
scheduled release (such as v0.2.0-dev). That solved the problem of misrepresenting the
version number of the repository (provided people realize "dev" means "in flux day to day").
However, it introduced a yucky workflow that I really hated. When it was time for a release,
I'd have to:

1. Manually change the version in package.json.
2. Tag the version in the repo.
3. Publish to npm.
4. Manually change the version in package.json to a dev version.
5. Push to master.
There may be some way to automate this, but I couldn't figure out a really nice way to do it.
That process works well enough when you have no unplanned releases. However, what if
I'm working on v0.2.0-dev after v0.1.0 was released, and need to do a v0.1.1 release?

Then I need to:


1. Note the current dev version.
2. Manually change the version to v0.1.1.
3. Tag the version in the repo.
4. Publish to npm.
5. Change the version back to the same version from step 1.
6. Push to master.

Add on top of this trying to create an automated changelog based on tagging, and things can
get a little bit tricky. My next thought was to have a release branch where the last published
release would live. Essentially, after v0.1.0, the release branch remains at v0.1.0 while the
master branch becomes v0.2.0-dev. If I need to do an intermediate release, then I merge
master onto release and change versions only in the release branch. Once again, this is a bit
messy because package.json is guaranteed to have different versions on master and release,
which always causes merge conflicts. This also means the changelog is updated only in the
release branch. This solution turned out to be more complex than I anticipated.

I'm still not sure the right way to do this, but my high-level requirements are:
1. Make sure the version in package.json is always accurate.
2. Don't require people to change the version to make a commit.
3. Don't require people to use a special build command to make a commit.
4. Distinguish between development (in progress) work vs. official releases.
5. Be able to auto-increment the version number (via npm version).
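For requirement 5, npm has built-in tooling; a minimal sketch using standard npm and Git
commands (the "patch" bump level is just an example):

# Bump the version in package.json, commit the change, and create a git tag
npm version patch
# Publish the tagged release to npm
npm publish
# Push the commit and the new tag to the remote
git push --follow-tags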

Collaboration
You can invite users to become collaborators to your personal repository. If you're
using GitHub Free, you can add unlimited collaborators on public and private repositories.

1. Ask for the username of the person you're inviting as a collaborator. If they don't have
a username yet, they can sign up for GitHub. For more information, see "Signing up for
a new GitHub account".

2. On GitHub.com, navigate to the main page of the repository.
3. Under your repository name, click Settings.

4. In the "Access" section of the sidebar, click Collaborators & teams.


5. Click Invite a collaborator.

6. In the search field, start typing the name of the person you want to invite, then click a
name in the list of matches.

7. Click Add NAME to REPOSITORY.

8. The user will receive an email inviting them to the repository. Once they accept your
invitation, they will have collaborator access to your repository.

Migration in Github
A migration is the process of transferring data from a source location (either a
GitHub.com organization or a GitHub Enterprise Server instance) to a target GitHub
Enterprise Server instance. Migrations can be used to transfer your data when changing
platforms or upgrading hardware on your instance.

Types of migrations
There are three types of migrations you can perform:
● A migration from a GitHub Enterprise Server instance to another GitHub Enterprise
Server instance. You can migrate any number of repositories owned by any user or
organization on the instance. Before performing a migration, you must have site
administrator access to both instances.
● A migration from a GitHub.com organization to a GitHub Enterprise Server instance.
You can migrate any number of repositories owned by the organization. Before
performing a migration, you must have administrative access to the GitHub.com
organization as well as site administrator access to the target instance.
● Trial runs are migrations that import data to a staging instance. These can be useful
to see what would happen if a migration were applied to your GitHub Enterprise
Server instance. We strongly recommend that you perform a trial run on a staging
instance before importing data to your production instance.

What is Cloud Computing?

In the simplest terms, cloud computing means storing and accessing data and
programs on remote servers that are hosted on the internet instead of on the computer’s
hard drive or a local server. Cloud computing is also referred to as Internet-based computing.

Benefits of Cloud Hosting:
1. Scalability: With Cloud hosting, it is easy to grow and shrink the number and size of
servers based on the need. This is done by either increasing or decreasing the resources
in the cloud. This ability to alter plans due to fluctuation in business size and needs is a
superb benefit of cloud computing, especially when experiencing a sudden growth in
demand.
2. Instant: Whatever you want is instantly available in the cloud.
3. Save Money: An advantage of cloud computing is the reduction in hardware costs.
Instead of purchasing in-house equipment, hardware needs are left to the vendor. For
companies that are growing rapidly, new hardware can be large, expensive, and
inconvenient. Cloud computing alleviates these issues because resources can be
acquired quickly and easily. Even better, the cost of repairing or replacing equipment is
passed to the vendors. Along with purchase costs, off-site hardware cuts internal power
costs and saves space. Large data centers can take up precious office space and produce
a large amount of heat. Moving to cloud applications or storage can help maximize space
and significantly cut energy expenditures.
4. Reliability: Rather than being hosted on one single instance of a physical server, hosting
is delivered on a virtual partition that draws its resource, such as disk space, from an
extensive network of underlying physical servers. If one server goes offline it will have
no effect on availability, as the virtual servers will continue to pull resources from the
remaining network of servers.
5. Physical Security: The underlying physical servers are still housed within data centers
and so benefit from the security measures that those facilities implement to prevent
people from accessing or disrupting them on-site.
6. Outsource Management: While you manage the business, someone else manages your
computing infrastructure. You do not need to worry about maintenance or upgrades.
For more clarity about how cloud computing has changed the commercial deployment of
systems, consider the three examples below:
1. Amazon Web Services (AWS): One of the most successful cloud-based
businesses is Amazon Web Services (AWS), an Infrastructure as a
Service (IaaS) offering in which customers pay rent for virtual computers on
Amazon’s infrastructure.
2. Microsoft Azure Platform: Microsoft created the Azure platform, which
enables .NET Framework applications to run over the internet as an alternative
platform for Microsoft developers. This is the classic Platform as a Service (PaaS).
3. Google: Google has built a worldwide network of data centers to service its search
engine. From this service, Google captures a large share of the world’s advertising
revenue. Using that revenue, Google offers free software to users based on that
infrastructure. This is called Software as a Service (SaaS).
What is Cloud Computing Infrastructure?
Cloud computing infrastructure is the collection of hardware and software elements
needed to enable cloud computing. It includes computing power, networking, and storage, as well
as an interface for users to access their virtualized resources. The virtual resources mirror a
physical infrastructure, with components like servers, network switches, memory and storage
clusters.

Why Cloud Computing Infrastructure?


Cloud infrastructure offers the same capabilities as physical infrastructure but can
provide additional benefits like a lower cost of ownership, greater flexibility, and scalability.
Cloud computing infrastructure is available for private cloud, public cloud, and hybrid cloud
systems. It’s also possible to rent cloud infrastructure components from a cloud provider,
through cloud infrastructure as a service (IaaS). Cloud infrastructure systems allow for
integrated hardware and software and can provide a single management platform for multiple
clouds.

There are the following three types of cloud service models -


1. Infrastructure as a Service (IaaS)
2. Platform as a Service (PaaS)
3. Software as a Service (SaaS)

Infrastructure as a Service | IaaS


IaaS is also known as Hardware as a Service (HaaS). It is one of the layers of the
cloud computing platform. It allows customers to outsource their IT infrastructure, such as
servers, networking, processing, storage, virtual machines, and other resources. Customers
access these resources over the Internet using a pay-as-per-use model.

In traditional hosting services, IT infrastructure was rented out for a specific period
of time, with pre-determined hardware configuration. The client paid for the configuration
and time, regardless of the actual use. With the help of the IaaS cloud computing platform
layer, clients can dynamically scale the configuration to meet changing requirements and are
billed only for the services actually used.
IaaS providers provide the following services -
1. Compute: Computing as a Service includes virtual central processing units and
virtual main memory for the VMs that are provisioned to the end users.
2. Storage: IaaS providers provide back-end storage for storing files.
3. Network: Network as a Service (NaaS) provides networking components such as
routers, switches, and bridges for the VMs.
4. Load balancers: It provides load balancing capability at the infrastructure layer.

Advantages of IaaS cloud computing layer


The IaaS computing layer has the following advantages -
1. Shared infrastructure: IaaS allows multiple users to share the same physical
infrastructure.
2. Web access to the resources: IaaS allows IT users to access resources over the internet.
3. Pay-as-per-use model: IaaS providers provide services on a pay-as-per-use basis.
Users are required to pay only for what they have used.
4. Focus on the core business: IaaS lets organizations focus on their core business
rather than on IT infrastructure.
5. On-demand scalability: On-demand scalability is one of the biggest advantages of IaaS.
Using IaaS, users do not need to worry about upgrading software or troubleshooting issues
related to hardware components.
Disadvantages of IaaS cloud computing layer
1. Security: Security is one of the biggest issues in IaaS. Most IaaS providers are not
able to provide 100% security.
2. Maintenance & Upgrade: Although IaaS service providers maintain the software,
they do not upgrade the software for some organizations.
3. Interoperability issues: It is difficult to migrate a VM from one IaaS provider to
another, so customers might face problems related to vendor lock-in.

Top IaaS providers offering an IaaS cloud computing platform:

IaaS Vendor: Amazon Web Services
IaaS Solution: Elastic Compute Cloud (EC2), MapReduce, Route 53, Virtual Private Cloud, etc.
Details: The cloud computing platform pioneer, Amazon offers auto scaling, cloud
monitoring, and load balancing features as part of its portfolio.

IaaS Vendor: Netmagic Solutions
IaaS Solution: Netmagic IaaS Cloud
Details: Netmagic runs from data centers in Mumbai, Chennai, and Bangalore, and a virtual
data center in the United States. Plans are underway to extend services to West Asia.

IaaS Vendor: Rackspace
IaaS Solution: Cloud servers, cloud files, cloud sites, etc.
Details: The cloud computing platform vendor focuses primarily on enterprise-level
hosting services.

IaaS Vendor: Reliance Communications
IaaS Solution: Reliance Internet Data Center
Details: RIDC supports both traditional hosting and cloud services, with data centers in
Mumbai, Bangalore, Hyderabad, and Chennai. The cloud services offered by RIDC include
IaaS and SaaS.

IaaS Vendor: Sify Technologies
IaaS Solution: Sify IaaS
Details: Sify's cloud computing platform is powered by HP's converged infrastructure. The
vendor offers all three types of cloud services: IaaS, PaaS, and SaaS.

IaaS Vendor: Tata Communications
IaaS Solution: InstaCompute
Details: InstaCompute is Tata Communications' IaaS offering. InstaCompute data centers
are located in Hyderabad and Singapore, with operations in both countries.

Platform as a Service | PaaS


Platform as a Service (PaaS) provides a runtime environment. It allows programmers
to easily create, test, run, and deploy web applications. You can purchase these applications
from a cloud service provider on a pay-as-per-use basis and access them over an Internet
connection. In PaaS, back-end scalability is managed by the cloud service provider, so
end users do not need to worry about managing the infrastructure.

Example: Google App Engine, Force.com, Joyent, Azure.


PaaS providers provide programming languages, application frameworks, databases, and
other tools:

1. Programming languages
PaaS providers provide various programming languages for the developers to
develop the applications. Some popular programming languages provided by PaaS providers
are Java, PHP, Ruby, Perl, and Go.
2. Application frameworks
PaaS providers provide application frameworks to simplify application development.
Some popular application frameworks provided by PaaS providers are
Node.js, Drupal, Joomla, WordPress, Spring, Play, Rack, and Zend.
3. Databases
PaaS providers provide various databases such as ClearDB, PostgreSQL, MongoDB,
and Redis to communicate with the applications.
4. Other tools
PaaS providers provide various other tools that are required to develop, test, and
deploy the applications.

Advantages of PaaS
There are the following advantages of PaaS -
1) Simplified Development: PaaS allows developers to focus on development and
innovation without worrying about infrastructure management.

2) Lower risk: No need for up-front investment in hardware and software. Developers only
need a PC and an internet connection to start building applications.
3) Prebuilt business functionality: Some PaaS vendors also provide predefined business
functionality, so users can avoid building everything from scratch and can start their
projects directly.
4) Instant community: PaaS vendors frequently provide online communities where
developers can get ideas, share experiences, and seek advice from others.
5) Scalability: Applications deployed can scale from one to thousands of users without any
changes to the applications.
Disadvantages of PaaS cloud computing layer
1) Vendor lock-in: One has to write the applications according to the platform provided by
the PaaS vendor, so the migration of an application to another PaaS vendor would be a
problem.
2) Data Privacy: Corporate data, whether critical or not, is private; if it is not located
within the walls of the company, there can be a risk in terms of data privacy.
3) Integration with the rest of the system's applications: It may happen that some
applications are local and some are in the cloud, so there is a chance of increased
complexity when we want to use data in the cloud together with local data.

Popular PaaS Providers

Provider: Google App Engine (GAE)
Services: App Identity, URL Fetch, Cloud storage client library, Log service

Provider: Salesforce.com
Services: Faster implementation, Rapid scalability, CRM services, Sales cloud,
Mobile connectivity, Chatter.

Provider: Windows Azure
Services: Compute, security, IoT, Data Storage.

Provider: AppFog
Services: Justcloud.com, SkyDrive, GoogleDocs

Provider: OpenShift
Services: RedHat, Microsoft Azure.

Provider: Cloud Foundry from VMware
Services: Data, Messaging, and other services.

Software as a Service | SaaS
SaaS is also known as "On-Demand Software". It is a software distribution model in which services are hosted by a cloud service provider. These services are available to end users over the internet, so end users do not need to install any software on their devices to access them.

SaaS providers offer the following services -

Business Services - SaaS providers provide various business services to help start up a business. The SaaS business services include ERP (Enterprise Resource Planning), CRM (Customer Relationship Management), billing, and sales.

Document Management - SaaS document management is a software application offered by a third party (SaaS providers) to create, manage, and track electronic documents.
Examples: Slack, Samepage, Box, and Zoho Forms.

Social Networks - Social networking sites are used by the general public, so social networking service providers use SaaS for convenience and to handle the general public's information.

Mail Services - To handle an unpredictable number of users and the load on e-mail services, many e-mail providers offer their services through SaaS.

Advantages of SaaS cloud computing layer


1) SaaS is easy to buy
SaaS pricing is based on a monthly or annual subscription fee, so it allows organizations to access business functionality at a lower cost than licensed applications. Unlike traditional software, which is sold under a license with an up-front cost (and often an optional ongoing support fee), SaaS providers generally price their applications on a subscription basis, most commonly a monthly or annual fee.

2. One to Many
SaaS services are offered on a one-to-many model, meaning a single instance of the application is shared by multiple users.

3. Less hardware required for SaaS


The software is hosted remotely, so organizations do not need to invest in additional
hardware.

4. Low maintenance required for SaaS


Software as a service removes the need for installation, set-up, and daily maintenance for organizations. The initial set-up cost for SaaS is typically lower than for enterprise software. SaaS vendors price their applications based on usage parameters, such as the number of users using the application, which makes SaaS easy to monitor; updates are also applied automatically.

5. No special software or hardware versions required


All users will have the same version of the software and typically access it through the web browser. SaaS reduces IT support costs by outsourcing hardware and software maintenance and support to the SaaS provider.

6. Multidevice support
SaaS services can be accessed from any device such as desktops, laptops, tablets,
phones, and thin clients.

7. API Integration
SaaS services easily integrate with other software or services through standard APIs; a brief sketch of such an API call appears after this list.

8. No client-side installation
SaaS services are accessed directly from the service provider over the internet, so no software installation is required on the client.
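
As a rough illustration of point 7, a client can consume a SaaS product through its REST API over HTTPS. The endpoint, path, and token below are hypothetical placeholders, not any real provider's API.
Python
# Hypothetical sketch: reading records from a SaaS product's REST API.
import requests

API_BASE = "https://api.example-saas.com/v1"            # placeholder endpoint
HEADERS = {"Authorization": "Bearer <your-api-token>"}  # placeholder token

resp = requests.get(f"{API_BASE}/contacts", headers=HEADERS, timeout=10)
resp.raise_for_status()           # fail loudly on HTTP errors
for contact in resp.json():       # assumes the API returns a JSON list
    print(contact)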

Disadvantages of SaaS cloud computing layer


1) Security: Data is stored in the cloud, so security may be an issue for some users; cloud computing is not more secure than an in-house deployment.

2) Latency issue: Since data and applications are stored in the cloud at a variable distance from the end user, interacting with the application may involve greater latency than with a local deployment. The SaaS model is therefore not suitable for applications that demand response times in milliseconds.

3) Total Dependency on Internet: Without an internet connection, most SaaS applications are not usable.

4) Switching between SaaS vendors is difficult: Switching SaaS vendors involves the difficult and slow task of transferring very large data files over the internet and then converting and importing them into the other SaaS product.

Popular SaaS Providers

Provider | Services
Salesforce.com | On-demand CRM solutions
Microsoft Office 365 | Online office suite
Google Apps | Gmail, Google Calendar, Docs, and Sites
NetSuite | ERP, accounting, order management, CRM, Professional Services Automation (PSA), and e-commerce applications
GoToMeeting | Online meeting and video-conferencing software
Constant Contact | E-mail marketing, online survey, and event marketing
Oracle CRM | CRM applications
Workday, Inc. | Human capital management, payroll, and financial management

Cloud Deployment Model
A cloud deployment model works as your virtual computing environment, with a choice of deployment model depending on how much data you want to store and who has access to the infrastructure.

Different Types Of Cloud Computing Deployment Models


Most cloud hubs have tens of thousands of servers and storage devices to enable fast
loading. It is often possible to choose a geographic area to put the data "closer" to users. Thus,
deployment models for cloud computing are categorized based on their location. To know
which model would best fit the requirements of your organization, let us first learn about the
various types.

Public Cloud
The name says it all. It is accessible to the public. Public deployment models in the
cloud are perfect for organizations with growing and fluctuating demands. It also makes a
great choice for companies with low-security concerns. Thus, you pay a cloud service
provider for networking services, compute virtualization & storage available on the public
internet. It is also a great delivery model for development and testing teams. Its configuration and deployment are quick and easy, making it an ideal choice for test environments.

Benefits of Public Cloud
● Minimal Investment - As a pay-per-use service, there is no large upfront cost, and it is ideal for businesses that need quick access to resources
● No Hardware Setup - The cloud service providers fully fund the entire infrastructure
● No Infrastructure Management - Using the public cloud does not require an in-house team to manage the infrastructure.
Limitations of Public Cloud
● Data Security and Privacy Concerns - Since it is accessible to all, it does not fully
protect against cyber-attacks and could lead to vulnerabilities.
● Reliability Issues - Since the same server network is open to a wide range of users, it
can lead to malfunction and outages
● Service/License Limitation - While there are many resources you can exchange with
tenants, there is a usage cap.

Private Cloud
Now that you understand what the public cloud could offer you, of course, you are
keen to know what a private cloud can do. Companies that look for cost efficiency and greater
control over data & resources will find the private cloud a more suitable choice.
It means that it will be integrated with your data center and managed by your IT team.
Alternatively, you can also choose to host it externally. The private cloud offers bigger
opportunities that help meet specific organizations' requirements when it comes to
customization. It's also a wise choice for mission-critical processes that may have frequently
changing requirements.

Benefits of Private Cloud

● Data Privacy - It is ideal for storing corporate data to which only authorized personnel get access
● Security - Segmentation of resources within the same Infrastructure can help with
better access and higher levels of security.
● Supports Legacy Systems - This model supports legacy systems that cannot access the
public cloud.
Limitations of Private Cloud
● Higher Cost - With the benefits you get, the investment will also be larger than the
public cloud. Here, you will pay for software, hardware, and resources for staff and
training.
● Fixed Scalability - You can scale only as far as the hardware you have chosen allows
● High Maintenance - Since it is managed in-house, the maintenance costs also increase.

Community Cloud
The community cloud operates in a way that is similar to the public cloud. There's just
one difference - it allows access to only a specific set of users who share common objectives
and use cases. This type of cloud computing deployment model is managed and hosted internally or by a third-party vendor, or by a combination of the two.

Benefits of Community Cloud
● Smaller Investment - A community cloud is much cheaper than the private & public
cloud and provides great performance
● Setup Benefits - The protocols and configuration of a community cloud must align
with industry standards, allowing customers to work much more efficiently.
Limitations of Community Cloud
● Shared Resources - Due to restricted bandwidth and storage capacity, community
resources often pose challenges.
● Not as Popular - Since this is a recently introduced model, it is not that popular or
available across industries

Hybrid Cloud
As the name suggests, a hybrid cloud is a combination of two or more cloud
architectures. While each model in the hybrid cloud functions differently, it is all part of the
same architecture. Further, as part of this deployment of the cloud computing model, the
internal or external providers can offer resources.
Let's understand the hybrid model better. A company with critical data will prefer storing it on a private cloud, while less sensitive data can be stored on a public cloud. The hybrid cloud is also frequently used for 'cloud bursting': if an organization runs an application on-premises, the application can 'burst' into the public cloud when it faces a heavy load.

Benefits of Hybrid Cloud
● Cost-Effectiveness - The overall cost of a hybrid solution decreases since it majorly
uses the public cloud to store data.
● Security - Since data is properly segmented, the chances of data theft from attackers
are significantly reduced.
● Flexibility - With higher levels of flexibility, businesses can create custom solutions
that fit their exact requirements
Limitations of Hybrid Cloud
● Complexity - It is complex setting up a hybrid cloud since it needs to integrate two or
more cloud architectures
● Specific Use Case - This model makes more sense for organizations that have multiple
use cases or need to separate critical and sensitive data
Virtualization in Cloud Computing
Virtualization is the "creation of a virtual (rather than actual) version of something, such as a server, a desktop, a storage device, an operating system or network resources". In other words, virtualization is a technique that allows a single physical instance of a resource or an application to be shared among multiple customers and organizations. It does this by assigning a logical name to a physical resource and providing a pointer to that physical resource on demand.
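
The "logical name plus pointer" idea can be pictured as a simple lookup table. The toy sketch below is purely illustrative (the names and paths are made up); it shows consumers addressing storage by a logical name while a virtualization layer resolves it to a physical location.
Python
# Toy illustration of virtualization: consumers use logical names;
# the mapping to a physical location is the virtualization layer's job.
physical_stores = {
    "disk-A": "/mnt/array1/vol0",   # made-up physical locations
    "disk-B": "/mnt/array2/vol7",
}

def resolve(logical_name):
    # Hand back a pointer to the physical resource on demand.
    return physical_stores[logical_name]

print(resolve("disk-A"))  # the caller never needs the physical path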

What is the concept behind Virtualization?

Creating a virtual machine over the existing operating system and hardware is known as Hardware Virtualization. A virtual machine provides an environment that is logically separated from the underlying hardware.

The machine on which the virtual machine is created is known as the Host Machine, and the virtual machine itself is referred to as the Guest Machine.
Types of Virtualization:
1. Hardware Virtualization.
2. Operating system Virtualization.
3. Server Virtualization.
4. Storage Virtualization.
1) Hardware Virtualization:
When the virtual machine software or virtual machine manager (VMM) is installed directly on the hardware system, it is known as hardware virtualization. The main job of the hypervisor is to control and monitor the processor, memory, and other hardware resources. After virtualization of the hardware system, we can install different operating systems on it and run different applications on those OSs.

Usage:
Hardware virtualization is mainly done for the server platforms, because controlling virtual
machines is much easier than controlling a physical server.

2) Operating System Virtualization:


When the virtual machine software or virtual machine manager (VMM) is installed on the host operating system instead of directly on the hardware system, it is known as operating system virtualization.
Usage:
Operating System Virtualization is mainly used for testing applications on different OS platforms.

3) Server Virtualization:
When the virtual machine software or virtual machine manager (VMM) is installed directly on the server system, it is known as server virtualization.

Usage:
Server virtualization is done because a single physical server can be divided into multiple servers on demand and for load balancing.
4) Storage Virtualization:
Storage virtualization is the process of grouping the physical storage from multiple
network storage devices so that it looks like a single storage device.
Storage virtualization is also implemented by using software applications.

Usage:
Storage virtualization is mainly done for back-up and recovery purposes.

How does virtualization work in cloud computing?


Virtualization plays a very important role in cloud computing technology. Normally in cloud computing, users share the data present in the cloud, such as applications, but with virtualization, what users actually share is the infrastructure. The main use of virtualization technology is to provide applications in their standard versions to cloud users; when the next version of an application is released, the cloud provider has to supply the latest version to all of its cloud users, which would be expensive and impractical to do without virtualization.

Difference between Cloud Services IaaS, PaaS and SaaS:

Basis | IaaS | PaaS | SaaS
Stands for | Infrastructure as a Service | Platform as a Service | Software as a Service
Uses | IaaS is used by network architects | PaaS is used by developers | SaaS is used by the end user
Access | IaaS gives access to resources like virtual machines and virtual storage | PaaS gives access to the runtime environment and to deployment and development tools for applications | SaaS gives access to the end user
Model | A service model that provides virtualized computing resources over the internet | A cloud computing model that delivers tools used for the development of applications | A service model in cloud computing that hosts software to make it available to clients
Technical understanding | Requires technical knowledge | Some knowledge is required for the basic setup | No technical knowledge required; the company handles everything
Popularity | Popular among developers and researchers | Popular among developers who focus on the development of apps and scripts | Popular among consumers and companies, for file sharing, email, and networking
Cloud services | Amazon Web Services, Sun, vCloud Express | Facebook, Google search engine | MS Office Web, Facebook and Google Apps
Enterprise services | AWS Virtual Private Cloud | Microsoft Azure | IBM Cloud Analysis
Outsourced cloud services | Salesforce | Force.com, Gigaspaces | AWS, Terremark
User controls | Operating System, Runtime, Middleware, and Application data | Data of the application | Nothing

AMAZON WEB SERVICES


Amazon Web Services (AWS), a subsidiary of Amazon.com, has invested billions of dollars in IT resources distributed across the globe. These resources are shared among all AWS account holders across the globe, while the accounts themselves are entirely isolated from each other. AWS provides on-demand IT resources to its account holders on a pay-as-you-go pricing model with no upfront cost. Amazon Web Services offers flexibility because you pay only for the services you use or need.

Enterprises use AWS to reduce the capital expenditure of building their own private IT infrastructure (which can be expensive, depending upon the enterprise's size and nature). AWS has its own physical fiber network that connects Availability Zones, Regions, and Edge locations. All maintenance costs are also borne by AWS, which saves a fortune for enterprises.

The Amazon Web Services (AWS) platform provides more than 200 fully featured services from data centers located all over the world, and is the world's most comprehensive cloud platform. Amazon Web Services is an online platform that provides scalable and cost-effective cloud computing solutions. AWS is a broadly adopted cloud platform that offers several on-demand operations like compute power, database storage, content delivery, etc., to help corporates scale and grow.

Applications of AWS
The most common applications of AWS are storage and backup, websites, gaming, mobile,
web, and social media applications. Some of the most crucial applications in detail are as
follows:

1. Storage and Backup


One of the reasons why many businesses use AWS is because it offers multiple types
of storage to choose from and is easily accessible as well. It can be used for storage and file
indexing as well as to run critical business applications.

2. Websites
Businesses can host their websites on the AWS cloud, similar to other web
applications.

3. Gaming
There is a lot of computing power needed to run gaming applications. AWS makes it
easier to provide the best online gaming experience to gamers across the world.

4. Mobile, Web and Social Applications


A feature that separates AWS from other cloud services is its capability to launch and
scale mobile, e-commerce, and SaaS applications. API-driven code on AWS can enable
companies to build uncompromisingly scalable applications without requiring any OS and
other systems.

5. Big Data Management and Analytics (Application)
● Amazon Elastic MapReduce to process large amounts of data via the Hadoop framework.
● Amazon Kinesis to analyze and process streaming data.
● AWS Glue to handle extract, transform, and load (ETL) jobs.
● Amazon Elasticsearch Service to enable a team to perform log analysis and tool monitoring with the help of the open-source tool Elasticsearch.
6. Artificial Intelligence
● Amazon Lex to offer voice and text chatbot technology.
● Amazon Polly to provide text-to-speech, as used by Alexa Voice Services and Echo devices.
● Amazon Rekognition to analyze images and faces.

7. Messages and Notifications


● Amazon Simple Notification Service (SNS) for effective business or core
communication.
● Amazon Simple Email Service (SES) to receive or send emails for IT professionals
and marketers.
● Amazon Simple Queue Service (SQS) to enable businesses to queue and exchange messages between application components.

8. Augmented Reality and Virtual Reality


● Amazon Sumerian service enables users to make the use of AR and VR
development tools to offer 3D web applications, E-commerce & sales applications,
Marketing, Online education, Manufacturing, Training simulations, and Gaming.

9. Game Development
● AWS game development tools are used by large game development companies that
offer developer back-end services, analytics, and various developer tools.
● AWS allows developers to host game data as well as store the data to analyze the
gamer's performance and develop the game accordingly.

10. Internet of Things


● AWS IoT service offers a back-end platform to manage IoT devices as well as data
ingestion to database services and AWS storage.
● AWS IoT Button offers limited IoT functionality to hardware.
● AWS Greengrass offers AWS computing for IoT device installation.


Companies Using AWS


Whether it’s technology giants, startups, government, food manufacturers or retail
organizations, there are so many companies across the world using AWS to develop, deploy
and host applications. According to Amazon, the number of active AWS users exceeds
1,000,000. Here is a list of companies using AWS:
● Netflix
● Intuit
● Coinbase
● Finra
● Johnson & Johnson
● Capital One
● Adobe
● Airbnb
● AOL
● Hitachi

Features of AWS

The following are the features of AWS:


● Flexibility
● Cost-effective
● Scalable and elastic
● Secure
● Experienced

1) Flexibility
● The difference between AWS and traditional IT models is flexibility.
● Traditional models delivered IT solutions that required large investments in new architecture, programming languages, and operating systems. Although these investments are valuable, it takes time to adopt new technologies, which can also slow down your business.
● The flexibility of AWS allows teams to choose the programming models, languages, and operating systems that are better suited to their project, so they do not have to learn new skills to adopt new technologies.
● Flexibility means that migrating legacy applications to the cloud is easy and cost-effective. Instead of re-writing applications to adopt new technologies, you just move the applications to the cloud and tap into advanced computing capabilities.
● Building applications on AWS is like building applications using existing hardware resources.
● Larger organizations run in a hybrid mode, i.e., some pieces of the application run in their data center, and other portions run in the cloud.
● The flexibility of AWS is a great asset for organizations to deliver products with up-to-date technology on time, enhancing overall productivity.

2) Cost-effective
● Cost is one of the most important factors that need to be considered in delivering IT
solutions.
● For example, developing and deploying an application can incur a low cost, but after successful deployment, there is a need for hardware and bandwidth. Owning your own infrastructure can incur considerable costs, such as power, cooling, real estate, and staff.
● The cloud provides on-demand IT infrastructure that lets you consume the resources you actually need. In AWS, you are not limited to a set amount of resources such as storage, bandwidth, or computing resources, as it is very difficult to predict the requirements of every resource. Therefore, we can say that the cloud provides flexibility by maintaining the right balance of resources.
● AWS requires no upfront investment, long-term commitment, or minimum spend.
● You can scale up or scale down as the demand for resources increases or decreases respectively.
● AWS allows you to access resources almost instantly. The ability to respond to changes quickly, whether the changes are large or small, means that we can take on new opportunities to meet business challenges that could increase revenue and reduce cost.

3) Scalable and elastic


● In a traditional IT organization, scalability and elasticity were calculated with investment and infrastructure, while in the cloud, scalability and elasticity provide savings and improved ROI (Return On Investment).
● Scalability in AWS is the ability to scale computing resources up or down as demand increases or decreases.
● Elasticity in AWS is defined as the distribution of incoming application traffic across multiple targets, such as Amazon EC2 instances, containers, IP addresses, and Lambda functions.
● Elastic load balancing and auto scaling automatically scale your AWS computing resources to meet unexpected demand and scale them down automatically when demand decreases.
● The AWS cloud is also useful for short-term jobs, mission-critical jobs, and jobs repeated at regular intervals.

4) Secure
● AWS provides a scalable cloud-computing platform that provides customers with
end-to-end security and end-to-end privacy.
● AWS incorporates security into its services and provides documentation describing how to use the security features.
● AWS maintains the confidentiality, integrity, and availability of your data, which is of the utmost importance to AWS.

Physical security: Amazon has many years of experience in designing, constructing, and operating large-scale data centers. The AWS infrastructure is housed in AWS-controlled data centers throughout the world. The data centers are physically secured to prevent unauthorized access.
Secure services: Each service provided by the AWS cloud is secure.
Data privacy: Personal and business data can be encrypted to maintain data privacy.

5) Experienced
● The AWS cloud provides levels of scale, security, reliability, and privacy.
● AWS has built an infrastructure based on lessons learned from over sixteen years of experience managing the multi-billion-dollar Amazon.com business.
● Amazon continues to benefit its customers by enhancing its infrastructure capabilities.
● Nowadays, Amazon has become a global web platform that serves millions of customers, and AWS has been evolving since 2006, serving hundreds of thousands of customers worldwide.

REFERENCE LINK FOR Top 30 AWS Services List in 2022: https://mindmajix.com/top-aws-services

How to create and setup a virtual machine (VM) on Amazon Web Service
To create a new virtual machine instance on AWS, follow these steps:
1. Open the Amazon EC2 console.
2. From the EC2 console dashboard, select Launch Instance.
3. The Choose an Amazon Machine Image (AMI) page displays a list of basic machine
configurations (AMIs) to choose from. Select the AMI for Windows Server 2019 Base
or later. Note that these AMIs are marked Free tier eligible.

4. On the Choose an Instance Type page, select the t2.micro instance type (default).

5. On the Choose an Instance Type page, select Review and Launch to let the wizard
complete the other configuration settings for you.

6. On the Review Instance Launch page, under Security Groups, you’ll see that the
wizard created and selected a security group for you. We will need to specify the security
group that was created in step 3 of the Prerequisites section.
○ Choose Edit security groups.
○ On the Configure Security Group page, choose Select an existing security group.
○ In the table, select the security group from the list of existing security groups.
○ Choose Review and Launch.
7. On the Review Instance Launch page, select Launch to create the new virtual machine.
8. When prompted for a key pair, select Choose an existing key pair. Then select the key pair that you created in step 2 of the Prerequisites section.
Do not select Proceed without a key pair. If you launch your instance without a key pair, you will not be able to connect to it.
9. Select the acknowledgement checkbox and then choose Launch Instances.


10. A confirmation page lets you know that your instance is launching. Select View
Instances to close the confirmation page and return to the console.
11. On the Instances screen, you can view the status of the launch. In the Name column, select the Edit icon. In the popup dialog, type RhinoComputeVM to assign a name to this instance.

12. Once the Instance State column says that the VM is Running, you can then try to
connect to it via RDP.
13. With the instance row selected, click the Connect button in the top menu.

14. On the Connect to instance page, select the RDP client tab. Select the Download
remote desktop file and save the .rdp file somewhere on your local computer.
15. Next, select the Get password button.
16. Choose Browse and navigate to the private key (.pem) file that you created when you
launched the instance.
17. Choose Decrypt Password. The console displays the default administrator password for the instance under Password, replacing the Get password link shown previously. Save this password in a safe place; it is required to connect to the instance.
18. Select Download remote desktop file to save the .rdp file to your local computer. You
will need this file when you connect to your instance using the Remote Desktop Connect
app.
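
The console steps above can also be scripted. Below is a minimal sketch using boto3, the AWS SDK for Python; the AMI ID, key pair name, and security group ID are placeholders you would replace with the values from your own account and region, so treat it as an illustration rather than a copy-paste recipe.
Python
# Launch a free-tier t2.micro instance with boto3 (pip install boto3).
# Assumes AWS credentials are already configured (e.g., via `aws configure`).
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")  # choose your region

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder: a Windows Server 2019 Base AMI for your region
    InstanceType="t2.micro",           # free-tier eligible
    KeyName="my-key-pair",             # the key pair from the Prerequisites
    SecurityGroupIds=["sg-0123456789abcdef0"],  # the security group from the Prerequisites
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "RhinoComputeVM"}],
    }],
)
print("Launched:", response["Instances"][0]["InstanceId"])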

Create a simple webapp using cloud services

REFERENCE LINK TO CREATE A WEBAPP IN AWS: https://aws.amazon.com/getting-started/hands-on/build-web-app-s3-lambda-api-gateway-dynamodb/module-one/

In this module, you will use the AWS Amplify console to deploy the static resources for your
web application. In subsequent modules, you will add dynamic functionality to these pages
using AWS Lambda and Amazon API Gateway to call remote RESTful APIs.

What you will accomplish


In this module, you will:
● Create an Amplify app
● Upload files for a website directly to Amplify
● Deploy new versions of a webpage with Amplify

Key concepts
Static website – A static website has fixed content, unlike dynamic websites. Static websites
are the most basic type of website and are the easiest to create. All that is required is creating
a few HTML pages and publishing them to a web server.
Web hosting – Provides the technologies/services needed for the website to be viewed on
the internet.
AWS Regions – Separate geographic areas that AWS uses to house its infrastructure. These
are distributed around the world so that customers can choose a Region closest to them to
host their cloud infrastructure there.

HOW TO CREATE A WEB APP USING AMPLIFY CONSOLE

1. Open your favorite text editor on your computer. Create a new file and paste the following HTML into it:
<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8">
<title>Hello World</title>
</head>
<body>
Hello World
</body>
</html>
2. Save the file as index.html.
3. ZIP (compress) only the HTML file. (A scripted version of steps 1-3 is sketched after these instructions.)
4. In a new browser window, log into the Amplify console. Note: We will be using the Oregon
(us-west-2) Region for this tutorial.
5. In the Get Started section, under Host your web app, choose the orange Get started button.
6. Select Deploy without Git provider. This is what you should see on the screen:

7. Choose the Continue button.


8. In the App name field, enter GettingStarted.
9. For Environment name, enter dev.
10. Select the Drag and drop method. This is what you should see on your screen:


11. Choose the Choose files button.


12. Select the ZIP file you created in Step 3.
13. Choose the Save and deploy button.
14. After a few seconds, you should see the message Deployment successfully completed.
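
If you prefer to script the file creation and packaging from steps 1-3, the sketch below does both with Python's standard library; it produces the same ZIP you would upload via drag and drop (the file names are just the ones used in this tutorial).
Python
# Write index.html and package it into a ZIP for the Amplify drag-and-drop upload.
import zipfile

html = """<!DOCTYPE html>
<html>
<head><meta charset="UTF-8"><title>Hello World</title></head>
<body>Hello World</body>
</html>
"""

with open("index.html", "w", encoding="utf-8") as f:
    f.write(html)

# Zip only the HTML file, as the tutorial requires.
with zipfile.ZipFile("site.zip", "w") as zf:
    zf.write("index.html")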

HOW TO TEST YOUR WEB APP

1. Select Domain Management in the left navigation menu.


2. Copy and paste the URL displayed in the form into your browser.
Your web app will load in a new browser tab and render "Hello World." Congratulations!
How to use a cloud service for a user authentication flow, allowing users to sign up, sign in, and reset their password.

Reference link for Azure Active Directory Authentication documentation:

https://learn.microsoft.com/en-us/azure/active-directory-b2c/add-password-reset-policy?pivots=b2c-user-flow

https://learn.microsoft.com/en-us/azure/active-directory/authentication/

Create a sign-up and sign-in user flow


The sign-up and sign-in user flow handles both sign-up and sign-in experiences with a single
configuration. Users of your application are led down the right path depending on the
context.
1. Sign in to the Azure portal.
2. Select the Directories + Subscriptions icon in the portal toolbar.

3. On the Portal settings | Directories + subscriptions page, find your Azure AD
B2C directory in the Directory name list, and then select Switch.
4. In the Azure portal, search for and select Azure AD B2C.
5. Under Policies, select User flows, and then select New user flow.

6. On the Create a user flow page, select the Sign up and sign in user flow.

7. Under Select a version, select Recommended, and then select Create.

8. Enter a Name for the user flow. For example, signupsignin1.

9. Under Identity providers select at least one identity provider:
○ Under Local accounts, select one of the following: Email signup,
User ID signup, Phone signup, Phone/Email signup, or None. Learn
more.
○ Under Social identity providers, select any of the external social or
enterprise identity providers you've set up. Learn more.
10. Under Multifactor authentication, if you want to require users to verify their
identity with a second authentication method, choose the method type and
when to enforce multifactor authentication (MFA). Learn more.
11. Under Conditional access, if you've configured Conditional Access policies for
your Azure AD B2C tenant and you want to enable them for this user flow, select
the Enforce conditional access policies check box. You don't need to specify a
policy name. Learn more.
12. Under User attributes and token claims, choose the attributes you want to
collect from the user during sign-up and the claims you want returned in the
token. For the full list of values, select Show more, choose the values, and then
select OK.
Note
You can also create custom attributes for use in your Azure AD B2C tenant.

13. Select Create to add the user flow. A prefix of B2C_1 is automatically prepended
to the name.
14. Follow the steps to handle the flow for "Forgot your password?" within the sign-
up or sign-in policy.
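
Once the user flow exists, a client application can invoke it to sign users up or in and receive tokens. The sketch below uses the Microsoft Authentication Library for Python (msal); the tenant name, client ID, and API scope are placeholders that must be replaced with your own B2C values, so treat it as an outline rather than working configuration.
Python
# Sign a user up/in through an Azure AD B2C user flow (pip install msal).
import msal

TENANT = "contoso"                       # placeholder B2C tenant name
POLICY = "B2C_1_signupsignin1"           # the user flow created above
AUTHORITY = f"https://{TENANT}.b2clogin.com/{TENANT}.onmicrosoft.com/{POLICY}"

app = msal.PublicClientApplication(
    client_id="<your-app-registration-client-id>",  # placeholder
    authority=AUTHORITY,
)

# Opens a browser window that renders the B2C sign-up/sign-in page,
# including the "Forgot your password?" link configured below.
result = app.acquire_token_interactive(
    scopes=["https://contoso.onmicrosoft.com/tasks-api/tasks.read"],  # placeholder API scope
)
print(result.get("id_token_claims"))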

Set up a password reset flow in Azure Active Directory

In a sign-up and sign-in journey, a user can reset their own password by using the
Forgot your password? link. This self-service password reset flow applies to local accounts
in Azure Active Directory B2C (Azure AD B2C) that use an email address or a username with
a password for sign-in.

The password reset flow involves the following steps:


1. On the sign-up and sign-in page, the user selects the Forgot your password? link.
Azure AD B2C initiates the password reset flow.
2. In the next dialog that appears, the user enters their email address, and then
selects Send verification code. Azure AD B2C sends a verification code to the
user's email account. The user copies the verification code from the email,
enters the code in the Azure AD B2C password reset dialog, and then selects
Verify code.
3. The user can then enter a new password. (After the email is verified, the user
can still select the Change e-mail button; see Hide the change email button.)

To set up self-service password reset for the sign-up or sign-in user flow:
1. Sign in to the Azure portal.
2. In the portal toolbar, select the Directories + Subscriptions icon.
3. In the Portal settings | Directories + subscriptions pane, find your Azure AD B2C
directory in the Directory name list, and then select Switch.
4. In the Azure portal, search for and select Azure AD B2C.
5. Select User flows.
6. Select a sign-up or sign-in user flow (of type Recommended) that you want to
customize.
7. In the menu under Settings, select Properties.
8. Under Password configuration, select Self-service password reset.
9. Select Save.
10. In the left menu under Customize, select Page layouts.
11. In Page Layout Version, select 2.1.3 or later.
12. Select Save.

Test the password reset flow


1. Select a sign-up or sign-in user flow (Recommended type) that you want to test.
2. Select Run user flow.
3. For Application, select the web application named webapp1 that you registered
earlier. The Reply URL should show https://jwt.ms.
4. Select Run user flow.
5. On the sign-up or sign-in page, select Forgot your password?.
6. Verify the email address of the account that you created earlier, and then select
Continue.
7. In the dialog that's shown, change the password for the user, and then select
Continue. The token is returned to https://jwt.ms and the browser displays it.
8. Check the returned token's isForgotPassword claim value. If it exists and is set to true, the user has reset the password. (A sketch for inspecting this claim locally follows below.)
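
The https://jwt.ms page decodes the token in the browser; if you capture the token yourself, you can inspect the same claim locally. Below is a sketch using the PyJWT package; the signature check is deliberately skipped because this is inspection of a test token, not authentication.
Python
# Inspect the isForgotPassword claim of a returned token (pip install PyJWT).
import jwt  # PyJWT

token = "<paste the token that was returned to https://jwt.ms>"

# Decode without verifying the signature -- acceptable for inspecting a
# test token, never acceptable when actually authenticating a user.
claims = jwt.decode(token, options={"verify_signature": False})
print(claims.get("isForgotPassword"))  # True if the user reset the password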

What is CI/CD?
CI or Continuous Integration is the practice of automating the integration of code
changes from multiple developers into a single codebase. It is a software development
practice where developers commit their work frequently to a central code repository (GitHub or Stash). Automated tools then build the newly committed code and perform code review, etc., as required upon integration.

The key goals of Continuous Integration are to find and address bugs quicker, make
the process of integrating code across a team of developers easier, improve software quality
and reduce the time it takes to release new feature updates. Some popular CI tools are
Jenkins, TeamCity, and Bamboo.

With Continuous Integration, developers frequently commit to a shared common


repository using a version control system such as Git. A continuous integration pipeline can
automatically run builds, store the artifacts, run unit tests and even conduct code reviews
using tools like Sonar. We can configure the CI pipeline to be triggered every time there is a
commit/merge in the codebase.

How CI Works?
Below is a pictorial representation of a CI pipeline- the workflow from developers
checking in their code to its automated build, test, and final notification of the build status.


Once the developer commits their code to a version control system like Git, it triggers
the CI pipeline which fetches the changes and runs automated build and unit tests. Based on
the status of the step, the server then notifies the concerned developer whether the
integration of the new code to the existing code base was a success or a failure.
This helps in finding and addressing bugs much more quickly, makes the team more productive by freeing developers from manual tasks, and helps teams deliver updates to their customers more frequently. It has been found that integrating the entire development cycle can reduce the developer's time involved by ~25-30%.
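
To make the moving parts concrete, the toy sketch below imitates what a CI server does on each trigger: fetch the latest code, run the automated tests, and report a status. Real CI tools (Jenkins, GitHub Actions, etc.) express this declaratively; the specific commands here are illustrative assumptions about the project.
Python
# Toy sketch of a CI server's job: fetch, test, and report status.
import subprocess

def run(cmd):
    """Run one pipeline step; return True if it succeeded."""
    return subprocess.run(cmd).returncode == 0

def ci_pipeline(repo_dir):
    steps = [
        ["git", "-C", repo_dir, "pull"],       # fetch the newly committed code
        ["python", "-m", "pytest", repo_dir],  # run the automated unit tests
    ]
    for cmd in steps:
        if not run(cmd):
            print("Build FAILED - notify the developer (e.g., email/Slack)")
            return
    print("Build SUCCEEDED - the integration is clean")

ci_pipeline(".")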

CD or Continuous Delivery
CD or Continuous Delivery is carried out after Continuous Integration to make sure
that we can release new changes to our customers quickly in an error-free way. This includes
running integration and regression tests in the staging area (similar to the production
environment) so that the final release is not broken in production. It automates the release process so that we have a release-ready product at all times and can deploy our application at any point in time.

Continuous Delivery automates the entire software release process. The final decision
to deploy to a live production environment can be triggered by the developer/project lead
as required. Some popular CD tools are AWS CodeDeploy, Jenkins, and GitLab.

Why CD?
Continuous delivery helps developers test their code in a production-like environment, preventing last-minute or post-production surprises. These tests may include UI testing, load testing, integration testing, etc. It helps developers discover and resolve bugs preemptively.

By automating the software release process, CD contributes to low-risk releases,
lower costs, better software quality, improved productivity levels, and most importantly, it
helps us deliver updates to customers faster and more frequently.
How CI and CD work together?
The below image describes how Continuous Integration combined with Continuous
Delivery helps quicken the software delivery process with lower risks and improved quality.

CI / CD workflow
We have seen how Continuous Integration automates the process of building, testing,
and packaging the source code as soon as it is committed to the code repository by the
developers. Once the CI step is completed, the code is deployed to the staging environment
where it undergoes further automated testing (like Acceptance testing, Regression testing,
etc.). Finally, it is deployed to the production environment for the final release of the product.

Introduction to GitHub Actions


Reference Link:
How to use Jenkins Pipeline v/s GitHub Actions:
https://www.youtube.com/watch?v=JKNF0VEoQPs
https://docs.github.com/en/actions/learn-github-actions/understanding-github-actions

GitHub Actions

GitHub Actions is a continuous integration and continuous delivery (CI/CD) platform


that allows you to automate your build, test, and deployment pipeline. You can create
workflows that build and test every pull request to your repository, or deploy merged pull
requests to production.

GitHub Actions goes beyond just DevOps and lets you run workflows when other
events happen in your repository. For example, you can run a workflow to automatically add
the appropriate labels whenever someone creates a new issue in your repository. You only
need a GitHub repository to create and run a GitHub Actions workflow. In this guide, you'll
add a workflow that demonstrates some of the essential features of GitHub Actions.
The following example shows you how GitHub Actions jobs can be automatically
triggered, where they run, and how they can interact with the code in your repository.

The components of GitHub Actions


You can configure a GitHub Actions workflow to be triggered when an event occurs in
your repository, such as a pull request being opened or an issue being created. Your
workflow contains one or more jobs which can run in sequential order or in parallel. Each
job will run inside its own virtual machine runner, or inside a container, and has one or more
steps that either run a script that you define or run an action, which is a reusable extension
that can simplify your workflow.

Workflows
A workflow is a configurable automated process that will run one or more jobs.
Workflows are defined by a YAML file checked in to your repository and will run when
triggered by an event in your repository, or they can be triggered manually, or at a defined
schedule.

Workflows are defined in the .github/workflows directory in a repository, and a


repository can have multiple workflows, each of which can perform a different set of tasks.
For example, you can have one workflow to build and test pull requests, another workflow
to deploy your application every time a release is created, and still another workflow that
adds a label every time someone opens a new issue.

Events
An event is a specific activity in a repository that triggers a workflow run. For
example, activity can originate from GitHub when someone creates a pull request, opens an
issue, or pushes a commit to a repository. You can also trigger a workflow run on a schedule,
by posting to a REST API, or manually.
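
As a sketch of the "posting to a REST API" trigger, the snippet below calls GitHub's workflow-dispatch endpoint with the requests package. The owner, repository, workflow file name, and token are placeholders, and the target workflow must declare a workflow_dispatch trigger for this call to work.
Python
# Trigger a GitHub Actions workflow run via the REST API (pip install requests).
import requests

OWNER, REPO = "octocat", "hello-world"   # placeholders
WORKFLOW = "learn-github-actions.yml"    # a workflow with `on: workflow_dispatch`
URL = f"https://api.github.com/repos/{OWNER}/{REPO}/actions/workflows/{WORKFLOW}/dispatches"

resp = requests.post(
    URL,
    headers={
        "Accept": "application/vnd.github+json",
        "Authorization": "Bearer <your-personal-access-token>",  # placeholder
    },
    json={"ref": "main"},   # the branch to run the workflow on
    timeout=10,
)
resp.raise_for_status()     # GitHub returns 204 No Content on success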

Jobs
A job is a set of steps in a workflow that execute on the same runner. Each step is
either a shell script that will be executed, or an action that will be run. Steps are executed in
order and are dependent on each other. Since each step is executed on the same runner, you
can share data from one step to another. For example, you can have a step that builds your
application followed by a step that tests the application that was built.

Actions
An action is a custom application for the GitHub Actions platform that performs a
complex but frequently repeated task. Use an action to help reduce the amount of repetitive
code that you write in your workflow files. An action can pull your git repository from
GitHub, set up the correct toolchain for your build environment, or set up the authentication
to your cloud provider.

Runners
A runner is a server that runs your workflows when they're triggered. Each runner
can run a single job at a time. GitHub provides Ubuntu Linux, Microsoft Windows, and macOS
runners to run your workflows; each workflow run executes in a fresh, newly-provisioned
virtual machine. GitHub also offers larger runners, which are available in larger
configurations.

Create an example workflow


GitHub Actions uses YAML syntax to define the workflow. Each workflow is stored as
a separate YAML file in your code repository, in a directory named .github/workflows.
You can create an example workflow in your repository that automatically triggers a series
of commands whenever code is pushed. In this workflow, GitHub Actions checks out the
pushed code, installs the bats testing framework, and runs a basic command to output the
bats version: bats -v.
1. In your repository, create the .github/workflows/ directory to store your workflow
files.
2. In the .github/workflows/ directory, create a new file called learn-github-actions.yml
and add the following code.
YAML
name: learn-github-actions
run-name: ${{ github.actor }} is learning GitHub Actions
on: [push]
jobs:
  check-bats-version:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: '14'
      - run: npm install -g bats
      - run: bats -v
3. Commit these changes and push them to your GitHub repository.

Visualizing the workflow file


In this diagram, you can see the workflow file you just created and how the GitHub
Actions components are organized in a hierarchy. Each step executes a single action or shell
script. Steps 1 and 2 run actions, while steps 3 and 4 run shell scripts.

Viewing the activity for a workflow run


When your workflow is triggered, a workflow run is created that executes the
workflow. After a workflow run has started, you can see a visualization graph of the run's
progress and view each step's activity on GitHub.
1. On GitHub.com, navigate to the main page of the repository.
2. Under your repository name, click Actions.

3. In the left sidebar, click the workflow you want to see.

4. Under "Workflow runs", click the name of the run you want to see.

5. Under Jobs or in the visualization graph, click the job you want to see.

6. View the results of each step.

Creating your first workflow


1. Create a .github/workflows directory in your repository on GitHub if this directory
does not already exist.
2. In the .github/workflows directory, create a file named github-actions-demo.yml. For
more information, see "Creating new files."
3. Copy the following YAML contents into the github-actions-demo.yml file:
YAML
name: GitHub Actions Demo
run-name: ${{ github.actor }} is testing out GitHub Actions
on: [push]
jobs:
  Explore-GitHub-Actions:
    runs-on: ubuntu-latest
    steps:
      - run: echo "The job was automatically triggered by a ${{ github.event_name }} event."
      - run: echo "This job is now running on a ${{ runner.os }} server hosted by GitHub!"
      - run: echo "The name of your branch is ${{ github.ref }} and your repository is ${{ github.repository }}."
      - name: Check out repository code
        uses: actions/checkout@v3
      - run: echo "The ${{ github.repository }} repository has been cloned to the runner."
      - run: echo "The workflow is now ready to test your code on the runner."
      - name: List files in the repository
        run: |
          ls ${{ github.workspace }}
      - run: echo "This job's status is ${{ job.status }}."
4. Scroll to the bottom of the page and select Create a new branch for this commit and
start a pull request. Then, to create a pull request, click Propose new file.

Committing the workflow file to a branch in your repository triggers the push event and runs
your workflow.

Viewing your workflow results


1. On GitHub.com, navigate to the main page of the repository.
2. Under your repository name, click Actions.

3. In the left sidebar, click the workflow you want to see.

4. From the list of workflow runs, click the name of the run you want to see.

5. Under Jobs , click the Explore-GitHub-Actions job.

6. The log shows you how each of the steps was processed. Expand any of the steps to
view its details.

For example, you can see the list of files in your repository:

The example workflow you just added is triggered each time code is pushed to the branch,
and shows you how GitHub Actions can work with the contents of your repository.
