FSD Week 3
What is DevOps?
A DevOps team includes developers and IT operations working collaboratively
throughout the product life cycle, in order to increase the speed and quality of software
deployment. It’s a new way of working, a cultural shift, that has significant implications for
teams and the organizations they work for. DevOps is an evolving philosophy and framework
that encourages faster, better application development and faster release of new or revised
software features or products to customers.
This closer relationship between “Dev” and “Ops” permeates every phase of the
DevOps lifecycle: from initial software planning to code, build, test, and release phases and
on to deployment, operations, and ongoing monitoring. This relationship propels a
continuous customer feedback loop of further improvement, development, testing, and
deployment. One result of these efforts can be the more rapid, continual release of necessary
feature changes or additions.
● Plan. This phase helps define business value and requirements. Sample tools
include Jira or Git to help track known issues and perform project management.
JB PORTALS 1
FULL STACK DEVELOPMENT - WEEK 3
● Code. This phase involves software design and the creation of software code.
Sample tools include GitHub, GitLab, Bitbucket, or Stash.
● Build. In this phase, you manage software builds and versions, and use
automated tools to help compile and package code for future release to
production. You use source code repositories or package repositories that also
“package” infrastructure needed for product release. Sample tools include
Docker, Ansible, Puppet, Chef, Gradle, Maven, or JFrog Artifactory.
● Test. This phase involves continuous testing (manual or automated) to ensure
optimal code quality. Sample tools include JUnit, Codeception, Selenium, Vagrant,
TestNG, or BlazeMeter.
● Deploy. This phase can include tools that help manage, coordinate, schedule, and
automate product releases into production. Sample tools include Puppet, Chef,
Ansible, Jenkins, Kubernetes, OpenShift, OpenStack, Docker, or Jira.
● Operate. This phase manages software during production. Sample tools include
Ansible, Puppet, PowerShell, Chef, Salt, or Otter.
● Monitor. This phase involves identifying and collecting information about issues
from a specific software release in production. Sample tools include New Relic,
Datadog, Grafana, Wireshark, Splunk, Nagios, or Slack.
Configuration Management
Configuration management is a systems engineering process for establishing
consistency of a product’s attributes throughout its life. In the technology world,
configuration management is an IT management process that tracks individual configuration
items of an IT system. IT systems are composed of IT assets that vary in granularity.
An IT asset may represent a piece of software, or a server, or a cluster of servers. The
following focuses on configuration management as it directly applies to IT software assets
and software asset CI/CD.
Software configuration management is a systems engineering process that tracks and
monitors changes to a software system's configuration metadata. In software development,
configuration management is commonly used alongside version control and CI/CD
infrastructure. This section focuses on its modern application and use in agile CI/CD software
environments.
Continuous Integration
Continuous integration (CI) is the practice of automating the integration of code
changes from multiple contributors into a single software project. It’s a primary DevOps best
practice, allowing developers to frequently merge code changes into a central repository
where builds and tests then run. Automated tools are used to assert the new code’s
correctness before integration.
Continuous integration (CI) is the practice that requires developers to integrate code
into a shared repository often and obtain rapid feedback on its success during active
development.
This is done as developers finish a specific piece of code and it has successfully passed
unit testing. CI also means creating a build in a tool like Bamboo, Jenkins, or GitLab that runs
after developer check-in, runs any tests that can run against that build (unit and integration
tests, for example), and reports to the development team whether it passed or failed. The
end goal is to create small workable chunks of code that are validated and integrated back
into the centralized code repository as frequently as possible. As such, CI is the foundation
for both continuous delivery and continuous deployment DevOps practices.
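A minimal local sketch of this feedback loop, assuming git and a POSIX shell are available: here a pre-commit hook stands in for the CI server, running a toy test suite on every check-in. File names and the test itself are illustrative.

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.name demo
git config user.email demo@example.com

# A tiny stand-in "test suite" that the hook will run on every commit.
cat > run_tests.sh <<'EOF'
#!/bin/sh
grep -q "hello" app.txt
EOF
chmod +x run_tests.sh

# Pre-commit hook: reject the commit if the tests fail (CI-style rapid feedback).
cat > .git/hooks/pre-commit <<'EOF'
#!/bin/sh
./run_tests.sh || { echo "tests failed - commit rejected"; exit 1; }
echo "tests passed"
EOF
chmod +x .git/hooks/pre-commit

echo "hello world" > app.txt
git add .
git commit -qm "passing change"
```

A real CI server (Jenkins, Bamboo, GitLab) does the same thing on a central machine after every push, rather than locally before every commit.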
Automated Testing
Test automation is the practice of automatically reviewing and validating a software
product, such as a web application, to make sure it meets predefined quality standards for
code style, functionality (business logic), and user experience. Testing practices typically
involve the following stages:
● Unit testing: validates individual units of code, such as a function, so it works as
expected
● Integration testing: ensures several pieces of code can work together without
unintended consequences
● End-to-end testing: validates that the application meets the user’s expectations
● Exploratory testing: takes an unstructured approach to reviewing numerous areas of
an application from the user perspective, to uncover functional or visual issues
The different types of testing are often visualized as a pyramid. As you climb up the pyramid,
the number of tests in each type decreases, and the cost of creating and running tests
increases.
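As a minimal illustration of the base of the pyramid, a unit test calls one unit with known inputs and asserts on its output. The add function here is a hypothetical unit under test, written in shell only to keep the sketch self-contained.

```shell
# Hypothetical unit under test: adds two integers.
add() { echo $(( $1 + $2 )); }

# The unit test: known input, expected output, pass/fail verdict.
result=$(add 2 3)
if [ "$result" -eq 5 ]; then
  echo "unit test passed"
else
  echo "unit test FAILED"
  exit 1
fi
```

Integration and end-to-end tests follow the same input/expected-output pattern, but exercise several units together or the whole application.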
Infrastructure as Code
Infrastructure as Code (IaC) is the managing and provisioning of infrastructure
through code instead of through manual processes. With IaC, configuration files are created
that contain your infrastructure specifications, which makes it easier to edit and distribute
configurations. It also ensures that you provision the same environment every time. By
codifying and documenting your configuration specifications, IaC aids configuration
management and helps you to avoid undocumented, ad-hoc configuration changes.
IaC is used to define code that, when executed, can stand up an entire physical or
virtual environment, including computing and networking infrastructure. It is a type of IT
infrastructure that operations teams can automatically manage and provision through code,
rather than using a manual process. An example of using IaC would be to use Terraform to
rapidly stand up nodes in a cloud environment, and then have the ability to destroy and
rebuild the environment consistently each time. Doing so gives users the ability to version
control their infrastructure and to be more agile when recovering from infrastructure
outages.
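As a rough sketch of the idea (a shell stand-in, not real Terraform), the environment is declared in a spec file, and a provisioning step can be re-run to rebuild the identical environment after an outage. The spec format and directory names are illustrative.

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# The "infrastructure" specification, as code: each line declares a
# directory that must exist. (A stand-in for a real Terraform/Ansible spec.)
cat > infra.spec <<'EOF'
web/logs
web/static
db/data
EOF

provision() {
  while read -r dir; do
    mkdir -p "$dir"   # idempotent: safe to run any number of times
  done < infra.spec
}

provision          # first run: builds the environment
rm -rf web         # simulate an outage or configuration drift
provision          # re-running rebuilds the identical environment
ls -d web/logs web/static db/data
```

Because infra.spec lives in the repository, the "infrastructure" is version controlled exactly like application code, which is the core IaC benefit described above.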
Continuous Delivery
Continuous delivery is the practice of making every change to source code ready for a
production release as soon as automated testing validates it. This includes automatically building, testing and
deploying. An approach to code approval and delivery approval needs to be in place to
ensure that the code can be deployed in an automated fashion with appropriate pauses for
approval depending on the specific needs of a program. This also implies the same process
for the lower environments, like QA, UA, etc.
Continuous Deployment
Continuous Deployment is the practice that strives to automate production
deployment end to end. In order for this practice to be implemented, a team needs to have
extremely high confidence in their automated tests. The ultimate goal is that as long as the
build has passed all automated tests, the code will be deployed. However, manual steps in
the deployment process can be maintained if necessary.
For example, a team can determine what type of changes can be deployed to
production in a completely automated fashion, while other types of changes may maintain a
manual approval step. Such a hybrid approach is a good way to begin to adopt this practice.
Continuous Monitoring
DevOps monitoring entails overseeing the entire development process, from planning
through development, integration, testing, deployment, and operations. It involves a complete
and real-time view of the status of applications, services, and infrastructure in the
production environment. Features such as real-time streaming, historical replay, and
visualizations are critical components of application and service monitoring. Continuous
monitoring is the practice of proactively monitoring, alerting, and taking action in key areas
to give teams visibility into the health of the application in the production environment, so
that teams are aware of the impact of every deployment and can reduce the
time between issue identification and resolution.
Configuration Management
What is Version Control System?
Version control systems allow multiple developers, designers, and team members to
work together on the same project. It helps them work smarter and faster! A version control
system is critical to ensure everyone has access to the latest code and modifications are
tracked. As development becomes increasingly complex and teams grow, there's a bigger need
to manage multiple versions and components of entire products.
The responsibility of the Version control system is to keep all the team members on
the same page. It makes sure that everyone on the team is working on the latest version of
the file and, most importantly, makes sure that all these people can work simultaneously on
the same project.
Let's try to understand the process with the help of this diagram:
There are three workstations, or three different developers at three different locations, and
there's one repository acting as a server. The workstations use that repository either
to commit or to update tasks.
There may be a large number of workstations using a single server repository. Each
workstation will have its own working copy, and all these workstations will save their source
code into a particular server repository.
This makes it easy for any developer to access the task being done using the
repository. If any specific developer's system breaks down, then the work won't stop, as
there will be a copy of the source code in the central repository.
Collaboration
With so many people located in different places, there may be a need to
communicate for a particular reason, or a set of people may be working on the same project
from different regions.
Storing Versions
A project is completed through several versions; keeping all such commits in
a single place is a considerable challenge.
Fundamentals of Git
Git is the best choice for most software teams today. While every team is different and
should do their own analysis, here are the main reasons why version control with Git is
preferred over alternatives:
Git is good
Git has the functionality, performance, security and flexibility that most teams and individual
developers need. In side-by-side comparisons with
most other alternatives, many teams find that Git compares very favorably.
Git is a very well supported open source project with over a decade of solid
stewardship. The project maintainers have shown balanced judgment and a mature
approach to meeting the long term needs of its users with regular releases that improve
usability and functionality. The quality of the open source software is easily scrutinized and
countless businesses rely heavily on that quality.
As you edit files, Git sees them as modified, because you’ve changed them since your
last commit. You stage these modified files and then commit all your staged changes, and the
cycle repeats.
REFERENCE LINKS:
https://www.youtube.com/watch?v=PSJ63LULKHA
https://www.youtube.com/watch?v=8JJ101D3knE
https://www.youtube.com/watch?v=b5oQZdzA37I
Cloning an existing repository: git clone
If a project has already been set up in a central repository, the clone command is the
most common way for users to obtain a local development clone. Like git init, cloning is
generally a one-time operation. Once a developer has obtained a working copy, all version
control operations are managed through their local repository.
git clone <repo url>
git clone is used to create a copy or clone of remote repositories. You pass git clone a
repository URL. Git supports a few different network protocols and corresponding URL
formats. In this example, we'll be using the Git SSH protocol. Git SSH URLs follow a template
of: git@HOSTNAME:USERNAME/REPONAME.git
HOSTNAME: bitbucket.org
USERNAME: rhyolight
REPONAME: javascript-data-store
When executed, the latest version of the remote repo files on the main branch will be
pulled down and added to a new folder. The new folder will be named after the REPONAME
in this case javascript-data-store. The folder will contain the full history of the remote
repository and a newly created main branch.
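A runnable sketch of the clone workflow, using a local path in place of a hosted SSH/HTTPS URL (git treats both the same way; repository names here are illustrative and git must be installed):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Create a throwaway "remote" repository to stand in for a hosted one.
git init -q remote-repo
git -C remote-repo -c user.name=demo -c user.email=demo@example.com \
    commit --allow-empty -qm "initial commit"

# git clone <repo url> -- a local path works the same as an SSH/HTTPS URL.
git clone -q remote-repo my-clone
ls -d my-clone/.git    # the clone carries the full repository history
```

As with a real hosted URL, the new folder is named after the repository, and all subsequent version control operations run against this local copy.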
git add
The git add command adds a change in the working directory to the staging area. It
tells Git that you want to include updates to a particular file in the next commit. However, git
add doesn't really affect the repository in any significant way—changes are not actually
recorded until you run git commit.
In conjunction with these commands, you'll also need git status to view the state of
the working directory and the staging area.
How it works
The git add and git commit commands compose the fundamental Git workflow. These
are the two commands that every Git user needs to understand, regardless of their team’s
collaboration model. They are the means to record versions of a project into the repository’s
history.
Developing a project revolves around the basic edit/stage/commit pattern. First, you
edit your files in the working directory. When you’re ready to save a copy of the current state
of the project, you stage changes with git add. After you’re happy with the staged snapshot,
you commit it to the project history with git commit. The git reset command is used to undo
a commit or staged snapshot.
In addition to git add and git commit, a third command git push is essential for a
complete collaborative Git workflow. git push is utilized to send the committed changes to
remote repositories for collaboration. This enables other team members to access a set of
saved changes.
Instead of committing all of the changes you've made since the last commit, the stage
lets you group related changes into highly focused snapshots before actually committing them
to the project history. This means you can make all sorts of edits to unrelated files, then go
back and split them up into logical commits by adding related changes to the stage and
committing them piece by piece. As in any revision control system, it's important to create
atomic commits so that it's easy to track down bugs and revert changes with minimal impact
on the rest of the project.
Common options
git add <file>
Stage all changes in <file> for the next commit.
git add <directory>
Stage all changes in <directory> for the next commit.
git add -p
Begin an interactive staging session that lets you choose portions of a file to add to
the next commit. This will present you with a chunk of changes and prompt you for a
command. Use y to stage the chunk, n to ignore the chunk, s to split it into smaller chunks, e
to manually edit the chunk, and q to exit.
Examples
When you’re starting a new project, git add serves the same function as svn import.
To create an initial commit of the current directory, use the following two commands:
git add .
git commit
Once you’ve got your project up-and-running, new files can be added by passing the path to
git add:
git add hello.py
git commit
The above commands can also be used to record changes to existing files. Again, Git doesn’t
differentiate between staging changes in new files vs. changes in files that have already been
added to the repository.
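The edit/stage/commit cycle above can be sketched end to end in a throwaway repository (assuming git is installed; the file name and demo identity are illustrative):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.name demo
git config user.email demo@example.com

echo 'print("hello")' > hello.py
git add hello.py          # stage the new file for the next commit
git status --short        # "A  hello.py": staged, waiting to be committed
git commit -qm "Add hello.py"
git status --short        # clean: nothing left to commit
```

Note that git status between the two steps is what makes the staging area visible: before the commit it reports the staged file, and afterwards it reports a clean working directory.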
Git commit
The git commit command captures a snapshot of the project's currently staged
changes. Committed snapshots can be thought of as “safe” versions of a project—Git will
never change them unless you explicitly ask it to. Prior to the execution of git commit, the
git add command is used to promote or 'stage' changes to the project that will be stored in a
commit. These two commands, git commit and git add, are two of the most frequently used.
How it works
At a high-level, Git can be thought of as a timeline management utility. Commits are
the core building block units of a Git project timeline. Commits can be thought of as snapshots
or milestones along the timeline of a Git project. Commits are created with the git commit
command to capture the state of a project at that point in time. Git Snapshots are always
committed to the local repository. This is fundamentally different from SVN, wherein the
working copy is committed to the central repository. In contrast, Git doesn’t force you to
interact with the central repository until you’re ready. Just as the staging area is a buffer
between the working directory and the project history, each developer’s local repository is
a buffer between their contributions and the central repository.
Common options
git commit
Commit the staged snapshot. This will launch a text editor prompting you for a commit
message. After you’ve entered a message, save the file and close the editor to create the actual
commit.
git commit -a
Commit a snapshot of all changes in the working directory. This only includes modifications
to tracked files (those that have been added with git add at some point in their history).
git commit -m "commit message"
A shortcut command that immediately creates a commit with a passed commit message. By
default, git commit will open up the locally configured text editor, and prompt for a commit
message to be entered. Passing the -m option will forgo the text editor prompt in favor of an
inline message.
git commit -a -m "commit message"
A power user shortcut command that combines the -a and -m options. This combination
immediately creates a commit of all the staged changes and takes an inline commit message.
git commit --amend
This option adds another level of functionality to the commit command. Passing this option
will modify the last commit. Instead of creating a new commit, staged changes will be added
to the previous commit. This command will open up the system's configured text editor and
prompt to change the previously specified commit message.
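A small sketch of --amend in a throwaway repository (assuming git is installed; file and message contents are illustrative):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.name demo
git config user.email demo@example.com

echo one > notes.txt
git add notes.txt
git commit -qm "wip"               # first attempt at the commit

echo two >> notes.txt
git add notes.txt
git commit -q --amend -m "Add notes file"   # fold the fix into the last commit

git rev-list --count HEAD          # still a single commit
git log -1 --format=%s             # with the amended message
```

Because --amend rewrites the last commit rather than adding a new one, it is best reserved for commits that have not yet been pushed to a shared repository.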
Examples
git status
On branch main
Changes to be committed:
(use "git reset HEAD <file>..." to unstage)
new file: hello.py
The green output new file: hello.py indicates that hello.py will be saved with the next commit.
From here, the commit is created by executing:
git commit
This will open a text editor (customizable via git config) asking for a commit log message,
along with a list of what’s being committed:
# Please enter the commit message for your changes. Lines starting
# with '#' will be ignored, and an empty message aborts the commit.
# On branch main
# Changes to be committed:
# (use "git reset HEAD ..." to unstage)
#
# modified: hello.py
Git doesn't require commit messages to follow any specific formatting constraints,
but the canonical format is to summarize the entire commit on the first line in fewer than 50
characters, leave a blank line, then provide a detailed explanation of what's been changed. For
example:
Change the message displayed by hello.py
- Update the sayHello() function to output the user's name
- Change the sayGoodbye() function to a friendlier message
Git displays output similar to the following, which includes the commit time in UTC format:
commit 0e62ed6d9f39fa9bedf7efc6edd628b137fa781a
Author: Mike Jang <mjang@gitlab.com>
Date: Tue Nov 26 21:44:53 2019 +0000
commit 418879420b1e3a4662067bd07b64bb6988654697
Author: Marcin Sedlak-Jakubowski <msedlakjakubowski@gitlab.com>
Date: Mon Nov 4 19:58:27 2019 +0100
Fix typo
commit 21cc1fef11349417ed515557748369cfb235fc81
Author: Jacques Erasmus <jerasmus@gitlab.com>
It's important to remember that there is more than one way to 'undo' in a Git project.
Most of the discussion here touches on deeper topics that are more thoroughly
explained in pages specific to the relevant Git commands. The most commonly used 'undo'
tools are git checkout, git revert, and git reset. Some key points to remember are:
● Once changes have been committed they are generally permanent
● Use git checkout to move around and review the commit history
● git revert is the best tool for undoing shared public changes
● git reset is best used for undoing local private changes
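These points can be seen in a throwaway repository (assuming git is installed; file contents and messages are illustrative):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.name demo
git config user.email demo@example.com

echo v1 > app.txt; git add .; git commit -qm "v1"
echo v2 > app.txt; git add .; git commit -qm "v2"

# git revert: undo "v2" by adding a new commit -- safe for shared history.
git revert --no-edit HEAD
cat app.txt                        # contents are back to v1

# git reset: rewind local history -- best kept to private, unpushed commits.
git reset -q --hard HEAD~1
cat app.txt                        # back at the "v2" commit
```

The key difference: revert leaves the history intact and adds an inverse commit, while reset discards commits, which is why revert is preferred for changes others may have already pulled.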
In addition to the primary undo commands, we took a look at other Git utilities:
● git log for finding lost commits
● git clean for undoing uncommitted changes
● git add for modifying the staging index
Each of these commands has its own in-depth documentation. To learn more about a specific
command mentioned here, visit the corresponding links.
GIT BRANCHING
Git branches are effectively a pointer to a snapshot of your changes. When you want
to add a new feature or fix a bug—no matter how big or how small—you spawn a new branch
to encapsulate your changes. This makes it harder for unstable code to get merged into the
main code base, and it gives you the chance to clean up your feature's history before merging
it into the main branch.
The diagram above visualizes a repository with two isolated lines of development,
one for a little feature, and one for a longer-running feature. By developing them in branches,
it’s not only possible to work on both of them in parallel, but it also keeps the main branch
free from questionable code.
How it works
A branch represents an independent line of development. Branches serve as an
abstraction for the edit/stage/commit process. You can think of them as a way to request a
brand new working directory, staging area, and project history. New commits are recorded
in the history for the current branch, which results in a fork in the history of the project.
The git branch command lets you create, list, rename, and delete branches. It doesn’t
let you switch between branches or put a forked history back together again. For this reason,
git branch is tightly integrated with the git checkout and git merge commands.
Common Options
git branch
List all of the branches in your repository. This is synonymous with git branch --list.
git branch -a
List both local and remote-tracking branches.
Creating Branches
It's important to understand that branches are just pointers to commits. When you create a
branch, all Git needs to do is create a new pointer; it doesn't change the repository in any
other way. Starting from an existing repository, you create a branch with:
git branch crazy-experiment
Note that this only creates the new branch. To start adding commits to it, you need to select
it with git checkout, and then use the standard git add and git commit commands.
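A runnable sketch of creating and selecting a branch (the branch name follows the example above; --show-current assumes git 2.22 or later):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.name demo
git config user.email demo@example.com
git commit --allow-empty -qm "initial commit"

git branch crazy-experiment        # only creates a new pointer to the current commit
git branch                         # lists both branches; * marks the checked-out one
git checkout -q crazy-experiment   # select the branch so new commits land on it
git branch --show-current
```

Until the checkout, the repository is unchanged apart from the new pointer, which is why branch creation in Git is effectively instantaneous.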
Deleting Branches
Once you’ve finished working on a branch and have merged it into the main code base, you’re
free to delete the branch without losing any history:
git branch -d crazy-experiment
If the branch has unmerged changes, however, Git refuses to delete it:
error: The branch 'crazy-experiment' is not fully merged. If you are sure you want to delete
it, run 'git branch -D crazy-experiment'.
This protects you from losing access to that entire line of development. If you really want to
delete the branch (e.g., it’s a failed experiment), you can use the capital -D flag:
git branch -D crazy-experiment
The previous commands delete a local copy of the branch; it may still exist in
remote repos. To delete a remote branch, execute the following:
git push origin --delete crazy-experiment
Switching Branches
Switching branches is a straightforward operation. Executing the following will point
HEAD to the tip of <branchname>:
git checkout <branchname>
How it works
Git merge will combine multiple sequences of commits into one unified history. In the
most frequent use cases, git merge is used to combine two branches. The following examples
in this document will focus on this branch merging pattern. In these scenarios, git merge
takes two commit pointers, usually the branch tips, and will find a common base commit
between them. Once Git finds a common base commit it will create a new "merge commit"
that combines the changes of each queued merge commit sequence.
Say we have a new branch feature that is based off the main branch. We now want to
merge this feature branch into main:
git checkout main
git merge feature
Invoking this command will merge the specified branch feature into the current
branch, which we'll assume is main. Git will determine the merge algorithm automatically (discussed
below).
Merge commits are unique among commits in that they have two
parent commits. When creating a merge commit, Git will attempt to automatically merge the
separate histories for you. If Git encounters a piece of data that is changed in both histories,
it will be unable to combine them automatically, and a merge conflict results.
Merging
Once the previously discussed "preparing to merge" steps have been taken, a merge
can be initiated by executing git merge <branch>, where <branch> is the name of the branch that will be merged
into the receiving branch.
Our first example demonstrates a fast-forward merge. The code below creates a new
branch, adds two commits to it, then integrates it into the main line with a fast-forward
merge.
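A runnable version of that sequence in a throwaway repository (branch and file names are illustrative; the checkout "-" shorthand returns to the previously checked-out branch):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.name demo
git config user.email demo@example.com
git commit --allow-empty -qm "initial commit"

git checkout -qb new-feature             # create and switch to the feature branch
echo step1 >  feature.txt; git add .; git commit -qm "Start a feature"
echo step2 >> feature.txt; git add .; git commit -qm "Finish a feature"

git checkout -q -                        # back to the original branch
git merge -q new-feature                 # fast-forward: the branch pointer just moves up
git branch -d new-feature                # safe to delete once merged

git rev-list --count HEAD                # 3 commits, and no extra merge commit
```

Because the main line had no new commits of its own, Git simply moves its pointer forward to the feature tip; no merge commit is created.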
What is GitHub?
GitHub is a Git repository hosting service that provides a web-based graphical
interface. It is the world’s largest coding community. Putting code or a project on GitHub
brings it increased, widespread exposure. Programmers can find source code in many
different languages and use the command-line interface, Git, to make and keep track of any
changes.
GitHub helps every team member work together on a project from any location while
facilitating collaboration. You can also review previous versions created at an earlier point
in time.
Benefits of GitHub
GitHub can be separated into Git and Hub: Git, the version control system, and a hub
where developers collaborate. The GitHub service includes access controls as well as
collaboration features like task management, repository hosting, and team management.
The key benefits of GitHub are as follows.
● It is easy to contribute to open source projects via GitHub.
● It helps you create excellent documentation.
● You can attract recruiters by showing off your work. If you have a profile on GitHub,
you will have a higher chance of being recruited.
● It allows your work to get out there in front of the public.
● You can track changes in your code across versions.
Distributed Git (DGit)
DGit is short for “Distributed Git.” As many readers already know, Git itself is
distributed—any copy of a Git repository contains every file, branch, and commit in the
project’s entire history. DGit uses this property of Git to keep three copies of every
repository, on three different servers. The design of DGit keeps repositories fully available
without interruption even if one of those servers goes down. Even in the extreme case that
two copies of a repository become unavailable at the same time, the repository remains
readable; i.e., fetches, clones, and most of the web UI continue to work.
DGit performs replication at the application layer, rather than at the disk layer. Think
of the replicas as three loosely-coupled Git repositories kept in sync via Git protocols, rather
than identical disk images full of repositories. This design gives us great flexibility to decide
where to store the replicas of a repository and which replica to use for read operations.
To create a repository on GitHub:
1. In the upper-right corner of any page, select +, then click New repository.
2. Type a short, memorable name for your repository. For example, "hello-world".
3. Optionally, add a description of your repository. For example, "My first repository on
GitHub."
4. Choose a repository visibility: Public or Private.
5. Optionally, select Initialize this repository with a README.
6. Click Create repository.
Push to repositories
git push -u -f origin main
The -u (or --set-upstream) flag sets the remote origin as the upstream reference. This allows
you to later perform git push and git pull commands without having to specify an origin since
we always want GitHub in this case.
The -f (or --force) flag stands for force. This will automatically overwrite everything
in the remote directory. We’re using it here to overwrite the default README that GitHub
automatically initialized.
All together
git init
git add -A
git commit -m 'Added my project'
git remote add origin git@github.com:sammy/my-new-project.git
git push -u -f origin main
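The same sequence can be tried locally by letting a bare repository stand in for the GitHub remote (paths and names are illustrative; no -f is needed here because the stand-in remote starts empty):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# A local bare repository stands in for the GitHub remote in this sketch.
git init -q --bare origin-repo.git

mkdir project && cd project
git init -q
git config user.name demo
git config user.email demo@example.com
echo "# My Project" > README.md
git add -A
git commit -qm "Added my project"

git remote add origin ../origin-repo.git   # local path instead of git@github.com:...
git push -q -u origin HEAD                 # -u records origin as the upstream
git push                                   # later pushes need no arguments
```

Once -u has recorded the upstream, plain git push and git pull work with no arguments, exactly as described above.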
Versioning in GitHub
Lately I've been doing a lot of thinking around versioning in repositories. For all the
convenience and ubiquity of package.json, it does sometimes misrepresent the code that is
contained within a repository. For example, suppose I start out my project at v0.1.0 and
that's what's in my package.json file in my master branch. Then someone submits a pull
request that I merge in - the version number hasn't changed even though the repository now
no longer represents v0.1.0. The repository is actually now in an intermediate state, in
between v0.1.0 and the next official release.
To deal with that, I started changing the package.json version only long enough to
push a new release, and then I would change it to a dev version representing the next
scheduled release (such as v0.2.0-dev). That solved the problem of misrepresenting the
version number of the repository (provided people realize "dev" means "in flux day to day").
However, it introduced a yucky workflow that I really hated. When it was time for a release,
I'd have to:
1. Manually change the version in package.json.
2. Tag the version in the repo.
3. Publish to npm.
4. Manually change the version in package.json to a dev version.
5. Push to master.
There may be some way to automate this, but I couldn't figure out a really nice way to do it.
That process works well enough when you have no unplanned releases. However, what if
I'm working on v0.2.0-dev after v0.1.0 was released, and need to do a v0.1.1 release?
Add on top of this trying to create an automated changelog based on tagging, and things can
get a little bit tricky. My next thought was to have a release branch where the last published
release would live. Essentially, after v0.1.0, the release branch remains at v0.1.0 while the
master branch becomes v0.2.0-dev. If I need to do an intermediate release, then I merge
master onto release and change versions only in the release branch. Once again, this is a bit
messy because package.json is guaranteed to have different versions on master and release,
which always causes merge conflicts. This also means the changelog is updated only in the
release branch. This solution turned out to be more complex than I anticipated.
I'm still not sure of the right way to do this, but my high-level requirements are:
1. Make sure the version in package.json is always accurate.
2. Don't require people to change the version to make a commit.
3. Don't require people to use a special build command to make a commit.
4. Distinguish between development (in progress) work vs. official releases.
5. Be able to auto-increment the version number (via npm version).
Collaboration
You can invite users to become collaborators to your personal repository. If you're
using GitHub Free, you can add unlimited collaborators on public and private repositories.
1. Ask for the username of the person you're inviting as a collaborator. If they don't have
a username yet, they can sign up for GitHub. For more information, see "Signing up for
a new GitHub account".
JB PORTALS 26
FULL STACK DEVELOPMENT - WEEK 3
2. On GitHub.com, navigate to the main page of the repository.
3. Under your repository name, click Settings.
6. In the search field, start typing the name of person you want to invite, then click a
name in the list of matches.
8. The user will receive an email inviting them to the repository. Once they accept your
invitation, they will have collaborator access to your repository.
Migration in GitHub
A migration is the process of transferring data from a source location (either a
GitHub.com organization or a GitHub Enterprise Server instance) to a target GitHub
Enterprise Server instance. Migrations can be used to transfer your data when changing
platforms or upgrading hardware on your instance.
Types of migrations
There are three types of migrations you can perform:
● A migration from a GitHub Enterprise Server instance to another GitHub Enterprise
Server instance. You can migrate any number of repositories owned by any user or
organization on the instance. Before performing a migration, you must have site
administrator access to both instances.
● A migration from a GitHub.com organization to a GitHub Enterprise Server instance.
You can migrate any number of repositories owned by the organization. Before
performing a migration, you must have administrative access to the GitHub.com
organization as well as site administrator access to the target instance.
● Trial runs are migrations that import data to a staging instance. These can be useful
to see what would happen if a migration were applied to your GitHub Enterprise
Server instance. We strongly recommend that you perform a trial run on a staging
instance before importing data to your production instance.
Benefits of Cloud Hosting:
1. Scalability: With Cloud hosting, it is easy to grow and shrink the number and size of
servers based on the need. This is done by either increasing or decreasing the resources
in the cloud. This ability to alter plans due to fluctuation in business size and needs is a
superb benefit of cloud computing, especially when experiencing a sudden growth in
demand.
2. Instant: Whatever you want is available almost instantly in the cloud; resources can be
provisioned without waiting for hardware to be procured.
3. Save Money: An advantage of cloud computing is the reduction in hardware costs.
Instead of purchasing in-house equipment, hardware needs are left to the vendor. For
companies that are growing rapidly, new hardware can be large, expensive, and
inconvenient. Cloud computing alleviates these issues because resources can be
acquired quickly and easily. Even better, the cost of repairing or replacing equipment is
passed to the vendors. Along with purchase costs, off-site hardware cuts internal power
costs and saves space. Large data centers can take up precious office space and produce
a large amount of heat. Moving to cloud applications or storage can help maximize space
and significantly cut energy expenditures.
4. Reliability: Rather than being hosted on one single instance of a physical server, hosting
is delivered on a virtual partition that draws its resource, such as disk space, from an
extensive network of underlying physical servers. If one server goes offline it will have
no effect on availability, as the virtual servers will continue to pull resources from the
remaining network of servers.
5. Physical Security: The underlying physical servers are still housed within data centers
and so benefit from the security measures that those facilities implement to prevent
people from accessing or disrupting them on-site.
6. Outsource Management: While you manage the business, someone else manages your
computing infrastructure, so you do not need to worry about management or upgrades.
To clarify how cloud computing has changed the commercial deployment of systems,
consider the following three examples:
1. Amazon Web Services (AWS): One of the most successful cloud-based
businesses is Amazon Web Services (AWS), an Infrastructure as a
Service (IaaS) offering in which customers pay rent for virtual computers on
Amazon's infrastructure.
2. Microsoft Azure Platform: Microsoft's Azure platform enables .NET Framework
applications to run over the internet as an alternative platform for Microsoft
developers. This is the classic Platform as a Service (PaaS).
3. Google: Google has built a worldwide network of data centers to service its search
engine. From this service, Google has captured a large share of the world's
advertising revenue. Using that revenue, Google offers free software to users built
on that infrastructure. This is called Software as a Service (SaaS).
What is Cloud Computing Infrastructure?
Cloud computing infrastructure is the collection of hardware and software elements
needed to enable cloud computing. It includes computing power, networking, and storage, as well
as an interface for users to access their virtualized resources. The virtual resources mirror a
physical infrastructure, with components like servers, network switches, memory and storage
clusters.
In traditional hosting services, IT infrastructure was rented out for a specific period
of time, with pre-determined hardware configuration. The client paid for the configuration
and time, regardless of the actual use. With the help of the IaaS cloud computing platform
layer, clients can dynamically scale the configuration to meet changing requirements and are
billed only for the services actually used.
IaaS providers offer the following services -
1. Compute: Computing as a Service includes virtual central processing units and
virtual main memory for the VMs provisioned to end users.
2. Storage: The IaaS provider provides back-end storage for storing files.
3. Network: Network as a Service (NaaS) provides networking components such as
routers, switches, and bridges for the VMs.
4. Load balancers: It provides load balancing capability at the infrastructure layer.
Top IaaS providers offering IaaS cloud computing platforms:
● Amazon Web Services: Elastic Compute Cloud (EC2), Elastic MapReduce, Route 53,
Virtual Private Cloud, etc. The cloud computing platform pioneer, Amazon offers auto
scaling, cloud monitoring, and load balancing features as part of its portfolio.
● Netmagic Solutions: Netmagic IaaS Cloud. Netmagic runs from data centers in Mumbai,
Chennai, and Bangalore, and a virtual data center in the United States. Plans are
underway to extend services to West Asia.
● Rackspace: Cloud servers, cloud files, cloud sites, etc. The cloud computing platform
vendor focuses primarily on enterprise-level hosting services.
● Reliance Communications: Reliance Internet Data Center. RIDC supports both
traditional hosting and cloud services, with data centers in Mumbai, Bangalore,
Hyderabad, and Chennai. The cloud services offered by RIDC include IaaS and SaaS.
A PaaS provider offers the following services -
1. Programming languages
PaaS providers provide various programming languages for the developers to
develop the applications. Some popular programming languages provided by PaaS providers
are Java, PHP, Ruby, Perl, and Go.
2. Application frameworks
PaaS providers provide application frameworks to simplify application development.
Some popular application frameworks provided by PaaS providers are Node.js, Drupal,
Joomla, WordPress, Spring, Play, Rack, and Zend.
3. Databases
PaaS providers provide various databases such as ClearDB, PostgreSQL, MongoDB,
and Redis to communicate with the applications.
4. Other tools
PaaS providers provide various other tools that are required to develop, test, and
deploy the applications.
Advantages of PaaS
There are the following advantages of PaaS -
1) Simplified Development: PaaS allows developers to focus on development and
innovation without worrying about infrastructure management.
2) Lower risk: No need for up-front investment in hardware and software. Developers only
need a PC and an internet connection to start building applications.
3) Prebuilt business functionality: Some PaaS vendors also provide predefined business
functionality so that users can avoid building everything from scratch and can start their
projects directly.
4) Instant community: PaaS vendors frequently provide online communities where
developers can get ideas, share experiences, and seek advice from others.
5) Scalability: Applications deployed can scale from one to thousands of users without any
changes to the applications.
Disadvantages of PaaS cloud computing layer
1) Vendor lock-in: One has to write the applications according to the platform provided by
the PaaS vendor, so the migration of an application to another PaaS vendor would be a
problem.
2) Data Privacy: Corporate data, whether critical or not, is private; if it is not located
within the walls of the company, there can be a risk to data privacy.
3) Integration with the rest of the system's applications: It may happen that some
applications are local and some are in the cloud, so there are chances of increased
complexity when we want to use data in the cloud together with local data.
Providers and their services:
● Google App Engine (GAE): App Identity, URL Fetch, Cloud Storage client library, Log
service
Software as a Service | SaaS
SaaS is also known as "On-Demand Software". It is a software distribution model in
which services are hosted by a cloud service provider. These services are available to end
users over the internet, so end users do not need to install any software on their devices
to access these services.
Social Networks - As we all know, social networking sites are used by the general public, so
social networking service providers use SaaS for their convenience and to handle the general
public's information.
Mail Services - To handle the unpredictable number of users and the load on e-mail services,
many e-mail providers offer their services using SaaS.
Characteristics of SaaS
1. One to Many
SaaS services are offered as a one-to-many model, meaning a single instance of the
application is shared by multiple users.
2. Multi-device support
SaaS services can be accessed from any device, such as desktops, laptops, tablets,
phones, and thin clients.
3. API Integration
SaaS services easily integrate with other software or services through standard APIs.
4. No client-side installation
SaaS services are accessed directly from the service provider over an internet
connection, so no software installation is required.
Disadvantages of SaaS
1) Latency issue: Since data and applications are stored in the cloud at a variable distance
from the end user, interacting with the application may involve greater latency than a local
deployment. Therefore, the SaaS model is not suitable for applications that demand
response times in milliseconds.
2) Total dependency on the internet: Without an internet connection, most SaaS
applications are not usable.
3) Switching between SaaS vendors is difficult: Switching SaaS vendors involves the
difficult and slow task of transferring very large data files over the internet and then
converting and importing them into another SaaS.
Cloud Deployment Model
It works as your virtual computing environment with a choice of deployment model
depending on how much data you want to store and who has access to the Infrastructure.
Public Cloud
The name says it all. It is accessible to the public. Public deployment models in the
cloud are perfect for organizations with growing and fluctuating demands. It also makes a
great choice for companies with low-security concerns. You pay a cloud service
provider for networking services, compute virtualization, and storage available on the public
internet. It is also a great delivery model for development and testing teams, since its
configuration and deployment are quick and easy, making it an ideal choice for test
environments.
Benefits of Public Cloud
● Minimal Investment - As a pay-per-use service, there is no large upfront cost, and it is
ideal for businesses that need quick access to resources.
● No Hardware Setup - The cloud service providers fully fund the entire Infrastructure.
● No Infrastructure Management - Using the public cloud does not require an in-house
team.
Limitations of Public Cloud
● Data Security and Privacy Concerns - Since it is accessible to all, it does not fully
protect against cyber-attacks and could lead to vulnerabilities.
● Reliability Issues - Since the same server network is open to a wide range of users, it
can lead to malfunctions and outages.
● Service/License Limitation - While there are many resources you can share with
other tenants, there is a usage cap.
Private Cloud
Now that you understand what the public cloud could offer you, of course, you are
keen to know what a private cloud can do. Companies that look for cost efficiency and greater
control over data & resources will find the private cloud a more suitable choice.
It means that it will be integrated with your data center and managed by your IT team.
Alternatively, you can also choose to host it externally. The private cloud offers greater
opportunities for customization to meet a specific organization's requirements. It's also a
wise choice for mission-critical processes that may have frequently changing requirements.
Benefits of Private Cloud
● Data Privacy - It is ideal for storing corporate data where only authorized personnel
get access.
● Security - Segmentation of resources within the same Infrastructure can help with
better access and higher levels of security.
● Supports Legacy Systems - This model supports legacy systems that cannot access the
public cloud.
Limitations of Private Cloud
● Higher Cost - With the benefits you get, the investment will also be larger than the
public cloud. Here, you will pay for software, hardware, and resources for staff and
training.
● Fixed Scalability - The hardware you choose will accordingly determine how you can
scale.
● High Maintenance - Since it is managed in-house, the maintenance costs also increase.
Community Cloud
The community cloud operates in a way that is similar to the public cloud. There's just
one difference - it allows access to only a specific set of users who share common objectives
and use cases. This type of cloud computing deployment model is managed and hosted
internally or by a third-party vendor. However, you can also choose a combination of the
two.
Benefits of Community Cloud
● Smaller Investment - A community cloud is much cheaper than the private and public
cloud and provides great performance.
● Setup Benefits - The protocols and configuration of a community cloud must align
with industry standards, allowing customers to work much more efficiently.
Limitations of Community Cloud
● Shared Resources - Due to restricted bandwidth and storage capacity, community
resources often pose challenges.
● Not as Popular - Since this is a recently introduced model, it is not that popular or
widely available across industries.
Hybrid Cloud
As the name suggests, a hybrid cloud is a combination of two or more cloud
architectures. While each model in the hybrid cloud functions differently, it is all part of the
same architecture. Further, as part of this cloud computing deployment model, internal or
external providers can offer resources.
Let's understand the hybrid model better. A company with critical data will prefer
storing it on a private cloud, while less sensitive data can be stored on a public cloud. The
hybrid cloud is also frequently used for 'cloud bursting': suppose an organization runs an
application on-premises; under heavy load, it can burst into the public cloud.
Benefits of Hybrid Cloud
● Cost-Effectiveness - The overall cost of a hybrid solution decreases since it majorly
uses the public cloud to store data.
● Security - Since data is properly segmented, the chances of data theft from attackers
are significantly reduced.
● Flexibility - With higher levels of flexibility, businesses can create custom solutions
that fit their exact requirements.
Limitations of Hybrid Cloud
● Complexity - Setting up a hybrid cloud is complex, since it needs to integrate two or
more cloud architectures.
● Specific Use Case - This model makes more sense for organizations that have multiple
use cases or need to separate critical and sensitive data.
Virtualization in Cloud Computing
Virtualization is the "creation of a virtual (rather than actual) version of something,
such as a server, a desktop, a storage device, an operating system or network resources". In
other words, virtualization is a technique which allows a single physical instance of a
resource or an application to be shared among multiple customers and organizations. It does
this by assigning a logical name to a physical resource and providing a pointer to that
physical resource when demanded.
The machine on which the virtual machine is created is known as the host machine,
and the virtual machine is referred to as the guest machine.
Types of Virtualization:
1. Hardware Virtualization.
2. Operating system Virtualization.
3. Server Virtualization.
4. Storage Virtualization.
1) Hardware Virtualization:
When the virtual machine software or virtual machine manager (VMM) is installed
directly on the hardware system, it is known as hardware virtualization. The main job of the
hypervisor is to control and monitor the processor, memory, and other hardware resources.
After virtualization of the hardware system, we can install different operating systems on it
and run different applications on those OSs.
Usage:
Hardware virtualization is mainly done for the server platforms, because controlling virtual
machines is much easier than controlling a physical server.
3) Server Virtualization:
When the virtual machine software or virtual machine manager (VMM) is installed
directly on the server system, it is known as server virtualization.
Usage:
Server virtualization is done because a single physical server can be divided into multiple
servers on demand and for balancing the load.
4) Storage Virtualization:
Storage virtualization is the process of grouping the physical storage from multiple
network storage devices so that it looks like a single storage device.
Storage virtualization is also implemented by using software applications.
Usage:
Storage virtualization is mainly done for back-up and recovery purposes.
Difference between Cloud Services IaaS, PaaS, and SaaS:
● Technical understanding: IaaS requires technical knowledge; PaaS requires some
knowledge for the basic setup; SaaS has no requirement about technicalities, as the
company handles everything.
● User controls: In IaaS, the user controls the operating system, runtime, middleware,
and application data; in PaaS, the data of the application; in SaaS, nothing.
Enterprises use AWS to reduce the capital expenditure of building their own private IT
infrastructure (which can be expensive, depending upon the enterprise's size and nature).
AWS has its own physical fiber network that connects Availability Zones, regions, and
edge locations. All the maintenance cost is also borne by AWS, which saves a fortune for the
enterprises.
The Amazon Web Services (AWS) platform provides more than 200 fully featured
services from data centers located all over the world, and is the world's most comprehensive
cloud platform. Amazon Web Services is an online platform that provides scalable and cost-
effective cloud computing solutions. AWS is a broadly adopted cloud platform that offers
several on-demand operations like compute power, database storage, content delivery, etc.,
to help corporates scale and grow.
Applications of AWS
The most common applications of AWS are storage and backup, websites, gaming, mobile,
web, and social media applications. Some of the most crucial applications in detail are as
follows:
1. Websites
Businesses can host their websites on the AWS cloud, similar to other web
applications.
2. Gaming
There is a lot of computing power needed to run gaming applications. AWS makes it
easier to provide the best online gaming experience to gamers across the world.
3. Big Data Management and Analytics
● Amazon Elastic MapReduce to process large amounts of data via the Hadoop
framework.
● Amazon Kinesis to analyze and process streaming data.
● AWS Glue to handle extract, transform, and load (ETL) jobs.
● Amazon Elasticsearch Service to enable a team to perform log analysis and
application monitoring with the help of the open-source tool Elasticsearch.
4. Artificial Intelligence
● Amazon Lex to offer voice and text chatbot technology.
● Amazon Polly to provide text-to-speech conversion, as used in Alexa Voice Services
and Echo devices.
● Amazon Rekognition to analyze images and faces.
5. Game Development
● AWS game development tools are used by large game development companies; they
offer developer back-end services, analytics, and various developer tools.
● AWS allows developers to host game data as well as store it to analyze gamers'
performance and develop the game accordingly.
Features of AWS
1) Flexibility
● The difference between AWS and traditional IT models is flexibility.
● The traditional models used to deliver IT solutions require large investments in new
architecture, programming languages, and operating systems. Although these
investments are valuable, it takes time to adopt new technologies, which can also slow
down your business.
● The flexibility of AWS allows organizations to choose which programming models,
languages, and operating systems are better suited for their project, so they do not
have to learn new skills to adopt new technologies.
● Flexibility means that migrating legacy applications to the cloud is easy and cost-
effective. Instead of re-writing the applications to adopt new technologies, you just
need to move the applications to the cloud and tap into advanced computing
capabilities.
● Building applications in AWS is like building applications using existing hardware
resources.
● Larger organizations run in a hybrid mode, i.e., some pieces of the application run in
their data center, and other portions of the application run in the cloud.
● The flexibility of AWS is a great asset for organizations to deliver products with
updated technology on time, enhancing overall productivity.
2) Cost-effective
● Cost is one of the most important factors that need to be considered in delivering IT
solutions.
● For example, developing and deploying an application can incur a low cost, but after
successful deployment, there is a need for hardware and bandwidth. Owning your own
infrastructure can incur considerable costs, such as power, cooling, real estate, and
staff.
● The cloud provides on-demand IT infrastructure that lets you consume only the
resources you actually need. In AWS, you are not limited to a set amount of resources
such as storage, bandwidth, or computing resources, as it is very difficult to predict
the requirements of every resource. Therefore, we can say that the cloud provides
flexibility by maintaining the right balance of resources.
● AWS requires no upfront investment, long-term commitment, or minimum spend.
● You can scale up or scale down as the demand for resources increases or decreases
respectively.
● AWS allows you to access resources almost instantly. The ability to respond to changes
quickly, no matter whether the changes are large or small, means that we can take new
opportunities to meet business challenges that could increase revenue and reduce
cost.
3) Secure
● AWS provides a scalable cloud-computing platform that offers customers
end-to-end security and end-to-end privacy.
● AWS incorporates security into its services and provides documentation describing
how to use the security features.
● AWS maintains the confidentiality, integrity, and availability of your data, which is of
the utmost importance to AWS.
Physical security: Amazon has many years of experience in designing, constructing, and
operating large-scale data centers. The AWS infrastructure is housed in AWS-controlled
data centers throughout the world. The data centers are physically secured to prevent
unauthorized access.
Secure services: Each service provided by the AWS cloud is secure.
Data privacy: Personal and business data can be encrypted to maintain data privacy.
4) Experienced
● The AWS cloud provides high levels of scale, security, reliability, and privacy.
● AWS has built an infrastructure based on lessons learned from over sixteen years of
experience managing the multi-billion-dollar Amazon.com business.
● Amazon continues to benefit its customers by enhancing its infrastructure
capabilities.
● Nowadays, Amazon has become a global web platform that serves millions of
customers, and AWS has been evolving since 2006, serving hundreds of thousands of
customers worldwide.
How to create and setup a virtual machine (VM) on Amazon Web Service
To create a new virtual machine instance on AWS, follow these steps:
1. Open the Amazon EC2 console.
2. From the EC2 console dashboard, select Launch Instance.
3. The Choose an Amazon Machine Image (AMI) page displays a list of basic machine
configurations (AMIs) to choose from. Select the AMI for Windows Server 2019 Base
or later. Note that these AMIs are marked Free tier eligible.
4. On the Choose an Instance Type page, select the t2.micro instance type (default).
5. On the Choose an Instance Type page, select Review and Launch to let the wizard
complete the other configuration settings for you.
6. On the Review Instance Launch page, under Security Groups, you'll see that the
wizard created and selected a security group for you. You will need to specify the security
group that was created in step 3 of the Prerequisites section.
○ Choose Edit security groups.
○ On the Configure Security Group page, choose Select an existing security group.
○ In the table, select the security group from the list of existing security groups.
○ Choose Review and Launch.
7. On the Review Instance Launch page, select Launch to create the new virtual machine.
8. When prompted for a key pair, select Choose an existing key pair. Then select the key
pair that you created in step 2 of the Prerequisites section.
Do not select Proceed without a key pair. If you launch your instance without a key
pair, you will not be able to connect to it.
9. Once the Instance State column says that the VM is Running, you can then try to
connect to it via RDP.
10. With the instance row selected, click the Connect button in the top menu.
11. On the Connect to instance page, select the RDP client tab. Select the Download
remote desktop file and save the .rdp file somewhere on your local computer.
12. Next, select the Get password button.
13. Choose Browse and navigate to the private key (.pem) file that you created when you
launched the instance.
14. Choose Decrypt Password. The console displays the default administrator password
for the instance under Password, replacing the Get password link shown previously.
Save this password in a safe place. This password is required to connect to the instance.
15. Select Download remote desktop file to save the .rdp file to your local computer. You
will need this file when you connect to your instance using the Remote Desktop
Connection app.
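For comparison, a similar launch can be expressed with the AWS CLI rather than the console wizard. This is only a sketch: the AMI ID, key-pair name, and security-group ID below are placeholders, and the command is attempted only when the CLI and credentials are actually available:

```shell
# Hypothetical CLI equivalent of the console launch steps above.
launch_vm() {
  # Placeholder IDs/names -- substitute the AMI, key pair, and
  # security group from your own account.
  aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type t2.micro \
    --key-name my-key-pair \
    --security-group-ids sg-0123456789abcdef0
}

# Only attempt the launch when the aws CLI is installed and credentials
# are configured in the environment; otherwise skip.
if command -v aws >/dev/null 2>&1 && [ -n "${AWS_ACCESS_KEY_ID:-}" ]; then
  launch_vm
else
  echo "aws CLI or credentials not available; skipping launch"
fi
```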
In this module, you will use the AWS Amplify console to deploy the static resources for your
web application. In subsequent modules, you will add dynamic functionality to these pages
using AWS Lambda and Amazon API Gateway to call remote RESTful APIs.
Key concepts
Static website – A static website has fixed content, unlike dynamic websites. Static websites
are the most basic type of website and are the easiest to create. All that is required is creating
a few HTML pages and publishing them to a web server.
Web hosting – Provides the technologies/services needed for the website to be viewed on
the internet.
AWS Regions – Separate geographic areas that AWS uses to house its infrastructure. These
are distributed around the world so that customers can choose a Region closest to them to
host their cloud infrastructure there.
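As the "Static website" definition above says, all that is required is a few HTML pages and a web server. As a quick local illustration (the folder name is arbitrary), Python's built-in http.server module can publish a folder of pages:

```shell
set -e
# Create a folder with a single static page.
mkdir -p site
cd site
echo '<h1>Hello, static world</h1>' > index.html

# Serve it locally on port 8000 (commented out here; Ctrl+C to stop):
# python3 -m http.server 8000
```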
HOW TO CREATE A WEB APP USING AMPLIFY CONSOLE
1. Open your favorite text editor on your computer. Create a new file and paste the following
HTML into it:
<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8">
<title>Hello World</title>
</head>
<body>
Hello World
</body>
</html>
2. Save the file as index.html.
3. ZIP (compress) only the HTML file.
4. In a new browser window, log into the Amplify console. Note: We will be using the Oregon
(us-west-2) Region for this tutorial.
5. In the Get Started section, under Host your web app, choose the orange Get started button.
6. Select Deploy without Git provider.
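Steps 1-3 can also be reproduced from the command line. A sketch follows (the archive name site.zip is arbitrary; `python3 -m zipfile` is used here so no separate zip utility is required):

```shell
set -e
WORKDIR=$(mktemp -d)
cd "$WORKDIR"

# Step 1: create index.html with the page shown above.
cat > index.html <<'EOF'
<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8">
<title>Hello World</title>
</head>
<body>
Hello World
</body>
</html>
EOF

# Steps 2-3: the file is saved; ZIP only the HTML file.
python3 -m zipfile -c site.zip index.html
python3 -m zipfile -l site.zip
```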
References:
https://learn.microsoft.com/en-us/azure/active-directory-b2c/add-password-reset-policy?pivots=b2c-user-flow
https://learn.microsoft.com/en-us/azure/active-directory/authentication/
Create a sign-up and sign-in user flow in Azure AD B2C
1. Sign in to the Azure portal.
2. In the portal toolbar, select the Directories + Subscriptions icon.
3. On the Portal settings | Directories + subscriptions page, find your Azure AD
B2C directory in the Directory name list, and then select Switch.
4. In the Azure portal, search for and select Azure AD B2C.
5. Under Policies, select User flows, and then select New user flow.
6. On the Create a user flow page, select the Sign up and sign in user flow.
9. Under Identity providers select at least one identity provider:
○ Under Local accounts, select one of the following: Email signup,
User ID signup, Phone signup, Phone/Email signup, or None.
○ Under Social identity providers, select any of the external social or
enterprise identity providers you've set up.
10. Under Multifactor authentication, if you want to require users to verify their
identity with a second authentication method, choose the method type and
when to enforce multifactor authentication (MFA).
11. Under Conditional access, if you've configured Conditional Access policies for
your Azure AD B2C tenant and you want to enable them for this user flow, select
the Enforce conditional access policies check box. You don't need to specify a
policy name.
12. Under User attributes and token claims, choose the attributes you want to
collect from the user during sign-up and the claims you want returned in the
token. For the full list of values, select Show more, choose the values, and then
select OK.
Note
You can also create custom attributes for use in your Azure AD B2C tenant.
13. Select Create to add the user flow. A prefix of B2C_1 is automatically prepended
to the name.
14. Follow the steps to handle the flow for "Forgot your password?" within the sign-
up or sign-in policy.
Set up a password reset flow in Azure Active Directory
In a sign-up and sign-in journey, a user can reset their own password by using the
Forgot your password? link. This self-service password reset flow applies to local accounts
in Azure Active Directory B2C (Azure AD B2C) that use an email address or a username with
a password for sign-in.
To set up self-service password reset for the sign-up or sign-in user flow:
1. Sign in to the Azure portal.
2. In the portal toolbar, select the Directories + Subscriptions icon.
3. In the Portal settings | Directories + subscriptions pane, find your Azure AD B2C
directory in the Directory name list, and then select Switch.
4. In the Azure portal, search for and select Azure AD B2C.
5. Select User flows.
6. Select a sign-up or sign-in user flow (of type Recommended) that you want to
customize.
7. In the menu under Settings, select Properties.
8. Under Password configuration, select Self-service password reset.
9. Select Save.
10. In the left menu under Customize, select Page layouts.
11. In Page Layout Version, select 2.1.3 or later.
12. Select Save.
What is CI/CD?
CI, or Continuous Integration, is the practice of automating the integration of code
changes from multiple developers into a single codebase. It is a software development
practice in which developers commit their work frequently to a central code repository
(such as GitHub or Stash). Automated tools then build the newly committed code and
perform code review and other checks as required upon integration.
The key goals of Continuous Integration are to find and address bugs more quickly, make
the process of integrating code across a team of developers easier, improve software
quality, and reduce the time it takes to release new feature updates. Some popular CI tools
are Jenkins, TeamCity, and Bamboo.
How CI Works
Below is a pictorial representation of a CI pipeline: the workflow from developers
checking in their code to its automated build, test, and final notification of the build status.
Once the developer commits their code to a version control system like Git, it triggers
the CI pipeline which fetches the changes and runs automated build and unit tests. Based on
the status of the step, the server then notifies the concerned developer whether the
integration of the new code to the existing code base was a success or a failure.
This helps in finding and addressing bugs much more quickly, makes the team more productive
by freeing developers from manual tasks, and helps teams deliver updates to their
customers more frequently. It has been found that integrating the entire development cycle
can reduce the developer time involved by roughly 25–30%.
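As a rough sketch, the CI pipeline described above could be expressed as a GitHub Actions workflow file. The file name, Node.js version, and npm commands below are illustrative assumptions, not prescribed by any particular project:

```yaml
# .github/workflows/ci.yml — triggered on every push to the repository
name: CI
on: [push]

jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3        # fetch the newly committed code
      - uses: actions/setup-node@v3
        with:
          node-version: '18'
      - run: npm ci                      # automated build / dependency install
      - run: npm test                    # automated unit tests
```

If any step fails, the CI server marks the run as failed and notifies the committer, closing the feedback loop described above.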
CD or Continuous Delivery
CD, or Continuous Delivery, is carried out after Continuous Integration to make sure
that we can release new changes to our customers quickly and in an error-free way. This
includes running integration and regression tests in the staging area (an environment
similar to production) so that the final release is not broken in production. It automates
the release process so that we have a release-ready product at all times and can deploy
the application at any point in time.
Continuous Delivery automates the entire software release process. The final decision
to deploy to a live production environment can be triggered by the developer/project lead
as required. Some popular CD tools are AWS CodeDeploy, Jenkins, and GitLab.
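A delivery pipeline of this shape could be sketched as a GitHub Actions workflow with a staging job followed by a production job. The branch name, deploy script, and environment name below are illustrative; requiring manual approval for the production environment is a repository setting, not something the file itself enforces:

```yaml
# .github/workflows/cd.yml — runs after changes land on the main branch
name: CD
on:
  push:
    branches: [main]

jobs:
  deploy-staging:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: ./scripts/deploy.sh staging     # illustrative deploy script

  deploy-production:
    needs: deploy-staging          # only runs if the staging deploy succeeded
    runs-on: ubuntu-latest
    environment: production        # can be configured to require manual approval
    steps:
      - uses: actions/checkout@v3
      - run: ./scripts/deploy.sh production
```

This keeps the product release-ready at all times while leaving the final go/no-go decision to the developer or project lead.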
Why CD?
Continuous delivery helps developers test their code in a production-similar
environment, hence preventing any last moment or post-production surprises. These tests
may include UI testing, load testing, integration testing, etc. It helps developers discover and
resolve bugs preemptively.
By automating the software release process, CD contributes to low-risk releases,
lower costs, better software quality, improved productivity levels, and most importantly, it
helps us deliver updates to customers faster and more frequently.
How CI and CD Work Together
The image below shows how Continuous Integration combined with Continuous
Delivery helps speed up the software delivery process with lower risk and improved quality.
CI / CD workflow
We have seen how Continuous Integration automates the process of building, testing,
and packaging the source code as soon as it is committed to the code repository by the
developers. Once the CI step is completed, the code is deployed to the staging environment
where it undergoes further automated testing (like Acceptance testing, Regression testing,
etc.). Finally, it is deployed to the production environment for the final release of the product.
GitHub Actions
GitHub Actions goes beyond just DevOps and lets you run workflows when other
events happen in your repository. For example, you can run a workflow to automatically add
the appropriate labels whenever someone creates a new issue in your repository. You only
need a GitHub repository to create and run a GitHub Actions workflow. In this guide, you'll
add a workflow that demonstrates some of the essential features of GitHub Actions.
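The issue-labeling example mentioned above could be sketched as follows. The label name "triage" is an illustrative assumption; the `gh` CLI is preinstalled on GitHub-hosted runners:

```yaml
# .github/workflows/label-issues.yml — runs whenever an issue is opened
name: Label new issues
on:
  issues:
    types: [opened]

jobs:
  add-label:
    runs-on: ubuntu-latest
    permissions:
      issues: write            # allow the job's token to modify issues
    steps:
      - run: gh issue edit "$NUMBER" --add-label "triage"
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          GH_REPO: ${{ github.repository }}
          NUMBER: ${{ github.event.issue.number }}
```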
The following example shows you how GitHub Actions jobs can be automatically
triggered, where they run, and how they can interact with the code in your repository.
Workflows
A workflow is a configurable automated process that runs one or more jobs.
Workflows are defined by a YAML file checked in to your repository and run when
triggered by an event in your repository; they can also be triggered manually or on a
defined schedule.
Events
An event is a specific activity in a repository that triggers a workflow run. For
example, activity can originate from GitHub when someone creates a pull request, opens an
issue, or pushes a commit to a repository. You can also trigger a workflow run on a schedule,
by posting to a REST API, or manually.
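The different kinds of triggers described above are all declared in the `on:` section of a workflow file. A sketch combining several of them (branch name and cron schedule are illustrative):

```yaml
# Illustrative trigger section of a workflow file
on:
  push:                      # runs when commits are pushed
    branches: [main]
  pull_request:              # runs when a pull request is opened or updated
  schedule:
    - cron: '30 5 * * 1'     # runs every Monday at 05:30 UTC
  workflow_dispatch:         # allows manual runs from the Actions tab
```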
Jobs
A job is a set of steps in a workflow that execute on the same runner. Each step is
either a shell script that will be executed, or an action that will be run. Steps are executed in
order and are dependent on each other. Since each step is executed on the same runner, you
can share data from one step to another. For example, you can have a step that builds your
application followed by a step that tests the application that was built.
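The build-then-test pattern just described might look like this in a workflow file; the `make` targets are illustrative placeholders for a project's own build and test commands:

```yaml
jobs:
  build-then-test:
    runs-on: ubuntu-latest
    steps:
      # Both steps run on the same runner, so files produced by the
      # build step are still on disk when the test step runs.
      - uses: actions/checkout@v3
      - run: make build        # builds the application
      - run: make test         # tests the artifacts built above
```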
Actions
An action is a custom application for the GitHub Actions platform that performs a
complex but frequently repeated task. Use an action to help reduce the amount of repetitive
code that you write in your workflow files. An action can pull your git repository from
GitHub, set up the correct toolchain for your build environment, or set up the authentication
to your cloud provider.
Runners
A runner is a server that runs your workflows when they're triggered. Each runner
can run a single job at a time. GitHub provides Ubuntu Linux, Microsoft Windows, and macOS
runners to run your workflows; each workflow run executes in a fresh, newly-provisioned
virtual machine. GitHub also offers larger runners, which are available in larger
configurations.
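As a sketch, a job can be fanned out across all three GitHub-hosted runner types with a matrix strategy; each matrix entry gets its own fresh virtual machine:

```yaml
jobs:
  test:
    strategy:
      matrix:
        os: [ubuntu-latest, windows-latest, macos-latest]
    runs-on: ${{ matrix.os }}    # one job per operating system
    steps:
      - uses: actions/checkout@v3
      - run: echo "Running on ${{ runner.os }}"
```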
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: '14'
      - run: npm install -g bats
      - run: bats -v
3. Commit these changes and push them to your GitHub repository.
4. Under "Workflow runs", click the name of the run you want to see.
5. Under Jobs or in the visualization graph, click the job you want to see.
Committing the workflow file to a branch in your repository triggers the push event and runs
your workflow.
4. From the list of workflow runs, click the name of the run you want to see.
6. The log shows you how each of the steps was processed. Expand any of the steps to
view its details.
For example, you can see the list of files in your repository:
The example workflow you just added is triggered each time code is pushed to the branch,
and shows you how GitHub Actions can work with the contents of your repository.