AZ-400T00A ENU Trainer Handbook
The OWASP organization (Open Web Application Security Project) lists injection in its OWASP Top 10 2017 document as the number one threat to web application security.
In this tutorial we'll simulate a SQL injection attack…
Getting Started
●● Use the SQL Injection ARM template here1 to provision a web app and a SQL database with a known SQL injection vulnerability
●● Ensure you can browse to the 'Contoso Clinic' web app provisioned in your SQL injection resource group
How it works
1. Navigate to the Patients view and in the search box type a single quote (') and hit enter. You'll see an error page with a SQL exception, indicating that the search box is feeding the text into a SQL statement.
The helpful error message is enough to guess that the text in the search box is being appended to the SQL statement.
2. Next, try passing the SQL fragment 'AND FirstName = 'Kim'-- in the search box. You'll see that the results in the list below are filtered down to only show the entry with the first name Kim.
1 https://azure.microsoft.com/en-us/resources/templates/101-sql-injection-attack-prevention/
3. You can try to order the list by SSN by using this statement in the search box: 'order by SSN--
4. Now for the finale, run this drop statement to drop the table that holds the information being displayed in this page: 'AND 1=1; Drop Table Patients --. Once the operation is complete, try to load the page. You'll see that the view errors out with an exception indicating that the dbo.Patients table cannot be found. (A sketch of how such a payload travels in the HTTP request follows these steps.)
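To make the mechanics concrete, here is a hypothetical sketch of what the browser actually sends: the payload is simply URL-encoded into the query string and the server concatenates it into its SQL statement. The path and the SearchTerm parameter name are assumptions for illustration, not taken from the sample app.
# Hypothetical request carrying the "order by SSN" payload from step 3 (URL-encoded).
curl "https://<your-webapp>.azurewebsites.net/Patients?SearchTerm=%27%20order%20by%20SSN--"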
There's more
The Azure Security Center team has other playbooks2 you can look at to learn how vulnerabilities are exploited to trigger a virus attack and a DDoS attack.
2 https://azure.microsoft.com/en-gb/blog/enhance-your-devsecops-practices-with-azure-security-center-s-newest-playbooks/
Getting Started
●● Download and install the Threat Modeling Tool5
How to do it
1. Launch the Microsoft Threat Modeling Tool and choose the option to Create a Model…
3 https://docs.microsoft.com/en-us/azure/security/azure-security-threat-modeling-tool-feature-overview
4 https://blogs.msdn.microsoft.com/secdevblog/2018/09/12/microsoft-threat-modeling-tool-ga-release/
5 https://aka.ms/threatmodelingtool
2. From the right panel, search for and add Azure App Service Web App and Azure SQL Database, then link them up to show a request and response flow as demonstrated below…
3. From the toolbar menu select View -> Analysis view; the analysis view will show you a full list of threats categorized by severity.
4. To generate a full report of the threats, from the toolbar menu select Reports -> Create full report, then select a location to save the report.
A full report is generated with details of each threat, the SDLC phase it applies to, possible mitigations, and links to more details…
There's more
You can find a full list of threats used in the threat modeling tool here6.
6 https://docs.microsoft.com/en-us/azure/security/develop/threat-modeling-tool-threats
CI (Continuous Integration)
The CI build should be executed as part of the pull request (PR-CI) process and once the merge is
complete. Typically, the primary difference between the two runs is that the PR-CI process doesn't need
to do any of the packaging/staging that is done in the CI build. These CI builds should run static code
analysis tests to ensure that the code is following all rules for both maintenance and security. Several
tools can be used for this, such as one of the following:
●● SonarQube
●● Visual Studio Code Analysis and the Roslyn Security Analyzers
●● Checkmarx - A Static Application Security Testing (SAST) tool
●● BinSkim - A binary static analysis tool that provides security and correctness results for Windows
portable executables
●● and many more
Many of the tools seamlessly integrate into the Azure Pipelines build process. Visit the Visual Studio
Marketplace for more information on the integration capabilities of these tools.
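As an illustration only, assuming a SonarCloud project already exists and an analysis token is available, a stand-alone SonarScanner invocation looks roughly like the following; in Azure Pipelines the same settings are normally supplied through the marketplace build tasks rather than the raw CLI.
# Sketch only: the project key, organization, and token are placeholders you must supply.
sonar-scanner \
  -Dsonar.projectKey=<projectKey> \
  -Dsonar.organization=<organization> \
  -Dsonar.host.url=https://sonarcloud.io \
  -Dsonar.login=<analysisToken>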
In addition to code quality being verified with the CI build, two other validations that are often tedious or ignored are scanning 3rd party packages for vulnerabilities and checking OSS license usage. Often when we ask about 3rd party package vulnerabilities and licenses, the response is fear or uncertainty. Organizations that do try to manage 3rd party package vulnerabilities and/or OSS licenses explain that their process for doing so is tedious and manual. Fortunately, tooling from WhiteSource Software can make this identification process almost instantaneous. It runs as part of each build and reports all of the vulnerabilities and the licenses of the 3rd party packages. WhiteSource Bolt is a new
option, which includes a 6-month license with your Visual Studio Subscription. Bolt provides a report of
these items but doesn't include the advanced management and alerting capabilities that the full product
offers. With new vulnerabilities being regularly discovered, your build reports could change even though
your code doesn't. Checkmarx includes a similar WhiteSource Bolt integration so there could be some
overlap between the two tools. See Manage your open source usage and security as reported by your
CI/CD pipeline7 for more information about WhiteSource and the Azure Pipelines integration.
Infrastructure Vulnerabilities
In addition to validating the application, the infrastructure should also be validated to check for any
vulnerabilities. When using a public cloud such as Azure, deploying the application and shared infrastructure is easy, so it is important to validate that everything has been done securely. Azure includes many tools to help report and prevent these vulnerabilities, including Security Center and Azure Policy.
Also, we have set up a scanner that can ensure any public endpoints and ports have been whitelisted or
else it will raise an infrastructure issue. This is run as part of the Network pipeline to provide immediate
verification, but it also needs to be executed each night to ensure that there aren't any resources publicly
exposed that should not be.
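As a rough sketch of the kind of check such a scanner performs (the resource names are placeholders and the query is only one possible way to model "publicly exposed"), the Azure CLI can list NSG rules that allow inbound traffic from any source:
# Flag inbound rules that allow traffic from any source - candidates for an infrastructure issue.
az network nsg rule list --resource-group <resourceGroup> --nsg-name <nsgName> \
  --query "[?access=='Allow' && direction=='Inbound' && sourceAddressPrefix=='*'].{Rule:name, Port:destinationPortRange}" \
  --output table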
For more information, see also Secure DevOps Kit (AzSK) CICD Extensions for Azure8.
7 https://blogs.msdn.microsoft.com/visualstudioalmrangers/2017/06/08/manage-your-open-source-usage-and-security-as-reported-by-
your-cicd-pipeline/
8 https://marketplace.visualstudio.com/items?itemName=azsdktm.AzSDK-task
Penetration tests probe a running application to scan it for vulnerabilities. There are different levels of tests, categorized into passive tests and active tests. Passive tests scan the target site as is but don't try to manipulate the requests to expose additional vulnerabilities. They run fast and are usually a good candidate for a CI process that you want to complete in a few minutes. The active scan, by contrast, can be used to simulate many techniques that hackers commonly use to attack websites. These tests can also be referred to as dynamic or fuzz tests because they often try a large number of different combinations to see how the site reacts and to verify that it doesn't reveal any information. Active tests can run for much longer, and typically you don't want to cut them off at any particular time; they are better executed nightly as part of a separate Azure DevOps release.
One tool to consider for penetration testing is OWASP ZAP. OWASP is a worldwide not-for-profit
organization dedicated to helping improve the quality of software. ZAP is a free penetration testing tool
for beginners to professionals. ZAP includes an API and a weekly docker container image that can be
integrated into your deployment process. Refer to the owasp-zap-vsts-extension9 repo for details on how to set up the integration. Here we're going to explain the benefits of including this in your process.
The application CI/CD pipeline should run within a few minutes, so you don't want to include any long-running processes. The baseline scan is designed to identify vulnerabilities within a couple of minutes, making it a good option for the application CI/CD pipeline. The nightly OWASP ZAP pipeline can spider the website and run the full Active Scan to evaluate the largest set of combinations of possible vulnerabilities. OWASP ZAP can be installed on any machine in your network, but we like to use the OWASP ZAP weekly Docker container within Azure Container Services. This allows for the latest updates to the image and also makes it possible to spin up multiple instances of the image so several applications within an enterprise can be scanned at the same time. The following figure outlines the steps for both the application CI/CD pipeline and the longer-running nightly OWASP ZAP pipeline.
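For reference, the baseline scan can also be run directly from the weekly Docker image outside of a pipeline; the target URL below is a placeholder.
# Passive baseline scan of a target site using the weekly OWASP ZAP image.
docker run -t owasp/zap2docker-weekly zap-baseline.py -t https://<your-webapp>.azurewebsites.net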
9 https://github.com/deliveron/owasp-zap-vsts-extension
Even with continuous security validation running against every change to help ensure new vulnerabilities are not introduced, hackers are continuously changing their approaches, and new vulnerabilities are being discovered. Good monitoring tools allow you to detect, prevent, and remediate issues discovered while your application is running in production. Azure provides a number of tools for detection, prevention, and alerting using rules such as the OWASP Top Ten10 / modSecurity rules, and now even machine learning to detect anomalies and unusual behavior to help identify attackers.
Minimize security vulnerabilities by taking a holistic and layered approach to security including secure
infrastructure, application architecture, continuous validation, and monitoring. DevSecOps practices
enable your entire team to incorporate these security capabilities throughout the entire lifecycle of your
application. Establishing continuous security validation into your CI/CD pipeline can allow your applica-
tion to stay secure while you are improving the deployment frequency to meet needs of your business to
stay ahead of the competition.
10 https://www.owasp.org/images/7/72/OWASP_Top_10-2017_%28en%29.pdf.pdf
Example
It's 2:00 AM. Adam has just finished making all the changes to his super awesome code piece; the tests are all running fine. He hits commit -> push -> all commits pushed successfully to git. Happily, he drives back home. Ten minutes later he gets a call from the SecurityOps engineer: "Adam, did you push the secret key to our public repo?"
YIKES! That blah.config file, Adam thinks. How could I have forgotten to include that in .gitignore? The nightmare has already begun…
We can surely try to blame Adam here for committing the sin of checking in sensitive secrets and not following the recommended practices for managing configuration files, but the bigger point is that if the underlying toolchain had abstracted configuration management away from the developer, this fiasco would never have happened!
History
The virus was injected a long time ago…
Since the early days of .NET, there has been the notion of app.config and web.config files which provide a
playground for developers to make their code flexible by moving common configuration into these files.
When used effectively, these files have proven worthy for dynamic configuration changes. However, a lot of the time we see misuse of what goes into these files. A common culprit is how samples and documentation have been written: most samples on the web leverage these config files for storing key elements such as connection strings and even passwords. The values might be obfuscated, but what we are telling developers is "hey, this is a great place to push your secrets!". So, in a world where we are preaching the use of configuration files, we can't blame the developer for not managing their governance. Don't get me wrong; I am not challenging the use of configuration here, it is an absolute need for any good implementation. I am instead debating the use of multiple JSON, XML, and YAML files for maintaining configuration settings. Configs are great for ensuring the flexibility of the application; config files, however, in my opinion, are a pain to manage, especially across environments.
Separation of Concerns
One of the key reasons we would want to move the configuration away from source control is to deline-
ate responsibilities. Let’s define some roles to elaborate this, none of these are new concepts but rather a
high-level summary:
●● Configuration Custodian: Responsible for generating and maintaining the life cycle of configuration values. This includes CRUD operations on keys, ensuring the security of secrets, regeneration of keys and tokens, and defining configuration settings such as log levels for each environment. This role can be owned by operations engineers and security engineers while injecting configuration files through proper DevOps processes and CI/CD implementation. Note that they do not define the actual configuration but are custodians of its management.
●● Configuration Consumer: Responsible for defining the schema (loose term) for the configuration that needs to be in place and then consuming the configuration values in the application or library code. These are the dev and test teams; they should not be concerned with what the value of a key is, but rather with what the capability of the key is. For example, a developer may need a different ConnectionString in the application but does not need to know the actual value across different environments.
●● Configuration Store: The underlying store that is leveraged to store the configuration. While this can be a simple file, in a distributed application it needs to be a reliable store that can work across environments. The store is responsible for persisting values that modify the behavior of the application per environment but are not sensitive and do not require any encryption or HSM modules.
●● Secret Store: While you can store configuration and secrets together, doing so violates our separation of concerns principle, so the recommendation is to leverage a separate store for persisting secrets. This allows a secure channel for sensitive configuration data such as connection strings, enables the operations team to keep credentials, certificates, and tokens in one repository, and minimizes the security risk in case the Configuration Store gets compromised.
Depending on the type of backing store used and the latency of this store, it might be helpful to implement a caching mechanism within the external configuration store. For more information, see the Caching Guidance. The figure illustrates an overview of the External Configuration Store pattern with an optional local cache.
11 https://docs.microsoft.com/en-us/azure/key-vault/key-vault-overview
Manage Secrets, Tokens, and Certificates
Access to a key vault requires proper authentication and authorization before a caller (user or application)
can get access. Authentication establishes the identity of the caller, while authorization determines the
operations that they are allowed to perform.
Authentication is done via Azure Active Directory. Authorization may be done via role-based access
control (RBAC) or Key Vault access policy. RBAC is used when dealing with the management of the vaults
and key vault access policy is used when attempting to access data stored in a vault.
Azure Key Vault keys may be either software-protected or HSM-protected. For situations where you require added assurance, you can import or generate keys in hardware security modules (HSMs) that never leave the HSM boundary. Microsoft uses Thales hardware security modules, and you can use Thales tools to move a key from your HSM to Azure Key Vault.
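For example, assuming a Premium tier vault (the vault and key names are placeholders), a key can be created as HSM-protected at creation time with the Azure CLI:
# Create an HSM-protected key; use --protection software (or omit it) for a software-protected key.
az keyvault key create --vault-name <vaultName> --name <keyName> --protection hsm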
Finally, Azure Key Vault is designed so that Microsoft does not see or extract your data.
The Scenario
We will be building a Vehicle microservice which provides CRUD operations for sending vehicle data to a
CosmosDB document store. The sample micro-service needs to interact with the Configuration stores to
get values such as connectionstring, database name, collection name, etc. We interact with Azure Key
Vault for this purpose. Additionally, the application needs the Authentication token for Azure Key Vault
itself, these details along with other Configuration will be stored in Kubernetes.
Azure Key Vault is the Secret Store for all the secrets that are application specific. It allows for the creation of these secrets and also manages their lifecycle. It is recommended that you have a separate Azure Key Vault per environment to ensure isolation. The following command can be used to list the secrets already present in a Key Vault:
#Get a list of existing secrets
az keyvault secret list --vault-name <vaultName> -o table
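A new secret can then be added with a command along the following lines; the vault name, secret name, and value are placeholders for this scenario.
# Add (or update) a secret in the vault
az keyvault secret set --vault-name <vaultName> --name CosmosDbConnectionString --value "<connection-string>"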
The clientsecret is the only piece of secure information we store in Kubernetes; all the application-specific secrets are stored in Azure Key Vault. This is comparatively safer, since the above scripts do not need to go into the same git repo (so we don't check them in by mistake) and can be managed separately. We still control the expiry of this secret using Azure Key Vault, so the security engineer retains full control over access and permissions.
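A minimal sketch of how that single clientsecret might be placed in Kubernetes is shown below; the secret name and key are assumptions, not taken from the sample scripts.
# Store the service principal's client secret used to authenticate to Azure Key Vault.
kubectl create secret generic vehicle-api-secrets --from-literal=clientsecret=<servicePrincipalClientSecret>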
1. Injecting Values into the Container: During runtime, Kubernetes will automatically make the above values available as environment variables for the deployed containers, so the system does not need to worry about loading them from a configuration file. The Kubernetes configuration for the deployment looks like the following. As you will notice, we only provide a reference to the ConfigMaps and Secret that have been created instead of punching in the actual values.
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: vehicle-api-deploy # name for the deployment
  labels:
    app: vehicle-api # label that will be used to map the service; this tag is very important
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vehicle-api # label that will be used to map the service; this tag is very important
  template:
    metadata:
      labels:
        app: vehicle-api # label that will be used to map the service; this tag is very important
    spec:
      containers:
      - name: vehicleapi # name for the container configuration
        image: <yourdockerhub>/<youdockerimage>:<youdockertagversion> # **CHANGE THIS: the tag for the container to be deployed
        imagePullPolicy: Always # getting the latest image on each deployment
        ports:
Getting ready
SonarQube is an open platform to manage code quality. Originally famous in the Java community, SonarQube now supports over 20 programming languages. The joint investments made by Microsoft and SonarSource make SonarQube easier to integrate into Pipelines and better at analyzing .NET-based applications. You can read more about the capabilities offered by SonarQube at https://www.sonarqube.org/. SonarSource, the company behind SonarQube, offers a hosted SonarQube environment called SonarCloud.
DevSecOps helps bring a fresh perspective by introducing a culture of making everyone accountable for security, using automation to shift the process of inspection left, and looking at security overall through a 360-degree lens.
Anyone doing cloud at scale is likely to have multiple Azure subscriptions with hundreds of resource groups dedicated to the application in question: resource groups comprising resources such as Web Apps, Azure Functions, Blob Storage, Redis Cache, App Insights, Service Bus, and SQL Data Warehouse, among other Azure resource types. The ratio of security consultants to the number of releases makes it practically impossible for security to inspect the changes to these resource types to guarantee compliance with best practices.
Getting started
●● Start by installing the AzSDK extension from the Azure DevOps marketplace12
How to do it
The AzSK extension gives you the ability both to scan ARM templates, to identify shortcomings in them, and to scan the actual Azure resources provisioned in an Azure subscription for compliance with best practices. We'll cover both in this tutorial.
12 https://marketplace.visualstudio.com/items?itemName=azsdktm.AzSDK-task
3. Next, configure the task. To start scanning, you just need to provide the root folder where you have your ARM templates (and optionally mark it to scan recursively).
4. Run the build and let it complete. You will see a glimpse of the scan in the build output itself.
5. Once the build completes, you will find that the task has attached all the results to the build log. You may be surprised by the issues you find in your ARM templates.
6. Occasionally you will find issues which you have decided are safe to ignore. The task allows you to exclude such failures from the scan so that they will not be flagged as security issues or cause build failures. Configuring this is simple: take the generated CSV file, keep only the entries you need to ignore from the scan, commit it to your repository as a CSV file, and then specify this file in the task as below.
The task currently scans App Service, Storage, SQL, CDN, Traffic Manager, Document DB, Redis Cache,
and Data Lake services only.
●● In the newly added environment, add the AzSK task and configure the Azure DevOps SPN for the Azure subscription. Specify the name of the Azure resource group you wish to inspect as well as the ID of the Azure subscription the resource group resides in.
●● Run the release pipeline and wait for the release to complete. The default security policies are evaluated, and you have the option of customizing them or creating your own subset. If the setup isn't compliant, the release pipeline will fail by default.
The logs give you the full picture! The extension gives you a criticality rating, tells you whether the issue can be auto-fixed, and provides a description and recommendation for each specific line item.
Summary
Pushing security left does not mean applying the old ways of security compliance checks earlier in the life cycle; instead, it is an opportunity to leverage the DevOps mindset to innovate and automate security inspection so that development teams can release features with confidence.
How to do it
1. Launch the Azure Portal and create a new resource group called az-rg-labs-azurepolicy-001
2. From within the Azure portal, open Policy and, from the left pane, click on Definitions.
Definitions are effectively the rules you want to impose. You can use the built-in policies, duplicate and edit them, or create your own from various templates like those on GitHub13
13 https://github.com/Azure/azure-policy
3. You can see that the definition is a JSON file that takes a list of allowed locations and will cause a deny. You could duplicate this definition and add more checks if needed, but we'll just assign it. Click Assign.
4. When assigning the policy, a scope needs to be selected. There are three options, as listed below; choose Resource Group.
●● Management Group
●● Subscription
●● Resource Group
5. When assigning the policy, in the Basics section change the assignment name and add a description.
6. To test this policy, in the resource group az-rg-labs-azurepolicy-001 try to create a resource with a location other than the allowed location. (An equivalent CLI assignment is sketched after these steps.)
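The same assignment can also be scripted with the Azure CLI. The definition ID below is the commonly cited ID of the built-in "Allowed locations" policy and should be verified for your environment; the subscription ID is a placeholder.
# Assign the built-in "Allowed locations" policy to the lab resource group, allowing only eastus.
az policy assignment create \
  --name allowed-locations-lab \
  --scope "/subscriptions/<subscriptionId>/resourceGroups/az-rg-labs-azurepolicy-001" \
  --policy e56962a6-4747-49cd-b67b-bf8b01975c4c \
  --params '{"listOfAllowedLocations": {"value": ["eastus"]}}'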
There's more
Policies in Azure are a great way to scale enterprise rules for provisioning infrastructure across your estate. You can learn more about policies here14. Policies are part of Azure governance, which now also supports Blueprints15 and Resource Graph16.
14 https://docs.microsoft.com/en-us/azure/governance/policy/overview
15 https://azure.microsoft.com/en-gb/services/blueprints/
16 https://azure.microsoft.com/en-gb/features/resource-graph/
Lab
Integrating Azure Key Vault with Azure DevOps
In this lab, Integrating Azure KeyVault with Azure DevOps17, we'll cover:
●● Creating a key vault, from the Azure portal, to store a MySQL server password
●● Configuring permissions to let a service principal read the value
●● Retrieving the password in an Azure pipeline and passing it on to subsequent tasks
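A rough CLI equivalent of the first two steps is sketched below; the vault name, resource group, service principal, and password are placeholders.
# Create the vault, store the MySQL password, and allow a service principal to read secrets.
az keyvault create --name <vaultName> --resource-group <resourceGroup> --location <location>
az keyvault secret set --vault-name <vaultName> --name mysql-password --value "<password>"
az keyvault set-policy --name <vaultName> --spn <servicePrincipalAppId> --secret-permissions get list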
17 https://www.azuredevopslabs.com/labs/vstsextend/azurekeyvault/
Module Review and Takeaways
What is OWASP ZAP and how can it be used?
Suggested answer
What are the five stages of threat modeling?
Suggested answer
Why would you use WhiteSource Bolt?
Suggested answer
What is the Azure Key Vault and why would you use it?
Suggested answer
Answers
What is OWASP ZAP and how can it be used?
OWASP ZAP can be used for penetration testing. Testing can be active or passive. Conduct a quick baseline scan to identify vulnerabilities, and conduct more intensive scans nightly.
What are the five stages of threat modeling?
Define security requirements. Create an application diagram. Identify threats. Mitigate threats. Validate that threats have been mitigated.
Why would you use WhiteSource Bolt?
Use WhiteSource Bolt to automatically detect alerts on vulnerable open source components, outdated libraries, and license compliance issues in your code.
What is the Azure Key Vault and why would you use it?
Azure Key Vault is a cloud key management service which allows you to create, import, store, and maintain keys and secrets used by your cloud applications. The applications have no direct access to the keys, which helps improve security and control over the stored keys and secrets. Use Key Vault to centralize application and configuration secrets, securely store secrets and keys, and monitor access and use.
Module 7 Managing Code Quality and Security Policies
Module Overview
Technical debt refers to the trade-off between decisions that make something easy in the short term and the ones that make it maintainable in the long term. Companies constantly need to trade off between solving the immediate, pressing problems and fixing long-term issues. Both code quality and security are often overlooked by software development teams as not their problem to solve. Part of the solution to this problem is to create a quality-focused culture that encourages shared responsibility and ownership for both code quality and security compliance. Azure DevOps has a great tooling ecosystem to improve code quality and apply automated security checks.
Learning Objectives
After completing this module, students will be able to:
●● Manage code quality, including technical debt, SonarCloud, and other tooling solutions
●● Manage security policies with open source and OWASP
Reliability
Reliability measures the probability that a system will run without failure over a specific period of operation. It relates to the number of defects and availability of the software.
The number of defects can be measured by running a static analysis tool. Software availability can be measured using the mean time between failures (MTBF). Low defect counts are especially important for developing a reliable codebase.
Maintainability
Maintainability measures how easily software can be maintained. It relates to the size, consistency, structure, and complexity of the codebase. Ensuring maintainable source code relies on a number of factors, such as testability and understandability.
You can’t use a single metric to ensure maintainability. Some metrics you may consider to improve
maintainability are the number of stylistic warnings and Halstead complexity measures. Both automation
and human reviewers are essential for developing maintainable codebases.
Testability
Testability measures how well the software supports testing efforts. It relies on how well you can control,
observe, isolate, and automate testing, among other factors.
Testability can be measured based on how many test cases you need to find potential faults in the
system. Size and complexity of the software can impact testability. So, applying methods at the code level
— such as cyclomatic complexity — can help you improve the testability of the component.
Portability
Portability measures how usable the same software is in different environments. It relates to platform
independency.
There isn’t a specific measure of portability. But there are several ways you can ensure portable code. It’s
important to regularly test code on different platforms, rather than waiting until the end of development.
It’s also a good idea to set your compiler warning levels as high as possible — and use at least two
compilers. Enforcing a coding standard also helps with portability.
Reusability
Reusability measures whether existing assets — such as code — can be used again. Assets are more
easily reused if they have characteristics such as modularity or loose coupling.
Reusability can be measured by the number of interdependencies. Running a static analyzer can help you
identify these interdependencies.
Quality Metrics
While there are various quality metrics, a few of the most important ones are listed below.
Defect Metrics
The number of defects — and severity of those defects — are important metrics of overall quality.
Complexity Metrics
Complexity metrics can help in measuring quality. Cyclomatic complexity measures the number of linearly independent paths through a program's source code. Another way to understand quality is through calculating Halstead complexity measures. These measure:
●● Program vocabulary
●● Program length
●● Calculated program length
●● Volume
●● Difficulty
●● Effort
Code analysis tools can be used to check for considerations such as security, performance, interoperability, language usage, and globalization, and should be part of every developer's toolbox and software build process. Regularly running a static code analysis tool and reading its output is a great way to improve as a developer, because the things caught by the software rules can often teach you something.
●● Bug Bounce Percentage - What percentage of customer or bug tickets are being re-opened?
●● Unplanned Work Percentage - What percentage of the overall work being performed is unplanned?
✔️ Note: Over time, technical debt must be paid back. Otherwise, the team's ability to fix issues and to implement new features and enhancements will take longer and longer, and eventually become cost-prohibitive.
1 https://sonarcloud.io/about
If you drill into the issues, you can then see what the issues are, along with suggested remedies, and
estimates of the time required to apply a remedy.
NDepend
For .NET developers, a common tool is NDepend.
NDepend is a Visual Studio extension that assesses the amount of technical debt that a developer has added during a recent development period, typically the last hour. With this information, the developer can make the required corrections before ever committing the code. NDepend lets you create code rules expressed as C# LINQ queries, but it also has a large number of built-in rules that detect a wide range of code smells.
2 https://www.ndepend.com
3 https://marketplace.visualstudio.com/items?itemName=ndepend.ndependextension&targetId=2ec491f3-0a97-4e53-bfef-20bf80c7e1ea
4 https://marketplace.visualstudio.com/items?itemName=alanwales.resharper-code-analysis
It's important, up front, to agree that everyone is trying to achieve better code quality. Achieving code
quality can seem challenging because there is no one single best way to write any piece of code, at least
code with any complexity.
Everyone wants to do good work and to be proud of what they create. This means that it's easy for
developers to become over-protective of their code. The organizational culture must let all involved feel
that the code reviews are more like mentoring sessions where ideas about how to improve code are
shared, than interrogation sessions where the aim is to identify problems and blame the author.
The knowledge sharing that can occur in mentoring-style sessions can be one of the most important
outcomes of the code review process. It often happens best in small groups (perhaps even just two
people), rather than in large team meetings. And it's important to highlight what has been done well, not
just what needs to be improved.
Developers will often learn more in effective code review sessions than they will in any type of formal
training. Reviewing code should be seen as an opportunity for all involved to learn, not just as a chore
that must be completed as part of a formal process.
It's easy to see two or more people working on a problem and think that one person could have completed the task by themselves. That's a superficial view of the longer-term outcomes. Team management needs to understand that improving code quality reduces the cost of code, not increases it. Team leaders need to establish and foster an appropriate culture across their teams.
Managing Security Policies
5 http://owasp.org
OWASP regularly publishes a set of Secure Coding Practices. Their guidelines currently cover advice in the following areas:
●● Input Validation
●● Output Encoding
●● Authentication and Password Management
●● Session Management
●● Access Control
●● Cryptographic Practices
●● Error Handling and Logging
●● Data Protection
●● Communication Security
●● System Configuration
●● Database Security
●● File Management
●● Memory Management
●● General Coding Practices
To learn about common vulnerabilities, and to see how they appear in applications, OWASP also publishes an intentionally vulnerable web application called The Juice Shop Tool Project6. It includes vulnerabilities from all of the OWASP Top Ten7.
In 2002, Microsoft underwent a company-wide re-education and review phase to focus on producing
secure application code. The book Writing Secure Code, by David LeBlanc and Michael Howard8, was written by two of the people involved and provides detailed advice on how to write secure code.
For more information, you can see:
●● The OWASP foundation9
●● OWASP Secure Coding Practices Quick Reference Guide10
●● OWASP Code Review guide11
●● OWASP Top Ten12
6 https://www.owasp.org/index.php/OWASP_Juice_Shop_Project
7 https://www.owasp.org/index.php/Category:OWASP_Top_Ten_Project
8 https://www.booktopia.com.au/ebooks/writing-secure-code-david-leblanc/prod2370006179962.html
9 http://owasp.org
10 https://www.owasp.org/images/0/08/OWASP_SCP_Quick_Reference_Guide_v2.pdf
11 https://www.owasp.org/images/2/2e/OWASP_Code_Review_Guide-V1_1.pdf
12 https://www.owasp.org/index.php/Category:OWASP_Top_Ten_Project
Micro Focus Fortify13 provides build tasks that can be used in Azure DevOps continuous integration
builds to identify vulnerabilities in source code. It offers two styles of analysis.
●● Fortify Static Code Analyzer (SCA) searches for violations of security-specific coding rules and
guidelines. It works in a variety of languages.
●● Fortify on Demand is a service for checking application security. The outcomes of an SCA scan are
audited by a team of security experts, including the use of Fortify WebInspect for automated dynamic
scanning.
Checkmarx CxSAST14 is a solution for Static Application Security Testing (SAST) and Open Source Analysis (OSA), designed for identifying, tracking, and fixing technical and logical security flaws.
It is designed to integrate into Azure DevOps pipelines and allows for early detection and mitigation of crucial security flaws. To improve performance, it is capable of incremental scanning (i.e., only checking the code recently altered or introduced).
BinSkim15 is a static analysis tool that scans binary files. BinSkim replaces an earlier Microsoft tool called BinScope. In particular, it checks that the executable produced (i.e., a Windows PE-formatted file) has opted into all of the binary mitigations offered by the Windows platform, including the following (a sample invocation follows the list):
●● SafeSEH is enabled for safe exception handling
●● ASLR is enabled so that memory is not laid out in a predictable fashion
●● Stack Protection is enabled to prevent overflow
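As a rough illustration only (the binary path is a placeholder and the exact flags vary by BinSkim version), a scan can be run from the command line:
# Analyze compiled binaries and write SARIF results for review.
binskim analyze ./bin/Release/*.dll --output binskim-results.sarif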
OWASP Zed Attack Proxy Scan, also known as OWASP ZAP Scan, is an open-source web application security scanner intended for users with all levels of security knowledge, and it can also be used by professional penetration testers.
13 https://marketplace.visualstudio.com/items?itemName=fortifyvsts.hpe-security-fortify-vsts
14 https://marketplace.visualstudio.com/items?itemName=checkmarx.cxsast
15 https://blogs.msdn.microsoft.com/secdevblog/2016/08/17/introducing-binskim/
16 https://cve.mitre.org/about/
Lab
Managing Technical Debt with Azure DevOps
and SonarCloud
In this hands-on lab, Managing Technical Debt with Azure DevOps and SonarCloud17, you will learn
how to manage and report on technical debt using SonarCloud integration with Azure DevOps.
You will perform the following tasks:
●● Integrate SonarCloud with Azure DevOps and run an analysis
●● Analyze the results
●● Configure a quality profile to control the rule set used for analyzing your project
✔️ Note: You must have already completed the Lab Environment Setup in the Welcome section.
17 https://www.azuredevopslabs.com/labs/azuredevops/sonarcloud/
Module Review and Takeaways
You want to run a penetration test against your application. Which tool could you use?
Suggested answer
What are code smells? Give an example of a code smell.
Suggested answer
You are using Azure Repos for your application source code repository. You want to create an audit of open source libraries that you have used. Which tool could you use?
Suggested answer
Name three attributes of high quality code.
Suggested answer
You are using Azure Repos for your application source code repository. You want to perform code quality checks. Which tool could you use?
Suggested answer
Answers
You want to run a penetration test against your application. Which tool could you use?
OWASP ZAP. OWASP ZAP is designed to run penetration testing against applications. Bolt is used to analyze open source library usage. The two Sonar products are for code quality and code coverage analysis.
What are code smells? Give an example of a code smell.
Code smells are characteristics in your code that could possibly be a problem. Code smells hint at deeper problems in the design or implementation of the code. For example, code that works but contains many literal values or duplicated code.
You are using Azure Repos for your application source code repository. You want to create an audit of open source libraries that you have used. Which tool could you use?
WhiteSource Bolt is used to analyze open source library usage. OWASP ZAP is designed to run penetration testing against applications. The two Sonar products are for code quality and code coverage analysis.
Name three attributes of high quality code.
High quality code should have well-defined interfaces. It should be clear and easy to read, so self-documenting code is desirable, as are short (not long) method bodies.
You are using Azure Repos for your application source code repository. You want to perform code quality checks. Which tool could you use?
SonarCloud is the cloud-based version of the original SonarQube, and would be best for working with code in Azure Repos.
Module 8 Implementing a Container Build Strategy
Module Overview
Containers are the third model of compute, after bare metal and virtual machines, and containers are here to stay. Docker gives you a simple platform for running apps in containers, old and new apps, on Windows and Linux, and that simplicity is a powerful enabler for all aspects of modern IT. Containers aren't only faster and easier to use than VMs; they also make far more efficient use of computing hardware.
Learning Objectives
After completing this module, students will be able to:
●● Implement a container strategy including how containers are different from virtual machines and how
microservices use containers
●● Implement containers using Docker
Virtual Machines
A VM is essentially an emulation of a real computer that executes programs like a real computer. VMs run
on top of a physical machine using a “hypervisor”. As you can see in the diagram, VMs package up the
virtual hardware, a kernel (i.e. OS) and user space for each new VM.
Container
Unlike a VM which provides hardware virtualization, a container provides operating-system-level virtualization by abstracting the "user space". This diagram shows you that containers package up just the user
space, and not the kernel or virtual hardware like a VM does. Each container gets its own isolated user
space to allow multiple containers to run on a single host machine. We can see that all the operating
system level architecture is being shared across containers. The only parts that are created from scratch
are the bins and libs. This is what makes containers so lightweight.
Docker is a software containerization platform with a common toolset, packaging model, and deployment mechanism, which greatly simplifies containerization and distribution of applications that can be run anywhere. This ubiquitous technology not only simplifies management by offering the same management commands against any host, it also creates a unique opportunity for seamless DevOps.
From a developer's desktop to a testing machine, to a set of production machines, a Docker image can be created that will deploy identically across any environment in seconds. There is a massive and growing ecosystem of applications packaged in Docker containers, with DockerHub, the public containerized-application registry that Docker maintains, currently publishing more than 180,000 applications in the public community repository. Additionally, to guarantee the packaging format remains universal, Docker organized the Open Container Initiative (OCI), aiming to ensure container packaging remains an open and foundation-led format.
As an example of the power of containers, a SQL Server Linux instance can be deployed using a Docker
image in seconds.
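For example, a command along these lines starts a SQL Server 2019 Linux container; the SA password is a placeholder.
# Run SQL Server on Linux in a container, listening on the default port 1433.
docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=<yourStrongPassword>" -p 1433:1433 -d mcr.microsoft.com/mssql/server:2019-latest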
For more information, see:
●● Docker Ebook, Docker for the Virtualization Admin1
●● Mark Russinovich blog post on Containers: Docker, Windows, and Trends2
1 https://goto.docker.com/docker-for-the-virtualization-admin.html
2 https://azure.microsoft.com/en-us/blog/containers-docker-windows-and-trends/
The most immediately lucrative use for containers has been focused on simplifying DevOps with easy
developer-to-test-to-production flows for services deployed in the cloud or on-premises. But there is
another fast-growing scenario where containers are becoming very compelling.
Microservices is an approach to application development where every part of the application is deployed
as a fully self-contained component, called a microservice, that can be individually scaled and updated.
Example Scenario
Imagine that you are part of a software house that produces a large monolithic financial management
application that you are migrating to a series of microservices. The existing application would include the
code to update the general ledger for each transaction, and it would have this code in many places
throughout the application. If the schema of the general ledger transactions table is modified, this would
require changes throughout the application.
By comparison, the application could be modified to make a notification that a transaction has occurred.
Any microservice that is interested in the transactions could subscribe. In particular, a separate general
ledger microservice could subscribe to the transaction notifications, and then perform the general ledger
related functionality. If the schema of the table that holds the general ledger transactions is modified,
only the general ledger microservice should need to be updated.
If a particular client organization wants to run the application and not use the general ledger, that service
could just be disabled. No other changes to the code would be required.
Scale
In a dev/test environment on a single system, while you might have a single instance of each microservice, in production you might scale out to different numbers of instances across a cluster of servers depending on their resource demands as customer request levels rise and fall. If different teams produce them, the teams can also independently update them. Microservices is not a new approach to programming, nor is it tied explicitly to containers, but the benefits of Docker containers are magnified when applied to a complex microservice-based application. Agility means that a microservice can quickly scale out to meet increased load, the namespace and resource isolation of containers prevents one microservice instance from interfering with others, and use of the Docker packaging format and APIs unlocks the Docker ecosystem for the microservice developer and application operator. With a good microservice architecture, customers can solve the management, deployment, orchestration and patching needs of a container-based service with reduced risk of availability loss while maintaining high agility.
3 https://azure.microsoft.com/en-us/services/container-instances/
4 https://azure.microsoft.com/en-us/services/kubernetes-service/
5 https://azure.microsoft.com/en-us/services/container-registry/
6 https://azure.microsoft.com/en-us/services/service-fabric/
7 https://azure.microsoft.com/en-us/services/app-service/
Azure Web Apps provides a managed service for both Windows and Linux based web applications, and
provides the ability to deploy and run containerized applications for both platforms. It provides options
for auto-scaling and load balancing and is easy to integrate with Azure DevOps.
The RUN command is executed while the image is being created by docker build. It is generally used to configure items within the image.
By comparison, the last line of the Dockerfile (typically a CMD or ENTRYPOINT instruction) represents a command that will be executed when a new container is created from the image, i.e. it runs after container creation.
For more information, you can see:
Dockerfile reference8
8 https://docs.docker.com/engine/reference/builder/
With multi-stage builds, you use multiple FROM statements in your Dockerfile. Each FROM instruction starts a new stage. The stages are numbered in order, starting with stage 0. To make the file easier to maintain without needing to constantly change the numbers that reference stages, note how each stage has been named (or aliased) by using an AS clause.
Each FROM instruction can have a different parent (i.e. base image). This allows the developer to control what is copied from one stage to another and avoids the need for intermediate images.
Another advantage of named stages is that they are easier to refer to in external commands. For example, not all stages need to be built each time. You can see that in the following Docker CLI command:
$ docker build --target publish -t gregsimages/popkorn:latest .
The --target option tells docker build that it needs to create an image up to the target of publish, which was one of the named stages.
9 https://docs.docker.com/develop/develop-images/multistage-build/
The response from the command that created the registry returns the loginServer, which is the fully qualified URL of the registry.
{
  "adminUserEnabled": false,
  "creationDate": "2020-03-08T22:32:13.175925+00:00",
  "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.ContainerRegistry/registries/myaz400containerregistry",
  "location": "eastus",
  "loginServer": "myaz400containerregistry.azurecr.io",
  "name": "myaz400containerregistry",
  "provisioningState": "Succeeded",
  "resourceGroup": "myResourceGroup",
  "sku": {
    "name": "Basic",
    "tier": "Basic"
  },
  "status": null,
  "storageAccount": null,
  "tags": {},
  "type": "Microsoft.ContainerRegistry/registries"
}
Log in to registry
Before pushing and pulling container images, you must log in to the registry. To do so, use the az acr
login command.
az acr login --name <acrName>
Before you can push an image to your registry, you must tag it with the fully qualified name of your ACR
login server. The login server name is in the format ‘registry-name’.azurecr.io (all lowercase), for example,
myaz400containerregistry.azurecr.io.
docker tag hello-world <acrLoginServer>/hello-world:v1
Finally, use docker push to push the image to the ACR instance. Replace acrLoginServer with the login
server name of your ACR instance. This example creates the hello-world repository, containing the
hello-world:v1 image.
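For example, with the placeholder replaced as above:
docker push <acrLoginServer>/hello-world:v1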
After pushing the image to your container registry, remove the hello-world:v1 image from your local
Docker environment.
docker rmi <acrLoginServer>/hello-world:v1
Clean up resources
When no longer needed, you can use the az group delete command to remove the resource group, the
container registry, and the container images stored there.
az group delete --name myResourceGroup
10 https://docs.microsoft.com/en-us/aspnet/core/host-and-deploy/docker/visual-studio-tools-for-docker?view=aspnetcore-3.1
Lab
Modernizing Your Existing ASP.NET Apps with Azure
In this hands-on lab, Modernizing your existing ASP.NET Apps with Azure11, you will learn how to modernize an existing ASP.NET application by migrating to Docker images managed by the Azure Container Registry.
You will perform the following tasks:
●● Migrate the LocalDB to SQL Server in Azure
●● Using the Docker tools in Visual Studio 2017, add Docker support for the application
●● Publish Docker Images to Azure Container Registry (ACR)
●● Push the new Docker images from ACR to Azure Container Instances (ACI)
11 https://www.azuredevopslabs.com/labs/vstsextend/aspnetmodernize/
Module Review and Takeaways
You are reviewing an existing Dockerfile. How would you know if it's a multi-stage Dockerfile?
Suggested answer
You are designing a multi-stage Dockerfile. How can one stage refer to another stage within the Dockerfile?
Suggested answer
What is the line continuation character in Dockerfiles?
Suggested answer
You are using Azure to manage your containers. Which container orchestration styles are supported?
Suggested answer
When the Open Container Initiative defined a standard container image file format, which format did they choose as a starting point?
Suggested answer
Answers
You are reviewing an existing Dockerfile. How would you know if it's a multi-stage Dockerfile?
Multi-stage Dockerfiles are characterized by containing more than one starting point, provided as FROM instructions.
You are designing a multi-stage Dockerfile. How can one stage refer to another stage within the Dockerfile?
The FROM clause in a multi-stage Dockerfile can contain an alias via an AS clause. The stages can refer to each other by number or by the alias names.
What is the line continuation character in Dockerfiles?
Lines can be broken and continued on the next line of a Dockerfile by using the backslash character.
You are using Azure to manage your containers. Which container orchestration styles are supported?
When the Open Container Initiative defined a standard container image file format, which format did they choose as a starting point?
Module 9 Manage Artifact Versioning, Security, and Compliance
Module Overview
Welcome to this module about managing artifact versioning, security, and compliance. In this module, we will talk about how you can secure your packages and feeds and check security requirements on the packages used in developing your software solutions. We will also cover how to make sure the packages you use are compliant with the standards and requirements that exist in your organization from a licensing and security vulnerability perspective.
Learning objectives
At the end of this module, students will be able to:
●● Inspect open source software packages for security and license compliance to align with corporate standards
●● Configure build pipeline to access package security and license rating
●● Configure secure access to package feeds
●● Inspect codebase to identify code dependencies that can be converted to packages
●● Identify and recommend standardized package types and versions across the solution
●● Refactor existing build pipelines to implement version strategy that publishes packages
●● Manage security and compliance
Package Security
Package Feeds
Package feeds are a trusted source of packages. The packages that are offered will be consumed by other code bases and used to build software that needs to be secure. Imagine what would happen if a package feed offered malicious components in its packages. Each consumer would be affected when installing the packages onto its development machine or build server. This also happens on any other device that runs the end product, as the malicious components will be executed as part of the code. Usually the code runs with high privileges, giving a substantial security risk if any of the packages cannot be trusted and might contain unsafe code.
Therefore, it is essential that package feeds are secured for access by authorized accounts, so only verified and trusted packages are stored there. No one should be able to push packages to a feed without the proper role and permissions. This prevents others from pushing malicious packages. It still assumes
the open source world this is performed by the community. A package source can further guard its feed
with the use of security and vulnerability scan tooling. Additionally, consumers of packages can use
similar tooling to perform the scans themselves.
Another aspect of security for package feeds is about public or private availability of the packages. The
feeds of public sources are usually available for anonymous consumption. Private feeds on the other
hand have restricted access most of the time. This applies to consumption and publishing of packages.
Private feeds will allow only users in specific roles or teams access to its packages.
Package compliance
Nowadays companies have obligations towards their customers and employees to make sure that the
services they offer with software and IT are compliant with rules and regulations. In part, the rules and regulations come from governments, certification bodies, and standards institutes. They might also be self-imposed rules and regulations from the company or organization itself. This could include rules about how open source is used, which license types are allowed, and a package version policy.
When development teams are empowered to make their own choices for the use of packages it becomes
very important that they are choosing the right type of packages. In this case that would imply that the
packages are allowed from a licensing perspective and follow the chosen ruleset to be compliant with the
applicable policies. It involves practices, guidelines, hosting, additional work and introduction of tooling
that will help to make sure that compliancy is part of the software development process when it comes to
producing and consuming packages.
The compliance of packages should be guaranteed and provable. The software development processes should take this into account in an integral fashion. We will look at open source software and licensing in the next chapters, and see how we can leverage Azure DevOps and other tools and services to implement a policy for security and compliance for closed and open source alike.
Roles
Azure Artifacts has four different roles for package feeds. These are incremental in the permissions they
give.
The roles are in incremental order:
●● Reader: Can list and restore (or install) packages from the feed
●● Collaborator: Is able to save packages from upstream sources
●● Contributor: Can push and unlist packages in the feed
●● Owner: Has all available permissions for a package feed
When creating an Azure Artifacts feed, the Project Collection Build Service is given Contributor rights by default. This organization-wide build identity in Azure Pipelines is able to access the feeds it needs when running tasks. If you have changed the build identity to be at project level, you will also need to give that identity permissions to access the feed.
Any contributors to the team project are also contributors to the feed. Project Collection Administrators and administrators of the team project, plus the creator of the feed, are automatically made owners of the feed. The roles for these users and groups can be changed or removed.
Permissions
The feeds in Azure Artifacts require permissions for the various features they offer. The list of permissions consists of increasingly privileged operations.
For each permission you can assign users, teams and groups to a specific role, giving them the permissions corresponding to that role. You need to have the Owner role to be able to do so. Once an account has access to the feed through the permission to list and restore packages, it is considered a Feed user.
Just like the permissions and roles for the feed itself, there are additional permissions for access to the individual views. Any feed user has access to all the views, whether the default views of @Local, @Release and @Prerelease, or newly created ones. During creation of a feed, you can choose whether the feed is visible to people in your Azure DevOps organization or only to specific people.
See also:
Secure and share packages using feed permissions1
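As a rough illustration of automating role assignments, the sketch below calls what we assume to be the Azure Artifacts Feed Management REST API to grant the Contributor role to an identity. The endpoint shape, api-version and payload are assumptions to verify against the current REST API reference; the organization, feed id and identity descriptor are placeholders.
# Hypothetical sketch: grant the Contributor role on a feed to an identity.
# PAT, organization, feed id, identity descriptor and api-version are placeholders.
curl -u :$AZURE_DEVOPS_PAT \
  -X PATCH \
  -H "Content-Type: application/json" \
  "https://feeds.dev.azure.com/{organization}/_apis/packaging/feeds/{feedId}/permissions?api-version=6.0-preview.1" \
  -d '[ { "identityDescriptor": "{descriptor}", "role": "contributor" } ]'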
Credentials
Azure DevOps users authenticate against Azure Active Directory when accessing the Azure DevOps portal. After being successfully authenticated, they do not have to provide any credentials to Azure Artifacts itself. The roles for the user, based on their identity or on team and group membership, are used for authorization. When access is allowed, the user can simply navigate to the Azure Artifacts section of the team project.
The authentication from Azure Pipelines to Azure Artifacts feeds is taken care of transparently. It is based on the roles and permissions of the build identity. The previous section on Roles covered some details on the required roles for the build identity.
The authentication from inside Azure DevOps does not need any credentials for accessing feeds by itself. However, when accessing secured feeds outside Azure Artifacts, such as other package sources, you will most likely have to provide credentials to authenticate to the feed manager. Each package type has its own way of handling the credentials and providing access upon authentication. The command-line tooling will provide support in the authentication process.
1 https://docs.microsoft.com/en-us/azure/devops/artifacts/feeds/feed-permissions
For the build tasks in Azure Pipelines, you will provide the credentials via a Service connection.
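As a minimal sketch, assuming an external NuGet feed has been registered as a service connection named MyExternalFeed, the pipeline snippet below uses the NuGetAuthenticate task to supply those credentials before restoring; the connection name and config file are placeholders.
steps:
- task: NuGetAuthenticate@0
  inputs:
    # Service connection that stores the credentials for the external feed (placeholder name)
    nuGetServiceConnections: 'MyExternalFeed'
- script: dotnet restore --configfile nuget.config
  displayName: 'Restore packages from the authenticated feeds'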
Challenge to corporates
All in all, modern software development, including the Microsoft developer platform and ecosystem, implies the use of open-source components. This has implications for companies that build software, either commercially or for internal use. The inclusion of software components that are not built by the companies themselves means that there is no full control over the sources.
When others are responsible for the source code that is used in components within a company, you have to accept the risks involved with it. The source code could:
●● Be of low quality
This would impact maintainability, reliability and performance of the overall solution
●● Have no active maintenance
The code would not evolve over time, or be alterable without making a copy of the source code,
effectively forking away from the origin.
●● Contain malicious code
The entire system that includes and uses the code will be compromised. Potentially the entire company's IT and infrastructure is affected.
●● Have security vulnerabilities
The security of a software system is as good as its weakest part. Using source code with vulnerabilities
makes the entire system susceptible to attack by hackers and misuse.
●● Have unfavorable licensing restrictions
The terms of a license can affect the entire solution that uses the open-source software.
Companies have to make a trade-off: their developers want to be able to use open-source software components, allowing them to speed up development and use modern frameworks, libraries and practices. On the other hand, giving the developers and projects the freedom to include open-source software should not put the company at risk. The challenge for the company is to find a way to keep the developers empowered and free to choose the technology they use, while making sure the risks for the company are managed as well as possible.
Other challenges come from companies that offer open-source software to the public. These challenges include having a business model around the open source, deciding when to publish open-source code and how to deal with community contributions. The fact that your source code is open doesn't imply that anyone can make changes to it. There can be contributions from community collaboration, but a company does not necessarily have to accept them. This is referred to as closed open-source. Suggestions for change are welcome, but the maintainers are the ones that carry out the actual changes.
2 http://www.dotnetfoundation.org
●● Be specific to a product
●● Restrict other software
●● and more - See the Open Source Definition3
To cover the exact terms of a license, several license types exist. Each type has its own specifics and implications, which we will cover in the next part.
Even though open-source software is generally developed by multiple contributors from the community, that does not guarantee that the software is secure and without vulnerabilities. Chances are that vulnerabilities are discovered because the code is inspected by multiple reviewers, but the discovery might not be immediate or might not happen before the software is consumed by others.
Since the source code is open-source, people with malicious intent can also inspect the code for vulnera-
bilities and exploit these when possible. In that regard, it is both a blessing and a curse that open-source
software has source code available for others.
Types of licenses
There are multiple licenses used in open-source and they are different in nature. The license spectrum is a
chart that shows licenses from the perspective of the developer and the implications of use for down-
stream requirements that are imposed on the overall solution and source code.
On the left side there are the “attribution” licenses. They are permissive in nature and allow practically every type of use by the software that consumes them. An example is building commercially available software that includes the components or source code under this license. The only restriction is that the original attribution to the authors remains included in the source code or as part of the downstream use of the new software.
3 http://opensource.org/osd
The right side of the spectrum shows the “copyleft” licenses. These licenses are considered viral in nature,
as the use of the source code and its components, and distribution of the complete software, implies that
all source code using it should follow the same license form. The viral nature is that the use of the
software covered under this license type forces you to forward the same license for all work with or on
the original software.
The middle of the spectrum shows the “downstream” or "weak copyleft" licenses. These also require that when the covered code is distributed, it is distributed under the same license terms. Unlike the copyleft licenses, this does not extend to improvements or additions to the covered code.
License rating
Licenses can be rated by the impact that they have. When a package has a certain type of license, the use of the package implies keeping to the requirements of that license. The impact the license has on the downstream use of the code, components and packages can be rated as High, Medium or Low, depending on the copyleft, downstream or attribution nature of the license type.
For compliance purposes, a high license rating can be considered a risk to compliance, intellectual property and exclusive rights.
Package security
The use of components creates a software supply chain. The resultant product is a composition of all its
parts and components. This applies to the security level of the solution as well. Therefore, similar to
license types it is important to know how secure the components being used are. If one of the compo-
nents used is not secure, then the entire solution isn't either. We will talk more on package security and
vulnerabilities in the next chapter.
Tool | Type
Artifactory | Artifact repository
SonarQube | Static code analysis
WhiteSource (Bolt) | Build scanning
Configure pipeline
The configuration of scanning for license types and security vulnerabilities in the pipeline is done by using appropriate build tasks in your DevOps tooling. For Azure DevOps, these are build pipeline tasks.
WhiteSource is a third-party product that offers both a paid and a free version to use in Azure Pipelines. The tool uses the local build artifacts on the build server and runs directly from there. It will scan for the
various package types used in the build and analyze those found. This requires external connectivity. The
results of the analysis are returned in the build results as part of the step for the task.
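For illustration, a build pipeline step running such a scan might look like the sketch below. The exact task identifier and inputs come from the WhiteSource Bolt marketplace extension and are assumptions here; check the extension documentation for the identifier that applies to your installation.
steps:
# Assumed task name from the WhiteSource Bolt extension; scans the build output
# in the default working directory for vulnerable components and license issues.
- task: WhiteSource Bolt@20
  displayName: 'WhiteSource Bolt scan'
  inputs:
    cwd: '$(System.DefaultWorkingDirectory)'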
Immutable packages
As packages get new versions, your codebase can choose when to use a new version of the packages it
consumes. It does so by specifying the specific version of the package it requires. This implies that
packages themselves should always have a new version when they change. Whenever a package is published to a feed, it should not be allowed to change any more. If it could, consumers would be at risk of unknowingly pulling in breaking changes. In essence, a published package is considered to be immutable. Replacing or updating an existing version of a package is not allowed, and most package feeds do not allow operations that would change an existing version. Regardless of the size of the change, a package can only be updated by introducing a new version. The new version should indicate the type of change and the impact it might have.
See also Key concepts for Azure Artifacts4.
Versioning of artifacts
It is proper software development practice to indicate changes to code by increasing the version number. However small or large a change is, it requires a new version. A component and its package can have independent versions and versioning schemes.
The versioning scheme can differ per package type. Typically, it uses a scheme that can indicate the type
of change that is made. Most commonly this involves three types of changes:
●● Major change
Major indicates that the package and its contents have changed significantly. It often occurs at the
introduction of a complete new version of the package. This can be at a redesign of the component.
Major changes are not guaranteed to be compatible and usually have breaking changes from older versions. Major changes might require a substantial amount of work to adapt the consuming codebase to the new version.
●● Minor change
Minor indicates that the package and its contents have substantial changes, but a smaller increment than a major change. These changes can be backward compatible with the previous version, although they are not guaranteed to be.
4 https://docs.microsoft.com/en-us/azure/devops/artifacts/artifacts-key-concepts#immutability
●● Patch
A patch or revision is used to indicate that a flaw, bug or malfunctioning part of the component has
been fixed. Normally, this is a backward compatible version compared to the previous version.
How artifacts are versioned technically varies per package type. Each type has its own way of indicating
the version in metadata. The corresponding package manager is able to inspect the version information.
The tooling can query the package feed for packages and the available versions.
Additionally, a package type might have its own conventions for versioning as well as a particular version-
ing scheme.
See also Publish to NuGet feeds5
Semantic versioning
One of the predominant ways of versioning is the use of semantic versioning. It is not a standard per se, but it does offer a consistent way of expressing intent and semantics of a certain version. It describes a version in terms of its backward compatibility to previous versions.
Semantic versioning uses a three part version number and an additional label. The version has the form
of Major.Minor.Patch, corresponding to the three types of changes covered in the previous section.
Examples of versions using the semantic versioning scheme are 1.0.0 and 3.7.129. These versions do
not have any labels.
For prerelease versions it is customary to use a label after the regular version number. A label is a textual
suffix separated by a hyphen from the rest of the version number. The label itself can be any text describ-
ing the nature of the prerelease. Examples of these are rc1, beta27 and alpha, forming version
numbers like 1.0.0-rc1 as a prerelease for the upcoming 1.0.0 version.
Prereleases are a common way to prepare for the release of the label-less version of the package. Early adopters can take a dependency on a prerelease version to build using the new package. In general, it is not a good idea to use prerelease versions of packages and their components for released software.
It is good to anticipate the impact of the new components by creating a separate branch in the codebase and using the prerelease version of the package there. Chances are that there will be incompatible changes from a prerelease to the final version.
See also Semantic Versioning 2.0.06.
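As a small illustration of how these version numbers might be applied when producing a NuGet package, the commands below set a prerelease and a release version with the dotnet CLI; the version numbers are only examples.
# Sketch: pack a prerelease version for early adopters, then the label-less release version.
dotnet pack -p:PackageVersion=1.4.0-rc1 --output ./artifacts
dotnet pack -p:PackageVersion=1.4.0 --output ./artifacts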
Release views
When building packages from a pipeline, the package needs to have a version before the package can be
consumed and tested. Only after testing is the quality of the package known. Since package versions
cannot and should not be changed, it becomes challenging to choose a certain version beforehand.
Azure Artifacts recognizes the quality level of packages in its feeds and the difference between prerelease and release versions. It offers different views on the list of packages and their versions, separating these based on their quality level. This fits well with the use of semantic versioning of the packages for predictability of the intent of a particular version, but it is additional metadata from the Azure Artifacts feed, called a descriptor.
5 https://docs.microsoft.com/en-us/azure/devops/pipelines/artifacts/nuget#package-versioning
6 https://semver.org/
Feeds in Azure Artifacts have three different views by default. These views are added at the moment a new feed is created. The three views are:
●● Release
The @Release view contains all packages that are considered official releases.
●● Prerelease
The @Prerelease view contains all packages that have a label in their version number.
●● Local
The @Local view contains all release and prerelease packages as well as the packages downloaded
from upstream sources.
Using views
You can use views to help consumers of a package feed filter between released and unreleased versions of packages. Essentially, it allows a consumer to make a conscious decision to choose from released packages, or opt in to prereleases of a certain quality level.
By default the @Local view is used to offer the list of available packages. The format for this URI is:
https://pkgs.dev.azure.com/{yourteamproject}/_packaging/{feedname}/nuget/v3/index.json
When consuming a package feed by its URI endpoint, the address can have the requested view included.
For a specific view, the URI includes the name of the view, which changes to be:
https://pkgs.dev.azure.com/{yourteamproject}/_packaging/{feedname}@{Viewname}/nuget/v3/index.json
The tooling will show and use the packages from the specified view automatically.
Tooling may offer an option to select prerelease versions, such as shown in this Visual Studio 2017 NuGet
dialog. This does not relate or refer to the @Prerelease view of a feed. Instead, it relies on the presence
of prerelease labels of semantic versioning to include or exclude packages in the search results.
See also:
●● Views on Azure DevOps Services feeds7
●● Communicate package quality with prerelease and release views8
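As an example of consuming only released packages, a nuget.config could point at the @Release view of the feed, as in the sketch below; the organization and feed names are placeholders.
<!-- Sketch: resolve packages only from the @Release view of the feed. -->
<configuration>
  <packageSources>
    <clear />
    <add key="MyFeed-Release"
         value="https://pkgs.dev.azure.com/{yourteamproject}/_packaging/{feedname}@Release/nuget/v3/index.json" />
  </packageSources>
</configuration>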
Promoting packages
Azure Artifacts has the notion of promoting packages to views as a means to indicate that a version is of
a certain quality level. By selectively promoting packages you can plan when packages have a certain
quality and are ready to be released and supported by the consumers.
You can promote packages to one of the available views as the quality indicator. The two views Release and Prerelease might be sufficient, but you can create more views when you need finer-grained quality levels, such as alpha and beta.
Packages will always show in the Local view, but only in a particular view after being promoted to it.
Depending on the URL used to connect to the feed, the available packages will be listed.
7 https://docs.microsoft.com/en-us/azure/devops/artifacts/concepts/views
8 https://docs.microsoft.com/en-us/azure/devops/artifacts/feeds/views
Upstream sources will only be evaluated when using the @Local view of the feed. After packages have been downloaded and cached in the @Local view, you can see and resolve them in the other views once they have been promoted to those views.
It is up to you to decide how and when to promote packages to a specific view. This process can be automated by using an Azure Pipelines task as part of the build pipeline.
Packages that have been promoted to a view will not be deleted based on the retention policies.
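As a hypothetical sketch of such automation, the call below promotes a NuGet package version to the Release view using what we assume to be the Azure Artifacts packaging REST API; the endpoint, api-version and JSON Patch payload are assumptions to verify, and all names are placeholders.
# Hypothetical sketch: add the Release view to an existing package version.
curl -u :$AZURE_DEVOPS_PAT \
  -X PATCH \
  -H "Content-Type: application/json" \
  "https://pkgs.dev.azure.com/{organization}/_apis/packaging/feeds/{feedId}/nuget/packages/{packageName}/versions/{version}?api-version=6.0-preview.1" \
  -d '{ "views": { "op": "add", "path": "/views/-", "value": "Release" } }'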
●● dotnet restore
●● dotnet build
●● dotnet nuget push (see the sketch below)
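To make these commands concrete, a minimal end-to-end sketch of restoring from and publishing to an Azure Artifacts feed could look like the following; the feed URL uses the placeholder format shown earlier, and with Azure Artifacts the --api-key value itself is not used for authentication (the credential provider or a PAT is).
# Sketch: restore, build, pack and push a package to an Azure Artifacts feed.
dotnet restore --configfile nuget.config
dotnet build --configuration Release --no-restore
dotnet pack --configuration Release --output ./artifacts
dotnet nuget push ./artifacts/*.nupkg \
  --source "https://pkgs.dev.azure.com/{yourteamproject}/_packaging/{feedname}/nuget/v3/index.json" \
  --api-key AzureArtifacts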
9 https://docs.microsoft.com/en-us/azure/devops/artifacts/concepts/best-practices
It shows the feed already contains the PartsUnlimited.Security 1.0.0. We go back to the Visual Studio
project to see what is happening.
4. Open the source code for the PartsUnlimited package in Visual Studio in a separate solution.
Lab
Manage Open Source Security and License with
WhiteSource
In this lab, Managing Open-source security and license with WhiteSource10, you will use WhiteSource
Bolt with Azure DevOps to automatically detect alerts on vulnerable open source components, outdated
libraries, and license compliance issues in your code. You will be using WebGoat, a deliberately insecure
web application maintained by OWASP, designed to teach web application security lessons. You will learn
how to:
●● Detect and remedy vulnerable open source components.
●● Generate comprehensive open source inventory reports per project or build.
●● Enforce open source license compliance, including dependencies’ licenses.
●● Identify outdated open source libraries with recommendations to update.
✔️ Note: You must have already completed the prerequisite labs in the Welcome section.
10 https://www.azuredevopslabs.com/labs/vstsextend/WhiteSource/
Module Review and Takeaways
Suggested answer
How can an open source library cause licensing issues if it is free to download?
Suggested answer
What is the minimum feed permission that will allow you to list available packages and to install them?
Suggested answer
What is open source software?
Suggested answer
How can you restrict which files are uploaded in a universal package feed?
Answers
What issues are often associated with the use of open source libraries?
How can an open source library cause licensing issues if it is free to download?
Each library has usage restrictions as part of the licensing. These restrictions might not be compatible with
your intended application use.
What is the minimum feed permission that will allow you to list available packages and to install them?
Reader
A type of software where users of code are permitted to study, change, and distribute the software. The open
source license type can limit the actions (such as sale provisions) that can be taken.
How can you restrict which files are uploaded in a universal package feed?
Module Overview
Welcome to this module about designing a release strategy. In this module, we will talk about Continuous Delivery in general. In this introduction, we will cover the basics. I'll explain the concepts of Continuous Delivery, Continuous Integration and Continuous Deployment, their relation to DevOps, and we will discuss why you would need Continuous Delivery and Continuous Deployment. After that, we will talk about releases and deployments and the differences between those two.
Once we have covered these general topics, we will talk about release strategies and artifact sources, and
walk through some considerations when choosing and defining those. We will also discuss the considera-
tions for setting up deployment stages and your delivery and deployment cadence, and lastly about
setting up your release approvals.
After that, we will cover some ground to create a high-quality release pipeline and talk about the quality of your release process, the quality of a release, and the difference between those two. We will take a look at how to visualize your release process quality and how to control your release using release gates as a mechanism. Finally, we will look at how to deal with release notes and documentation.
After these introductions, we will take a brief look at deployment patterns. We will cover modern deploy-
ment patterns like canary releases, but we will also take a quick look at the traditional deployment
patterns, like DTAP environments.
Finally, we take a look at choosing the right release management tool. There are a lot of tools out there.
We will cover the components that you need to take a look at if you are going to choose the right release
management tool product or company.
Learning objectives
At the end of this module, students will be able to:
●● Differentiate between a release and a deployment
●● Define the components of a release pipeline
Silo-Based Development
Long release cycles, a lot of testing, code freezes, night and weekend work, and a lot of people involved are all meant to ensure that everything works. But the more we change, the more risk it entails, and we are back at the beginning, on many occasions resulting in yet another document or process that should be followed. This is what I call silo-based development.
If we look at this picture of a traditional, silo-based value stream, we see Bugs and Unplanned work,
necessary updates or support work and planned (value adding) work, all added to the backlog of the
teams. When everything is planned and the first “gate” can be opened, everything drops to the next
phase. All the work, and thus all the value moves in piles to the next phase. It moves from Plan phase to a
Realize phase where all the work is developed, tested and documented, and from here, it moves to the
release phase. All the value is released at the same time. As a result, the release takes a long time.
We need to move towards a situation where the value is not piled up and released all at once, but where
value flows through a pipeline. Just like in the picture, a piece of work is a marble. And only one piece of
work can flow through the pipeline at once. So work has to be prioritized in the right way. As you can see
the pipeline has green and red outlets. These are the feedback loops or quality gates that we want to
have in place.
A feedback loop can be different things:
●● A unit test to validate the code
●● An automated build to validate the sources
●● An automated test on a Test environment
●● Some monitor on a server
●● Usage instrumentation in the code
If one of the feedback loops is red, the marble cannot pass the outlet and it will end up in the Monitor
and Learn tray. This is where the learning happens. The problem is analyzed and solved so that the next
time a marble passes the outlet, it is green.
Every single piece of work flows through the pipeline until it ends up in the tray of value. The more that is
automated the faster value flows through the pipeline.
Companies want to move toward Continuous Delivery. They see the value. They hear their customers.
Companies want to deliver their products as fast as possible. Quality should be higher. The move to
production should be faster. Technical Debt should be lower.
A great way to improve your software development practices was the introduction of Agile and Scrum.
Last year around 80% of all companies claimed that they adopted Scrum as a software development
practice. By using Scrum, many teams can produce a working piece of software after a sprint of maybe 2
or 3 weeks. But producing working software is not the same as delivering working software. The result is
that all “done” increments are waiting to be delivered in the next release, which is coming in a few
months.
What we see now, is that Agile teams within a non-agile company are stuck in a delivery funnel. The
bottleneck is no longer the production of working software, but the problem has become the delivery of
working software. The finished product is waiting to be delivered to the customers to get business value,
but this does not happen. Continuous Delivery needs to solve this problem.
"DevOps is the union of people, process, and products to enable Continuous Delivery of value to our end
users."
Looking at this definition, Continuous Delivery is an enabler for DevOps. DevOps focuses on organiza-
tions and bringing people together to Build and Run their software products.
Continuous Delivery is a practice: being able to deliver software on demand. Not necessarily a thousand times a day; deploying every code change to production is what we call Continuous Deployment.
To be able to do this we need automation, we need a strategy, and we need pipelines. And this is what
we will cover in the rest of this module.
1 https://docs.microsoft.com/en-us/azure/devops/pipelines/release/releases?view=vsts
configurable you can implement the Feature Toggle. We will talk about Feature Toggles in Module 3 in
more detail.
See also: Explore how to progressively expose your features in production for some or all users2.
Once we have prepared our software, we need to make sure that the installation will not expose any new
or changed functionality to the end user.
When the software has been deployed, we need to watch how the system behaves. Does it act the same
as it did in the past?
If it is clear that the system is stable and operates the same as it did before, we can decide to flip a switch.
This might reveal one or more features to the end user, or change a set of routines that are part of the
system.
The whole idea of separating deployment from release (exposing features with a switch) is compelling and something we want to incorporate in our Continuous Delivery practice. It helps us achieve more stable releases and gives us better ways to roll back when we run into issues: when a new feature produces problems, we switch it off again and then create a hotfix. By separating deployment from the release of a feature, you create the opportunity to deploy at any time of the day, since the new software will not affect the system that already works.
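As a minimal sketch of such a switch in an ASP.NET application, the class below reads a flag from configuration and only exposes the new behavior when the flag is on; the configuration key and class names are made up for this example.
using Microsoft.Extensions.Configuration;

// Sketch: a configuration-driven feature toggle. Flipping the (hypothetical)
// "Features:NewCheckout" setting reveals the new flow without redeploying.
public class CheckoutService
{
    private readonly IConfiguration _configuration;

    public CheckoutService(IConfiguration configuration) => _configuration = configuration;

    public string StartCheckout()
    {
        bool useNewCheckout = _configuration.GetValue<bool>("Features:NewCheckout");
        return useNewCheckout ? RunNewCheckout() : RunClassicCheckout();
    }

    private string RunNewCheckout() => "new checkout flow";
    private string RunClassicCheckout() => "classic checkout flow";
}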
Discussion
What are your bottlenecks?
Have a discussion about the need for Continuous Delivery in your organization and what blocks the
implementation.
Topics you might want to discuss are:
●● Does your organization need Continuous Delivery?
●● Do you use Agile/Scrum?
●● The Organization
●● Application Architecture
●● Skills
●● Tooling
●● Tests
●● other things?
2 https://docs.microsoft.com/en-us/azure/devops/articles/phase-features-with-feature-flags?view=vsts
In this part of the module, we will walk through all the components of the release pipeline in detail and
talk about what to consider for each component.
The components that make up the release pipeline or process are used to create a release. There is a
difference between a release and the release pipeline or process.
The release pipeline is the blueprint through which releases are done. We will cover more of this when discussing the quality of releases and release processes.
See also Release pipelines3.
The most common and most used way to get an artifact within the release pipeline is to use a build
artifact. The build pipeline compiles, tests, and eventually produces an immutable package, which is
stored in a secure place (storage account, database etc.).
The release pipeline then uses a secure connection to this secured place to get the build artifact and
perform additional actions to deploy this to an environment. The big advantage of using a build artifact is that the build produces a versioned artifact. The artifact is linked to the build and gives us automatic traceability. We can always find the sources that produced this artifact.
3 https://docs.microsoft.com/en-us/azure/devops/pipelines/release/?view=vsts
4 https://docs.microsoft.com/en-us/azure/devops/artifacts/artifacts-key-concepts?view=vsts
Another possible artifact source is version control. We can directly link our version control to our release
pipeline. The release is then related to a specific commit in our version control system. With that, we can
also see which version of a file or script is eventually installed. In this case, the version does not come
from the build, but from version control. A consideration for choosing a version control artifact instead of
a build artifact can be that you only want to deploy one specific file. If no additional actions are required
before this file is used in the release pipeline, it does not make sense to create a versioned package
containing one that file. Helper scripts that perform actions to support the release process (clean up,
rename, string actions) are typically good candidates to get from version control.
Another possibility of an artifact source can be a network share containing a set of files. However, you
should be aware of the possible risk. The risk is that you are not 100% sure that the package that you are
going to deploy is the same package that was put on the network share. If other people can access the
network share as well, the package might be compromised. For that reason, this option will not be
sufficient to prove integrity in a regulated environment (banks, insurance companies).
Last but not least, container registries are a rising star when it comes to artifact sources. Container registries are versioned repositories where container artifacts are stored. Pushing a versioned container to the registry, and consuming that same version within the release pipeline, has more or less the same advantages as using a build artifact stored in a safe location.
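For example, a build might tag and push an image with an explicit version so the release pipeline can consume that exact version later; the registry and image names below are placeholders.
# Sketch: build and push a versioned container image to a registry.
docker build -t myregistry.azurecr.io/partsunlimited-web:1.4.2 .
docker push myregistry.azurecr.io/partsunlimited-web:1.4.2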
5 https://semver.org/
Choosing the right artifact source is tightly related to the requirements you have regarding traceability
and auditability. If you need an immutable package (containing multiple files) that can never be changed
and be traced, a build artifact is the best choice. If it is one file, you can directly link to source control.
You can also point at a disk or network share, but this implies some risk concerning auditability and
immutability. Can you ensure the package never changed?
See also Release artifacts and artifact sources6.
Steps
Let's take a look at how to work with one or more artifact sources in the release pipeline.
1. In the Azure DevOps environment, open the Parts Unlimited project, then from the main menu, click
Pipelines, then click Releases.
6 https://docs.microsoft.com/en-us/azure/devops/pipelines/release/artifacts?view=vsts
3. In the Select a template pane, note the available templates, but then click the Empty job option at
the top. This is because we are going to focus on selecting an artifact source.
4. In the Artifacts section, click +Add an artifact.
5. Note the available options in the Add an artifact pane, and click the option to see more artifact
types, so that you can see all the available artifact types:
While we're in this section, let's briefly look at the available options.
6. Click Build and note the parameters required. This option is used to retrieve artifacts from an Azure
DevOps Build pipeline. Using it requires a project name, and a build pipeline name. (Note that
projects can have multiple build pipelines). This is the option that we will use shortly.
7. Click Azure Repository and note the parameters required. It requires a project name and also asks
you to select the source repository.
8. Click GitHub and note the parameters required. The Service is a connection to the GitHub repository.
It can be authorized by either OAuth or by using a GitHub personal access token. You also need to
select the source repository.
9. Click TFVC and note the parameters required. It requires a project name and also asks you to select the source repository.
Note: A release pipeline can have more than one set of artifacts as input. A common example is a situation
where as well as your project source, you also need to consume a package from a feed.
10. Click Azure Artifacts and note the parameters required. It requires you to identify the feed, package
type, and package.
11. Click GitHub Release and note the parameters required. It requires a service connection and the
source repository.
13. Click Docker Hub and note the parameters required. This option would be useful if your containers are stored in Docker Hub rather than in an Azure Container Registry. After choosing a secure service connection, you need to select the namespace and the repository.
14. Last but not least, click Jenkins and note the parameters required. You do not need to get all your
artifacts from Azure. You can retrieve them from a Jenkins build. So if you have a Jenkins Server in
your infrastructure, you can use the build artifacts from there, directly in your Azure DevOps pipelines.
We have now added the artifacts that we will need for later walkthroughs.
16. To save the work, click Save, then in the Save dialog box, click OK.
Deployment Stages
A stage or deployment stage is a logical and independent entity that represents where you want to
deploy a release generated from a release pipeline. Sometimes a stage is called an environment. For
example, Test or Production. But it does not necessarily reflect the lifecycle of a product. It can represent
any physical or real stage that you need. For example, the deployment in a stage may be to a collection
of servers, a cloud, or multiple clouds. In fact, you can even use a stage to represent shipping the soft-
ware to an app store, or the manufacturing process of a boxed product, or a way to group a cohort of
users for a specific version of an application.
You must be able to deploy to a stage independently of other stages in the pipeline. There should be no
dependency between stages in your pipeline. For example, your pipeline might consist of two stages A
and B, and your pipeline could deploy Release 2 to A and Release 1 to B. If you make any assumptions in
B about the existence of a certain release in A, the two stages are not independent.
Here are some suggestions and examples for stages:
●● Dev, QA, Prod - As new builds are produced, they can be deployed to Dev. They can then be promot-
ed to QA, and finally to Prod. At any time, each of these stages may have a different release (set of
build artifacts) deployed to them. This is a good example of the use of stages in a release pipeline.
●● Customer adoption rings (for example, early adopter ring, frequent adopter ring, late adopter ring)
- You typically want to deploy new or beta releases to your early adopters more often than to other
users. Therefore, you are likely to have different releases in each of these rings. This is a good example
of the use of stages in a pipeline.
●● Database and web tiers of an application - These should be modeled as a single stage because you
want the two to be in sync. If you model these as separate stages, you risk deploying one build to the
database stage and a different build to the web tier stage.
●● Staging and production slots of a web site - There is clearly an interdependence between these two
slots. You do not want the production slot to be deployed independently of the build version currently
deployed to the staging slot. Therefore, you must model the deployment to both the staging and
production slots as a single stage.
●● Multiple geographic sites with the same application - In this example, you want to deploy your
website to many geographically distributed sites around the globe and you want all of them to be the
same version. You want to deploy the new version of your application to a staging slot in all the sites,
test it, and - if all of them pass - swap all the staging slots to production slots. In this case, given the
interdependence between the sites, you cannot model each site as a different stage. Instead, you
must model this as a single stage with parallel deployment to multiple sites (typically by using jobs).
●● Multiple test stages to test the same application - Having one or more release pipelines, each with
multiple stages intended to run test automation for a build, is a common practice. This is fine if each
of the stages deploys the build independently, and then runs tests. However, if you set up the first
stage to deploy the build, and subsequent stages to test the same shared deployment, you risk
overriding the shared stage with a newer build while testing of the previous builds is still in progress.
You may need to rethink your strategy around stages. For example, a stage is not necessarily a long-lived entity. When we
talk about Continuous Delivery, where we might deploy our application multiple times a day, we may
assume that the application is also tested every time the application is deployed. The question that we
need to ask ourselves is, do we want to test in an environment that is already in use, or do we want a
testing environment that is clean from the start?
On many occasions both scenarios are valid. Sometimes you want to start from scratch, and sometimes
you want to know what happens if you refresh the environment. In a DevOps world, we see infrastructure
as just another piece of software (Infrastructure as Code). Using Cloud technology combined with
Infrastructure as Code gives us new possibilities when it comes to environments. We are not limited to a
fixed number of environments anymore. Instead, we can spin up environments on demand. When we
want to test something, we spin up a new environment, deploy our code and run our tests. When we are
done, we can clean up the environment. Traditional labels for environments, therefore, do not apply
anymore. Let's take Test as an example. Maybe we have different test environments, one for load testing,
one for integration testing, one for system testing and one for functional testing. The sky is the limit!
Depending on the needs of the organization and the DevOps teams, the number of stages and the
purpose of stages vary. Some organizations stick to the DTAP (Dev, Test, Acceptance, Production) where
others deploy directly to production with temporary stages in between.
Important things to consider are therefore:
●● Is your stage long lived or short lived?
●● What is the purpose of this specific stage?
●● Who is going to use it?
●● Is your target application overwriting an existing one, or would it always be a fresh install?
●● Do you need a new stage for bug fixes?
●● Do you need an isolated environment with encrypted data or disconnected from a network?
●● Can you afford downtime?
●● Who is the owner of the stage? Who can apply changes?
These and maybe other considerations need to play a crucial role in defining the number of stages and
the purpose of stages.
Everything you need to know about deployment stages in combination with Azure DevOps can be found on Microsoft Docs7
Discussion
What deployment stages would you define for your or-
ganization?
Have a discussion about what deployment stages you recognize in your organization.
Consider the following things:
●● Is your stage long lived or short lived?
●● What is the purpose of this specific stage?
●● Who is going to use it?
7 https://docs.microsoft.com/en-us/azure/devops/pipelines/release/environments?view=vsts
●● Is your target application overwriting an existing one, or would it always be a fresh install?
●● Do you need a new stage for bug fixes?
●● Do you need an isolated environment with encrypted data or disconnected from a network?
●● Can you afford downtime?
●● Who is the owner of the stage? Who can apply changes?
Steps
Let's now take a look at the other section in the release pipeline that we have created: Stages.
1. Click on Stage 1 and in the Stage properties pane, set Stage name to Development and close the
pane.
Note: stages can be based on templates. For example, you might be deploying a web application using
node.js or Python. For this walkthrough, that won't matter because we are just focussing on defining a
strategy.
2. To add a second stage, click +Add in the Stages section and note the available options. You have a
choice to create a new stage, or to clone an existing stage. Cloning a stage can be very helpful in
minimizing the number of parameters that need to be configured. But for now, just click New stage.
3. When the Select a template pane appears, scroll down to see the available templates. For now, we
don't need any of these, so just click Empty job at the top, then in the Stage properties pane, set
Stage name to Test, then close the pane.
4. Hover over the Test stage and notice that two icons appear below. These are the same options that
were available in the menu drop down that we used before. Click the Clone icon to clone the stage to
a new stage.
5. Click on the Copy of Test stage and in the stage properties pane, set Stage name to Production and
close the pane.
We have now defined a very traditional deployment strategy. Each of the stages contains a set of tasks,
and we will look at those tasks later in the course.
Note: The same artifact sources move through each of the stages.
The lightning bolt icon on each stage shows that we can set a trigger as a pre-deployment condition. The person icons on both ends of a stage show that we can have pre- and post-deployment approvers.
Concurrent stages
You'll notice that at present we have all the stages one after the other in a sequence. It is also possible to have concurrent stages. Let's see an example.
6. Click the Test stage, and on the stage properties pane, set Stage name to Test Team A and close the
pane.
7. Hover over the Test Team A stage, and click the Clone icon that appears, to create a new cloned
stage.
8. Click the Copy of Test Team A stage, and on the stage properties pane, set Stage name to Test
Team B and close the pane.
9. Click the Pre-deployment conditions icon (i.e. the lightning bolt) on Test Team B to open the
pre-deployment settings.
10. In the Pre-deployment conditions pane, note that the stage can be triggered in three different ways:
The stage can immediately follow Release. (That is how the Development stage is currently configured). It
can require manual triggering. Or, more commonly, it can follow another stage. At present, it is following
Test Team A but that's not what we want.
11. From the Stages drop down list, choose Development and uncheck Test Team A, then close the pane.
We now have two concurrent Test stages.
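For comparison, a similar strategy expressed as a multi-stage YAML pipeline might look like the sketch below: both test stages depend only on Development, so they run concurrently. The stage names and the placeholder script steps are assumptions for illustration.
# Sketch: Development first, then two test stages in parallel, then Production.
stages:
- stage: Development
  jobs:
  - job: Deploy
    steps:
    - script: echo "Deploy to Development"
- stage: TestTeamA
  dependsOn: Development
  jobs:
  - job: Deploy
    steps:
    - script: echo "Deploy for Test Team A"
- stage: TestTeamB
  dependsOn: Development
  jobs:
  - job: Deploy
    steps:
    - script: echo "Deploy for Test Team B"
- stage: Production
  dependsOn:
  - TestTeamA
  - TestTeamB
  jobs:
  - job: Deploy
    steps:
    - script: echo "Deploy to Production"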
Stage vs Environment
You may have wondered why these items are called Stages and not Environments.
In the current configuration, we are in fact using them for different environments. But this is not always
the case. Here is a deployment strategy based upon regions instead:
Azure DevOps pipelines are very configurable and support a wide variety of deployment strategies. The
name Stages is a better fit than Environment even though the stages can be used for environments.
For now, let's give the pipeline a better name and save the work.
12. At the top of the screen, hover over the New release pipeline name and when a pencil appears, click
it to edit the name. Type Release to all environments as the name and hit enter or click elsewhere on
the screen.
13. For now, save the environment based release pipeline that you have created by clicking Save, then in
the Save dialog box, click OK.
Scheduled Triggers
This speaks for itself: it allows you to set up a time-based manner to start a new release, for example every night at 3:00 AM or at 12:00 PM. You can have one or multiple schedules per day, but a scheduled trigger will always run at the specified time.
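In YAML pipelines, a comparable time-based trigger can be written as a cron schedule, as in the sketch below; the branch name and time are just examples.
# Sketch: trigger a run every weekday at 3:00 AM UTC on the master branch.
schedules:
- cron: "0 3 * * 1-5"
  displayName: Nightly release
  branches:
    include:
    - master
  always: true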
Manual trigger
With a manual trigger, a person or system triggers the release based on a specific event. When it is a person, they probably use some UI to start a new release. When it is an automated process, most likely some event will occur, and by using the automation engine, which is usually part of the release management tool, you can trigger the release from another system.
As we mentioned in the introduction, Continuous Delivery is not only about deploying multiple times a
day, it is about being able to deploy on demand. When we define our cadence, questions that we should
ask ourselves are:
●● Do we want to deploy our application?
●● Do we want to deploy multiple times a day?
●● Can we deploy to a stage? Is it used?
For example, a tester who is testing an application during the day might not want a new version of the app deployed during the test phase.
Another example: when deployment of your application incurs downtime, you do not want to deploy while users are using the application.
The frequency of deployment, or cadence, differs from stage to stage. A typical scenario that we often
see is that continuous deployment happens to the development stage. Every new change ends up there
once it is completed and built. Deploying to the next phase does not always occur multiple times a day but only during the night.
When you are designing your release strategy, choose your triggers carefully and think about the re-
quired release cadence.
Some things we need to take into consideration are:
●● What is your target environment?
●● Is it used by one team or is it used by multiple teams?
●● If a single team uses it, you can deploy frequently. Otherwise, you need to be a bit more careful.
●● Who are the users? Do they want a new version multiple times a day?
●● How long does it take to deploy?
●● Is there downtime? What happens to performance? Are users impacted?
Some tools distinguish between a release and a deployment. This is what we talked about in the introduction. You have to realize that a trigger for the release pipeline only creates a new release. In most cases, you also need to set up triggers for the various stages to start deployments. For example, you can set up an automatic deployment to the first stage after the creation of a release and, after that, when the deployment to the first stage is successful, start the deployment to the next stage(s).
For more information, see also:
●● Release triggers8
●● Stage Triggers9
Steps
Let's now take a look at when our release pipeline is used to create deployments. Mostly, this will involve
the use of triggers.
When we refer to a deployment, we are referring to each individual stage, and each stage can have its
own set of triggers that determine when the deployment occurs.
1. Click the lightning bolt on the _Parts Unlimited-ASP.NET-CI artifact.
8 https://docs.microsoft.com/en-us/azure/devops/pipelines/release/triggers?view=vsts
9 https://docs.microsoft.com/en-us/azure/devops/pipelines/release/triggers?view=vsts#env-triggers
2. In the Continuous deployment trigger pane, click the Disabled option to enable continuous deployment. It will then say Enabled.
Once this is selected, every time that a build completes, a deployment of the release pipeline will start.
✔️ Note: You can filter which branches affect this, so for example you could choose the master branch or
a particular feature branch.
Scheduled Deployments
You might not want to have a deployment commence every time a build completes. That might be very
disruptive to testers downstream if it was happening too often. Instead, it might make sense to set up a
deployment schedule.
3. Click on the Scheduled release trigger icon to open its settings.
4. In the Scheduled release trigger pane, click the Disabled option to enable scheduled release. It will
then say Enabled and additional options will appear.
You can see in the screenshot above that a deployment using the release pipeline would now occur each weekday at 3AM. This might be convenient when, for example, you share a stage with testers who work during the day. You don't want to constantly deploy new versions to that stage while they're working. This setting would create a clean, fresh environment for them at 3AM each weekday.
✔️ Note: The default timezone is UTC. You can change this to suit your local timezone as this might be
easier to work with when creating schedules.
5. For now, we don't need a scheduled deployment, so click the Enabled button again to disable the
scheduled release trigger and close the pane.
Pre-deployment Triggers
6. Click the lightning bolt on the Development stage to open the pre-deployment conditions.
✔️ Note: Both artifact filters and a schedule can be set at the pre-deployment for each stage rather than
just at the artifact configuration level.
Deployment to any stage doesn't happen automatically unless you have chosen to allow that.
●● Infrastructure health. Execute monitoring and validate the infrastructure against compliance rules after
deployment, or wait for proper resource utilisation and a positive security report.
In short, approvals and gates give you additional control over the start and completion of the deploy-
ment pipeline. They can usually be set up as a pre-deployment and post-deployment condition, that can
include waiting for users to approve or reject deployments manually, and checking with other automated
systems until specific requirements are verified. In addition, you can configure a manual intervention to
pause the deployment pipeline and prompt users to carry out manual tasks, then resume or reject the
deployment.
To find out more about Release Approvals and Gates, check these documents.
●● Release approvals and gates overview10
●● Release Approvals11
●● Release Gates12
Steps
Let's now take a look at when our release pipeline needs manual approval before deployment of a stage
starts, or manual approval that the deployment of a stage completed as expected.
While DevOps is all about automation, manual approvals are still very useful. There are many scenarios where they are needed. For example, a product owner might want to sign off a release before it moves to production. Or the scrum team wants to make sure that no new software is deployed to the test environment before someone signs off on it, because they might need to find an appropriate time if it's constantly in use.
This can help to gain trust in the DevOps processes within the business.
Even if the process will later be automated, people might still want to have a level of manual control until
they become comfortable with the processes. Explicit manual approvals can be a great way to achieve
that.
Let's try one.
1. Click the pre-deployment conditions icon for the Development stage to open the settings.
10 https://docs.microsoft.com/en-us/azure/devops/pipelines/release/approvals/approvals?view=vsts
11 https://docs.microsoft.com/en-us/azure/devops/pipelines/release/approvals/approvals?view=vsts
12 https://docs.microsoft.com/en-us/azure/devops/pipelines/release/approvals/gates?view=vsts
2. Click the Disabled button in the Pre-deployment approvals section to enable it.
3. In the Approvers list, find your own name and select it. Then set the Timeout to 1 Days.
Note: Approvers is a list, not just a single value. If you add more than one person in the list, you can also
choose if they need to approve in sequence, or if either or both approvals are needed.
4. Take note of the approver policy options that are available:
It is very common to not allow a user who requests a release or deployment to also approve it. In this
case, we are the only approver so we will leave that unchecked.
5. Close the Pre-deployment conditions pane and notice that a checkmark has appeared beside the
person in the icon.
8. In the Create a new release pane, note the available options, then click Create.
9. In the upper left of the screen, you can see that a release has been created.
10. At this point, an email should have been received, indicating that an approval is required.
At this point, you could just click the link in the email, but instead, we'll navigate within Azure DevOps to
see what's needed.
11. Click on the Release 1 Created link (or whatever number it is for you) in the area we looked at in Step
9 above. We are then taken to a screen that shows the status of the release.
You can see that a release has been manually triggered and that the Development stage is waiting for an
approval. As an approver, you can now perform that approval.
12. Hover over the Development stage and click the Approve icon that appears.
Note: Options to cancel the deployment or to view the logs are also provided at this point.
13. In the Development approvals window, add a comment and click Approve.
The deployment stage will then continue. Watch as each stage proceeds and succeeds.
Steps
Let's now take a look at when our release pipeline needs to perform automated checks for issues like
code quality, before continuing with the deployments. That automated approval phase is achieved by
using Release Gates.
Let's take a look at configuring a release gate.
1. Click the lightning icon on the Development stage to open the pre-deployment conditions settings.
2. In the Pre-deployment conditions pane, click the Disabled button beside Gates to enable them.
3. Click +Add to see the available types of gates, then click Query work items.
We will use the Query work items gate to check if there are any outstanding bugs that need to be dealt
with. It does this by running a work item query. This is an example of what is commonly called a Quality
Gate.
4. Set Display name to No critical bugs allowed, and from the Query drop down list, choose Critical
Bugs. Leave the Upper threshold set to zero because we don't want to allow any bugs at all.
5. Click the drop down beside Evaluation options to see what can be configured. While 15 minutes is a
reasonable value in production, for our testing, change The time between re-evaluation of gates to
5 Minutes.
The release gate doesn't just fail or pass a single time. It can keep evaluating the status of the gate. It
might fail the first time, but after re-evaluation, it might then pass if the underlying issue has been
corrected.
6. Close the pane and click Save and OK to save the work.
7. Click Create release to start a new release, and in the Create a new release pane, click Create.
9. If it is waiting for approval, click Approve to allow it to continue, and in the Development pane, click
Approve.
After a short while, you should see the release continuing and then entering the phase where it will
process the gates.
10. In the Development pane, click Gates to see the status of the release gates.
You will notice that the gate failed the first time it was checked. In fact, it will be stuck in the processing
gates stage, as there is a critical bug. Let's look at that bug and resolve it.
11. Close the pane and click Save then OK to save the work.
13. In the Queries window, click All to see all the available queries.
You will see that there is one critical bug that needs to be resolved.
15. In the properties pane for the bug, change the State to Done, then click Save.
Note that there are now no critical bugs that will stop the release.
17. Return to the release by clicking Pipelines then Releases in the main menu, then clicking the name of
the latest release.
18. When the release gate is checked next time, the release should continue and complete successfully.
Clean up
To avoid excessive wait time in later walkthroughs, we'll disable the release gates.
19. In the main menu, click Pipelines, then click Releases, then click Edit to open the release pipeline
editor.
20. Click the Pre-deployment conditions icon (i.e. the lightning bolt) on the Development task, and in
the Pre-deployment conditions pane, click the switch beside Gates to disable release gates.
21. Click Save, then click OK.
Measuring quality
How do you measure the quality of your release process? The quality of the release process cannot be measured directly, because it is a process. What you can measure is how well that process works. A release process that constantly changes can be an indication that something is wrong. Likewise, if your releases constantly fail, and you constantly have to update your release process to make them work, that can also be an indication that something is wrong with your release process.
Perhaps something is wrong with the schedule on which your release runs, and you notice that releases always fail on a particular day or at a certain time. Or a release always fails after deployment to another environment. This can be an indication that some things are dependent on, or related to, each other.
To keep track of the quality of your release process, create visualisations of the quality of all the releases that follow the same release process or release pipeline. For example, add a dashboard widget that shows the status of every release.
The release also has a quality aspect, but this is tightly related to the quality of the actual deployment
and the package that has been deployed.
When we want to measure the quality of a release itself, we can perform all kinds of checks within the pipeline. You can execute different types of tests, such as integration tests, load tests, or UI tests, while running your pipeline, and check the quality of the release that you are deploying.
Using a quality gate is also a perfect way to check the quality of your release. There are many different quality gates. For example, a gate that monitors whether everything is healthy on your deployment targets, or a work item gate that verifies the quality of your requirements process. You can add additional security and compliance checks. For example, do we comply with the 4-eyes principle, and do we have the proper traceability?
●● Compliance checks
Document store
A commonly used way of storing release notes is to create text files or documents in some document store. This way, the release notes are stored together with other documents. The downside of this approach is that there is no direct connection between the release in the release management tool and the release notes that belong to that release.
Wiki
The approach most used by customers is to store the release notes in a wiki, for example Confluence from Atlassian, SharePoint wiki, SlimWiki, or the wiki in Azure DevOps.
The release notes are created as a page in the wiki, and by using hyperlinks, relations can be made to the build, the release, and the artifacts.
In a work item
Another option is to store your release notes as part of your work items. Work items can be Bugs, Tasks,
Product Backlog Items or User Stories. To save release notes in work items, you can create or use a
separate field within the work item. In this field, you type the publicly available release notes that will be
communicated to the customer. With a script or specific task in your build and release pipeline, you can
then generate the release notes and store them as an artifact or publish them to an internal or external
website.
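As a rough illustration of that last option, the sketch below shows a build step that drafts simple release notes from recent commit messages and publishes them as an artifact. It is only a minimal example; the git log format, file locations, and artifact name are assumptions you would adapt to your own process.

steps:
- script: |
    mkdir -p "$(Build.ArtifactStagingDirectory)/notes"
    # draft notes from the last 20 commit messages (illustrative only)
    git log -20 --pretty=format:"* %s (%an)" > "$(Build.ArtifactStagingDirectory)/notes/release-notes.md"
  displayName: Generate draft release notes
- task: PublishBuildArtifacts@1
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)/notes'
    ArtifactName: 'release-notes'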
13 https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-XplatGenerateReleaseNotes
14 https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-WIKIUpdater-Tasks
15 https://www.atlassian.com/software/confluence
16 https://azure.microsoft.com/en-us/services/devops/wiki/
When choosing the right Release Management tool, you should look at the capabilities of all the different components and map them to your needs. There are many tools available in the marketplace, some of which we will discuss in the next chapter. The most important thing to note is that not every vendor or tool treats Release Management in the same manner.
The tools in the marketplace can be divided into two categories:
●● Tools that can do Build, Continuous Integration, and Deployment
●● Tools that can do Release Management
In many cases, companies only require the deployment part of Release Management. Deployment, or installing software, can be done by any of the build tools out there, primarily because the technical part of a release is executing a script or running a program. Release Management that requires approvals, quality gates, and different stages needs a different kind of tool; such tools usually integrate tightly with the build and CI tools but are not the same thing.
Previously, we discussed all the components that are part of the release pipeline; here we will briefly highlight the things you need to consider when choosing a Release Management tool.
Stages
Running a Continuous Integration pipeline that builds and deploys your product is a very common scenario.
But what if you want to deploy the same release to different environments? When choosing the right release management tool, consider the following when it comes to stages (or environments):
●● Can you use the same artifact to deploy to different stages?
●● Can you vary the configuration between stages?
●● Can you have different steps for each stage?
●● Traceability
●● Can we see where the released software originates from (which code)?
●● Can we see the requirements that led to this change?
●● Can we follow the requirements through the code, build, and release?
●● Auditability
●● Can we see who changed the release process, when, and why?
●● Can we see who deployed a new release, when, and why?
Security is vital here. If people can do everything, including deleting evidence, that is not acceptable. Setting up the right roles, permissions, and authorisation is important to protect your system and your pipeline.
When looking at an appropriate Release Management tool, consider the following:
●● Does it integrate with your company's Active Directory?
●● Can you set up roles and permissions?
●● Is there change history of the release pipeline itself?
●● Can you ensure the artifact did not change during the release?
●● Can you link requirements to the release?
●● Can you link source code changes to the release pipeline?
●● Can you enforce approval or 4-eyes principle?
●● Can you see release history and the people who triggered the release?
Jenkins
The leading open source automation server, Jenkins provides hundreds of plugins to support building,
deploying and automating any project.
●● On-premises system; offered as SaaS by third parties
Links
●● Jenkins17
●● Tutorial: Jenkins CI/CD to deploy an ASP.NET Core application to Azure Web App service18
●● Azure Friday - Jenkins CI/CD with Service Fabric19
Circle CI
CircleCI’s continuous integration and delivery platform helps software teams rapidly release code with confidence by automating the build, test, and deploy process. CircleCI offers a modern software development platform that lets teams ramp quickly, scale easily, and build confidently every day.
●● CircleCI is available as a cloud-based or an on-premises system
●● REST API — you have access to projects, builds, and artifacts
●● The result of a build is an artifact
●● Integration with GitHub and BitBucket
●● Integrates with various clouds
●● Not part of a bigger suite
●● Not fully customizable
Links
●● CircleCI20
●● How to get started on CircleCI 2.0: CircleCI 2.0 Demo21
17 https://jenkins.io/
18 https://cloudblogs.microsoft.com/opensource/2018/09/21/configure-jenkins-cicd-pipeline-deploy-asp-net-core-application/
19 https://www.youtube.com/watch?v=5RYmooIZqS4
20 https://circleci.com/
21 https://www.youtube.com/watch?v=KhjwnTD4oec
Azure Pipelines
●● Integration with many build and source control systems (GitHub, Jenkins, Azure Repos, Bitbucket, Team Foundation Version Control, etc.)
●● Cross-platform support, all languages and platforms
●● Rich marketplace with extra plugins, build tasks and release tasks and dashboard widgets
●● Part of the Azure DevOps suite. Tightly integrated
●● Fully customizable
●● Manual approvals and Release Quality Gates supported
●● Integrated with (Azure) Active Directory
●● Extensive roles and permissions
Links
●● Azure Pipelines22
●● Building and Deploying your Code with Azure Pipelines23
GitLab Pipelines
GitLab helps teams automate the release and delivery of their applications, enabling them to shorten the delivery lifecycle, streamline manual processes, and accelerate team velocity. With Continuous Delivery (CD) built into the pipeline, deployments can be automated to multiple environments, such as staging and production, and advanced features such as canary deployments are supported. Because the configuration and definition of the application are version controlled and managed, it is easy to configure and deploy your application on demand.
GitLab24
Atlassian Bamboo
Bamboo is a continuous integration (CI) server that can be used to automate the release management for
a software application, creating a Continuous Delivery pipeline.
Atlassian Bamboo25
XL Deploy/XL Release
XL Release is an end-to-end pipeline orchestration tool for Continuous Delivery and DevOps teams. It
handles automated tasks, manual tasks, and complex dependencies and release trains. And XL Release is
designed to integrate with your change and release management tools.
xl-release - XebiaLabs26
22 https://azure.microsoft.com/en-us/services/devops/pipelines/
23 https://www.youtube.com/watch?v=NuYDAs3kNV8
24 https://about.gitlab.com/stages-devops-lifecycle/release/
25 https://www.atlassian.com/software/bamboo/features
26 https://xebialabs.com/products/xl-release/
Multiple choice
Would adding a feature flag increase or decrease technical debt?
Increase
Decrease
Suggested answer
You plan to slowly increase the traffic to a newer version of your site. What type of deployment pattern is
this?
Suggested answer
When you want to change an immutable object of any type, what do you do?
Suggested answer
What can you use to prevent a deployment in Azure DevOps when a security testing tool finds a compliance
problem?
Answers
Multiple choice
Would adding a feature flag increase or decrease the cyclomatic complexity of the code?
■■ Increase
Decrease
Multiple choice
Would adding a feature flag increase or decrease technical debt?
■■ Increase
Decrease
You plan to slowly increase the traffic to a newer version of your site. What type of deployment pattern is
this?
Canary release
When you want to change an immutable object of any type, what do you do?
You make a new one and (possibly) remove the old one
What can you use to prevent a deployment in Azure DevOps when a security testing tool finds a compli-
ance problem?
Release gate
Module 11 Set up a Release Management Workflow
Module Overview
Continuous Delivery is very much about enabling the teams within your organization to deliver software on demand. Making it possible to press a button at any time of the day and still have a good product means a number of things: the code needs to be of high quality, the build needs to be fully automated and tested, and the deployment of the software needs to be fully automated and tested as well.
Now we need to dive a little further into the release management tooling. We will include a lot of things coming from Azure Pipelines, part of the Azure DevOps suite. Azure DevOps is an integrated solution for implementing DevOps and Continuous Delivery in your organization. We will cover some specifics of Azure Pipelines, but this does not mean the concepts do not apply to other products available in the marketplace. Many of the other tools share the same concepts and only differ in naming.
Release pipelines
A release pipeline, in its simplest form, is nothing more than the execution of a number of steps. In this module, we will dive a little further into the details of one specific stage: the steps that need to be executed and the mechanism you need to execute those steps within the pipeline.
In this module, we will talk about the agents and agent pools that you might need to execute your release pipeline. We will look at variables for the release pipeline and the various stages.
After that, we dive into the tasks that you can use to execute your deployment. Do you want to use script files, or do you want to use specific tasks that each perform one job very well? For example, the marketplaces of both Azure DevOps and Jenkins contain many tasks that you can use to make your life a lot easier.
We will talk about secrets and secret management in your pipeline, a fundamental part of securing not only your assets but also the process of releasing your software. At the end of the module, we will talk about alerting mechanisms: how to report on your software, how to report on quality, and how to get notified by using service hooks. Finally, we will dive a little further into automatic approvals using automated release gates.
Learning objectives
After completing this module, students will be able to:
●● Explain the terminology used in Azure DevOps and other Release Management Tooling
●● Describe what a Build and Release task is, what it can do, and some available deployment tasks
●● Classify an Agent, Agent Queue, and Agent Pool
●● Explain why you sometimes need multiple release jobs in one release pipeline
●● Differentiate between multi-agent and multi-configuration release jobs
●● Use release variables and stage variables in your release pipeline
●● Deploy to an environment securely using a service connection
●● Embed testing in the pipeline
●● List the different ways to inspect the health of your pipeline and release by using alerts, service hooks,
and reports
●● Create a release gate
Add steps to specify what you want to build, the tests that you want to run, and all of the other steps
needed to complete the build process. There are steps for building, testing, running utilities, packaging,
and deploying.
If a task is not available, you can find many community tasks in the marketplace. Jenkins, Azure DevOps, and Atlassian each have an extensive marketplace where additional tasks can be found.
Links
For more information, see also:
●● Task types & usage1
●● Tasks for Azure2
●● Atlassian marketplace3
●● Jenkins Plugins4
●● Azure DevOps Marketplace5
1 https://docs.microsoft.com/en-us/azure/devops/pipelines/process/tasks?view=azure-devops&tabs=yaml
2 https://github.com/microsoft/azure-pipelines-tasks
3 https://marketplace.atlassian.com/addons/app/bamboo/trending
4 https://plugins.jenkins.io/
5 https://marketplace.visualstudio.com/
Deploy
Typical deployment tasks available out of the box include tasks to:
●● Distribute app builds to testers and users via App Center
●● Start, stop, restart, or slot swap an Azure App Service
●● Deploy an Azure Cloud Service
●● Incorporate secrets from an Azure Key Vault into a release pipeline
●● Run your scripts and make changes to your Azure DB for MySQL
●● Run a PowerShell script within an Azure environment
●● Deploy an Azure SQL database using DACPAC or run scripts using SQLCMD
●● Build a machine image using Packer
●● Chef (https://docs.microsoft.com/en-us/azure/devops/pipelines/tasks/deploy/chef?view=vsts)
●● Run scripts with knife commands on your Chef workstation
●● Build, tag, push, or run Docker images, or run a Docker command; the task can be used with Docker or Azure Container Registry (Docker)
●● Build, push, or run multi-container Docker applications (Docker Compose)
●● Deploy, configure, or update your Kubernetes cluster in Azure Container Service by running helm commands
●● Create or update a website, web app, virtual directory, or application pool on a machine group
●● Kubernetes (https://docs.microsoft.com/en-us/azure/devops/pipelines/tasks/deploy/kubernetes?view=vsts)
●● Execute PowerShell scripts on remote machine(s)
●● Deploy a Service Fabric application to a cluster using a compose file
●● SSH (https://docs.microsoft.com/en-us/azure/devops/pipelines/tasks/deploy/ssh?view=vsts)
●● Copy files to remote machine(s)
A full list of deployment tasks can be found in the task documentation6.
6 https://docs.microsoft.com/en-us/azure/devops/pipelines/tasks/index?view=vsts
An agent can usually run on different operating systems, and in some cases is offered as a SaaS (hosted agent) solution.
Agents are organized into pools, and access to the pools is provided by using queues.
Agent pools
Agent pools are used to organize and define permission boundaries around your agents. In Azure
DevOps, Pools are scoped to your organization. You can share your pool across multiple team project
collections.
Agent Queues
An agent queue provides access to a pool of agents. When you create a build or release definition, you
specify which queue it uses. Queues are scoped to your team project collection so that you can share
them across build and release definitions in multiple team projects.
Private (or Custom) agents. Private agents are provisioned on private virtual machines (VMs) and are
custom built to accommodate the project's needs.
System capabilities
System capabilities are name/value pairs that you can use to ensure that your build definition is run only
by agents that meet the criteria that you specified. Environment variables automatically appear in the list.
Some capabilities (such as frameworks) are also added automatically.
When a build is queued, the system sends the job only to agents that have the capabilities demanded by
the build definition7.
7 https://docs.microsoft.com/en-us/azure/devops/pipelines/build/options?view=vsts&tabs=yaml
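In a YAML pipeline, capabilities are requested through demands on the pool. A minimal sketch, assuming a self-hosted pool named Default and an agent that advertises an npm capability (both names are examples):

pool:
  name: Default
  demands:
  - npm                       # only agents with an 'npm' capability are eligible
  - Agent.OS -equals Linux    # demand a specific value for a capability
steps:
- script: npm --version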
User capabilities
You can manually add capabilities (name/value pairs) that you know your agent has and that you want
your build definition to be able to demand8.
If this also does not work, you need to fall back on the traditional approach of installing agents on the target environment. In Azure DevOps, this is called Deployment Groups. The agents on the target servers are registered and do everything themselves. This means that they only need outbound access, which is sometimes the only option allowed.
Learn more
For more information, see also:
●● Azure Pipelines agents9
●● Provision deployment groups10
●● Microsoft-hosted agents11
●● Agent pools12
●● Self-hosted Linux agents13
●● Deploy an Azure Pipeline agent in Windows14
8 https://docs.microsoft.com/en-us/azure/devops/pipelines/agents/v2-linux?view=vsts
9 https://docs.microsoft.com/en-us/azure/devops/pipelines/agents/agents?view=vsts
10 https://docs.microsoft.com/en-us/azure/devops/pipelines/release/deployment-groups/?view=vsts
11 https://docs.microsoft.com/en-us/azure/devops/pipelines/agents/hosted?view=vsts&tabs=yaml
12 https://docs.microsoft.com/en-us/azure/devops/pipelines/agents/pools-queues?view=vsts
13 https://docs.microsoft.com/en-us/vsts/build-release/actions/agents/v2-linux?view=vsts
14 https://docs.microsoft.com/en-us/azure/devops/pipelines/agents/v2-windows?view=vsts
15 https://docs.microsoft.com/en-us/azure/devops/pipelines/agents/v2-osx?view=vsts
16 https://docs.microsoft.com/en-us/azure/devops/pipelines/agents/v2-linux?view=vsts
17 https://docs.microsoft.com/en-us/azure/devops/pipelines/process/phases?view=vsts&tabs=yaml
18 https://docs.microsoft.com/en-us/azure/devops/pipelines/process/phases?view=vsts&tabs=yaml
Container Jobs
Containers offer a lightweight abstraction over the host operating system. You can select the exact
versions of operating systems, tools, and dependencies that your build requires. When you specify a
container in your pipeline, the agent will first fetch and start the container. Then, each step of the job will
run inside the container.
For more information, see Define container jobs (YAML)19.
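A minimal sketch of a container job follows; the node:16 image is just an example.

pool:
  vmImage: 'ubuntu-latest'
container: node:16            # the agent pulls and starts this image first
steps:
- script: node --version      # each step runs inside the container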
19 https://docs.microsoft.com/en-us/azure/devops/pipelines/process/container-phases?view=vsts&tabs=yaml
20 https://docs.microsoft.com/en-us/azure/devops/pipelines/process/deployment-group-phases?view=vsts&tabs=yaml
●● Multi-agent: Run the same set of tasks on multiple agents using the specified number of agents. For
example, you can run a broad suite of 1000 tests on a single agent. Or, you can use two agents and
run 500 tests on each one in parallel.
For more information, see Specify jobs in your pipeline21.
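As a rough sketch, the two options look like this in YAML; the job names, agent count, and configurations are illustrative only.

jobs:
- job: ParallelTests
  strategy:
    parallel: 2                              # multi-agent: same steps on 2 agents, each takes a slice
  steps:
  - script: echo "Slice $(System.JobPositionInPhase) of $(System.TotalJobsInPhase)"

- job: BuildConfigurations
  strategy:
    matrix:                                  # multi-configuration: one job per entry
      Debug:
        buildConfiguration: 'Debug'
      Release:
        buildConfiguration: 'Release'
  steps:
  - script: echo "Building $(buildConfiguration)"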
Discussion
How to use Release jobs
Do you see a purpose for Release Jobs in your pipeline and how would you set it up?
Topics you might want to consider are:
●● Do you have artifacts from multiple sources?
●● Do you want to run deployments on different servers simultaneously?
●● Do you need multiple platforms?
●● How long does your release take?
●● Can you run your deployment in parallel or does it need to run in sequence?
Release Variables
Variables give you a convenient way to get critical bits of data into various parts of the pipeline. As the name suggests, the contents of a variable may change between releases, stages, or jobs of your pipeline. The system predefines some variables, and you are free to add your own as well.
The most important thing you need to think about when using variables in the release pipeline is the
scope of the variable. You can imagine that a variable containing the name of the target server may vary
between a Development environment and a Test Environment.
Within the release pipeline, you can use variables in different scopes and different ways.
For more information, see Release variables and debugging22.
21 https://docs.microsoft.com/en-us/azure/devops/pipelines/process/phases?view=vsts&tabs=designer#multi-configuration
22 https://docs.microsoft.com/en-us/azure/devops/pipelines/release/variables?view=vsts&tabs=batch
Predefined variables
When running your release pipeline, there are always variables that you need that come from the agent
or context of the release pipeline. For example, the agent directory where the sources are downloaded,
the build number or build id, the name of the agent or any other information. This information is usually
accessible in pre-defined variables that you can use in your tasks.
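For example, a script step can simply reference predefined variables without declaring them; a small sketch:

steps:
- script: |
    echo "Build number:   $(Build.BuildNumber)"
    echo "Source branch:  $(Build.SourceBranchName)"
    echo "Agent name:     $(Agent.Name)"
    echo "Working folder: $(Agent.WorkFolder)"
  displayName: Show a few predefined variables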
Stage variables
Share values across all of the tasks within one specific stage by using stage variables. Use a stage-level
variable for values that vary from stage to stage (and are the same for all the tasks in a stage).
Variable groups
Share values across all of the definitions in a project by using variable groups. We will cover variable
groups later in this module.
On-Premises servers
In most cases, when you deploy to an on-premises server, the hardware and the operating system are already in place. The server is already there and ready; sometimes it is empty, but most of the time it is not. In this case, the release pipeline can focus on deploying the application only.
In some cases, you might want to start or stop a virtual machine (for example Hyper-V or VMware). The scripts that you use to start or stop the on-premises servers should be part of your source control and be delivered to your release pipeline as a build artifact. Using a task in the release pipeline, you can run the script that starts or stops the servers.
When you want to take it one step further and configure the server as well, you should take a look at technologies like PowerShell Desired State Configuration (DSC), or use tools like Puppet and Chef. All of these products maintain your server and keep it in a particular state. When the server drifts from that state, they (Puppet, Chef, DSC) restore the changed configuration to the original configuration.
Integrating a tool like Puppet, Chef, or PowerShell DSC into the release pipeline is no different from any other task you add.
Infrastructure as a service
When you use the cloud as your target environment, things change a little. Some organizations did a lift and shift from their on-premises servers to cloud servers. In that case, your deployment works the same way as for an on-premises server. But when you use the cloud to provide you with Infrastructure as a Service (IaaS), you can leverage the power of the cloud to start and create servers when you need them.
This is where Infrastructure as Code (IaC) starts playing a significant role. By creating a script or template, you can create a server or other infrastructure components like a SQL server, a network, or an IP address. By defining a template, or a command line script, and saving it in a script file, you can use that file in your release pipeline tasks to execute it against your target cloud. As part of your pipeline, the server (or other component) will be created. After that, you can execute the steps to actually deploy the software.
Technologies like Azure Resource Manager (ARM) templates or Terraform are great for creating infrastructure on demand.
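As a minimal sketch, a pipeline step could call the Azure CLI to create the infrastructure before the deployment steps run. The service connection name, resource names, and location below are examples only.

steps:
- task: AzureCLI@2
  displayName: Create infrastructure on demand
  inputs:
    azureSubscription: 'ARM Service Connection'   # an Azure Resource Manager service connection
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      az group create --name rg-demo-dev --location westeurope
      az appservice plan create --name plan-demo-dev --resource-group rg-demo-dev --sku B1
      az webapp create --name app-demo-dev-001 --plan plan-demo-dev --resource-group rg-demo-dev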
Platform as a Service
When you are moving from Infrastructure as a Service (IaaS) towards Platform as a Service (PaaS), you get the infrastructure from the cloud that you are running on.
For example, in Azure you can choose to create a web application. The server, the hardware, the network, the public IP address, the storage account, and even the web server are arranged by the cloud. The user only needs to take care of the web application that will run on this platform.
The only thing that you need to do is to provide the template that instructs the cloud to create a Web App. The same goes for Functions as a Service (FaaS), or serverless technologies: in Azure these are called Azure Functions, and in AWS, AWS Lambda.
You only deploy your application, and the cloud takes care of the rest. However, you need to instruct the platform (the cloud) to create a placeholder where your application can be hosted. You can define this template in ARM or Terraform, use the Azure CLI or other command line tools, or in AWS use CloudFormation. In all cases, the infrastructure is defined in a script file and lives alongside the application code in source control.
Clusters
Last but not least you can deploy your software to a cluster. A cluster is a group of servers that work
together to host high-scale applications.
When you run a cluster as Infrastructure as a Service, you need to create and maintain the cluster. This
means that you need to provide the templates to create a cluster. You also need to make sure that you
roll out updates, bug fixes and patches to your cluster. This is comparable with Infrastructure as a Service.
When you use a hosted cluster, you should consider this as Platform as a Service. You instruct the cloud to create the cluster, and you deploy your software to the cluster. When you run a container cluster, you can use container cluster technologies such as Kubernetes or Docker Swarm.
Summary
Regardless of the technology you choose to host your application, the creation, or at least the configuration, of your infrastructure should be part of your release pipeline and part of your source control repository. Infrastructure as Code is a fundamental part of Continuous Delivery and gives you the freedom to create servers and environments on demand.
Links
●● AWS Cloudformation23
●● Terraform24
●● Powershell DSC25
●● AWS Lambda26
23 https://aws.amazon.com/cloudformation/
24 https://www.terraform.io/
25 https://docs.microsoft.com/en-us/powershell/dsc/overview/overview
26 https://docs.aws.amazon.com/lambda/latest/dg/welcome.html
●● Azure Functions27
●● Chef28
●● Puppet29
●● Azure Resource Manager /ARM30
Steps
Let's now take a look at how a release pipeline can access resources that require a secure connection. In
Azure DevOps, these are implemented by Service Connections.
You can set up a service connection to environments to create a secure and safe connection to the
environment that you want to deploy to. Service connections are also used to get resources from other
places in a secure manner. For example, you might need to get your source code from GitHub.
In this case, let's take a look at configuring a service connection to Azure.
1. From the main menu in the Parts Unlimited project, click Project settings at the bottom of the
screen.
2. In the Project Settings pane, from the Pipelines section, click Service connections. Click the drop
down beside +New service connection.
27 https://azure.microsoft.com/en-us/services/functions
28 https://www.chef.io/chef/
29 https://puppet.com/
30 https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-overview
As you can see, there are many types of service connections. You can create a connection to the Apple
App Store or to the Docker Registry, to Bitbucket, or to Azure Service bus.
In this case, we want to deploy a new Azure resource, so we'll use the Azure Resource Manager option.
3. Click Azure Resource Manager to add a new service connection.
4. Set the Connection name to ARM Service Connection, click on an Azure Subscription, then select
an existing Resource Group.
Note: You might be prompted to logon to Azure at this point. If so, logon first.
Notice that what we are actually creating is a Service Principal. We will be using the Service Principal as
a means of authenticating to Azure. At the top of the window, there is also an option to set up Managed
Identity Authentication instead.
The Service Principal is a type of service account that only has permissions in the specific subscription and resource group. This makes it a very safe way to connect from the pipeline.
5. Click OK to create it. It will then be shown in the list.
6. In the main Parts Unlimited menu, click Pipelines, then Releases, then Edit to see the release pipeline. Click the link to View stage tasks.
The current list of tasks is then shown. Because we started with an empty template, there are no tasks as
yet. Each stage can execute many tasks.
7. Click the + sign to the right of Agent job to add a new task. Note the available list of task types.
8. In the Search box, enter the word storage and note the list of storage-related tasks. These include
standard tasks, and tasks available from the Marketplace.
We will use the Azure file copy task to copy one of our source files to a storage account container.
9. Hover over the Azure file copy task type, and click Add when it appears. The task will be added to
the stage but requires further configuration.
10. Click the File Copy task to see the required settings.
11. Set the Display Name to Backup website zip file, then click the ellipsis beside Source and locate the
file as follows, then click OK to select it.
We then need to provide details of how to connect to the Azure subscription. The easiest and most
secure way to do that is to use our new Service Connection.
12. From the Azure Subscription drop down list, find and select the ARM Service Connection that we
created.
13. From the Destination Type drop down list, select Azure Blob, and from the RM Storage Account
and Container Name, select the storage account, and enter the name of the container, then click
Save at the top of the screen and OK.
14. To test the task, click Create release, and in the Create a new release pane, click Create.
15. Click the new release to view the details.
16. On the release page, approve the release so that it can continue.
17. Once the Development stage has completed, you should see the file in the Azure storage account.
A key advantage of using service connections is that this type of connection is managed in a single place within the project settings, and doesn't involve connection details spread throughout the pipeline tasks.
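For reference, the same idea expressed in a YAML pipeline looks roughly like the sketch below: the service connection is referenced by name, and no credentials appear in the definition. The storage account name, container, and file path are examples only.

steps:
- task: AzureFileCopy@4
  displayName: Backup website zip file
  inputs:
    SourcePath: '$(System.DefaultWorkingDirectory)/drop/PartsUnlimited.zip'
    azureSubscription: 'ARM Service Connection'
    Destination: 'AzureBlob'
    storage: 'mystorageaccount'
    ContainerName: 'backups'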
Task Groups
A task group allows you to encapsulate a sequence of tasks, already defined in a build or a release
pipeline, into a single reusable task that can be added to a build or release pipeline, just like any other
task. You can choose to extract the parameters from the encapsulated tasks as configuration variables,
and abstract the rest of the task information.
Task groups are a way to standardize and centrally manage deployment steps for all your applications.
When you include a task group in your definitions, and then make a change centrally to the task group,
the change is automatically reflected in all the definitions that use the task group. There is no need to
change each one individually.
For more information, see Task groups for builds and releases31.
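Task groups are a feature of the classic (visual designer) pipelines. In YAML pipelines the closest equivalent is a template: a reusable file of steps with parameters. A minimal sketch, assuming a file named templates/backup-and-deploy.yml in the repository (the file name and parameter are hypothetical):

# templates/backup-and-deploy.yml
parameters:
- name: serviceConnection
  type: string
steps:
- script: echo "Backing up and deploying using ${{ parameters.serviceConnection }}"

# azure-pipelines.yml (consumer)
steps:
- template: templates/backup-and-deploy.yml
  parameters:
    serviceConnection: 'ARM Service Connection'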
Variable Groups
A variable group is used to store values that you want to make available across multiple builds and
release pipelines.
Examples
●● Store the username and password for a shared server
●● Store a shared connection string
●● Store the geolocation of an application
●● Store all settings for a specific application
For more information, see Variable Groups for Azure Pipelines and TFS32.
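In a YAML pipeline, a variable group is simply linked by name, and its values become available like any other variable; a minimal sketch (the group name matches the one created in a walkthrough later in this module):

variables:
- group: Website Test Product Details   # variables such as ProductCode become available
- name: environmentName                 # ordinary variables can be mixed in
  value: Development
steps:
- script: echo "Testing product $(ProductCode) in $(environmentName)"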
31 https://docs.microsoft.com/en-us/azure/devops/pipelines/library/task-groups?view=vsts
32 https://docs.microsoft.com/en-us/azure/devops/pipelines/library/variable-groups?view=vsts
Custom Tasks
Instead of using the out-of-the-box tasks, or a command line or shell script, you can also create your own custom build and release tasks. By creating your own tasks, you can make them available publicly, or privately to everyone you share them with.
Creating your own task has significant advantages:
●● You get access to variables that are otherwise not accessible
●● You can use and reuse a secure endpoint to a target server
●● You can distribute the task safely and efficiently across your whole organization
●● Users do not see implementation details
For more information, see Add a build or release task33.
Steps
Let's now take a look at how a release pipeline can reuse groups of tasks.
It's common to want to reuse a group of tasks in more than one stage within a pipeline or in different
pipelines.
1. In the main menu for the Parts Unlimited project, click Pipelines then click Task groups.
You will notice that you don't currently have any task groups defined.
33 https://docs.microsoft.com/en-us/azure/devops/extend/develop/add-build-task?view=vsts
There is an option to import task groups but the most common way to create a task group is directly
within the release pipeline, so let's do that.
2. In the main menu, click Pipelines then click Releases, and click Edit to open the pipeline that we have
been working on.
3. The Development stage currently has a single task. We will add another task to that stage. Click the
View stage tasks link to open the stage editor.
4. Click the + sign to the right of the Agent job line to add a new task. In the Search box, type database.
6. Set the Display name to Deploy devopslog database, and from the Azure Subscription drop down list, click ARM Service Connection.
Note: we are able to reuse our service connection here
7. In the SQL Database section, set a unique name for the SQL Server, set the Database to devopslog,
set the Login to devopsadmin, and set any suitable password.
8. In the Deployment Package section, set the Deploy type to Inline SQL Script, set the Inline SQL
Script to:
CREATE TABLE dbo.TrackingLog
(
TrackingLogID int IDENTITY(1,1) PRIMARY KEY,
TrackingDetails nvarchar(max)
);
11. Click Create task group, then in the Create task group window, set Name to Backup website zip
file and deploy devopslog. Click the Category drop down list to see the available options. Ensure
that Deploy is selected, and click Create.
In the list of tasks, the individual tasks have now disappeared and the new task group appears instead.
12. From the Task drop down list, select the Test Team A stage.
13. Click the + sign to the right of Agent job to add a new task. In the Search box, type backup and
notice that the new task group appears like any other task.
14. Hover on the task group and click Add when it appears.
Task groups allow for easy reuse of a set of tasks and limit the number of places where edits need to occur.
Walkthrough cleanup
15. Click Remove to remove the task group from the Test Team A stage.
16. From the Tasks drop down list, select the Development stage. Again click Remove to remove the
task group from the Development stage.
17. Click Save then OK.
Steps
Let's now take a look at how a release pipeline can make use of predefined sets of variables, called
Variable Groups.
Similar to the way we used task groups, variable groups provide a convenient way to avoid the need to
redefine many variables when defining stages within pipelines, and even when working across multiple
pipelines. Let's create a variable group and see how it can be used.
1. On the main menu for the Parts Unlimited project, click Pipelines, then click Library. There are
currently no variable groups in the project.
2. Click + Variable group to commence creating a variable group. Set Variable group name to Website Test Product Details.
3. In the Variables section, click +Add, then in Name, enter ProductCode, and in Value, enter REDPOLOXL.
You can see an extra column that shows a lock. It allows you to have variable values that are locked and not displayed in the configuration screens. While this is often used for values like passwords, notice that there is an option to link secrets from an Azure key vault as variables. This would be a preferable option for variables that provide credentials that need to be secured outside the project.
In this example, we are just providing details of a product that will be used in testing the website.
4. Add another variable called Quantity with a value of 12.
5. Add another variable called SalesUnit with a value of Each.
7. On the main menu, click Pipelines, then click Releases, then click Edit to return to editing the release
pipeline that we have been working on. From the top menu, click Variables.
Variable groups are linked to pipelines, rather than being directly added to them.
9. Click Link variable group, then in the Link variable group pane, click to select the Website Test Product Details variable group (notice that it shows you how many variables it contains), then in the Variable group scope, select the Development, Test Team A, and Test Team B stages.
We need the test product for development and during testing but we do not need it in production. If it
was needed in all stages, we would have chosen Release for the Variable group scope instead.
10. Click Link to complete the link.
The variables contained in the variable group are now available for use within all stages except Produc-
tion, just the same way as any other variable.
34 https://docs.microsoft.com/en-us/azure/devops/pipelines/library/service-endpoints?view=vsts
35 https://docs.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=vsts
36 https://docs.microsoft.com/en-us/vsts/build-release/concepts/library/variable-groups?view=vsts#link-secrets-from-an-azure-key-vault-as-
variables
Steps
Let's now take a look at how a release pipeline can make use of the secrets that it needs during deploy-
ment.
In the walkthrough on variable groups, you saw that variable groups can work in conjunction with Azure
Key Vault.
1. In the Azure Portal, in the key vault (that was set up in the pre-requisites section) properties, in the
Secrets section, create the following secrets using the +Generate/Import option with an Upload
option of Manual.
3. From the Tasks drop down list, click Development to open the tasks editor for the Development
stage.
5. Hover over the Azure Key Vault task and when Add appears, click it, then in the task list, click the
Azure Key Vault task to open its settings.
We can use the same service connection that was used in earlier walkthroughs.
6. Set Display name to the name of your key vault, and from the Azure subscription drop down list,
select ARM Service Connection. In the Key vault drop down list, select your key vault.
The Secrets filter can be used to define which secrets are required for the project. The default is to bring
in all secrets in the vault. It will be more secure to restrict the secrets to just those that are needed.
7. In the Secrets filter, enter database-login,database-password.
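For comparison, the same fetch expressed as a YAML step would look roughly like the sketch below. The service connection name comes from the earlier walkthrough; the vault name is an example.

steps:
- task: AzureKeyVault@1
  displayName: Fetch database secrets
  inputs:
    azureSubscription: 'ARM Service Connection'
    KeyVaultName: 'my-release-keyvault'            # example vault name
    SecretsFilter: 'database-login,database-password'
# later steps can reference the secrets as $(database-login) and $(database-password)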
12. Click Link secrets from an Azure key vault as variables. In the Azure subscription drop down list,
click ARM Service Connection, and from the Key vault name drop down list, select your key vault.
Note the warning that appears. Any service principal that needs to list secrets or get their values, needs
to have been permitted to do so, by creating an access policy. Azure DevOps is offering to configure this
for you.
13. Click Authorize to create the required access policy.
Note: you will be required to log on to Azure to perform this action
The warning should then disappear.
15. Click to select both database-login and database-password secrets, then click Ok.
19. Click Link to link the variable group to the Production stage of the pipeline. Click the drop down
beside Database Credentials so that you can see the variables that are now present.
An important advantage of using variable groups to import Azure Key Vault secrets, over adding the
Azure Key Vault task to a stage, is that they can be scoped to more than one stage in the pipeline, and
linked to more than one pipeline.
Source: http://lisacrispin.com/2011/11/08/using-the-agile-testing-quadrants/
We can make four quadrants where each side of the square defines what we are targeting with our tests.
●● Business facing, meaning the tests are more functional and are most of the time executed by end users of the system or by specialized testers who know the problem domain very well.
●● Supporting the team, meaning the tests help a development team to get constant feedback on the product so they can find bugs fast and deliver a product with quality built in.
●● Technology facing, meaning the tests are rather technical and not meaningful to business people. They are typically tests written and executed by the developers in a development team.
●● Critique product, meaning the tests are there to validate the workings of a product against its functional and non-functional requirements.
Now we can place the different test types we see into the quadrants.
For example, we can put functional tests, story tests, prototypes, and simulations in the first quadrant. These tests are there to support the team in delivering the right functionality and are business facing, since they are more functional.
In quadrant two we can place tests like exploratory tests, Usability tests, acceptance tests, etc.
In quadrant three we place tests like Unit tests, Component tests, and System or integration tests.
In quadrant four we place Performance tests, load tests, security tests, and any other non-functional
requirements test.
Now if you look at these quadrants, you can see that certain tests are easy to automate or are automated by nature. These tests are in quadrants 3 and 4.
Tests that are automatable, but most of the time not automated by nature, are the tests in quadrant 1.
Tests that are the hardest to automate are in quadrant 2.
What we also see is that the tests that cannot be automated, or are hard to automate, are tests that can be executed in an earlier phase and not after release. This is what we call shift-left, where we move the testing process more towards the development cycle.
We need to automate as many tests as possible. And we need to test as soon as possible. A few of the
principles we can use are:
●● Tests should be written at the lowest level possible
●● Write once, run anywhere including production system
●● Product is designed for testability
●● Test code is product code, only reliable tests survive
●● Test ownership follows product ownership
By testing at the lowest level possible, you will find that you have a large number of tests that do not
require infrastructure or applications to be deployed. For the tests that need an app or infrastructure, we
can use the pipeline to execute them.
To execute tests within the pipeline, we can run scripts or use tools that execute certain types of tests. On many occasions, these are external tools that you execute from the pipeline, like OWASP ZAP, SpecFlow, or Selenium. On other occasions, you can use test functionality from a platform like Azure, for example availability or load tests that are executed from within the cloud platform.
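For example, running a suite of unit tests from the pipeline can be as simple as a single task; a minimal sketch for a .NET test project (the project pattern is an example):

steps:
- task: DotNetCoreCLI@2
  displayName: Run unit tests
  inputs:
    command: 'test'
    projects: '**/*Tests.csproj'
    arguments: '--configuration Release'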
When you want to write your own automated tests, choose the language that resembles the language
from your code. In most cases, the developers that write the application should also write the test, so it
makes sense to use the same language. For example, write tests for your .Net application in .Net, and
write tests for your Angular application in Angular.
To execute Unit Tests or other low-level tests that do not need a deployed application or infrastructure,
the build and release agent can handle this. When you need to execute tests with a UI or other special-
ized functionality, you need to have a Test agent that can run the test and report the results back.
Installation of the test agent then needs to be done up front, or as part of the execution of your pipeline.