C# 12 for Cloud, Web, and Desktop Applications
www.bpbonline.com
First Edition 2024
ISBN: 978-93-55519-023
All Rights Reserved. No part of this publication may be reproduced, distributed, or transmitted in any
form or by any means, or stored in a database or retrieval system, without the prior written permission
of the publisher, with the exception of the program listings, which may be entered, stored, and executed
in a computer system, but they cannot be reproduced by means of publication, photocopy,
recording, or by any electronic or mechanical means.
All trademarks referred to in the book are acknowledged as properties of their respective owners but
BPB Publications cannot guarantee the accuracy of this information.
www.bpbonline.com
Dedicated to
My beloved parents:
Carlos Roberto Coutinho de Araujo
Sonia Maria Pereira Vivas
About the Author
This book covers modern application development with C# 12 and .NET, from the
latest features of Visual Studio 2022 and the C# language to data access with
Entity Framework Core, serverless computing and messaging on Azure, secure
secret management with Azure Key Vault, and rich client experiences with
Blazor, .NET MAUI, and WinUI. It takes a practical approach, with step-by-step
instructions, real-world case studies, and best practices that readers can
apply directly to their own cloud, web, and desktop projects.
This book is divided into 14 chapters. They cover the Visual Studio 2022 IDE,
what is new in C# 12, data and cloud services, web and desktop frameworks,
CI/CD with Docker and Azure DevOps, and unit testing and debugging. The
details are listed below:
Chapter 1: Introduction to Visual Studio 2022 - Readers will explore the
redesigned user interface, enhanced code analysis tools, and new debugging
and testing functionalities. This chapter aims to equip developers with the
knowledge and tools needed to leverage Visual Studio’s latest features for
enhanced productivity and efficiency.
Chapter 2: What is New in C# 12 - Whether experienced or new to C#, this
chapter provides valuable insights into the evolving C# landscape,
empowering developers to leverage the latest tools and advancements for
efficient and effective coding.
Chapter 3: Mastering Entity Framework Core - It explores Database-First
and Code-First approaches, covering schema creation and data model
generation. It explains data modeling concepts, data annotations, and
customization. The chapter also addresses data management tasks like
querying and modifying data using LINQ to Entities, with examples.
Chapter 4: Getting Started with Azure Functions - covers triggers and
bindings, exploring HTTP, Timer, Blob, and Queue triggers with detailed
setup instructions and best practices. The chapter also explains how bindings
facilitate interaction with Azure services like Blob Storage and Cosmos DB,
demonstrating data access and manipulation through binding examples.
Chapter 5: Azure SQL, Cosmos DB and Blob Storage - Provides step-by-step
instructions and key concepts for each service. From creating and
connecting to Azure SQL Database instances to understanding Cosmos DB
APIs and working with Blob Storage containers, readers will gain valuable
insights into leveraging these services effectively within their .NET
applications.
Chapter 6: Unleashing the Power of Async Operations with Azure
Service Bus - explores key concepts like queues, topics, and subscriptions,
providing practical instructions and best practices for implementation. From
understanding messaging patterns to configuring advanced features like
message sessions and dead-letter queues, readers will gain valuable insights
into building robust and scalable messaging solutions with Azure Service
Bus.
Chapter 7: Securing Your Apps with Azure Key Vault - Focusing on
authentication, the chapter guides readers through creating and configuring
Key Vault instances, storing secrets, and accessing sensitive data securely.
Step-by-step instructions and coverage of authentication options, including
client certificates and Azure Active Directory authentication, equip
developers with the knowledge and tools to ensure robust security measures
for their applications.
Chapter 8: Building Dynamic Web Apps with Blazor and ASP.NET -
Focusing on Blazor’s key features like Hot Reload, Security, and Strongly-
typed databinding, the chapter presents practical case studies to demonstrate
their application in real-world scenarios. Starting with an overview of Blazor
and its benefits, it covers project setup with ASP.NET and Visual Studio,
exploring both client-side and server-side hosting models.
Chapter 9: Real-time Communication with SignalR and ASP.NET -
Starting with an overview of SignalR’s benefits, it covers project setup with
ASP.NET and Visual Studio, exploring various transport protocols like
WebSockets and Server-Sent Events.
Chapter 10: Implementing MicroServices with Web APIs - covers
microservices architecture and the advantages of using Web APIs. It details
designing scalable architectures, including service discovery, load balancing,
and fault tolerance. Exploring scaling techniques like horizontal, vertical, and
auto-scaling, it provides practical instructions, best practices, and pitfalls to
avoid, empowering developers to create robust microservices architectures.
Chapter 11: CI/CD with Docker and Azure DevOps - Beginning with an
overview of Docker and Azure DevOps, it demonstrates how to automate the
CI-CD process. Covering Docker commands for building and deploying
containerized apps, it explains CI-CD concepts and their role in streamlining
development and deployment. The chapter walks through configuring build
pipelines, release pipelines, and environment management using Azure
DevOps.
Chapter 12: Building Multi-platform Apps with .NET MAUI and Blazor
Hybrid - explores its role in creating apps for various platforms. The chapter
introduces Blazor Hybrid and its integration with .NET MAUI, comparing it
with Blazor to highlight their respective benefits and drawbacks. It explains
how Blazor Hybrid enables developers to build responsive UIs seamlessly
integrated with .NET MAUI.
Chapter 13: Windows UI Library: Crafting Native Windows
Experience - Starting with an introduction to WinUI, it delves into how it
facilitates the development of responsive desktop apps. Exploring the Fluent
Design System’s principles, including depth and motion, it demonstrates
using WinUI's UI controls to implement engaging user interfaces.
Chapter 14: Unit Testing and Debugging - Craft effective unit tests using
NUnit and xUnit for code reliability. Discover automated testing frameworks
for better code quality. Dive into debugging with C# tools, mastering
techniques like breakpoints and variable inspection for efficient error
resolution, ultimately boosting developer productivity.
Code Bundle and Coloured Images
Please follow the link to download the
Code Bundle and the Coloured Images of the book:
https://rebrand.ly/fcn7t93
The code bundle for the book is also hosted on GitHub at
https://github.com/bpbpublications/CSharp12-For-Cloud-Web-and-Desktop-Applications.
In case there's an update to the code, it will be updated on the existing GitHub repository.
We have code bundles from our rich catalogue of books and videos available
at https://github.com/bpbpublications. Check them out!
Errata
We take immense pride in our work at BPB Publications and follow best
practices to ensure the accuracy of our content and to provide our subscribers
with an engaging reading experience. Our readers are our mirrors, and we
use their inputs to reflect and improve upon human errors, if any, that may
have occurred during the publishing processes involved. To let us maintain
the quality and help us reach out to any readers who might be having
difficulties due to any unforeseen errors, please write to us at:
errata@bpbonline.com
Your support, suggestions, and feedback are highly appreciated by the BPB
Publications' Family.
Did you know that BPB offers eBook versions of every book published, with PDF and ePub files
available? You can upgrade to the eBook version at www.bpbonline.com and as a print book
customer, you are entitled to a discount on the eBook copy. Get in touch with us at
business@bpbonline.com for more details.
At www.bpbonline.com, you can also read a collection of free technical articles, sign up for a
range of free newsletters, and receive exclusive discounts and offers on BPB books and eBooks.
Piracy
If you come across any illegal copies of our works in any form on the internet, we would be
grateful if you would provide us with the location address or website name. Please contact us at
business@bpbonline.com with a link to the material.
Reviews
Please leave a review. Once you have read and used this book, why not leave a review on the site
that you purchased it from? Potential readers can then see and use your unbiased opinion to make
purchase decisions. We at BPB can understand what you think about our products, and our authors
can see your feedback on their book. Thank you!
For more information about BPB, please visit www.bpbonline.com.
2. What is New in C# 12
Introduction
Structure
Objectives
C# 11 updates
Raw string literals
Generic math support
Generic attributes
Unicode Transformation Format-8 string literals
Newlines in string interpolation expressions
List patterns
File-local types
Required members
Auto-default structs
Pattern match Span<char> on a constant string
Extended nameof scope
Numeric IntPtr and UIntPtr
ref fields and scoped ref
Improved method group conversion to delegate
Warning wave 7
C# 12 updates
Primary constructors
Collection expressions
Inline arrays
Default lambda parameters
ref readonly parameters
Alias any type
Conclusion
12. Building Multi-platform Apps with .NET MAUI and Blazor Hybrid
Introduction
Structure
Objectives
.NET MAUI overview
Differences between Blazor and Blazor Hybrid
Case study with step-by-step implementation
Creating the .NET MAUI project
Using Blazor Hybrid UI from Desktop client
Using Blazor Hybrid UI from mobile client
Conclusion
Index
CHAPTER 1
Introduction to Visual Studio 2022
Introduction
This chapter explores the latest features and improvements in Microsoft's
integrated development environment (IDE). The new version of Visual Studio
provides developers with better productivity and enhanced collaboration
capabilities. The chapter covers the significant changes, including the
introduction of the 64-bit architecture, improved performance, and better
integration with Azure services.
This chapter highlights the significant updates of the IDE's user interface,
which includes an updated start window, a redesigned search experience, and
a refreshed iconography. This chapter also covers the enhanced code analysis
capabilities, which include code suggestion, auto-correction, and intelligent
code completion. Additionally, the chapter delves into new debugging and
testing features, including live unit testing, snapshot debugging, and time
travel debugging.
Structure
This chapter covers the following topics:
Significant changes from Visual Studio 2019
Live unit testing
Snapshot debugging
Time travelling debugging
Objectives
At the end of this chapter, readers will have a solid understanding of the main
changes and new features introduced in Visual Studio 2022, including live
unit testing, snapshot debugging, and time travelling debugging. They will be
equipped with the knowledge necessary to leverage these tools effectively in
their C# development workflows for cloud, web, and desktop applications.
Performance improvements
Visual Studio 2022 introduces a range of performance improvements that
enhance the development experience for developers. These improvements
target various areas of the IDE, including build acceleration for .NET SDK-style
projects, external sources decompilation, the Threads Window, Quick
Add Item, code coverage, and the Razor and C# experiences.
Let's delve into each of these enhancements:
Build acceleration for .NET SDK-style projects: Visual Studio 2022
incorporates optimizations for faster build times, specifically for .NET
SDK-style projects. By improving the build process by up to 80%,
developers experience reduced waiting times, allowing for quicker
iterations and increased productivity.
External sources decompilation: This improvement enables
developers to navigate and debug through external source code up to
10x faster than in Visual Studio 2019.
Find in files: With enhanced search indexing, parallel processing,
smarter search algorithms, search result previews, and advanced
filtering options, developers can locate the desired information within
their codebase 3x faster, reducing time spent on searching and
improving overall productivity.
Git tooling: Visual Studio 2022 introduces a new feature called the
Commit Graph, which enhances the visualization and navigation of Git
commits within the IDE. The Commit Graph in Visual Studio 2022
demonstrates a substantial improvement in performance, with an
average loading time for branch history reduced by 70%.
Threads window: The Threads Window in Visual Studio 2022 has
undergone performance improvements, resulting in a more responsive
and efficient debugging experience for applications with many threads. Developers
can now effortlessly analyze and manage threads, gaining deeper
insights into multi-threaded applications and improving overall
debugging productivity.
Quick add item: Visual Studio 2022 introduces the “New Item” menu
command, which accelerates the process of adding new items to
projects. With this enhancement, developers can swiftly add multiple
files and folders, streamlining the project creation and modification
process.
Code coverage: Code coverage analysis in Visual Studio 2022 has
been optimized, making code coverage tests up to 35% faster. Developers can
now measure the code coverage of their tests more efficiently, allowing
them to identify areas of code that lack test coverage and improve
overall code quality.
Razor and C# experience: Visual Studio 2022 brings support for code
actions, including some helpful shortcuts that are vital for web
development. These enhancements improve the responsiveness of code
editing, navigation, and IntelliSense features, enabling developers to
write, modify, and navigate code more smoothly.
With these performance improvements in Visual Studio 2022, developers can
expect a more efficient and responsive development environment. The build
acceleration for .NET SDK-style projects, external sources decompilation,
improvements to the Threads Window, the Quick Add Item feature, enhanced
code coverage analysis, and optimized Razor and C# experiences collectively
enhance productivity and streamline the development process, enabling
developers to deliver high-quality applications more efficiently.
64-bit support
One of the significant enhancements in Visual Studio 2022 is its transition to
a 64-bit application. This shift from a 32-bit to a 64-bit architecture brings
several advantages and benefits to developers.
Let's explore the significance of Visual Studio 2022 being a 64-bit
application:
Increased memory access: As a 64-bit application, Visual Studio
2022 can take advantage of the expanded memory address space
provided by 64-bit systems. This allows the IDE to access larger
amounts of memory, enabling developers to work with larger and more
complex projects without facing memory limitations or performance
issues. Developers may now work with solutions containing more
projects, capitalizing on the extended memory support offered by the
64-bit architecture.
Compatibility with 64-bit Tools and Libraries: Being a 64-bit
application, Visual Studio 2022 seamlessly integrates with other 64-bit
tools, libraries, and components, ensuring compatibility and
interoperability. Developers can take full advantage of the
advancements and optimizations provided by 64-bit technologies in the
development ecosystem.
Future-Proofing: The transition to a 64-bit application positions
Visual Studio 2022 for future growth and scalability. It aligns with the
industry's shift towards 64-bit computing and ensures that the IDE can
accommodate the evolving demands of modern development
environments and technologies.
This transition empowers developers with a more robust and capable
development environment, enabling them to tackle complex projects,
streamline workflows, and deliver high-quality applications with greater
efficiency.
Smarter IntelliCode
Program Synthesis using Examples (PROSE) technology is an automated
program synthesis framework developed by Microsoft Research. It enables
developers to generate code automatically based on input-output examples,
allowing for rapid prototyping and development. PROSE leverages machine
learning and programming language techniques to understand and generalize
patterns from provided examples, reducing the manual effort required in
writing code. It has been applied to various domains, including data
wrangling, API usage, and code refactoring. PROSE aims to enhance
developer productivity by automating repetitive coding tasks through the
power of example-based program synthesis.
IntelliCode in Visual Studio 2022 actively analyzes your code changes as you
type, leveraging PROSE technology to identify common patterns in manual
code modifications. By recognizing how developers typically perform
automatable code changes, IntelliCode offers helpful suggestions to
streamline your coding process. When you're in the middle of a code fix or
refactoring, IntelliCode presents suggestions from the completions list,
allowing you to effortlessly complete the code change with a single click.
This intelligent assistance provided by IntelliCode optimizes your coding
experience, enabling you to work more efficiently and with greater accuracy.
Multi-repository support
Visual Studio 2022 now offers support for multiple repositories, allowing you
to work with up to 10 active Git repositories simultaneously. This feature
enables seamless collaboration and efficient management of solutions that
span across multiple repositories. You can perform Git operations across
various repositories concurrently, streamlining your workflow.
For instance, in a large-scale web project, you might have separate
repositories for the frontend, API, database, documentation, and various
libraries or dependencies. Previously, managing work across these
repositories required opening multiple instances of Visual Studio. However,
with the multi-repository support, you can now handle, view, and debug all
these repositories within a single instance of Visual Studio. This capability
enhances productivity by eliminating the need to switch between different
IDE instances and allows for a consolidated and cohesive development
experience across multiple repositories.
Here are the key highlights of the new look and feel in Visual Studio 2022:
Refreshed visual theme: The IDE features a refined visual theme that
aligns with contemporary design principles. The interface incorporates
updated icons, cleaner layouts, and improved typography, resulting in a
more visually pleasing and cohesive experience.
Simplified toolbar and menus: The toolbar and menus have been
streamlined to prioritize commonly used commands and reduce clutter.
This simplification enhances navigation and allows for quicker access
to essential tools and features, promoting a more efficient workflow.
Enhanced iconography: Visual Studio 2022 introduces a new set of
modern icons that are visually consistent and easily recognizable. The
redesigned icons provide a clearer representation of commands and
functionalities, making it easier for developers to identify and use the
desired features.
Improved editor experience: The code editor in Visual Studio 2022
benefits from a refined visual design, offering better readability and an
enhanced focus on code content. Syntax highlighting, code
suggestions, and other editing features have been optimized for
improved visibility and ease of use.
Customizable window layouts: Visual Studio 2022 empowers
developers to personalize their workspace by customizing window
layouts. You can arrange tool windows, panes, and panels according to
your preferences, creating a workspace that suits your specific needs
and enhances productivity.
Sync your custom settings: With the capability to sync your settings,
you can maintain productivity seamlessly, regardless of whether you're
working from home or the office. Visual Studio offers robust features
to sync the settings that matter to you, allowing you to store them
securely in the cloud. This means you can access your personalized
settings and configurations from any location, enabling you to work at
your best and maximize efficiency wherever you find it most convenient.
Whether you switch between different machines or work remotely, the
ability to sync settings ensures a consistent and optimized development
environment that supports your productivity no matter where you are.
Improved dark theme and custom themes: The new dark theme in
Visual Studio 2022 brings significant enhancements to contrast, accent
color, and accessibility, ensuring a better visual experience for most
users. Furthermore, the Visual Studio marketplace offers a wide array
of custom themes, providing you with the flexibility to choose the
theme that best suits your preferences and optimizes your workflow.
Whether you opt for the improved default dark theme or decide to
customize Visual Studio with a theme of your choice, we strive to
make your Visual Studio experience tailored to your needs and
preferences.
The new look and feel in Visual Studio 2022 introduce a modern and visually
appealing interface, enhancing the overall user experience. Visual Studio
2022 provides a fresh and enjoyable environment for developers to create
their software solutions.
Live unit testing supports the following minimum versions of the test adapters and frameworks:
NUnit: Visual Studio adapter minimum version NUnit3TestAdapter 3.5.1
MSTest: Visual Studio adapter minimum version MSTest.TestAdapter 1.1.4-preview; framework minimum version MSTest.TestFramework 1.0.5-preview
Snapshot debugging
The Snapshot Debugger available in Application Insights empowers you to
effortlessly gather a debug snapshot of your web application in case of an
exception. This invaluable snapshot reveals the precise state of your source
code and variables at the exact moment the exception occurred, giving you
the possibility to reproduce the scenario that threw the exception.
The main functionalities of the Snapshot Debugger in Application Insights are:
Collection of snapshots for your most frequently thrown exceptions.
Monitoring of system-generated logs from your web app.
Provision of essential information necessary for diagnosing,
simulating, and resolving issues within your production environment.
To enable it, install the Microsoft.ApplicationInsights.SnapshotCollector NuGet package and then configure it for your application type:
a. For ASP.NET applications, add the SnapshotCollector telemetry processor to your ApplicationInsights.config:
<TelemetryProcessors>
  <Add Type="Microsoft.ApplicationInsights.SnapshotCollector.SnapshotCollectorTelemetryProcessor, Microsoft.ApplicationInsights.SnapshotCollector">
    <!-- The default is true, but you can disable Snapshot Debugging by setting it to false -->
    <IsEnabled>true</IsEnabled>
    <!-- Snapshot Debugging is usually disabled in developer mode, but you can enable it by setting this to true. -->
    <!-- DeveloperMode is a property on the active TelemetryChannel. -->
    <IsEnabledInDeveloperMode>false</IsEnabledInDeveloperMode>
    <!-- How many times we need to see an exception before we ask for snapshots. -->
    <ThresholdForSnapshotting>1</ThresholdForSnapshotting>
    <!-- The maximum number of examples we create for a single problem. -->
    <MaximumSnapshotsRequired>3</MaximumSnapshotsRequired>
    <!-- The maximum number of problems that we can be tracking at any time. -->
    <MaximumCollectionPlanSize>50</MaximumCollectionPlanSize>
    <!-- How often we reconnect to the stamp. The default value is 15 minutes. -->
    <ReconnectInterval>00:15:00</ReconnectInterval>
    <!-- How often to reset problem counters. -->
    <ProblemCounterResetInterval>1.00:00:00</ProblemCounterResetInterval>
    <!-- The maximum number of snapshots allowed in ten minutes. The default value is 1. -->
    <SnapshotsPerTenMinutesLimit>3</SnapshotsPerTenMinutesLimit>
    <!-- The maximum number of snapshots allowed per day. -->
    <SnapshotsPerDayLimit>30</SnapshotsPerDayLimit>
    <!-- Whether or not to collect snapshot in low IO priority thread. The default value is true. -->
    <SnapshotInLowPriorityThread>true</SnapshotInLowPriorityThread>
    <!-- Agree to send anonymous data to Microsoft to make this product better. -->
    <ProvideAnonymousTelemetry>true</ProvideAnonymousTelemetry>
    <!-- The limit on the number of failed requests to request snapshots before the telemetry processor is disabled. -->
    <FailedRequestLimit>3</FailedRequestLimit>
  </Add>
</TelemetryProcessors>
b. For ASP.NET Core Applications
i. Create a new class named SnapshotCollectorTelemetryProcessorFactory:
using Microsoft.ApplicationInsights.AspNetCore;
using Microsoft.ApplicationInsights.Extensibility;
using Microsoft.ApplicationInsights.SnapshotCollector;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Options;

internal class SnapshotCollectorTelemetryProcessorFactory : ITelemetryProcessorFactory
{
    private readonly IServiceProvider _serviceProvider;

    public SnapshotCollectorTelemetryProcessorFactory(IServiceProvider serviceProvider) =>
        _serviceProvider = serviceProvider;

    public ITelemetryProcessor Create(ITelemetryProcessor next)
    {
        IOptions<SnapshotCollectorConfiguration> snapshotConfigurationOptions =
            _serviceProvider.GetRequiredService<IOptions<SnapshotCollectorConfiguration>>();

        return new SnapshotCollectorTelemetryProcessor(next, configuration: snapshotConfigurationOptions.Value);
    }
}
ii. Configure the dependency injection in Program.cs:
using Microsoft.ApplicationInsights.AspNetCore;
using Microsoft.ApplicationInsights.SnapshotCollector;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddApplicationInsightsTelemetry();
builder.Services.AddSnapshotCollector(config =>
    builder.Configuration.Bind(nameof(SnapshotCollectorConfiguration), config));
builder.Services.AddSingleton<ITelemetryProcessorFactory>(sp =>
    new SnapshotCollectorTelemetryProcessorFactory(sp));
iii. If you need any customization, update your appsettings.json. The following is the default configuration:
{
  "SnapshotCollectorConfiguration": {
    "IsEnabledInDeveloperMode": false,
    "ThresholdForSnapshotting": 1,
    "MaximumSnapshotsRequired": 3,
    "MaximumCollectionPlanSize": 50,
    "ReconnectInterval": "00:15:00",
    "ProblemCounterResetInterval": "1.00:00:00",
    "SnapshotsPerTenMinutesLimit": 1,
    "SnapshotsPerDayLimit": 30,
    "SnapshotInLowPriorityThread": true,
    "ProvideAnonymousTelemetry": true,
    "FailedRequestLimit": 3
  }
}
Recording a snapshot
When it comes to time-traveling debugging, recording snapshots is a
fundamental aspect of the process. Time-traveling debugging allows
developers to replay and analyze the execution of their code, stepping
forward and backward through time to identify and debug issues. Recording
snapshots enables the capture and preservation of program state at different
points in time, facilitating the exploration of program behavior and the
identification of bugs.
From Debug | Attach Snapshot Debugger, you can attach the Snapshot Debugger:
Select your Virtual Machine and check both options. Do not forget about the
Virtual Machine specificities.
Figure 1.9: Window with the Attach Snapshot debugger settings
Select a breakpoint and, in its breakpoint settings, check the "Collect a time
travel trace to the end of this method" option.
You will be able to see the snapshots recorded at the Diagnostic Tools
window.
CHAPTER 2
What is New in C# 12
Introduction
Welcome to the exciting world of C# 11 and C# 12! In this chapter, we will
discuss the latest enhancements and features introduced in these versions of
the C# programming language. From improved string handling to enhanced
pattern matching and method group conversions, C# continues to evolve,
offering developers powerful tools to build robust and efficient applications.
We will start by exploring the main changes introduced in C# 11, including
raw string literals, generic math support, and file-local types. Then, we will
dive into the primary constructors, collection expressions, and other exciting
additions brought by C# 12. Along the way, we will discuss how these
features enhance productivity, readability, and performance, empowering
developers to write cleaner and more maintainable code.
Whether you are a seasoned C# developer or just starting your journey with
the language, this chapter will provide valuable insights into the latest
advancements in C# development. Let us discuss the cutting-edge features
that C# 11 and C# 12 have to offer!
Structure
This chapter covers the following topics:
C# 11 updates
Raw string literals
Generic math support
Generic attributes
Unicode Transformation Format-8 string literals
Newlines in string interpolation expressions
List patterns
File-local types
Required members
Auto-default structs
Pattern match Span<char> on a constant string
Extended nameof scope
Numeric IntPtr and UIntPtr
ref fields and scoped ref
Improved method group conversion to delegate
Warning wave 7
C# 12 updates
Primary constructors
Collection expressions
Inline arrays
Optional parameters in lambda expressions
ref readonly parameters
Alias any type
Objectives
In this chapter, our primary goal is to provide a comprehensive overview of
the key features introduced in C# 11 and C# 12, empowering developers to
leverage the latest advancements in the language effectively. We aim to
familiarize readers with the main changes from C# 10, such as raw string
literals, generic math support, and file-local types, before delving into the
enhancements introduced in subsequent versions.
Through clear explanations and practical examples, we strive to help
developers understand the benefits and applications of each new feature,
enabling them to write cleaner, more efficient, and maintainable code. By the
end of this chapter, readers will have gained a solid understanding of the
advancements in C# 11 and C# 12, equipping them with the knowledge and
skills to leverage these features in their own projects effectively.
C# 11 updates
This section will discuss the significant updates introduced in C# 11,
enhancing developer capabilities and code efficiency. From raw string literals
to required members, C# 11 introduces a variety of features aimed at
streamlining development processes and improving code readability. Developers
will explore raw string literals, generic math support, generic attributes,
UTF-8 string literals, list patterns, file-local types, required members,
auto-default structs, and further refinements such as the extended nameof
scope and warning wave 7 diagnostics. Understanding these changes empowers
developers to leverage the latest tools and techniques to create robust,
efficient, and maintainable code in their C# projects.
Generic attributes
You have the option to define a generic class with a base class of
System.Attribute. This feature simplifies the syntax for attributes that
necessitate a System.Type parameter. In the past, you would have had to
design an attribute with a constructor parameter requiring a Type.
The type arguments must adhere to the constraints imposed by the typeof
operator. Types requiring metadata annotations are prohibited. For instance,
the following types are not permissible as type parameters:
dynamic
string? (or any nullable reference type)
(int X, int Y) (or any other tuple types using C# tuple syntax)
These types incorporate annotations describing the type and are not directly
represented in metadata. In such cases, it is recommended to use the
underlying types:
Use object in place of dynamic
Prefer string over string?
Substitute (int X, int Y) with ValueTuple<int, int>, as sketched in the example below
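As a brief, hypothetical illustration of that last substitution (the ShapeAttribute and Point names are ours, not from the chapter), the tuple syntax is rejected as a type argument while the underlying ValueTuple<int, int> is accepted:
using System;

public class ShapeAttribute<T> : Attribute { }

// [ShapeAttribute<(int X, int Y)>]      // not permitted: tuple syntax carries metadata annotations
[ShapeAttribute<ValueTuple<int, int>>]   // permitted: use the underlying type instead
public class Point { }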
This is how we had to declare a generic attribute before C# 11:
public class SampleAttribute : Attribute
{
    public SampleAttribute(Type t) => ParamType = t;

    public Type ParamType { get; }
}
This is how we would use this generic attribute:
public class Class1
{
    [SampleAttribute(typeof(string))]
    public string SampleMethod() => default;
}
After those changes in C# 11, this is how we declare a generic attribute:
public class SampleAttribute<T> : Attribute { }
This is how we use the generic attribute in C# 11:
public class Class1
{
    [SampleAttribute<string>()]
    public string Method() => default;
}
List patterns
List patterns broaden pattern matching to accommodate sequences of
elements in a list or an array. For instance, the condition sequence is [1, 2, 3]
evaluates to true when the sequence represents an array or a list containing
three integers (1, 2, and 3). This allows matching elements using various
patterns, such as constant, type, property, and relational patterns. The discard
pattern (_) is employed to match any individual element, while the newly
introduced range pattern (..) is utilized to match any sequence with zero or
more elements.
With those changes, we can have something like this:
int[] values = { 2, 4, 6 };

Console.WriteLine(values is [2, 4, 6]);             // True
Console.WriteLine(values is [2, 4, 8]);             // False
Console.WriteLine(values is [2, 4, 6, 8]);          // False
Console.WriteLine(values is [0 or 2, <= 4, >= 6]);  // True
As illustrated in the previous example, a list pattern is considered matched
when each nested pattern aligns with the corresponding element in an input
sequence. Within a list pattern, you have the flexibility to employ any pattern.
If your goal is to match any element, you can utilize the discard pattern.
Alternatively, if you also intend to capture the element, the var pattern can be
employed, as demonstrated in the following example:
List<int> elements = new() { 7, 8, 9 };

if (elements is [var initial, _, _])
{
    Console.WriteLine($"The lead element in a three-item collection is {initial}.");
}
// Output:
// The lead element in a three-item collection is 7.
According to the documentation, list pattern matching in C# involves three
distinct patterns: Discard Pattern, Range Pattern, and Var Pattern. Let us
delve into each of these patterns to understand their significance:
The Discard Pattern proves useful when matching one or more elements from
a sequence, provided we are aware of the sequence's length.
int[] fibonacciSequence = { 1, 2, 3, 5, 8 };
bool matchingResult = false;

// Matching result is false, length does not match
matchingResult = fibonacciSequence is [_, _, 3, _];

// Matching result is false, 3 is not at the same position
matchingResult = fibonacciSequence is [_, _, _, 3, _];

// Matching result is false, length matches, but 2 and 3 are not
// at the same positions
matchingResult = fibonacciSequence is [2, _, 3, _, _];

// Matching result is true, single element and its position
// and length match
matchingResult = fibonacciSequence is [1, _, _, _, _];

// Matching result is true, multiple elements and their positions
// and length match
matchingResult = fibonacciSequence is [1, _, 3, _, _];
In the code above, you can observe the specification of elements for list
pattern matching, with the constraint being the necessity to know the
sequence's length for accurate comparison; otherwise, the comparison would
invariably return false.
For scenarios where the length of the sequence to be compared is unknown,
the Range Pattern becomes valuable. The use of two dots in this pattern
indicates that any number of elements may exist in place of those two dots. It
is important to note that the two dots can only be used once in the sequence:
int[] customFibonacci = { 1, 2, 3, 5, 8 };
bool matchingResult = false;

// Matching result is true, as customFibonacci ends with element 8
matchingResult = customFibonacci is [.., 8];

// Matching result is false, 3 is not the second last element
matchingResult = customFibonacci is [.., 3, _];

// Matching result is true, as sequence begins with 1 and ends with 8
matchingResult = customFibonacci is [1, .., 8];

// Matching result is true, as 1 is the first element
matchingResult = customFibonacci is [1, ..];
The pattern shown in the example is a constant pattern, where direct numbers
are employed in the sequence. Alternatively, relational patterns can be used,
allowing the inclusion of comparison expressions, as demonstrated in the
code below:
int[] customFibonacci = { 1, 2, 3, 5, 8 };
bool matchingResult = false;

// Matching result is false, as customFibonacci ends with element 8
matchingResult = customFibonacci is [.., <8];

// Matching result is true, 3 is the third element from the end
matchingResult = customFibonacci is [.., >= 3, _, _];

// Matching result is false, as the first element is not greater
// than 1
matchingResult = customFibonacci is [>1, .., 8];
The Var Pattern provides the flexibility to use the var keyword followed by a
variable name, capturing the element present at that position in the sequence.
This variable can then be utilized within the same scope for various purposes,
offering versatility in pattern matching scenarios as we can see below:
int[] customFibonacci = { 1, 2, 3, 5, 8 };

// Two elements are assigned to two variables
if (customFibonacci is [.., var penultimate, var last])
{
    Console.WriteLine($"penultimate = {penultimate}");
    Console.WriteLine($"last = {last}");
}
else
{
    Console.WriteLine("Pattern did not match!");
}

//-------------------------------------------------------
// Case - Patterns are not matching
//-------------------------------------------------------
int[] someNumbers = { 1 };
if (someNumbers is [var firstValue, var secondValue, ..])
{
    Console.WriteLine($"firstValue = {firstValue}");
    Console.WriteLine($"secondValue = {secondValue}");
}
else
{
    Console.WriteLine("Pattern did not match!");
}
File-local types
The file modifier confines the scope and visibility of a top-level type to the
file in which it is declared. Typically, the file modifier is applied to types
generated by a source generator. File-local types offer a convenient solution
for source generators to prevent name collisions among generated types. The
file modifier designates a type as file-local, exemplified in the following
instance:
file class ConfidentialComponent
{
    // implementation
}
Any types nested within a file-local type are likewise restricted to visibility
within the file where it is declared. Other types in the assembly may share the
same name as a file-local type without causing naming collisions.
A file-local type cannot serve as the return type or parameter type of any
member with broader visibility than the file scope. Additionally, a file-local
type cannot be a field member of a type with visibility exceeding file scope.
Nevertheless, a more visible type may implicitly implement a file-local
interface type. Explicit implementation of a file-local interface is also
possible, but such implementations can only be utilized within the file scope.
The subsequent illustration demonstrates a public type employing a
confidential type within the file to offer a worker method. Moreover, the
public type implicitly implements a local interface defined within the file:
// In CustomFile.cs:
file interface ICustomFeature
{
    int OfferSolution();
}

file class RestrictedComponent
{
    public int PerformTask() => 42;
}

public class SpecialWidget : ICustomFeature
{
    public int OfferSolution()
    {
        var performer = new RestrictedComponent();
        return performer.PerformTask();
    }
}
In an alternate source file, it is possible to define types sharing identical
names with the file-local types. However, the file-local types remain hidden
and are not accessible, as we can see in the code below:
// In AnotherFile.cs:
// No conflict with RestrictedComponent declared in CustomFile.cs
public class RestrictedComponent
{
    public void ExecuteTask()
    {
        // omitted
    }
}
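To make the visibility rules concrete, here is a short sketch (our own example, with hypothetical type names) showing that a file-local type may be used inside a public member body but not in a public member signature:
// In CustomFile.cs (hypothetical):
file class HiddenResult
{
    public int Value => 42;
}

public class PublicService
{
    // Not allowed: a file-local type cannot appear in the signature of a member
    // that is more visible than the file scope.
    // public HiddenResult GetResult() => new HiddenResult();

    // Allowed: the file-local type is only used inside the method body, while the
    // public signature exposes a regular type.
    public int GetResultValue() => new HiddenResult().Value;
}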
Required members
The concept of required members addresses certain well-known limitations
associated with constructors. Constructors traditionally necessitate the caller
to maintain the order or position of parameters, even if they do not require
specific values within their implementation scope. This often results in the
need for multiple constructors with different parameter combinations, leading
to potential complexity. Additionally, any introduction of a new parameter
affects all caller implementations. Required members offer a solution to these
challenges by eliminating the necessity for multiple constructors and
removing restrictions on parameter positions during object initialization.
When utilizing required members, the compiler ensures that callers initialize
these members within the object initialization scope.
However, there are limitations to the use of the required keyword. It cannot
be applied to static properties or fields, as its design is specific to object
initialization scope. Similarly, the required keyword is not applicable to
private members since they are not visible to the caller. Additionally, it
cannot be used for read-only members, as assignment is restricted to within
the constructor.
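Before the full example, here is a quick sketch (our own hypothetical Document class) marking where the required modifier is and is not allowed:
public class Document
{
    public required string Title { get; set; }             // OK: instance member, settable, visible to callers

    // public static required string Owner { get; set; }   // not allowed: static member
    // private required string Secret { get; set; }        // not allowed: not visible to the caller
    // public required string Id { get; }                  // not allowed: read-only, cannot be set in an initializer
}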
With those changes we may have the following code:
public class CustomArticle
{
    public required string Headline { get; set; }
    public string? Subheading { get; set; }
    public required string Writer { get; set; }
    public required DateTime ReleaseDate { get; set; }

    public override string ToString()
    {
        if (string.IsNullOrWhiteSpace(Subheading))
        {
            return $"{Headline} by {Writer} ({ReleaseDate.ToShortDateString()})";
        }

        return $"{Headline}: {Subheading} by {Writer} ({ReleaseDate.ToShortDateString()})";
    }
}
And when creating a new instance of CustomArticle class, we must provide
the required fields, as follows:
class Program
{
    static void Main()
    {
        // Instantiate CustomArticle
        var article = new CustomArticle
        {
            Headline = "Exploring C# 11 Features",
            Subheading = "A Deep Dive into the Latest Language Enhancements",
            Writer = "John Doe",
            ReleaseDate = DateTime.Now
        };

        // Display article details using ToString method
        Console.WriteLine(article.ToString());
    }
}
In this example, a CustomArticle instance is created with specified values for
Headline, Subheading, Writer, and ReleaseDate. The ToString method is then
called to display the details of the article.
If we try to instantiate the CustomArticle without providing the required
members, as the code below shows:
class Program
{
    static void Main()
    {
        // Instantiate CustomArticle
        var article = new CustomArticle
        {
            Subheading = "A Deep Dive into the Latest Language Enhancements",
            Writer = "John Doe",
            ReleaseDate = DateTime.Now
        };

        // Display article details using ToString method
        Console.WriteLine(article.ToString());
    }
}
We have the following error, as the figure shows:
Figure 2.1: Warning when trying to initialize a class without required members
Auto-default structs
The C# 11 compiler guarantees the initialization of all fields in a struct type
to their default values during the execution of a constructor. This alteration
ensures that any field or auto property not explicitly initialized by a
constructor is automatically initialized by the compiler. Structs with
constructors that do not definitively assign values to all fields can now be
successfully compiled, and any fields not explicitly initialized will be set to
their default values.
We can see an example of the changes in C# 11 in the example below:
public readonly struct ResultInfo
{
    public ResultInfo(double result)
    {
        ResultValue = result;
    }

    public ResultInfo(double result, string information)
    {
        ResultValue = result;
        Information = information;
    }

    public ResultInfo(string information)
    {
        ResultValue = 0; // Default value for double
        Information = information;
    }

    public double ResultValue { get; init; }
    public string Information { get; init; } = "Default result";

    public override string ToString() => $"{ResultValue} ({Information})";
}

public static void Main()
{
    var data1 = new ResultInfo(8.5);
    Console.WriteLine(data1); // output: 8.5 (Default result)

    var data2 = new ResultInfo();
    Console.WriteLine(data2); // output: 0 ()

    var data3 = default(ResultInfo);
    Console.WriteLine(data3); // output: 0 ()
}
Each struct inherently possesses a public parameterless constructor. If you
choose to implement a parameterless constructor for a struct, it must be
declared as public. In the case where a struct includes any field initializers, it
is mandatory to explicitly declare a constructor, which may or may not be
parameterless. If a struct declares a field initializer without any constructors,
a compilation error is reported by the compiler. When a struct features an
explicitly declared constructor, whether with parameters or parameterless, all
field initializers for that struct are executed. Fields lacking a field initializer
or an assignment in a constructor are automatically set to their default values.
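A minimal sketch of these rules (our own Counter struct, not from the chapter): a field or property initializer requires an explicitly declared constructor, a declared parameterless constructor must be public, and anything left unassigned falls back to its default value:
public struct Counter
{
    // A field initializer requires at least one explicitly declared constructor.
    public int Start { get; init; } = 10;

    // Must be public; removing it (and any other constructor) would make the
    // initializer above a compiler error.
    public Counter() { }

    // Never assigned anywhere: auto-defaulted to 0 by the compiler.
    public int Current;
}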
Warning wave 7
In each release of the C# compiler, new warnings and errors may be
introduced. When these new warnings could potentially be reported on
existing code, they are introduced under an opt-in system known as a
warning wave. This opt-in system ensures that new warnings will not be
displayed on existing code unless you take explicit action to enable them. The
activation of warning waves is accomplished using the AnalysisLevel
element in your project file. If
<TreatWarningsAsErrors>true</TreatWarningsAsErrors> is specified,
warnings from enabled warning waves will be treated as errors. Warning
wave 5 diagnostics were incorporated in C# 9, wave 6 diagnostics in C# 10,
and wave 7 diagnostics in C# 11.
In C# 11 we had the following addition:
CS8981: The type name only contains lower-cased ascii characters.
Any additional keywords introduced in C# will consist solely of
lowercase ASCII characters. ASCII, abbreviated from American
Standard Code for Information Interchange, is a character encoding
standard for electronic communication.
This warning serves to prevent conflicts between your types and any potential
future keywords. To resolve this warning, consider renaming the type to
incorporate at least one non-lowercase ASCII character. This could involve
using an uppercase character, a digit, or an underscore. The provided code
results in CS8981:
public class samplelowercaseclassname
{
}
C# 12 updates
In this section, we delve into the notable updates introduced in C# 12, aimed
at enhancing developer productivity and code expressiveness. From primary
constructors to collection expressions, C# 12 introduces several features
designed to streamline development workflows and improve code readability.
Developers will explore primary constructors in non-record classes and
structs, collection expressions for concise collection creation, inline
arrays, default parameters in lambda expressions, ref readonly parameters,
and the ability to alias any type. Understanding these changes empowers
developers to leverage the latest language features effectively, enabling
them to write cleaner, more concise, and maintainable code in their C#
projects.
Primary constructors
Now, primary constructors are no longer exclusive to record types; they can
be created in any class or struct. These constructors allow parameters to be
accessible throughout the entire class body. Explicitly declared constructors
must call the primary constructor using the this() syntax to ensure that all
parameters are definitively assigned. Introducing a primary constructor in a
class prevents the compiler from generating an implicit parameterless
constructor. In structs, the implicit parameterless constructor initializes all
fields, including primary constructor parameters, to the 0-bit pattern.
However, the compiler only generates public properties for primary
constructor parameters in record types, not in non-record classes or structs.
We can see an example of primary constructor in the code block below:
public class Client(string clientName)
{
    public string Name { get; } = clientName; // compiler warning

    public string FullName => clientName;

    public void ModifyName() => clientName = clientName.ToUpper();
}

// Console app
var client = new Client("Thiago");
client.ModifyName();
Console.WriteLine($"Name: {client.Name}");           // Thiago
Console.WriteLine($"Full Name: {client.FullName}");  // THIAGO
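As a short sketch of the chaining rule (a hypothetical Order class, not from the chapter), any additional constructor must call the primary constructor through this(...):
public class Order(string customer, decimal total)
{
    // Every explicitly declared constructor must chain to the primary constructor.
    public Order(string customer) : this(customer, 0m) { }

    // Primary constructor parameters remain in scope for the whole class body.
    public string Summary => $"{customer}: {total}";
}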
Collection expressions
A collection expression allows for concise creation of common collection
values. It's a succinct syntax enclosed in [ and ] brackets, suitable for
assigning to various collection types. For instance, consider initializing a
System.Span<T> of string elements representing the months of the year:
Span<string> monthsOfYear = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
                             "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"];
foreach (var month in monthsOfYear)
{
    Console.WriteLine(month);
}
The spread operator, denoted by .. in a collection expression, replaces its
argument with the elements from that collection. This argument must be of a
collection type. Below are examples illustrating the functionality of the
spread operator:
int[] evenNumbers = [2, 4, 6, 8];
int[] oddNumbers = [1, 3, 5, 7, 9];
int[] allNumbers = [..evenNumbers, ..oddNumbers];

foreach (var number in allNumbers)
{
    Console.Write($"{number}, ");
}
// Output:
// 2, 4, 6, 8, 1, 3, 5, 7, 9,
Inline arrays
Inline arrays enhance application performance by enabling the creation of
fixed-size arrays within struct types. Typically utilized by runtime and library
developers, they offer performance similar to unsafe fixed-size buffers. While
developers may not directly declare inline arrays, they benefit from them
when accessing System.Span<T> or System.ReadOnlySpan<T> objects via
runtime APIs. Below is an example declaration of an inline array within a
struct:
[System.Runtime.CompilerServices.InlineArray(7)]
public struct SampleArray
{
    private int _element0;
}
Usage is identical to standard arrays:
var sampleArray = new SampleArray();
for (int i = 0; i < 7; i++)
{
    sampleArray[i] = i;
}

foreach (var i in sampleArray)
{
    Console.WriteLine(i);
}
Conclusion
In conclusion, this chapter has provided an in-depth exploration of the main
changes introduced in C# 11 and C# 12. From primary constructors to inline
arrays, we have examined a wide range of enhancements designed to improve
developer productivity and code quality.
By understanding these new features and their applications, developers can
unlock the full potential of C# and stay ahead in the rapidly evolving
landscape of software development. Whether it is leveraging collection
expressions for concise code or embracing optional parameters in lambda
expressions for greater flexibility, the advancements in C# 11 and C# 12 offer
exciting opportunities for innovation and optimization.
As we embrace these changes and continue to evolve with the language, we
look forward to seeing the creative solutions and groundbreaking applications
that developers will build using these new capabilities.
In the next chapter, Mastering Entity Framework Core, we will discuss the
intricacies of data access and manipulation. In this exploration, we will
unravel the capabilities of Entity Framework Core, a powerful and versatile
Object-Relational Mapping (ORM) framework. From understanding its
core concepts to mastering advanced features, this chapter promises to equip
you with the knowledge and skills needed to leverage Entity Framework Core
for seamless interaction with databases. Join us as we navigate through
essential concepts, best practices, and hands-on applications, empowering
you to become a proficient master of Entity Framework Core.
CHAPTER 3
Mastering Entity Framework Core
Introduction
This chapter provides a comprehensive guide to using this powerful object-
relational mapping (ORM) tool for .NET development. The chapter covers
the two primary approaches for building a data model in Entity Framework
Core: Database-First and Code-First.
The chapter delves into Database-First modeling, which involves creating a
database schema before generating a data model. It provides step-by-step
instructions for using Entity Framework Core tools to reverse-engineer a
database schema and create a corresponding data model. The chapter also
covers Code-First modeling, which allows developers to design a data model
and have Entity Framework Core generate a database schema.
The chapter also provides an overview of data modeling concepts and
explains how to define data models using Entity Framework Core. It covers
the use of data annotations to define data models and explains how to use
them to customize data model generation.
Additionally, the chapter covers data management using Entity Framework
Core, including querying and modifying data in a database. It explains how to
use Language Integrated Query (LINQ) to Entities to write queries and
provides examples of common query operations. The chapter also covers data
modification operations, such as inserting, updating, and deleting data.
Structure
This chapter covers the following topics:
Mastering Entity Framework Core
Database First
Code First
LINQ to Entities
Data Annotations
Data Modeling
Data Management
Objectives
Upon completing this chapter, you will gain a comprehensive understanding
of Entity Framework Core (EF Core), a vital tool for database management
in your applications. Delve into Database First and Code First paradigms,
honing the ability to generate entities from schema and orchestrate database
migrations. Explore nuanced data modeling, leveraging LINQ to Entities for
intricate queries and performance optimization. Grasp the significance of
Data Annotations in mapping entities and enforcing validation rules.
Navigate complex relationship structures, ensuring transactional integrity and
concurrency control. Equipped with these skills, you'll adeptly navigate both
Database First and Code First workflows, adeptly managing data operations
in your applications with EF Core.
Database First
The Database First approach in Entity Framework Core is utilized when we
need to reverse engineer our existing database model into classes that can be
consumed by our application. It offers a valuable capability to seamlessly
connect new projects with pre-existing databases, making it particularly
useful during technology migrations.
Database First enables the translation of database objects into corresponding
classes within your project. This approach is supported by all Entity
Framework Core providers and effectively reflects data fields, data types, and
table relationships from various data sources into your project's language.
When integrating an existing database with a new project, the Database First
approach is highly recommended as it saves time by automatically generating
the necessary classes for you.
Code First
Entity Framework Core Code First simplifies database development by
allowing you to define your data model using C# classes and then
automatically generating the corresponding database schema. This approach
streamlines the process of creating and maintaining databases for your
applications.
With Entity Framework Core's Code First approach, you can focus on
designing your application's entities and relationships using familiar object-
oriented principles. By leveraging attributes or fluent API configuration, you
can fine-tune various aspects of your data model, such as specifying primary
keys, defining relationships, and applying constraints.
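As a brief, hypothetical illustration of that fine-tuning (the Category class below is our own example, not part of the chapter's case study), data annotations can declare the primary key and basic constraints directly on the model:
using System.ComponentModel.DataAnnotations;

public class Category
{
    [Key]                        // explicit primary key
    public int CategoryId { get; set; }

    [Required, MaxLength(100)]   // NOT NULL column limited to 100 characters
    public string Name { get; set; }
}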
Implementation step-by-step
In this implementation, we are setting up a database with a few entities using
Entity Framework Core with the Code First approach.
First, create a Web Application targeting .NET 8.0 and add the following
NuGet packages needed to connect to and manage the database:
Microsoft.EntityFrameworkCore.Tools
Microsoft.EntityFrameworkCore
Microsoft.EntityFrameworkCore.SqlServer
Create a few classes, as follows:
1. public class Supermarket
2. {
3. public int Id { get; set; }
4. public string Address { get; set; }
5. public string City { get; set; }
6. public ICollection<Product> Products {
get; set; }
7. }
8. public class Product
9. {
10. public int Id { get; set; }
11. public string Name { get; set; }
12. public double Price { get; set; }
13. public string Description { get; set;
}
14. }
15. public class Customer
16. {
17. public int Id { get; set; }
18. public string Name { get; set; }
19. public string Address { get; set; }
20. public int Age { get; set; }
21. public ICollection<Product> Products {
get; set; }
22. }
1. Create the DbContext as SampleDbContext:
1. public class SampleDbContext : DbContext
2. {
3. public
SampleDbContext(DbContextOptions<SampleDbContext>
options)
4. : base(options)
5. {
6. }
7.
8. public DbSet<Product> Products { get;
set; }
9. public DbSet<Customer> Customers {
get; set; }
10. public DbSet<Supermarket> Supermarkets
{ get; set; }
11. }
2. Update the Program.cs to configure dependency injection:
1. builder.Services.AddDbContext<SampleDbContext>
(
2. options => options.UseSqlServer("Data
Source=DESKTOP-H20O12E;Initial
Catalog=SampleThiagoDb;Integrated
Security=True;Connect
Timeout=30;Encrypt=False;Trust Server
Certificate=False;Application
Intent=ReadWrite;Multi Subnet
Failover=False"));
3. Open the Package Manager Console and run the following command:
1. Add-Migration InitialCreate
As output, you should have the migrations folder created with the following
files:
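Once the migration has been generated, you can apply it to the database with the update-database command, which is used again throughout this chapter:

Update-Database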
LINQ to Entities
LINQ to Entities allows developers to write queries using familiar
programming languages such as C# or Visual Basic, making it easier to
express complex data retrieval and manipulation operations. With LINQ to
Entities, developers can leverage the full power of LINQ to compose
expressive and type-safe queries. LINQ to Entities supports a wide range of
operations, including filtering, sorting, grouping, and aggregating data. It also
supports complex join operations and supports various LINQ operators and
methods for data manipulation and transformation.
When using LINQ to Entities, queries are represented by IQueryable<T>
objects. These objects define a query that can be executed against the
database. The LINQ to Entities provider translates the LINQ expressions and
operations into SQL queries that are executed on the underlying database.
One of the key advantages of LINQ to Entities is its integration with Entity
Framework Core and its components, such as change tracking, transaction
management, and database updates. This integration allows developers to
write efficient and optimized queries that take advantage of the underlying
capabilities of Entity Framework Core.
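As a quick illustration, a minimal query against the SampleDbContext and Product entity created earlier might look like this (the filter value is arbitrary):

// Deferred execution: the query is translated to SQL and run only when enumerated.
var expensiveProducts = sampleDbContext.Products
    .Where(p => p.Price > 100)
    .OrderBy(p => p.Name)
    .ToList();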
Here is the step-by-step procedure that runs in the background every time we execute a LINQ to Entities query:
1. Create a new instance of the ObjectQuery<T> object from the ObjectContext.
The ObjectQuery<T> object represents a typed query, where T is the type of the entity that will be returned in the result.
2. Compose a LINQ To Entities query based on the ObjectQuery<T>
instance.
The ObjectQuery<T> class, implementing IQueryable<T>, acts as the
data source for LINQ to Entities queries, specifying the desired data to
retrieve and allowing sorting, grouping, and shaping. Queries are
stored in variables without taking immediate action or returning data
until executed. LINQ to Entities supports two syntaxes: query expression and method-based. LINQ itself was introduced in C# 3.0 and Visual Basic 9.0.
3. Convert the LINQ code into command trees.
The LINQ code must first be converted to a command tree
representation specific to the framework. LINQ queries consist of
standard operators and expressions, where operators are methods on a
class and expressions can contain a broad range of functionality. The
Entity Framework unifies both operators and expressions into a single
type of hierarchy within a command tree, which is then used for query
execution. The conversion process involves transforming both the
query operators and expressions accordingly.
4. Execute the query, represented by command trees, against the data
source.
During query execution, all query expressions or components are
evaluated either on the client or the server, depending on the context.
5. The query is materialized, returning the result.
The query results are delivered to the client as Common Language
Runtime (CLR) types, never returned as raw data records. The CLR
type can be defined by the user, the Entity Framework, or generated as
anonymous types by the compiler. The Entity Framework handles all
object materialization operations.
Data Annotations
Data Annotations are attributes applied to classes or to their properties that serve to modify or override the configuration derived from model-building conventions. The configuration achieved through mapping attributes can, in turn, be overridden using the model building API in OnModelCreating. To ensure clarity and simplicity in their use, mapping attributes designed for EF Core are exclusive to EF Core, avoiding any semantic variations across technologies.
Data Annotations have a higher priority than conventions, but Fluent API configuration has a higher priority than Data Annotations. In other words, Data Annotations can override convention-based configuration, while Fluent API configuration can override Data Annotations.
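As a hedged illustration of this precedence, assuming the Product entity shown later in this chapter with a [MaxLength(50)] annotation on Description, a Fluent API configuration in OnModelCreating takes priority:

public class SampleDbContext : DbContext
{
    public SampleDbContext(DbContextOptions<SampleDbContext> options) : base(options) { }

    public DbSet<Product> Products { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Wins over the [MaxLength(50)] data annotation on Product.Description,
        // because Fluent API configuration has a higher priority than Data Annotations.
        modelBuilder.Entity<Product>()
            .Property(p => p.Description)
            .HasMaxLength(100);
    }
}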
Figure 3.5: Package Manager Console with the output from the EF Core commands
In the following figure, the database reflects the changes made in the code:
Figure 3.6: Database reflecting the changes made in the code
Data Modeling
Data Modeling is a fundamental aspect that focuses on defining relationships
between entities. These relationships are crucial for organizing and
representing data effectively. Three common types of relationships are
explored in this sub-topic: one-to-one, one-to-many, and many-to-many.
Understanding these relationships and their implementation in Entity
Framework Core is essential for designing a robust and efficient data model.
By establishing the appropriate relationships, developers can create a well-
structured and interconnected entity framework that accurately represents the
real-world data requirements of their applications.
The relationships are described below, with practical examples of them working in the project we previously created for the Code First implementation and then updated with the Data Annotations examples.
For a better demonstration of the following relationships, the class
ProductCategory has been created:
1. public class ProductCategory
2. {
3. public int Id { get; set; }
4. public string Name { get; set; }
5. public string Description { get; set;
}
6. }
One-to-one relationship
A one-to-one relationship indicates that one instance of an entity is directly
associated with another instance. This type of relationship establishes a direct
connection between the entities, enabling them to share information
seamlessly. The one-to-one relationship variations are described below:
Required one-to-one
In this scenario, it is required that a Product is associated with a ProductCategory:
[Table("NewProductTable")]
public class Product
{
public int Id { get; set; }
public string Name { get; set; }
[Column(TypeName = "money")]
public double Price { get; set; }
[MaxLength(50, ErrorMessage ="Max
length is 50!")]
public string Description { get;
set; }
public ProductCategory? Category {
get; set; }
}
public class ProductCategory
{
public int Id { get; set; }
public string Name { get; set; }
public string Description { get;
set; }
public int ProductId { get; set; }
public Product Product { get; set; }
= null!;
}
After running the “add-migration” command followed by the “update-
database” command, we have those changes in the database:
Figure 3.7: Database reflecting the changes made in the code.
Optional one-to-one
In this scenario, a Product may be associated with a ProductCategory, or it may not be associated with any ProductCategory, as we can see in the code below:
1. [Table("NewProductTable")]
2. public class Product
3. {
4. public int Id { get; set; }
5. public string Name { get; set; }
6. [Column(TypeName = "money")]
7. public double Price { get; set; }
8. [MaxLength(50, ErrorMessage ="Max
length is 50!")]
9. public string Description { get; set; }
10. public ProductCategory? Category {
get; set; }
11. }
12. public class ProductCategory
13. {
14. public int Id { get; set; }
15. public string Name { get; set; }
16. public string Description { get; set; }
17. public int? ProductId { get; set; }
18. public Product? Product { get; set; }
19. }
After running the “add-migration” command followed by the “update-
database” command, we have those changes in the database:
Figure 3.8: The column ProductId nullable reflecting the changes in the code
One-to-many relationship
In a one-to-many relationship, a single instance of an entity is related to
multiple instances of another entity. This relationship enables efficient data
organization, as the "one" entity can maintain a collection of related instances
from the "many" entity.
Required one-to-many
In this scenario, a Product is associated with one to N ProductCategory records:
1. [Table("NewProductTable")]
2. public class Product
3. {
4. public int Id { get; set; }
5. public string Name { get; set; }
6. [Column(TypeName = "money")]
7. public double Price { get; set; }
8. [MaxLength(50, ErrorMessage = "Max
length is 50!")]
9. public string Description { get; set;
}
10. public ICollection<ProductCategory>
Category { get; } = new List<ProductCategory>
();
11. }
12. public class ProductCategory
13. {
14. public int Id { get; set; }
15. public string Name { get; set; }
16. public string Description { get; set;
}
17. public int ProductId { get; set; }
18. public Product Product { get; set; } =
null!;
19. }
After running the “add-migration” command followed by the “update-
database” command, we have those changes in the database:
Figure 3.9: Database reflecting the changes made in the code
Optionally, the principal entity can have a collection navigation such as Product.ProductCategories to reference the dependent entities, while the dependent entity can have an optional reference navigation such as ProductCategory.Product to reference the principal entity. These components define and establish a one-to-many relationship within Entity Framework Core.
Optional one-to-many
In this scenario, a Product is associated with zero to N ProductCategory records:
1. [Table("NewProductTable")]
2. public class Product
3. {
4. public int Id { get; set; }
5. public string Name { get; set; }
6. [Column(TypeName = "money")]
7. public double Price { get; set; }
8. [MaxLength(50, ErrorMessage = "Max
length is 50!")]
9. public string Description { get; set;
}
10. public ICollection<ProductCategory>
Category { get; } = new List<ProductCategory>
();
11. }
12. public class ProductCategory
13. {
14. public int Id { get; set; }
15. public string Name { get; set; }
16. public string Description { get; set;
}
17. public int? ProductId { get; set; }
18. public Product? Product { get; set; }
19. }
After running the “add-migration” command followed by the “update-
database” command, we have those changes in the database:
Figure 3.10: Database reflecting the changes made in the code
Many-to-many relationship
A many-to-many relationship represents a scenario where multiple
instances of one entity are associated with multiple instances of another
entity. To facilitate this relationship, an intermediate table is used to keep the
relationship between the entities.
Entity Framework Core can handle the management of this relationship
transparently, allowing the navigations of the many-to-many relationship to
be utilized naturally, with the ability to add or remove entities from either
side as required. Nevertheless, understanding the underlying mechanics of what happens in the background is beneficial to get the best out of the overall behavior, especially in terms of the mapping to a relational database.
In this scenario, many Products are associated with many Categories through the ProductCategory join entity. We are creating a Category class and updating Product and ProductCategory:
1. [Table("NewProductTable")]
2. public class Product
3. {
4. public int Id { get; set; }
5. public string Name { get; set; }
6. [Column(TypeName = "money")]
7. public double Price { get; set; }
8. [MaxLength(50, ErrorMessage = "Max
length is 50!")]
9. public string Description { get; set;
}
10. public List<ProductCategory>
ProductCategories { get; } = new
List<ProductCategory>();
11. }
12.
13. public class Supermarket
14. {
15. public int Id { get; set; }
16. public string Address { get; set; }
17. [Required(ErrorMessage = "The City is
required.")]
18. public string City { get; set; }
19. public ICollection<Product> Products {
get; set; }
20. }
21.
22. public class ProductCategory
23. {
24. public int Id { get; set; }
25. public int? ProductId { get; set; }
26. public int? CategoryId { get; set; }
27. public Product? Product { get; set; }
28. public Category? Category { get; set; }
29. }
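The Category class referenced above is not listed in the chapter; a minimal sketch of what it might look like, assuming a simple name plus the collection side of the join entity, is:

public class Category
{
    public int Id { get; set; }
    public string Name { get; set; }
    // Collection side of the join entity, mirroring Product.ProductCategories.
    public List<ProductCategory> ProductCategories { get; } = new List<ProductCategory>();
}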
After running the “add-migration” command followed by the “update-
database” command, we have those changes in the database:
Figure 3.11: Database reflecting the changes made in the code
Data Management
Data Management plays a crucial role in any software application, and
Entity Framework Core provides a powerful toolset for managing data access
and manipulation. We will explore the key concepts of data management in
Entity Framework Core, focusing on three important aspects: normal usage,
unit of work, and the repository pattern.
For the following practical examples, we are continuing the project that we
created for the Code First approach and modified for the Data Modeling
topic.
Normal usage
Normal usage refers to the common practices and techniques used to interact
with the database using Entity Framework Core. It involves performing basic
Create, Read, Update, Delete (CRUD) operations on entities, querying the
database using Language-Integrated Query (LINQ), and handling
relationships between entities.
The DbContext class in Entity Framework Core acts as a bridge between the
application and the database. It represents a session with the database and
provides a set of APIs to perform database operations. Developers can create
a DbContext subclass specific to their application's needs and define DbSet
properties to represent the entity sets.
Entity Framework Core simplifies the execution of CRUD operations by
providing methods like Add, Remove, Update, and SaveChanges. These methods
abstract the underlying SQL statements required to perform the respective
operations, making it easier for developers to work with data.
A new Web API controller was created to demonstrate the normal usage of Entity Framework Core and the DbContext. It implements a CRUD for the Customer entity, as we can see in the following code block:
1. [Route("api/[controller]")]
2. [ApiController]
3. public class CustomersNormalEFController :
ControllerBase
4. {
5. private readonly SampleDbContext
sampleDbContext;
6. public
CustomersNormalEFController(SampleDbContext
dbContext)
7. {
8. sampleDbContext = dbContext;
9. }
10.
11. [HttpGet("Customers/{id}")]
12. public IActionResult GetById(int id)
13. {
14. return new OkObjectResult(sampleDbContext.Customers.FirstOrDefault(x => x.Id == id));
15. }
16. [HttpGet("Customers")]
17. public IActionResult GetAll()
18. {
19. return
Ok(sampleDbContext.Customers.ToList());
20. }
21. [HttpPost("Customer")]
22. public IActionResult
PostSingle([FromBody] Customer customer)
23. {
24.
sampleDbContext.Customers.Add(customer);
25. sampleDbContext.SaveChanges();
26.
27. return
CreatedAtAction(nameof(GetById), new { id =
customer.Id }, customer);
28. }
29. [HttpPut("Customer")]
30. public IActionResult
PutSingle([FromBody] Customer customer)
31. {
32. if
(!sampleDbContext.Customers.Any(x => x.Id ==
customer.Id))
33. return NotFound();
34.
35.
36.
sampleDbContext.Customers.Update(customer);
37. sampleDbContext.SaveChanges();
38.
39. return NoContent();
40. }
41. [HttpDelete("Customers/{id}")]
42. public IActionResult Delete(int id)
43. {
44. var obj = sampleDbContext.Customers.Include(x => x.Products).FirstOrDefault(x => x.Id == id);
45. if (obj == null)
46. return NotFound();
47.
48.
sampleDbContext.Products.RemoveRange(obj.Products);
49.
sampleDbContext.Customers.Remove(obj);
50. sampleDbContext.SaveChanges();
51.
52. return NoContent();
53. }
54. }
Unit of work
The unit of work pattern is commonly used in software development to
manage transactions and ensure atomicity and consistency when working
with data. In Entity Framework Core, the unit of work pattern can be
implemented using the DbContext class and its underlying change tracking
mechanism.
Entity Framework Core tracks changes made to entities within a DbContext
instance. This means that any modifications, additions, or deletions
performed on entities are kept in memory until explicitly saved to the
database. The unit of work pattern leverages this change tracking mechanism
to manage transactions effectively.
With the unit of work pattern, developers can group multiple database
operations into a single transaction. By using the BeginTransaction and
Commit/Rollback methods of the DbContext, changes made to entities can be
either committed as a whole or rolled back entirely in case of an error or
exception.
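The UnitOfWork class used in the controller below is not shown in this chapter; a minimal sketch consistent with how it is called, built on top of the generic repository sketched later in the repository pattern section, could look like this (all member names are assumptions inferred from the controller code):

public class UnitOfWork
{
    private readonly SampleDbContext context;

    public UnitOfWork(SampleDbContext context)
    {
        this.context = context;
        CustomerRepository = new GenericRepository<Customer>(context);
        ProductRepository = new GenericRepository<Product>(context);
    }

    public GenericRepository<Customer> CustomerRepository { get; }
    public GenericRepository<Product> ProductRepository { get; }

    // Persists every change tracked by the shared DbContext in a single call,
    // so all repository operations are committed together.
    public void Save() => context.SaveChanges();
}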
A new Web API controller was created to demonstrate the unit of work pattern with Entity Framework Core. It implements a CRUD for the Customer entity, as we can see in the following code block:
1. [Route("api/[controller]")]
2. [ApiController]
3. public class CustomersUnitOfWorkController
: ControllerBase
4. {
5. private readonly UnitOfWork
unitOfWork;
6. public
CustomersUnitOfWorkController(SampleDbContext
dbContext)
7. {
8. unitOfWork = new
UnitOfWork(dbContext);
9. }
10. [HttpGet("Customers/{id}")]
11. public IActionResult GetById(int id)
12. {
13. return new OkObjectResult(unitOfWork.CustomerRepository.GetByID(id));
14. }
15. [HttpGet("Customers")]
16. public IActionResult GetAll()
17. {
18. return
Ok(unitOfWork.CustomerRepository.Get());
19. }
20. [HttpPost("Customer")]
21. public IActionResult
PostSingle([FromBody] Customer customer)
22. {
23.
unitOfWork.CustomerRepository.Insert(customer);
24. unitOfWork.Save();
25.
26. return
CreatedAtAction(nameof(GetById), new { id =
customer.Id }, customer);
27. }
28. [HttpPut("Customer")]
29. public IActionResult
PutSingle([FromBody] Customer customer)
30. {
31.
unitOfWork.CustomerRepository.Update(customer);
32. unitOfWork.Save();
33.
34. return NoContent();
35. }
36. [HttpDelete("Customers/{id}")]
37. public IActionResult Delete(int id)
38. {
39. var customer =
unitOfWork.CustomerRepository.Get(x =>
x.Id.Equals(id), includeProperties:
"Products").FirstOrDefault();
40. foreach (var item in
customer.Products)
41.
unitOfWork.ProductRepository.Delete(item.Id);
42.
43.
unitOfWork.CustomerRepository.Delete(id);
44. unitOfWork.Save();
45.
46. return NoContent();
47. }
48. }
Repository pattern
The repository pattern is a design pattern that provides an abstraction layer
between the application and the data access layer. It helps decouple the
application from the specific data access technology, such as Entity
Framework Core, by defining a set of interfaces and classes representing data
repositories.
In the repository pattern, interfaces are defined to specify the contract for data
access operations. These interfaces typically include methods for CRUD
operations, querying, and any other specific data access requirements of the
application.
Concrete implementations of repository interfaces are created to interact with
the underlying data store, which in this case is Entity Framework Core. These
implementations utilize the DbContext and its DbSet properties to perform the
required operations.
The repository pattern promotes the separation of concerns by isolating the
data access logic from the rest of the application. This enables better
testability, maintainability, and flexibility in switching between different data
access technologies.
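Likewise, the GenericRepository<TEntity> injected into the controllers is not listed in the chapter. A minimal sketch that matches the calls made in the examples (Get with an optional filter and includeProperties, GetByID, Insert, Update, Delete, and a public context field) might look like this; all signatures are assumptions:

using System.Linq.Expressions;
using Microsoft.EntityFrameworkCore;

public class GenericRepository<TEntity> where TEntity : class
{
    // Exposed publicly only because the sample controllers call context.SaveChanges() directly.
    public readonly SampleDbContext context;
    private readonly DbSet<TEntity> dbSet;

    public GenericRepository(SampleDbContext context)
    {
        this.context = context;
        dbSet = context.Set<TEntity>();
    }

    public IEnumerable<TEntity> Get(
        Expression<Func<TEntity, bool>>? filter = null,
        string includeProperties = "")
    {
        IQueryable<TEntity> query = dbSet;

        if (filter != null)
            query = query.Where(filter);

        // Eagerly load any comma-separated navigation properties, for example "Products".
        foreach (var includeProperty in includeProperties.Split(',', StringSplitOptions.RemoveEmptyEntries))
            query = query.Include(includeProperty.Trim());

        return query.ToList();
    }

    public TEntity? GetByID(object id) => dbSet.Find(id);

    public void Insert(TEntity entity) => dbSet.Add(entity);

    public void Update(TEntity entity) => dbSet.Update(entity);

    public void Delete(object id)
    {
        var entity = dbSet.Find(id);
        if (entity != null)
            dbSet.Remove(entity);
    }
}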
A new Web API controller was created to demonstrate the repository pattern with Entity Framework Core. It implements a CRUD for the Customer entity, as we can see in the following code block:
1. [Route("api/[controller]")]
2. [ApiController]
3. public class
CustomersRepoPatternController :
ControllerBase
4. {
5. private readonly
GenericRepository<Customer>
customerRepository;
6. private readonly
GenericRepository<Product> productRepository;
7. public CustomersRepoPatternController(GenericRepository<Customer> customerRepository, GenericRepository<Product> productRepository)
8. {
9. this.customerRepository =
customerRepository;
10. this.productRepository =
productRepository;
11. }
12.
13. [HttpGet("Customers/{id}")]
14. public IActionResult GetById(int id)
15. {
16. return new
OkObjectResult(customerRepository.GetByID(id));
17. }
18. [HttpGet("Customers")]
19. public IActionResult GetAll()
20. {
21. return
Ok(customerRepository.Get());
22. }
23. [HttpPost("Customer")]
24. public IActionResult
PostSingle([FromBody] Customer customer)
25. {
26.
customerRepository.Insert(customer);
27.
customerRepository.context.SaveChanges();
28.
29. return
CreatedAtAction(nameof(GetById), new { id =
customer.Id }, customer);
30. }
31. [HttpPut("Customer")]
32. public IActionResult
PutSingle([FromBody] Customer customer)
33. {
34.
customerRepository.Update(customer);
35.
customerRepository.context.SaveChanges();
36.
37. return NoContent();
38. }
39. [HttpDelete("Customers/{id}")]
40. public IActionResult Delete(int id)
41. {
42. var customer = customerRepository.Get(x => x.Id.Equals(id), includeProperties: "Products").FirstOrDefault();
43. foreach (var item in
customer.Products)
44.
productRepository.Delete(item.Id);
45.
46. customerRepository.Delete(id);
47.
customerRepository.context.SaveChanges();
48.
49. return NoContent();
50. }
51. }
A few changes were made in the Program.cs to inject the dependencies used in the examples above. You can see the full Program.cs below:
1. using Microsoft.EntityFrameworkCore;
2. using WebAppEFCoreCodeFirst.Database;
3.
4. var builder =
WebApplication.CreateBuilder(args);
5. builder.Services.AddMvc(options =>
options.EnableEndpointRouting = false);
6. builder.Services.AddScoped<GenericRepository<Customer>>
();
7. builder.Services.AddScoped<GenericRepository<Product>>
();
8. // Add services to the container.
9. builder.Services.AddRazorPages();
10. builder.Services.AddDbContext<SampleDbContext>
(
11. options => options.UseSqlServer("Data
Source=DESKTOP-H20O12E;Initial
Catalog=SampleThiagoDb;Integrated
Security=True;Connect
Timeout=30;Encrypt=False;Trust Server
Certificate=False;Application
Intent=ReadWrite;Multi Subnet
Failover=False"));
12.
13. var app = builder.Build();
14.
15. // Configure the HTTP request pipeline.
16. if (!app.Environment.IsDevelopment())
17. {
18. app.UseExceptionHandler("/Error");
19. // The default HSTS value is 30 days. You may
want to change this for production scenarios, see
https://aka.ms/aspnetcore-hsts.
20. app.UseHsts();
21. }
22.
23.
24. app.UseHttpsRedirection();
25. app.UseStaticFiles();
26. app.UseMvc();
27. app.UseRouting();
28.
29. app.UseAuthorization();
30.
31. app.MapRazorPages();
32.
33. app.Run();
Conclusion
In conclusion, this chapter has provided a comprehensive exploration of
Entity Framework Core (EF Core) and its essential components, including
Database First, Code First, LINQ to Entities, Data Annotations, Data
Modeling, and Data Management. Throughout this chapter, we have delved
into the various approaches and techniques to master EF Core.
We began by understanding the Database First approach, which enables us to
generate entity classes and the DbContext from an existing database schema.
We explored customization options, allowing us to tailor the generated code
to our specific needs. This approach is particularly useful when working with
legacy databases or integrating EF Core into an existing project. Next, we
discussed the Code First approach, where we learned how to define entity
classes, configure relationships, and generate the database schema using
migrations. This approach provides flexibility and control over the database
design, making it ideal for greenfield projects or scenarios where the database
schema is developed alongside the application. We then dived into the power
of LINQ to Entities, which empowers us to write expressive queries and
perform advanced data manipulation within EF Core. By leveraging LINQ's
rich query syntax and operators, we can efficiently retrieve, filter, and shape
data to meet our application's requirements.
Data Annotations proved to be an invaluable tool in our arsenal, enabling us
to apply validation rules to entities, map them to database tables, and define
relationships. With Data Annotations, we can ensure data integrity,
streamline data access, and improve the overall efficiency of our application.
We also explored the realm of Data Modeling in EF Core, encompassing
entity relationships, inheritance, and complex data types. By mastering these
modeling techniques, we can design robust and scalable data models that
accurately represent the domain and enable efficient data management.
Lastly, we discussed Data Management, which covered various aspects such
as efficient data retrieval, modification, deletion, and optimization
techniques. We explored strategies to handle transactions, concurrency
control, and effectively manage large datasets. These skills are essential for
building high-performing applications that can handle complex data scenarios
with ease.
By accomplishing the objectives outlined in this chapter, you have acquired a
solid foundation in mastering Entity Framework Core. Whether you prefer the Database First or the Code First approach, you have gained proficiency in using LINQ to Entities for data manipulation, understand the benefits of Data Annotations, can effectively design and manage data models, and are equipped with the skills to optimize data operations.
With this newfound knowledge, you are well-prepared to harness the power
of Entity Framework Core and build robust, scalable, and efficient data
access layers in your applications. EF Core's versatility and feature-rich
ecosystem will empower you to tackle even the most challenging data
scenarios, ensuring the success of your projects.
In the upcoming chapter we will explore serverless computing with Azure
Functions. This chapter serves as a comprehensive guide to understanding the
fundamental concepts of Azure Functions, beginning with an introduction to
Triggers and Bindings. Through a step-by-step case study, you will learn how
to create Azure Functions, select appropriate triggers, and leverage bindings
to seamlessly integrate with other Azure services. By the end of this chapter,
you will be equipped with the knowledge and practical skills to build and
deploy your own Azure Functions, enabling you to implement scalable and
efficient cloud-based solutions.
CHAPTER 4
Getting Started with Azure
Functions
Introduction
Azure Functions is a powerful serverless computing service provided by
Microsoft Azure that allows you to build and deploy applications quickly and
efficiently. It enables you to focus on writing code for specific tasks or
functionalities without worrying about managing the underlying
infrastructure. With Azure Functions, you can create small, single-purpose
functions that respond to events and scale automatically to handle varying
workloads.
In this chapter, we will explore the fundamentals of Azure Functions and
guide you through getting started with this versatile service. We will begin by
clearly defining Azure Functions and highlighting their key features and
benefits. Understanding these foundational concepts will help you grasp the
true potential of Azure Functions and how they can streamline your
application development and deployment.
Next, we will delve into triggers and bindings, which are essential
components of Azure Functions. Triggers are the events that initiate the
execution of your functions, while bindings facilitate seamless integration
with various data sources and services.
Additionally, we will cover the concept of bindings and their role in connecting your functions to external resources, making it simple to process input data and generate output from your functions.
We will present a case study with a practical example to reinforce the
concepts discussed. We will walk you through a real-world scenario, guiding
you in creating an Azure Function that demonstrates the power and versatility
of this serverless computing service. From selecting the appropriate trigger to
choosing the right binding and testing the output, you will gain hands-on
experience in building and deploying an Azure Function.
So, let us dive in and get started with Azure Functions!
Structure
This chapter covers the following topics:
Getting started with Azure Functions
Azure Function triggers
Azure Function bindings
Practical case-study
Creating the Azure Function
Selecting the trigger
Picking an appropriate binding
Testing the output
Objectives
Throughout this chapter, we aim to provide a comprehensive understanding
of Azure Functions and their practical implementation. We will begin by
exploring the core concepts and purposes of Azure Functions, shedding light
on their relevance in modern application development. Furthermore, we will
delve into the key features and benefits of Azure Functions, helping you
grasp their significance in building scalable and efficient solutions.
In our exploration, we will conduct a comparative analysis of Azure
Functions against other serverless offerings available in the market. We will
also explain Azure Function bindings, which serve as the bridges connecting
your functions to external resources and data.
We will explore the scaling options available for Azure Functions and delve
into the monitoring capabilities, helping you optimize the performance of
your functions.
As our chapter concludes, we will guide you through the deployment process,
demonstrating how to deploy Azure Functions to Azure. Additionally, you'll
learn about implementing CI/CD pipelines for continuous deployment,
ensuring a seamless and efficient deployment workflow.
Practical case-study
In this case study, we will create an Azure Function with an HTTP trigger,
and we will configure two output bindings: one for creating a Blob in an
Azure Storage Account and another for publishing an event to Azure Event
Hub. This setup involves a single trigger and two output bindings, enabling
us to seamlessly handle incoming HTTP requests and store data in Blob
storage while simultaneously triggering events for further processing or
analysis through Azure Event Hub. By leveraging these bindings, we can
easily integrate our Azure Function with multiple services, enabling efficient
data processing and event-driven workflows.
After creating the function, you will be presented with the output, indicating
that your function has been successfully created. At this point, your function
is ready to run, and you will have a boilerplate code provided as a starting
point. This code serves as a foundation for implementing the logic specific to
your function's requirements.
With the boilerplate code provided, you have a solid starting point for your
function. You can modify and enhance the code to meet your specific
requirements, incorporating the necessary business logic and interacting with
external resources through bindings. Take advantage of this foundation to
quickly build and deploy your custom function while leveraging the benefits
of Azure Functions. You can see an Azure Function running in the Azure Portal below:
Figure 4.4: Output for the created function with the HTTP trigger.
Before proceeding further, ensure that you have created an Azure Event Hub
namespace. You can see an example of an Azure Event Hub namespace
below:
Additionally, ensure that your Azure Event Hub namespace has an Event
Hub created within it. You can see an example of an Azure Event Hub below:
Figure 4.7: The recently created Event Hub, inside the Event Hub Namespace.
Now, let us shift our focus back to the Azure Function, as it is time to
configure the output bindings. Configuring the output bindings allows your
function to seamlessly interact with external resources and services as you
can see from the figure below:
Figure 4.8: Adding more output for the Azure Function.
Let us begin by setting up our Blob Storage binding. This binding will allow
your Azure Function to interact with Azure Blob storage. You may check
how to set the output from the picture below:
Figure 4.9: Configuring the Azure Blob Storage as output.
Now, let us proceed to set up our second output binding, the Event Hub. This
binding will enable your Azure Function to publish events or messages to an
Azure Event Hub. You may check how to set the second output from the
picture below:
Figure 4.10: Configuring the Event Hub as output.
Figure 4.11: Overview of the capture settings from the Event Hub.
Figure 4.12: The Azure Function flow after adding both outputs.
It is time to update our code so that we can create the Blob and publish the Event. Here is an example of how your Run.csx file should look:
1. #r "Newtonsoft.Json"
2.
3. using System.Net;
4. using Microsoft.AspNetCore.Mvc;
5. using Microsoft.Extensions.Primitives;
6. using Newtonsoft.Json;
7.
8. public static IActionResult Run(HttpRequest
req, ILogger log, out string outputBlob, out
string outputEventHubMessage)
9. {
10. log.LogInformation("C# HTTP trigger
function processed a request.");
11.
12. string name = req.Query["name"];
13.
14. string requestBody = new
StreamReader(req.Body).ReadToEnd();
15. dynamic data =
JsonConvert.DeserializeObject(requestBody);
16. name = name ?? data?.name;
17.
18. string responseMessage =
string.IsNullOrEmpty(name)
19. ? "This HTTP triggered function
executed successfully. Pass a name in the
query string or in the request body for a
personalized response."
20. : $"Hello, {name}. This HTTP
triggered function executed successfully.";
21.
22. outputBlob= "Blob:" +name;
23. outputEventHubMessage= "Message by:
"+name;
24. return new
OkObjectResult(responseMessage);
25. }
Here is an example of how your function.json file may look; it is generated automatically based on the bindings you have configured:
1. {
2. "bindings": [
3. {
4. "authLevel": "function",
5. "name": "req",
6. "type": "httpTrigger",
7. "direction": "in",
8. "methods": [
9. "get",
10. "post"
11. ]
12. },
13. {
14. "name": "$return",
15. "type": "http",
16. "direction": "out"
17. },
18. {
19. "name": "outputBlob",
20. "direction": "out",
21. "type": "blob",
22. "connection":
"samplethiagostorage_STORAGE",
23. "path": "outcontainer/{rand-guid}"
24. },
25. {
26. "name": "outputEventHubMessage",
27. "direction": "out",
28. "type": "eventHub",
29. "connection":
"eventhubthiago_RootManageSharedAccessKey_EVENTHUB"
30. "eventHubName": "outeventhub"
31. }
32. ]
33. }
In the function.json file:
The bindings property contains an array of binding configurations for
each input and output binding.
The HTTP trigger binding is defined with the "type": "httpTrigger"
and "direction": "in" properties, specifying the supported HTTP
methods ("methods": ["get", "post"]) and the name of the trigger
("name": "req").
The Blob Storage output binding is defined with the "type": "blob" and "direction": "out" properties. It includes the blob path ("path": "outcontainer/{rand-guid}") and the name of the connection setting ("connection": "samplethiagostorage_STORAGE").
The Event Hub output binding is defined with the "type": "eventHub" and "direction": "out" properties. It includes the Event Hub name ("eventHubName": "outeventhub") and the name of the connection setting ("connection": "eventhubthiago_RootManageSharedAccessKey_EVENTHUB").
For precompiled functions, a "scriptFile" property specifies the location of the function's compiled code (for example, "../bin/MyFunction.dll") and an "entryPoint" property defines the function's entry point method (for example, "MyFunction.Run"). These properties do not appear in the portal-generated file above.
Figure 4.13: Running the Azure Function from the Azure Portal.
We can also trigger the function from the browser, as we can see from the
picture below:
Figure 4.14: Running the Azure Function from the Web Browser.
Let us proceed to validate the output bindings, starting with Blob Storage. You can see the successfully created blob in the following figure:
Now, let us check whether events are being published to the Event Hub. You can see the successfully published event in the picture below:
Figure 4.16: Event successfully published to Event Hub.
Conclusion
In this chapter, we have embarked on our journey to explore Azure Functions
and get started with this powerful serverless computing service. We began by
understanding the concept and purpose of Azure Functions, recognizing their
benefits, and comparing them to other serverless offerings. We then delved
into the fundamental building blocks of Azure Functions, namely triggers and
bindings, which play a crucial role in enabling event-driven application
development and seamless integration with external resources.
To solidify our understanding, we dived into a practical case study, where we
walked through the process of creating an Azure Function, selecting an
appropriate trigger, and configuring the binding. This hands-on experience
allowed us to witness firsthand how Azure Functions can be utilized to solve
real-world challenges.
Throughout this chapter, we have covered essential topics such as testing the
output of Azure Functions, scaling options, and monitoring capabilities. We
also touched upon deployment strategies and best practices for managing and
monitoring Azure Functions throughout their lifecycle.
By now, you should have a strong foundation in Azure Functions and be
equipped with the knowledge and skills necessary to create your own
functions. You understand the power and versatility of triggers and bindings
and their role in building scalable, event-driven applications.
In the upcoming chapter we'll embark on a journey into the heart of Azure's
powerful data management and storage services. Get ready to explore the
capabilities of Azure SQL for robust relational data management, dive into
the world of globally-distributed NoSQL databases with Azure Cosmos DB,
and discover the versatility of Azure Blob Storage for handling unstructured
data. This chapter will equip you with the knowledge and practical skills to
harness these Azure services to efficiently store, manage, and query your
data, taking your cloud-based applications to the next level.
Points to remember
Azure Functions Fundamentals:
Azure Functions are serverless compute resources that allow you to
run code in response to various events.
They are highly scalable and cost-effective, as you only pay for the
resources used during execution.
Triggers and Bindings:
Triggers initiate the execution of Azure Functions based on events
or schedules, such as HTTP requests, timers, or queue messages.
Bindings facilitate interaction with external resources, allowing
input and output data to be seamlessly integrated into functions.
Creating Azure Functions:
You can create Azure Functions using the Azure Portal, Visual
Studio, Azure CLI, or other development tools.
Select an appropriate trigger type based on the event that should
trigger your function's execution.
Function Configuration:
Configure input and output bindings within your function to interact
with various Azure services like Blob Storage, Event Hub, and
more.
Testing and Debugging:
Test your functions locally using tools like Azure Functions Core
Tools.
Debug functions with built-in debugging capabilities.
Deployment and Scaling:
Deploy your Azure Functions to the Azure cloud for production use.
Azure Functions auto-scale based on demand, ensuring resource
efficiency.
Use cases:
Azure Functions are suitable for a wide range of use cases,
including data processing, webhooks, IoT event handling, and more.
Exercises
1. Reproduce the case-study.
2. Explore other input bindings.
3. Explore other trigger bindings.
4. Explore other output bindings.
Introduction
In today's digital landscape, the demand for efficient and scalable data
storage and management solutions has grown exponentially. Microsoft Azure
offers a comprehensive suite of cloud-based services that cater to diverse data
storage and management requirements. This chapter will delve into three key
components of Azure's data services: Azure SQL, Cosmos DB, and Blob
Storage.
Azure SQL is a fully managed relational database service provided by Azure.
It enables you to build, deploy, and scale relational applications with ease.
Whether you are working on a small-scale project or managing large
enterprise-level applications, Azure SQL provides a reliable and secure
platform for your data storage needs. This chapter will provide an overview
of Azure SQL, discussing its features, benefits, and how it fits into the
broader Azure ecosystem. Additionally, we will explore various usage
examples that showcase the versatility and applicability of Azure SQL in
real-world scenarios.
Cosmos DB, on the other hand, is a globally distributed, multi-model
database service designed to handle massive-scale applications seamlessly.
With Cosmos DB, you can store and retrieve data using a variety of models,
including document, key-value, column-family, graph, and more. Its
impressive scalability, low-latency performance, and global distribution
capabilities make Cosmos DB an ideal choice for building highly responsive
and globally accessible applications. Throughout this chapter, we will
provide an overview of Cosmos DB, highlighting its key features and
discussing practical usage examples that illustrate its potential in different
domains.
Blob Storage, the third component covered in this chapter, is Azure's storage
solution specifically designed for storing unstructured data such as images,
videos, documents, and logs. Blob Storage offers scalable and cost-effective
storage options, allowing you to easily manage and access your unstructured
data in the cloud. In this chapter, we will explore the various features of Blob
Storage and discuss how it can be leveraged in different scenarios.
Additionally, we will provide practical usage examples that demonstrate the
versatility and benefits of Blob Storage for handling unstructured data.
By the end of this chapter, you will have a solid understanding of Azure
SQL, Cosmos DB, and Blob Storage. You will be equipped with the
knowledge needed to leverage these services effectively, enabling you to
build robust and scalable solutions that meet your data storage and
management requirements in the Azure cloud environment. So, let us dive in
and explore the world of Azure SQL, Cosmos DB, and Blob Storage!
Structure
This chapter covers the following topics:
Azure SQL, Cosmos DB and Blob Storage
Azure SQL
Cosmos DB
Blob Storage
Objectives
Gain a comprehensive understanding of key Azure services by chapter's end,
delving into Azure SQL's fundamentals and its pivotal role in the data
services ecosystem. Explore its features, emphasizing scalability and security,
with practical insights from various usage examples. Transition seamlessly to
Cosmos DB, a globally distributed, multi-model database service,
understanding diverse data models and real-world performance attributes.
Focus on Blob Storage as Azure's solution for unstructured data, covering
scalability and cost-effectiveness. Acquire skills for effective utilization of
Azure SQL, Cosmos DB, and Blob Storage, empowering readers to architect
and manage solutions within the Azure landscape.
Azure SQL
Azure SQL is a fully managed relational database service provided by
Microsoft Azure. It offers a reliable and scalable platform for storing and
managing relational data in the cloud. Azure SQL eliminates the need for
managing infrastructure and provides a range of features and capabilities that
enable developers to focus on building applications rather than managing
database servers.
This service is built on the Microsoft SQL Server engine and supports a wide
range of familiar tools and technologies used in SQL Server environments.
Azure SQL offers compatibility with existing SQL Server applications,
making it seamless for organizations to migrate their on-premises databases
to the cloud.
Key features of Azure SQL:
Scalability: Azure SQL can scale up or down based on demand,
allowing applications to handle varying workloads efficiently.
High availability: It provides built-in high availability with automatic
failover and redundancy, ensuring continuous access to data.
Security: Azure SQL offers robust security features, including data
encryption, threat detection, and identity management, to protect
sensitive information.
Managed service: Microsoft manages the underlying infrastructure,
taking care of tasks like patching, backups, and maintenance, so users
can focus on application development.
Integration: Azure SQL integrates seamlessly with other Azure
services, allowing for data integration, analytics, and application
development within the Azure ecosystem.
Usage examples of Azure SQL:
Web applications: Azure SQL is commonly used as the backend
database for web applications, providing a scalable and reliable data
storage solution.
Line-of-business applications: It is suitable for building and
managing business-critical applications that require secure and highly
available data storage.
Data warehousing: Azure SQL can be utilized for data warehousing,
enabling organizations to analyze and process large volumes of
structured data.
Reporting and analytics: It supports reporting and analytics solutions,
allowing users to gain insights from data and generate meaningful
reports.
Software as a Service (SaaS): Azure SQL is commonly used by SaaS
providers as the database backend for their multi-tenant applications.
By leveraging Azure SQL, organizations can benefit from the advantages of a
fully managed, scalable, and secure relational database service in the Azure
cloud environment. It provides the necessary tools and features to build and
manage high-performance applications without the overhead of infrastructure
management, allowing businesses to focus on innovation and growth.
Usage example
In this practical example, we will walk through the process of creating a web
app and establishing a connection with an Azure SQL Server. By following
these steps, you will be able to set up a web application that securely interacts
with the Azure SQL Server for data storage and retrieval.
After successfully creating the SQL Server, the next step is to create a
database within the server. The database serves as the container for
organizing and storing your data. Follow these steps to create a database in
Azure SQL Server:
1. Access the Azure portal (portal.azure.com) using your Azure account
credentials.
2. Navigate to the Azure SQL Server section and select the specific server
where you want to create the database.
3. Within the server's blade, locate the "Databases" section and click on the
"Create database" button.
4. Provide a unique name for the database, keeping in mind any naming
conventions or project-specific requirements.
5. Specify the desired database settings, such as the collation, which
determines the sorting and comparison rules for character data.
6. Choose the appropriate pricing tier based on your performance and
capacity needs. Azure offers different tiers, including Basic, Standard,
and Premium, each with varying performance and features.
7. Configure additional settings, such as enabling Advanced Threat
Protection or enabling Geo-Redundant Backup for enhanced security and
data protection.
8. Review the configuration details and click on the "Review + Create"
button to validate your choices.
9. After validation, click on "Create" to initiate the database creation
process. Azure will provision the database within the specified SQL
Server.
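With the server and database provisioned, the web application's connection registration (shown earlier in the Entity Framework Core chapter) only needs an Azure SQL connection string. Here is a minimal sketch with placeholder values that you must replace with your own server, database, and credentials:

builder.Services.AddDbContext<SampleDbContext>(options =>
    options.UseSqlServer(
        "Server=tcp:<your-server>.database.windows.net,1433;" +
        "Initial Catalog=<your-database>;" +
        "User ID=<your-user>;Password=<your-password>;" +
        "Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;"));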
In the following figure, we have an overview of a SQL Server database.
Update database:
Figure 5.6: Package Manager Console with the update-database command.
Cosmos DB
Cosmos DB is a globally distributed, multi-model database service provided
by Microsoft Azure. It is designed to handle massive-scale applications with
ease, providing high throughput, low-latency performance, and global reach.
Cosmos DB supports various data models, including document, key-value,
column-family, graph, and more, making it a versatile choice for diverse
application requirements.
Key features of Cosmos DB:
Global distribution: Cosmos DB enables data to be replicated across
multiple regions worldwide, ensuring low-latency access and high
availability for users across the globe.
Scalability: It offers horizontal scalability with automatic partitioning,
allowing applications to scale seamlessly as data volumes and
throughput requirements grow.
Multi-model support: Cosmos DB supports multiple data models,
allowing developers to choose the most suitable model for their
applications and switch between them seamlessly.
Consistency levels: It provides a range of consistency options,
allowing developers to choose the level of consistency required for
their applications, from strong to eventual consistency.
Turnkey global distribution: Cosmos DB takes care of data
replication and distribution across regions, abstracting the complexity
and allowing developers to focus on building applications.
Usage examples of Cosmos DB:
High-volume web and mobile applications: Cosmos DB is ideal for
applications that require low-latency access to data across the globe,
such as e-commerce platforms or social media networks.
IoT and telemetry data: It can handle massive streams of IoT and
telemetry data, storing and processing high-volume data points in real-
time.
Personalized user experiences: Cosmos DB supports graph databases,
making it suitable for applications that require complex relationship
mapping, such as recommendation engines or social networks.
Time-series data: It can efficiently store and process time-series data,
making it suitable for applications in finance, IoT, and monitoring
systems.
Content management and catalogs: Cosmos DB can be used to manage and serve content catalogs, allowing efficient querying and retrieval of large amounts of structured data.
Cosmos DB Containers
Azure Cosmos Containers provide scalability for both storage and throughput
in Azure Cosmos DB. They are also advantageous when you require different
configurations among your Azure Cosmos DBs, as each container can be
individually configured.
Azure Cosmos Containers have specific properties, which can be system-
generated or user-configurable, depending on the API used. These properties
range from unique identifiers for containers to purging policy configurations.
You can find the complete list of properties for each API in the
documentation.
During creation, you can choose between two throughput strategies:
Dedicated mode: In this mode, the provisioned throughput is
exclusively allocated to the container and comes with SLAs.
Shared mode: In this mode, the provisioned throughput is shared
among all containers operating in shared mode.
Cosmos DB Containers are available for all Cosmos DB APIs except the
Gremlin API and Table API.
Scaling Cosmos DB
Azure Cosmos DB offers manual and automatic scaling options without
service interruption or impact on Azure Cosmos DB SLAs.
With automatic scaling, Azure Cosmos DB dynamically adjusts your
throughput capacity based on usage without the need for manual logic or
code. You simply set the maximum throughput capacity, and Azure adjusts
the throughput within the range of 10% to 100% of the maximum capacity.
Manual scaling allows you to change the throughput capacity permanently
according to your requirements.
It is crucial to choose partition keys wisely before scaling Azure Cosmos DB
to avoid hot partitions, which can increase costs and degrade performance.
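As a hedged illustration, provisioning a container with autoscale throughput from C# using the Microsoft.Azure.Cosmos SDK (installed later in this section) might look like the sketch below; the container name and the 4,000 RU/s maximum are arbitrary example values, and cosmosClient is an existing CosmosClient instance:

// Autoscale lets Azure vary the throughput between 10% (400 RU/s)
// and 100% (4000 RU/s) of the configured maximum.
DatabaseResponse databaseResponse =
    await cosmosClient.CreateDatabaseIfNotExistsAsync("ThiagoCosmosDB");

ContainerResponse containerResponse =
    await databaseResponse.Database.CreateContainerIfNotExistsAsync(
        new ContainerProperties(id: "AutoscaleContainer", partitionKeyPath: "/id"),
        ThroughputProperties.CreateAutoscaleThroughput(4000));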
When configuring autoscale, consider important topics such as:
Time to Live (TTL): Define the TTL for the container, enabling item-
specific or all-items expiration.
Geospatial Configuration: Query items based on location.
Geography: Represents data in a round-earth coordinate system.
Geometry: Represents data in a flat coordinate system.
Partition Key: The key used for partitioning and scaling.
Indexing Policy: Define how indexes are applied to container items,
including properties to include or exclude, consistency modes, and
automatic index application.
This is the output when you have your Azure Cosmos DB account created:
Now, in our web application, we must install the latest version of the following NuGet package:
Microsoft.Azure.Cosmos
We then create three classes:
1. public class Address
2. {
3. [JsonProperty(PropertyName = "id")]
4. public string Id { get; set; }
5. public string City { get; set; }
6. public string StreetAndNumber { get;
set; }
7. }
8. public class Person
9. {
10. [JsonProperty(PropertyName = "id")]
11. public string Id { get; set; }
12. public DateTime BirthDate { get; set;
}
13. public string Name { get; set; }
14. public string LastName { get; set; }
15. public Address Address { get; set; }
16. public Vehicle Vehicle { get; set; }
17. }
18. public class Vehicle
19. {
20. [JsonProperty(PropertyName = "id")]
21. public string Id { get; set; }
22. public int Year { get; set; }
23. public string Model { get; set; }
24. public string Make { get; set; }
25. }
We have also created a helper to populate the DB with some data:
1. public static class CreatePerson
2. {
3. public static Person GetNewPerson()
4. {
5. Random random = new Random();
6. return new Person
7. {
8. BirthDate = DateTime.Now.AddYears(-28),
9. Id = random.Next() + "Thiago",
10. Name = "Thiago",
11. LastName = "Araujo",
12. Vehicle = new Vehicle
13. {
14. Id = random.Next() +
"BMW",
15. Make = "BMW",
16. Model = "116D",
17. Year = random.Next()
18. },
19. Address = new Address
20. {
21. Id = random.Next() +
"Portugal",
22. City = "Lisbonne",
23. StreetAndNumber = "Avenida
da Liberdade, 25"
24. }
25. };
26. }
27. }
We also updated the Program.cs to inject the Cosmos DB client:
1. using Microsoft.Azure.Cosmos;
2.
3. var builder =
WebApplication.CreateBuilder(args);
4.
5. // Add services to the container.
6. builder.Services.AddRazorPages();
7.
8. SocketsHttpHandler socketsHttpHandler = new
SocketsHttpHandler();
9. // Customize this value based on desired DNS
refresh timer
10. socketsHttpHandler.PooledConnectionLifetime =
TimeSpan.FromMinutes(5);
11. // Registering the Singleton SocketsHttpHandler
lets you reuse it across any HttpClient in your
application
12. builder.Services.AddSingleton<SocketsHttpHandler>
(socketsHttpHandler);
13.
14. // Use a Singleton instance of the CosmosClient
15. builder.Services.AddSingleton<CosmosClient>
(serviceProvider =>
16. {
17. SocketsHttpHandler socketsHttpHandler =
serviceProvider
18.
.GetRequiredService<SocketsHttpHandler>();
19. CosmosClientOptions cosmosClientOptions =
20. new CosmosClientOptions()
21. {
22. HttpClientFactory = () => new
HttpClient(socketsHttpHandler, disposeHandler:
false)
23. };
24.
25. return new CosmosClient(
26. "https://thiagocosmosdb.documents.azure.com:443/"
27. "tBXgs21B8293MdNZZefy8BCt4QaPrLsLhvxecyGDh61HpoUQUSb98K
28. "2mOfGkagZgP0MHACDb1rPeYQ==",
29. cosmosClientOptions) ;
30. });
31.
32. var app = builder.Build();
33.
34. // Configure the HTTP request pipeline.
35. if (!app.Environment.IsDevelopment())
36. {
37. app.UseExceptionHandler("/Error");
38. // The default HSTS value is 30 days. You may
want to change this for production scenarios, see
https://aka.ms/aspnetcore-hsts.
39. app.UseHsts();
40. }
41.
42. app.UseHttpsRedirection();
43. app.UseStaticFiles();
44.
45. app.UseRouting();
46.
47. app.UseAuthorization();
48.
49. app.MapRazorPages();
50.
51. app.Run();
52.
And this is the implementation of our Cosmos DB for NoSQL usage in the Index.cshtml.cs class:
using CosmosDBWebApp.Helper;
using CosmosDBWebApp.Models;
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.RazorPages;
using Microsoft.Azure.Cosmos;

namespace CosmosDBWebApp.Pages
{
    public class IndexModel : PageModel
    {
        private readonly ILogger<IndexModel> _logger;
        private readonly CosmosClient cosmosClient;
        private readonly string databaseName = "ThiagoCosmosDB";
        private readonly string sourceContainerName = "ThiagoContainer";

        public IndexModel(ILogger<IndexModel> logger, CosmosClient cosmosClient)
        {
            this._logger = logger;
            this.cosmosClient = cosmosClient;
        }

        public async Task OnGet()
        {
            DatabaseResponse databaseResponse =
                await cosmosClient.CreateDatabaseIfNotExistsAsync(databaseName);
            Database database = databaseResponse.Database;

            ContainerResponse container =
                await database.CreateContainerIfNotExistsAsync(
                    new ContainerProperties(sourceContainerName, "/id"));

            await CreateItemsAsync(cosmosClient, database.Id, container.Container.Id);
        }

        private static async Task CreateItemsAsync(
            CosmosClient cosmosClient, string databaseId, string containerId)
        {
            Container sampleContainer = cosmosClient.GetContainer(databaseId, containerId);

            for (int i = 0; i < 15; i++)
            {
                var person = CreatePerson.GetNewPerson();
                await sampleContainer.CreateItemAsync<Person>(person,
                    new PartitionKey(person.Id));
            }
        }
    }
}
This is the output, with our container created and data in it:
Figure 5.11: Azure Cosmos DB Data Explorer with recently created data.
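The listing above only creates items. As a complement, here is a minimal, hedged sketch of reading the data back with a parameterized query; the method name ReadPeopleAsync is hypothetical, and the database and container names simply reuse the ones from the page model above.

private static async Task ReadPeopleAsync(CosmosClient cosmosClient)
{
    // Reuses the database and container created by the page model above.
    Container container = cosmosClient.GetContainer("ThiagoCosmosDB", "ThiagoContainer");

    // Parameterized SQL-like query over the stored Person documents.
    QueryDefinition query = new QueryDefinition(
        "SELECT * FROM c WHERE c.Name = @name")
        .WithParameter("@name", "Thiago");

    using FeedIterator<Person> iterator = container.GetItemQueryIterator<Person>(query);

    while (iterator.HasMoreResults)
    {
        FeedResponse<Person> page = await iterator.ReadNextAsync();
        foreach (Person person in page)
        {
            Console.WriteLine($"{person.Id} - {person.Name} {person.LastName}");
        }
    }
}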
Blob Storage
Blob Storage is a storage solution provided by Microsoft Azure that is
designed specifically for storing unstructured data, such as images, videos,
documents, logs, and other large files. It offers scalable and cost-effective
storage options, making it easy to manage and access unstructured data in the
cloud.
Following are some key features of Blob Storage:
Scalability: Blob Storage can scale to accommodate the storage needs
of any application, whether it is a small-scale project or a high-traffic
enterprise application.
Cost-effectiveness: It provides flexible pricing options, allowing users
to choose storage tiers based on their data access frequency and cost
requirements.
Durability and availability: Blob Storage ensures durability by
storing multiple copies of data across different storage nodes. It also
offers built-in redundancy and high availability for data access.
Easy accessibility: Blob Storage provides multiple access methods,
allowing users to easily retrieve and manage their data
programmatically or through Azure portal, APIs, or command-line
tools.
Integration: It seamlessly integrates with other Azure services,
enabling users to leverage Blob Storage as a backend for various
applications, data processing, and analytics workflows.
Usage examples of Blob Storage:
Media storage and delivery: Blob Storage is commonly used to store
and deliver media files, such as images, videos, and audio files, for
web applications or content management systems.
Backup and disaster recovery: It provides an efficient and cost-
effective solution for backing up data and creating disaster recovery
copies in the cloud.
Logging and analytics: Blob Storage can be used to store logs
generated by applications or systems, enabling later analysis and
processing for operational insights.
Archiving and compliance: It is suitable for long-term archival of
data that needs to be stored for regulatory compliance or historical
purposes.
Data Distribution: Blob Storage supports the distribution of large
datasets across multiple regions, allowing efficient global access to
data by users.
By leveraging Blob Storage, organizations can effectively manage and store
unstructured data in the cloud. Its scalability, cost-effectiveness, and easy
accessibility make it a valuable tool for a wide range of applications, from
media storage and delivery to backup and analytics. Whether it is a small-
scale project or a large enterprise solution, Blob Storage provides a reliable
and efficient way to manage unstructured data in the Azure cloud
environment.
Scaling Blob Storage
Scaling Blob Storage in Azure involves adjusting the storage capacity and
performance to meet your data storage and retrieval needs. Blob Storage
provides flexible scaling options to accommodate varying workloads and
storage requirements. Here are the key approaches to scaling Blob Storage:
Capacity scaling
Blob Storage allows you to scale the storage capacity by increasing
or decreasing the amount of data you can store.
You can easily add more storage by provisioning additional storage
accounts or expanding the existing storage accounts to handle larger
data volumes.
Azure provides virtually limitless storage scalability, allowing you
to scale up as your storage needs grow.
Performance scaling
Blob Storage provides two main access tiers: Hot and Cool.
Hot access tier offers higher performance and is optimized for
frequently accessed data.
Cool access tier provides lower-cost storage for data that is less
frequently accessed.
You can choose the appropriate tier based on the access patterns and
frequency of data retrieval to optimize costs and performance.
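As a small illustration of working with these tiers from code, the sketch below moves a single blob to the Cool tier using the Azure.Storage.Blobs package; the connection string, container, and blob names are placeholders, not values from this chapter.

using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

// Placeholder values for illustration only.
string connectionString = "<storage-account-connection-string>";
BlobClient blob = new BlobClient(connectionString, "samplecontainer", "report.pdf");

// Move a rarely read blob from the Hot tier to the Cool tier to reduce storage cost.
await blob.SetAccessTierAsync(AccessTier.Cool);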
Parallelism and throughput scaling
Azure Blob Storage can handle high-throughput scenarios by
parallelizing access to the storage accounts.
By distributing the workload across multiple storage accounts, you
can achieve higher throughput and reduce latency.
You can leverage techniques such as sharding, parallel processing,
and load balancing to optimize performance and scale horizontally.
Content Delivery Network (CDN)
Azure CDN can be integrated with Blob Storage to scale content
delivery globally.
CDN caches and delivers content from Blob Storage to users around
the world, reducing latency and improving performance.
By leveraging Azure CDN, you can scale the delivery of static
content, such as images, videos, and documents, to a global
audience.
Lifecycle management policies
Blob Storage provides lifecycle management policies to automate
the movement and deletion of data based on defined rules.
By configuring lifecycle management policies, you can
automatically move less frequently accessed data to the Cool Access
Tier or archive it to Azure Archive Storage to optimize costs.
When scaling Blob Storage, consider factors such as data growth rate, access
patterns, performance requirements, and cost optimization. Regular
monitoring of storage utilization and access patterns helps determine when
and how to scale the storage effectively. Azure Portal, Azure PowerShell,
Azure CLI, or automation tools can be used to manage and automate scaling
operations.
Usage example
In this practical example, we will guide you through the process of creating a
web application and establishing a seamless connection with Azure Blob
Storage. By following these steps, you will be able to set up a robust web app
that leverages the power of Azure Blob Storage for efficient storage and
retrieval of unstructured data.
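As a minimal sketch of that connection (assuming the Azure.Storage.Blobs NuGet package and placeholder account and container names), uploading and listing blobs looks roughly like this:

using Azure.Storage.Blobs;

// Placeholder values; replace them with your storage account connection string and container name.
string connectionString = "<storage-account-connection-string>";
string containerName = "samplecontainer";

BlobContainerClient container = new BlobContainerClient(connectionString, containerName);
await container.CreateIfNotExistsAsync();

// Upload a local file as a block blob, overwriting it if it already exists.
BlobClient blob = container.GetBlobClient("hello.txt");
await blob.UploadAsync("hello.txt", overwrite: true);

// List the blobs in the container to confirm the upload.
await foreach (var item in container.GetBlobsAsync())
{
    Console.WriteLine(item.Name);
}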
Conclusion
In this chapter, we delved into the world of Azure SQL, Cosmos DB, and
Blob Storage, three fundamental components of Azure's data services. We
began by understanding the capabilities and benefits of Azure SQL, a fully
managed relational database service that provides scalability, security, and
ease of use. Through various usage examples, we witnessed how Azure SQL
caters to a wide range of applications, from small-scale projects to enterprise-
level solutions.
Next, we explored Cosmos DB, a globally distributed, multi-model database
service that offers impressive scalability and low-latency performance. We
learned about its versatility in supporting various data models and its global
reach, making it suitable for applications requiring responsiveness and
availability across multiple regions. The usage examples highlighted the
potential of Cosmos DB in different domains, showcasing its ability to handle
massive-scale applications seamlessly.
Lastly, we discussed Blob Storage, Azure's storage solution for unstructured
data. With its scalable and cost-effective options, Blob Storage offers a
convenient way to manage and access unstructured data such as images,
videos, and documents. The practical usage examples demonstrated the utility
of Blob Storage in diverse scenarios, emphasizing its importance in modern
data management strategies.
By exploring Azure SQL, Cosmos DB, and Blob Storage in this chapter, you
have gained a solid understanding of their features, benefits, and real-world
applications. You are now equipped with the knowledge and skills necessary
to leverage these services effectively in the Azure cloud environment. As you
continue your journey with Azure, remember to consider the specific
requirements of your projects and apply best practices to ensure optimal
utilization of these powerful data services.
Azure SQL, Cosmos DB, and Blob Storage provide a robust foundation for
building scalable, secure, and globally accessible applications. Whether you
are working on a small project or managing a large enterprise solution, these
services offer the flexibility and reliability needed to meet your data storage
and management needs. Embrace the possibilities and continue exploring the
vast capabilities of Azure's data services in your future endeavors.
In the upcoming chapter, we venture into the dynamic realm of Async
Operations with Azure Service Bus. This topic unfolds a crucial facet of
Azure's ecosystem, exploring how asynchronous operations enhance
efficiency and scalability in distributed systems. We delve into the
functionalities of Azure Service Bus, a powerful messaging service, and
elucidate how it facilitates seamless communication between decoupled
components. Through practical insights and examples, readers will gain a
comprehensive understanding of how Azure Service Bus empowers
applications to perform asynchronous operations, fostering resilience and
responsiveness in modern cloud architectures.
CHAPTER 6
Unleashing the Power of Async
Operations with Azure Service Bus
Introduction
In today's dynamic landscape, where speed, scalability, and efficient
communication are paramount, mastering asynchronous operations is
indispensable. This chapter immerses us in the realm of Azure Service Bus, a
robust cloud-based messaging service from Microsoft Azure, offering an
array of features crucial for effective asynchronous operations.
We commence with an exploration of Queues, fundamental to Azure Service
Bus, where messages follow a First-In-First-Out (FIFO) approach. Queues
ensure reliable message delivery, enhancing system resilience. We delve into
creating Queues, sending/receiving messages, and navigating various
message processing scenarios.
Transitioning to Topics, we discover their power in enabling a
publish/subscribe pattern, facilitating efficient broadcasting to multiple
subscribers. Topics offer scalability and flexibility, empowering us to create
and define Subscriptions, manage message filtering, and customize routing.
A comprehensive case study guides us through the step-by-step
implementation of Azure Service Bus, offering hands-on experience and real-
world insights. We progress to advanced features, exploring message sessions
that streamline interactions with diverse clients. Message sessions aid in
managing complex workflows and maintaining message order, optimizing
system performance.
The chapter culminates in addressing error handling and exception
management—a critical aspect in the realm of distributed systems. Strategies
for handling exceptions, retrying message processing, and implementing
robust error handling mechanisms are discussed. By the chapter's end, readers
will possess a profound understanding of Azure Service Bus, equipped with
the skills to build resilient systems capable of handling large workloads and
facilitating seamless communication between components. This journey
promises to unlock the full potential of asynchronous operations with Azure
Service Bus.
Structure
This chapter covers the following topics:
Async operations with Service Bus
Azure Service Bus Topics
Azure Service Bus Subscriptions
Case study
Objectives
By the end of this chapter, you will understand the core concepts of Azure
Service Bus, including Queues, Topics, and Subscriptions. You will learn
how to create and manage Queues in Azure Service Bus for reliable message
delivery and processing. Explore the benefits and capabilities of Topics and
Subscriptions, enabling efficient publish/subscribe messaging patterns. Gain
practical insights through a step-by-step case study that demonstrates the
implementation of Azure Service Bus. We will create Topics, define
Subscriptions, and explore message filtering and routing techniques.
Discover the use of message sessions to handle interactions with different
clients and maintain message order within each session. Explore advanced
features and optimizations to enhance the performance and scalability of
Azure Service Bus. Understand exception management and error handling
strategies to ensure the resilience and stability of your applications. Learn
best practices for handling exceptions, retrying message processing, and
implementing robust error handling mechanisms. The readers will acquire the
knowledge and skills necessary to design and build efficient and scalable
systems using Azure Service Bus for asynchronous operations.
Session Queues
Session Queues are a specialized feature within Azure Service Bus that
allows related messages to be grouped together and processed sequentially by
the same receiver. In regular Queues, messages are processed independently
without any inherent relationship between them. However, session Queues
enable message sessions, where a logical stream of messages is associated
with a specific session ID.
Session Queues are particularly useful in scenarios where maintaining the
order and grouping of messages is critical, such as multi-step workflows,
transactional processing, or when different parts of a task need to be
processed in sequence by the same receiver.
By leveraging session Queues, developers can ensure that messages within a
session are processed in the order they were sent, and the state of the session
can be maintained across multiple message processing cycles. This ensures
consistent and reliable handling of related messages and provides greater
control over complex workflows and processing scenarios.
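As a hedged sketch of the sending side (the connection string is a placeholder, while the queue and session ID names match the ones used later in this chapter), messages are grouped into a session simply by assigning the same SessionId:

using Azure.Messaging.ServiceBus;

string connectionString = "<service-bus-connection-string>";
string sessionQueueName = "samplesessionqueue";

await using ServiceBusClient client = new ServiceBusClient(connectionString);
ServiceBusSender sender = client.CreateSender(sessionQueueName);

// Messages sharing a SessionId are delivered, in order, to the same session receiver.
for (int i = 0; i < 3; i++)
{
    ServiceBusMessage message = new ServiceBusMessage($"Order step {i}")
    {
        SessionId = "sampleSessionId"
    };
    await sender.SendMessageAsync(message);
}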
Case study
In the case study, we are working with various functionalities of Azure
Service Bus and implementing different components to achieve specific tasks
related to message handling and processing. For better understanding, we have one
console application for each consumer and another console application for the
publisher. Here is a breakdown of the tasks involved in the case study:
Publishing messages
Implement a sender component to publish messages to Azure
Service Bus Queues.
Use the Azure Service Bus SDK for .NET to send messages to the
Queue.
Consuming messages
Implement a receiver component to consume and process messages
from the Azure Service Bus Queue.
Handle message processing and implement any necessary business
logic.
Consuming message batches
Implement a receiver component to handle messages in batches.
Improve message processing efficiency by processing messages in
batches.
Message processor
Implement a message processor component responsible for handling
message processing logic.
The message processor will be used by the receiver to process
individual messages or message batches.
Consuming sessions
Implement a receiver component to consume and process messages
from Azure Service Bus sessions.
Implement logic to group related messages together using session
IDs.
Session processor
Implement a session processor component responsible for handling
session-based message processing.
The session processor will be used by the receiver to process
messages within a session in the order they were sent.
Consuming Topics and Subscriptions
Implement subscriber components to consume and process
messages from Azure Service Bus Topics and Subscriptions.
Define multiple Subscriptions based on different criteria or interests
and implement logic for selective message delivery.
NuGet package used:
Azure.Messaging.ServiceBus
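The publisher console application itself is not listed in this section. A minimal, hedged sketch of it could look like the following; the connection string placeholder is obtained through the steps described next, and the queue name matches the one used by the consumers below.

using Azure.Messaging.ServiceBus;

string connectionString = "<service-bus-connection-string>";
string queueName = "samplequeue";

await using ServiceBusClient client = new ServiceBusClient(connectionString);
ServiceBusSender sender = client.CreateSender(queueName);

// Publish a few sample messages to the queue.
for (int i = 0; i < 5; i++)
{
    await sender.SendMessageAsync(new ServiceBusMessage($"Message {i}"));
}

Console.WriteLine("Messages published.");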
Now we have to get the connection string of our Service Bus. Please follow these steps:
1. Access the Shared Access Policies: Inside the Service Bus namespace,
navigate to the left-hand menu, under "Settings," click on "Shared
access policies."
2. Create or Access an Existing Shared Access Policy: By default, there
will be a "RootManageSharedAccessKey" policy, which has full access
permissions. While this policy can be used to obtain the connection
string, it is recommended to create a custom policy with specific
permissions for your application.
3. Create a Custom Shared Access Policy (Optional): If you want to
create a custom policy, click on "Add" to create a new shared access
policy. Give it a name and specify the required permissions (for example,
Send, Listen, Manage) based on your application's needs.
4. Get the Connection String: After creating or selecting the desired
shared access policy, click on the policy name to view its details. The
connection string will be available in the "Primary Connection String"
field. This connection string contains the necessary information for your
application to authenticate and connect to the Azure Service Bus
namespace, as you can see in the image below:
It is time for us to create the Subscription; proceed with the following steps:
1. Inside the Service Bus namespace, navigate to the left-hand menu, under
"Entities," click on "Topics."
2. Select the Topic: Click on the Topic for which you want to create the
Subscription.
3. Create a New Subscription: Inside the selected Topic, navigate to the
left-hand menu, under "Entities," click on "Subscriptions."
4. Create a new Subscription: Click on the "+ Subscription" button to
create a new Subscription.
5. Configure the Subscription:
Name: Enter a unique name for the Subscription. The name must be
unique within the selected Topic.
Filtering (Optional): Optionally, you can set up filters for the
Subscription to receive only specific messages based on message
properties.
6. Access control (Optional): Configure access control to set permissions
for different users or applications accessing the Subscription.
7. Create the Subscription: Click on the "Create" button to create the
Subscription. The Subscription will be provisioned within the selected
Azure Service Bus Topic, as we can see in the following image:
Consuming messages
In this process, receiver components retrieve messages from the Queue:
// See https://aka.ms/new-console-template for more information
using Azure.Messaging.ServiceBus;

string connectionString = "Endpoint=sb://thiagosample.servicebus.windows.net/;Sha
string queueName = "samplequeue";

await using ServiceBusClient client = new ServiceBusClient(connectionString);

// create the options to use for configuring the processor
var options = new ServiceBusProcessorOptions
{
    // By default or when AutoCompleteMessages is set to true, the processor will complete the message after executing the message handler
    // Set AutoCompleteMessages to false to settle messages (https://docs.microsoft.com/en-us/azure/service-bus-messaging/message-transfers-locks-settlement#peeklock) on your own.
    // In both cases, if the message handler throws an exception without settling the message, the processor will abandon the message.
    AutoCompleteMessages = false,

    // I can also allow for multi-threading
    MaxConcurrentCalls = 2
};

// create a processor that we can use to process the messages
await using ServiceBusProcessor processor = client.CreateProcessor(queueName, options);

// configure the message and error handler to use
processor.ProcessMessageAsync += MessageHandler;
processor.ProcessErrorAsync += ErrorHandler;

async Task MessageHandler(ProcessMessageEventArgs args)
{
    string body = args.Message.Body.ToString();
    Console.WriteLine(body);

    // we can evaluate application logic and use that to determine how to settle the message.
    await args.CompleteMessageAsync(args.Message);
}

Task ErrorHandler(ProcessErrorEventArgs args)
{
    // the error source tells me at what point in the processing an error occurred
    Console.WriteLine(args.ErrorSource);
    // the fully qualified namespace is available
    Console.WriteLine(args.FullyQualifiedNamespace);
    // as well as the entity path
    Console.WriteLine(args.EntityPath);
    Console.WriteLine(args.Exception.ToString());
    return Task.CompletedTask;
}

// start processing
await processor.StartProcessingAsync();

// since the processing happens in the background, we add a Console.ReadKey to allow the processing to continue until a key is pressed.
Console.ReadKey();
And in the image below we can see the output from our program:
Message processor
The concept of a message processor in Azure Service Bus revolves around
building robust and reliable mechanisms to handle incoming messages from
Queues or subscriptions. The Message processor acts as a fundamental
component responsible for receiving messages, processing them, and
executing the required application logic.
// See https://aka.ms/new-console-template for more information
using Azure.Messaging.ServiceBus;

string connectionString = "Endpoint=sb://thiagosample.servicebus.windows.net/;Sha
string queueName = "samplequeue";

await using ServiceBusClient client = new ServiceBusClient(connectionString);

// create the options to use for configuring the processor
var options = new ServiceBusProcessorOptions
{
    // By default or when AutoCompleteMessages is set to true, the processor will complete the message after executing the message handler
    // Set AutoCompleteMessages to false to settle messages (https://docs.microsoft.com/en-us/azure/service-bus-messaging/message-transfers-locks-settlement#peeklock) on your own.
    // In both cases, if the message handler throws an exception without settling the message, the processor will abandon the message.
    AutoCompleteMessages = false,

    // I can also allow for multi-threading
    MaxConcurrentCalls = 2
};

// create a processor that we can use to process the messages
await using ServiceBusProcessor processor = client.CreateProcessor(queueName, options);

// configure the message and error handler to use
processor.ProcessMessageAsync += MessageHandler;
processor.ProcessErrorAsync += ErrorHandler;

async Task MessageHandler(ProcessMessageEventArgs args)
{
    string body = args.Message.Body.ToString();
    Console.WriteLine(body);

    // we can evaluate application logic and use that to determine how to settle the message.
    await args.CompleteMessageAsync(args.Message);
}

Task ErrorHandler(ProcessErrorEventArgs args)
{
    // the error source tells me at what point in the processing an error occurred
    Console.WriteLine(args.ErrorSource);
    // the fully qualified namespace is available
    Console.WriteLine(args.FullyQualifiedNamespace);
    // as well as the entity path
    Console.WriteLine(args.EntityPath);
    Console.WriteLine(args.Exception.ToString());
    return Task.CompletedTask;
}

// start processing
await processor.StartProcessingAsync();

// since the processing happens in the background, we add a Console.ReadKey to allow the processing to continue until a key is pressed.
Console.ReadKey();
After the execution of our program, we can see our messages being processed
as the following image shows:
Figure 6.11: Console Application output from Message Processor
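The case study also lists consuming message batches, which is not shown as a separate listing in this section. A hedged sketch of a batch consumer, reusing the same queue and a placeholder connection string, could look like this:

using Azure.Messaging.ServiceBus;

string connectionString = "<service-bus-connection-string>";
string queueName = "samplequeue";

await using ServiceBusClient client = new ServiceBusClient(connectionString);
ServiceBusReceiver receiver = client.CreateReceiver(queueName);

// Receive up to 10 messages, waiting at most 5 seconds for the first one to arrive.
IReadOnlyList<ServiceBusReceivedMessage> batch =
    await receiver.ReceiveMessagesAsync(maxMessages: 10, maxWaitTime: TimeSpan.FromSeconds(5));

foreach (ServiceBusReceivedMessage message in batch)
{
    Console.WriteLine(message.Body.ToString());
    await receiver.CompleteMessageAsync(message);
}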
Consuming sessions
Consuming sessions in Azure Service Bus focuses on handling related
messages that are grouped together into logical units known as message
sessions. Each session contains a sequence of messages with the same
session ID, allowing them to be processed in a coordinated and ordered
manner.
To consume a session, we need the following code, which is responsible for connecting
to the Service Bus and receiving the messages in the session:
// See https://aka.ms/new-console-template for more information

using Azure.Messaging.ServiceBus;

string connectionString = "Endpoint=sb://thiagosample.servicebus.windows.net/;Sha
string sessionQueueName = "samplesessionqueue";

await using ServiceBusClient client = new ServiceBusClient(connectionString);

// create a receiver specifying a particular session
await using ServiceBusSessionReceiver receiver =
    await client.AcceptSessionAsync(sessionQueueName, "sampleSessionId");

// the received message is a different type as it contains some service set properties
ServiceBusReceivedMessage receivedMessage = await receiver.ReceiveMessageAsync();
Console.WriteLine("Session Id: " + receivedMessage.SessionId);
Console.WriteLine("Body: " + receivedMessage.Body);

// we can also set arbitrary session state using this receiver
// the state is specific to the session, and not any particular message
await receiver.SetSessionStateAsync(new BinaryData("brand new state"));

// complete the message, thereby deleting it from the service
await receiver.CompleteMessageAsync(receivedMessage);
After successful execution of our session receiver, we have the following
output:
Figure 6.12: Console application output from consuming a session
Session processor
The session processor acts as a key component responsible for processing
messages within a session, ensuring sequential and coordinated operations.
The following code is responsible for creating and executing the session processor:
// See https://aka.ms/new-console-template for more information
using Azure.Messaging.ServiceBus;

string connectionString = "Endpoint=sb://thiagosample.servicebus.windows.net/;Sha
string queueName = "samplequeue";
string sessionQueueName = "samplesessionqueue";
string topicName = "sampletopic";
string subscriptionName = "samplesubscription";

// since ServiceBusClient implements IAsyncDisposable we create it with "await using"
await using ServiceBusClient client = new ServiceBusClient(connectionString);

// create the options to use for configuring the processor
var options = new ServiceBusSessionProcessorOptions
{
    // By default after the message handler returns, the processor will complete the message
    // If I want more fine-grained control over settlement, I can set this to false.
    AutoCompleteMessages = false,

    // I can also allow for processing multiple sessions
    MaxConcurrentSessions = 5,

    // By default or when AutoCompleteMessages is set to true, the processor will complete the message after executing the message handler
    // Set AutoCompleteMessages to false to settle messages (https://docs.microsoft.com/en-us/azure/service-bus-messaging/message-transfers-locks-settlement#peeklock) on your own.
    // In both cases, if the message handler throws an exception without settling the message, the processor will abandon the message.
    MaxConcurrentCallsPerSession = 2,

    // Processing can be optionally limited to a subset of session Ids.
    SessionIds = { "sampleSessionId", "sampleSessionId2" },
};

// create a session processor that we can use to process the messages
await using ServiceBusSessionProcessor processor =
    client.CreateSessionProcessor(sessionQueueName, options);

// configure the message and error handler to use
processor.ProcessMessageAsync += MessageHandler;
processor.ProcessErrorAsync += ErrorHandler;

async Task MessageHandler(ProcessSessionMessageEventArgs args)
{
    var body = args.Message.Body.ToString();

    Console.WriteLine("Session Id: " + args.Message.SessionId);
    Console.WriteLine("Body: " + body);

    // we can evaluate application logic and use that to determine how to settle the message.
    await args.CompleteMessageAsync(args.Message);

    // we can also set arbitrary session state using this receiver
    // the state is specific to the session, and not any particular message
    await args.SetSessionStateAsync(new BinaryData("sample state"));
}

Task ErrorHandler(ProcessErrorEventArgs args)
{
    // the error source tells me at what point in the processing an error occurred
    Console.WriteLine(args.ErrorSource);
    // the fully qualified namespace is available
    Console.WriteLine(args.FullyQualifiedNamespace);
    // as well as the entity path
    Console.WriteLine(args.EntityPath);
    Console.WriteLine(args.Exception.ToString());
    return Task.CompletedTask;
}

// start processing
await processor.StartProcessingAsync();

// since the processing happens in the background, we add a Console.ReadKey to allow the processing to continue until a key is pressed.
Console.ReadKey();
After successfully executing our session processor, we have the following output:
Figure 6.13: Console application output from session processor
Figure 6.14: Console application output from consuming Topic and Subscription
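The Topic and Subscription consumer whose output appears in Figure 6.14 is not listed above. A minimal, hedged sketch, reusing the sampletopic and samplesubscription names from the session processor listing and a placeholder connection string, could look like this:

using Azure.Messaging.ServiceBus;

string connectionString = "<service-bus-connection-string>";
string topicName = "sampletopic";
string subscriptionName = "samplesubscription";

await using ServiceBusClient client = new ServiceBusClient(connectionString);

// A processor bound to a Subscription is configured just like a queue processor.
await using ServiceBusProcessor processor =
    client.CreateProcessor(topicName, subscriptionName, new ServiceBusProcessorOptions());

processor.ProcessMessageAsync += async args =>
{
    Console.WriteLine($"Received from {subscriptionName}: {args.Message.Body}");
    await args.CompleteMessageAsync(args.Message);
};

processor.ProcessErrorAsync += args =>
{
    Console.WriteLine(args.Exception.ToString());
    return Task.CompletedTask;
};

await processor.StartProcessingAsync();
Console.ReadKey();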
In the end, this is our project solution. We have one console application per
consumer, plus one console application for the publisher as we can see in the
following image:
Figure 6.15: Project solution with console applications
Conclusion
In this chapter, we have explored the power and potential of Azure Service
Bus for unleashing the capabilities of asynchronous operations. We began by
understanding the core concepts of Queues, Topics, and Subscriptions, which
form the foundation of the Azure Service Bus messaging model. We learned
how to create Queues, send and receive messages, and ensure reliable
message delivery and processing.
Moving forward, we delved into Topics and Subscriptions, enabling us to
implement a publish/subscribe pattern. We discovered how to create Topics,
define Subscriptions, and leverage message filtering and routing techniques
to efficiently distribute messages to interested parties. This flexibility and
scalability are essential for building responsive and decoupled systems.
The case study provided valuable hands-on experience, guiding us through
the step-by-step implementation of Azure Service Bus. We created a new
Azure Service Bus instance, defined Topics, Subscriptions, and notifications,
and gained practical insights into real-world scenarios. This case study
equipped us with the knowledge and skills required to apply Azure Service
Bus effectively in our own projects.
We also explored the use of message sessions, which allowed us to handle
interactions with different clients and maintain message order within each
session. Message sessions proved to be a valuable tool for managing complex
workflows and optimizing performance in distributed systems.
Finally, we discussed exception management and error handling strategies,
acknowledging the importance of being prepared for unexpected scenarios
and failures. We learned best practices for handling exceptions, retrying
message processing, and implementing robust error handling mechanisms to
ensure the resilience and stability of our applications.
By mastering Azure Service Bus and its features, you are now equipped to
design and build efficient and scalable systems that leverage asynchronous
operations. Whether it is handling large workloads, decoupling components,
or ensuring reliable message delivery, Azure Service Bus offers the tools and
capabilities you need.
As you continue your journey, remember to keep exploring the vast array of
features and optimizations available within Azure Service Bus. Stay up-to-
date with the latest updates and best practices to maximize the potential of
this powerful messaging service.
Embrace the power of Azure Service Bus and unlock the full potential of
asynchronous operations in your applications. With the knowledge gained in
this chapter, you are well-prepared to design, implement, and optimize
systems that can handle complex workflows, distribute tasks seamlessly, and
facilitate efficient communication.
In the upcoming chapter, we delve into the critical domain of "Azure Key
Vault." This topic unfolds as a cornerstone in Azure's security and identity
management framework. We explore the pivotal role of Azure Key Vault in
safeguarding sensitive information such as cryptographic keys, secrets, and
certificates. Readers will gain insights into the principles of secure key
management, encryption, and the seamless integration of Azure Key Vault
into cloud applications. Through practical examples and use cases, this
chapter aims to equip readers with the knowledge to enhance the security
posture of their Azure-based solutions by leveraging the robust capabilities of
Azure Key Vault.
Introduction
In today's interconnected world, the security of our applications and sensitive
data is of utmost importance. With the increasing number of cyber threats and
data breaches, it has become essential to adopt robust security measures to
protect our valuable assets. Azure Key Vault is a powerful cloud-based
service provided by Microsoft Azure that allows you to safeguard
cryptographic keys, secrets, and certificates used by your applications.
In this chapter, we will discuss securing your applications with Azure Key
Vault. We will begin with an overview of Azure Key Vault, exploring its
capabilities and benefits. You will gain a comprehensive understanding of
why Azure Key Vault is a crucial component in your application security
strategy.
Authentication is a vital aspect of any secure system, and Azure Key Vault
provides various authentication mechanisms to ensure only authorized access.
We will explore these authentication options in detail, discussing how to
configure and manage them effectively.
Access policies play a significant role in controlling who can perform what
actions on the resources within Azure Key Vault. We will delve into the
intricacies of access policies, learning how to define granular permissions and
manage them to maintain a strong security posture.
To provide a practical perspective, we will present a case study that
demonstrates the step-by-step implementation of Azure Key Vault in a real-
world scenario. You will witness how to create an Azure Key Vault, define
and manage access policies, and leverage its features to secure your
applications effectively.
Finally, we will showcase an example of accessing a key stored in Azure Key
Vault through a .NET console application. You will learn how to integrate
your applications with Azure Key Vault, ensuring secure retrieval and
utilization of keys.
By the end of this chapter, you will have gained the knowledge and skills
necessary to secure your applications using Azure Key Vault. You will be
equipped with the tools to protect your cryptographic keys, secrets, and
certificates, enabling you to build robust and secure applications that
safeguard your sensitive data. So, let us embark on this journey to fortify
your applications and defend against potential security threats with Azure
Key Vault.
Structure
This chapter covers the following topics:
Azure Key Vault Overview
Azure Key Vault Authentication
Azure Key Vault access policies
Case study
Creating Azure Key Vault
Managing Key Vault Access policies
Accessing a key through a .NET Web Application
Objectives
This chapter equips you with a solid understanding of Azure Key Vault's core
concepts, highlighting its pivotal role in securing applications. Navigate
diverse authentication mechanisms within Azure Key Vault, optimizing
configurations for efficacy. Master access policies to define granular
permissions, and follow a case study for practical implementation in real-
world scenarios. Seamlessly integrate a .NET console application with Azure
Key Vault to securely access keys. This chapter empowers you with practical
skills and best practices to fortify your applications, enhancing confidence in
crafting resilient security strategies. Apply these insights to bolster the
security of your applications with Azure Key Vault.
Case study
In this case study, we will walk through the process of creating a Web
Application and securely retrieving a secret from Azure Key Vault. Azure
Key Vault plays a pivotal role in ensuring the confidentiality of sensitive
information, such as passwords, API keys, and connection strings, used by
our application.
By leveraging Azure Key Vault's secure storage and fine-grained access
control, we will demonstrate how to protect these critical secrets from
exposure within our code or configuration files. Instead, our Web Application
will authenticate with Azure Key Vault and request the required secret when
needed, following the principle of least privilege.
Throughout the case study, we will showcase the step-by-step
implementation of this approach, emphasizing the best practices for securing
secrets within Azure Key Vault. By the end of this study, you will gain the
knowledge and practical skills to safeguard your web applications' sensitive
information using Azure Key Vault, enhancing the overall security of your
application infrastructure.
Accessing a key
To kickstart our project, we will create a web application and then proceed to
install the required NuGet packages listed below:
Azure.Security.KeyVault.Secrets
Azure.Identity
We can see the solution of our recently created project with the NuGet
packages in the image below:
Figure 7.3: Web application project solution.
To ensure seamless integration and access to Azure Key Vault from Visual
Studio, it is essential to configure Azure Service Authentication with the
same Azure account used to create the Azure Key Vault. Follow these steps
to verify and set up the configuration:
1. Verify Azure Account: Confirm that you are signed in to Visual Studio
with the correct Azure account that was used to create the Azure Key
Vault.
2. Open Visual Studio: Launch Visual Studio and go to the "Tools" menu.
3. Options: From the "Tools" menu, select "Options."
4. Azure Service Authentication: In the "Options" window, expand
"Azure Service Authentication" under the "Azure" category.
5. Choose account: Ensure that the dropdown menu for "Account
Selection" displays the same Azure account associated with the Azure
Key Vault.
6. Verify credentials: If needed, click "Manage Account" to verify the
account's credentials or add the correct account if it's not listed.
7. Authenticate: If prompted, authenticate with the Azure account to
ensure the credentials are up-to-date and valid.
By confirming that your Visual Studio's Azure Service Authentication is
correctly configured with the same Azure account as the one used to create
the Azure Key Vault, you will enable smooth communication between your
application and the Key Vault. This alignment ensures that your Web
Application can securely retrieve and manage secrets from the Azure Key
Vault without any authentication issues, as we can see in the picture below,
where the account is authenticated:
Now, we have to create a secret. To create a secret in your Azure Key
Vault securely, follow these steps:
1. Navigate to Key Vault: Sign in to the Azure portal
(https://portal.azure.com) and navigate to your Azure Key Vault by
searching for its name or finding it in the "All resources" list.
2. Access Secrets: Inside the Key Vault, select the "Secrets" option from
the left-hand navigation menu.
3. Add a New Secret: Click on the "+ Generate/Import" button to create a
new secret.
4. Define Secret Details: In the "Create a secret" panel, provide the
necessary information:
Name: Enter a unique name for the secret.
Value: Enter the actual secret value you want to store securely.
Content type (optional): You can specify the content type if needed,
for example, "text/plain" or "application/json".
5. Set Activation Date and Expiration (optional): If required, you can set
the activation date and expiration time for the secret. This is useful for
managing the secret's validity period.
6. Click "Create": After filling in the details, click the "Create" button to
add the secret to your Key Vault.
The secret is now securely stored in your Azure Key Vault, protected by
Azure's robust security measures. By utilizing the Key Vault for secret
management, you ensure sensitive information, such as connection strings,
API keys, and passwords, remains safe and inaccessible to unauthorized
users. Furthermore, your Web Application can now retrieve this secret
securely using Azure Key Vault APIs, ensuring the confidentiality and
integrity of your sensitive data. We can see our recently created secret from
the picture below:
Figure 7.5: A new secret in Azure Key Vault.
To retrieve our secret from the Web Application, we must change the
Index.cshtml.cs as follows:
using Azure.Core;
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;
using Microsoft.AspNetCore.Mvc.RazorPages;

public class IndexModel : PageModel
{
    private readonly ILogger<IndexModel> _logger;
    public string Message { get; set; }

    public IndexModel(ILogger<IndexModel> logger)
    {
        _logger = logger;
    }

    public void OnGet()
    {
        try
        {
            SecretClientOptions options = new SecretClientOptions()
            {
                Retry =
                {
                    Delay = TimeSpan.FromSeconds(2),
                    MaxDelay = TimeSpan.FromSeconds(16),
                    MaxRetries = 5,
                    Mode = RetryMode.Exponential
                }
            };
            var client = new SecretClient(
                new Uri("https://keyvaultthiago.vault.azure.net/"),
                new DefaultAzureCredential(), options);

            KeyVaultSecret secret = client.GetSecret("thiagosecret");

            Message = secret.Value;
        }
        catch (Exception ex)
        {
            Message = "An error occurred while trying to retrieve the secret";
        }
    }
}
To display our secret message, we must update the Index.cshtml as follows:
@page
@model IndexModel
@{
    ViewData["Title"] = "Home page";
}

<div class="text-center">
    <h1 class="display-4">Welcome</h1>
    <p>This is the secret message: @Model.Message</p>
</div>
The following will be the successful output:
Figure 7.6: Web application displaying the secret retrieved from Azure Key Vault.
Conclusion
In this chapter, we explored the importance of securing your applications
with Azure Key Vault and discussed various aspects of its implementation.
We began with an overview of Azure Key Vault, understanding its
capabilities and benefits in ensuring the security of cryptographic keys,
secrets, and certificates.
Authentication emerged as a crucial factor in securing Azure Key Vault, and
we delved into different authentication mechanisms available, learning how
to configure and manage them effectively. By implementing strong
authentication measures, you can ensure that only authorized entities can
access your sensitive data.
Access policies played a significant role in controlling access to Azure Key
Vault resources, and we examined how to define and manage granular
permissions through access policies. By carefully managing these policies,
you can maintain a strong security posture and minimize the risk of
unauthorized access.
To provide a practical perspective, we explored a case study that
demonstrated the step-by-step implementation of Azure Key Vault in a real-
world scenario. From creating and configuring the key vault to defining and
managing access policies, the case study exemplified the practical application
of Azure Key Vault for securing your applications.
Additionally, we showcased an example of accessing a key stored in Azure
Key Vault through a .NET console application. This demonstrated the
integration of your applications with Azure Key Vault, enabling secure
retrieval and utilization of keys.
By following the knowledge and skills presented in this chapter, you now
possess the tools and understanding to secure your applications effectively
using Azure Key Vault. You have learned the importance of robust
authentication, the significance of access policies, and how to leverage Azure
Key Vault's features for enhanced application security.
Remember to adhere to best practices for securing your applications with
Azure Key Vault, and regularly review and update your security measures to
stay ahead of potential threats. By adopting Azure Key Vault as an integral
part of your application security strategy, you can protect your valuable
assets, ensure data confidentiality, and build trust among your users.
Congratulations on completing this chapter! You are now equipped with the
knowledge and skills to fortify your applications and defend against potential
security threats with Azure Key Vault. Embrace this newfound understanding
and continue to prioritize the security of your applications in today's ever-
evolving digital landscape.
The next chapter, Building Dynamic Web Apps with Blazor and ASP.NET,
explores essential topics in the development of dynamic web applications.
The overview sets the stage for discussions on key features like Hot Reload,
security considerations, and the advantages of strongly-typed databinding.
The chapter also compares Blazor and Razor, providing insights into their
roles and use cases. A case study with a step-by-step implementation offers
practical learning, while guidance on creating a Blazor project and testing the
Hot Reload feature enhances developers' understanding. The chapter
concludes with discussions on implementing authorization/authentication and
leveraging strongly-typed databinding effectively.
CHAPTER 8
Building Dynamic Web Apps with
Blazor and ASP.NET
Introduction
Welcome to the chapter on building dynamic web applications with Blazor
and ASP.NET. In this chapter, we will explore the powerful features and
techniques that enable the development of interactive and responsive web
apps using these technologies.
Blazor is a modern web framework that brings the power of .NET to the
client-side. With Blazor, developers can use C# and Razor syntax to build
rich web applications, seamlessly integrating them with server-side logic.
Complementing Blazor, ASP.NET is a mature and robust web development
framework that provides a solid foundation for building scalable and high-
performance web applications. It offers a wide range of features and tools
that simplify the development process and enhance the overall user
experience.
Throughout this chapter, we will embark on a journey to explore the core
concepts and techniques required to build dynamic web apps with Blazor and
ASP.NET. We will cover several key topics that are crucial for building
dynamic web apps with Blazor and ASP.NET.
Structure
This chapter covers the following topics:
Web Apps with Blazor and .NET
Hot reload
Security
Data binding
Blazor vs Razor
Practical case study
Objectives
As the chapter concludes, you will grasp key aspects. Explore Hot Reload, a
feature enabling real-time code changes for immediate results, enhancing
development efficiency. Navigate security essentials for Blazor and
ASP.NET web apps, focusing on authentication, data protection, and best
practices. Dive into Data Binding, ensuring type safety and code
maintainability. Differentiate Blazor and Razor, understanding their strengths
for informed web app development decisions. The chapter closes with a
practical case study, reinforcing concepts. This comprehensive understanding
equips you to construct dynamic web apps with Blazor and ASP.NET,
empowering the development of interactive, feature-rich applications.
Hot reload
With .NET hot reload, you can apply code changes, including modifications
to stylesheets, to a running app without the need to restart it or lose the app's
current state. This feature is available for all ASP.NET Core 6.0 and later
projects.
When making code updates, the changes take effect in the running app under
certain conditions. For instance, startup logic that runs only once, such as
middleware (except for inline middleware delegates), configured services,
and route creation and configuration, will be rerun to incorporate the changes.
Inline middleware delegates are delegates used within the context of middleware
functions in frameworks such as ASP.NET Core that are implemented entirely inline
within the middleware pipeline configuration. This means that developers can define
the logic the middleware will execute from within the middleware configuration code,
without creating separate methods or classes for defining named middleware.
In Blazor apps, the Razor component render is automatically triggered by the
framework. On the other hand, in MVC and Razor Pages apps, hot reload
will automatically refresh the browser. It's important to note that removing a
Razor component parameter attribute does not cause the component to re-
render. To reflect such changes, restarting the app is necessary.
The introduction of .NET hot reload significantly enhances the development
experience by providing seamless and immediate code updates during the
app's runtime. It empowers developers to iterate quickly and efficiently,
eliminating the need for frequent restarts and allowing them to focus on
building and refining their applications.
Hot reload support depends on the application type and tooling version, for example:
ASP.NET Razor (Blazor Server and ASP.NET Core): Yes (17.0 / 17.0)
WinUI 3: No (16.11 / --)
Unsupported Scenarios
While hot reload provides a convenient and efficient method for updating
code on-the-fly during development, it is important to note that there are
certain scenarios where hot reload may not be supported. These unsupported
scenarios can arise due to various factors, such as the nature of code changes,
specific runtime conditions, or limitations in the development environment. It
is crucial for developers to be aware of these scenarios to ensure a smooth
and effective coding experience. By understanding the limitations of hot
reload, developers can make informed decisions and utilize alternative
approaches when necessary. The most common scenarios where hot reload is not
supported are as follows:
Xamarin.Forms in iOS and Android scenarios, with partial support for
a UWP app.
If the Edit and Continue settings are disabled in your Visual Studio.
If PublishTrimmed is set to True in your debug profile.
If PublishReadyToRun is set to True in your debug profile.
WinUI 3 apps with the property nativeDebugging not set, or set to true
in your LaunchSettings.json file.
Security
Security considerations vary between Blazor Server and Blazor
WebAssembly apps. In Blazor Server apps, running on the server-side,
authorization checks have the capability to determine the UI options available
to a user, such as menu entries, as well as access rules for different app areas
and components.
On the other hand, Blazor WebAssembly apps run on the client-side, and
authorization is primarily used to determine which UI options to display.
Since client-side checks can be modified or bypassed by users, Blazor
WebAssembly apps are unable to enforce authorization access rules.
When it comes to authorization conventions in Razor Pages, they do not
directly apply to routable Razor components. However, if a non-routable
Razor component is embedded within a page of a Razor Pages app, the page's
authorization conventions indirectly impact the Razor component along with
the rest of the page's content.
ASP.NET Core Identity, designed for HTTP request and response
communication, does not align perfectly with the Blazor app client-server
communication model. Therefore, it is recommended that ASP.NET Core
apps utilizing ASP.NET Core Identity for user management opt for Razor
Pages instead of Razor components for Identity-related UI tasks such as user
registration, login, logout, and user management. While it is possible to build
Razor components that handle Identity tasks directly in certain scenarios,
Microsoft does not endorse or provide support for this approach.
It's important to note that ASP.NET Core abstractions like
SignInManager<TUser> and UserManager<TUser> are not supported within
Razor components. For detailed guidance on utilizing ASP.NET Core
Identity with Blazor, you can refer to the "Scaffold ASP.NET Core Identity
into a Blazor Server app" resource through the following link
https://learn.microsoft.com/en-
us/aspnet/core/security/authentication/scaffold-identity.
Considering these security aspects and following recommended practices will
help ensure the effective implementation of authorization and authentication
mechanisms within your Blazor applications.
Blazor leverages the authentication mechanisms already present in ASP.NET
Core to authenticate the user's identity. The specific mechanism employed
depends on the hosting model of the Blazor application, whether it is Blazor
Server or Blazor WebAssembly.
Authorization
Authorization verification starts only after a user has been successfully
authenticated; if a user cannot be authenticated, no authorization check is needed.
Authorization is the process of controlling access to specific
components, pages, or actions within a Blazor application based on the
authenticated user's identity and assigned permissions. It involves
implementing security measures, such as authentication, to verify the user's
identity, and authorization, which determines whether the user possesses the
necessary rights to perform certain operations or view specific content.
Blazor provides built-in authorization features, including the AuthorizeView
component and the Authorize attribute, which allow developers to easily
apply authorization rules and restrict access based on roles or policies. By
leveraging these features, developers can ensure that only authorized users
can interact with sensitive features or view confidential information within a
Blazor application, ensuring that only authenticated users with the
appropriate role, claim, or policy fulfillment can access restricted
functionality and information.
AuthorizeView component
The AuthorizeView component in Blazor offers the ability to selectively show
or hide UI content based on the user's authorization status. This feature
proves beneficial when there is a need to display user-specific data without
utilizing the user's identity in procedural logic.
By utilizing the AuthorizeView component in a Razor page, developers can
access the context variable of type AuthenticationState (@context in Razor
syntax), which provides access to essential information about the currently
signed-in user.
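For example, a minimal sketch of the component in a Razor file (assuming authentication is already configured for the app) could look like this:

<AuthorizeView>
    <Authorized>
        <p>Hello, @context.User.Identity?.Name!</p>
    </Authorized>
    <NotAuthorized>
        <p>Please sign in to see your data.</p>
    </NotAuthorized>
</AuthorizeView>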
Authorize attribute
In Razor components, you can utilize the [Authorize] attribute for
authorization purposes. This attribute supports both role-based and policy-
based authorization. To implement role-based authorization, you can specify
the Roles parameter, whereas the Policy parameter is used for policy-based
authorization.
If neither roles nor policy is specified, the [Authorize] attribute applies the
default policy, where authenticated users are authorized and unauthenticated
users are unauthorized. In cases where the user is not authorized and the app
does not customize unauthorized content using the Router component, the
framework automatically displays the fallback message "Not Authorized."
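As a short illustration, the following sketch restricts a routable component to a role; the /admin route and the Admin role are assumptions for the example only:

@page "/admin"
@* The Authorize attribute requires @using Microsoft.AspNetCore.Authorization, typically placed in _Imports.razor. *@
@attribute [Authorize(Roles = "Admin")]

<h1>Administration</h1>
<p>Only users in the Admin role can see this page.</p>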
Data binding
Razor components offer convenient data binding capabilities through the use
of the @bind Razor directive attribute, which can be applied to fields,
properties, or Razor expressions. It's important to note that the UI is updated
when the component is rendered, rather than immediately upon modifying the
field or property value. Typically, field and property updates are reflected in
the UI after the execution of event handler code, as components render
themselves following event triggers.
To bind a property or field to other Document Object Model (DOM) events,
you can include an @bind:event="{EVENT}" attribute, replacing {EVENT} with
the desired DOM event. Additionally, you can use the @bind:after="
{EVENT}" attribute with a DOM event placeholder to execute asynchronous
logic after the binding process. It's worth mentioning that assigned C#
methods are only executed when the bound value is synchronously assigned.
For two-way data binding, components support the definition of a pair of
parameters: @bind:get and @bind:set. The @bind:get specifies the value to
bind, while the @bind:set defines a callback for handling value changes. It's
important to note that the @bind:get and @bind:set modifiers should always
be used together.
Please note that using an event callback parameter with @bind:set (e.g.,
[Parameter] public EventCallback<string> ValueChanged { get; set; })
is not supported. Instead, it is recommended to pass a method that returns an
Action or Task to the @bind:set.
Lastly, it is crucial to remember that Razor attribute binding is case-sensitive.
Valid attributes include @bind, @bind:event, and @bind:after. Any variations
with capital letters, such as @Bind/@bind:Event/@bind:aftEr or
@BIND/@BIND:EVENT/@BIND:AFTER, are considered invalid.
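To make these directives concrete, here is a hedged sketch of a small component combining @bind with @bind:event, and the @bind:get/@bind:set pair; the field and method names are illustrative only:

@* Updates the bound field on every keystroke instead of on the default change event. *@
<input @bind="name" @bind:event="oninput" />
<p>Hello, @name!</p>

@* Two-way binding with an explicit get/set pair; the setter runs whenever the value changes. *@
<input @bind:get="city" @bind:set="SetCityAsync" />
<p>City: @city</p>

@code {
    private string? name;
    private string? city;

    private Task SetCityAsync(string? value)
    {
        city = value;
        // Additional logic can run here after the value has changed.
        return Task.CompletedTask;
    }
}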
Blazor vs Razor
Comparing Blazor to Razor is similar to comparing Java to JavaScript: despite the
similarity in their names, they are quite different.
Blazor is a powerful framework that combines the flexibility of Razor
components with the capability to run C# code and leverage the Razor view
engine directly in the browser. Unlike Razor, which primarily focuses on
server-based architecture and server-side templating, Blazor extends the
functionality by enabling client-side logic using C# instead of JavaScript.
Razor components serve as the fundamental building blocks of any Blazor
application, combining markup and code into cohesive units. These
components are implemented with a .razor extension and allow developers
to create dynamic user interfaces using the Razor syntax. With Blazor as the
client-side hosting model, you can leverage the full potential of Razor
components on the client-side.
Best practices
Blazor development is best approached by following certain recommended
practices to ensure efficient, maintainable, and high-quality applications.
These best practices encompass various aspects of Blazor development,
including code organization, performance optimization, security, and user
experience. By adhering to these best practices, developers can enhance the
reliability, scalability, and overall success of their Blazor applications. The
best practices are listed below, grouped by categories:
Blazor Server
Avoid using IHttpContextAccessor or HttpContext.
The presence of HttpContext cannot be guaranteed within the
IHttpContextAccessor, and even if HttpContext is accessible, it
does not necessarily hold the context that initiated the Blazor
application.
To pass the request state to the Blazor application, the
recommended approach is to utilize root component parameters
during the initial rendering of the app. This allows for seamless
integration and easy access to the required data. Alternatively,
the application can also copy the relevant data into a scoped
service during the initialization lifecycle event of the root
component. This ensures the availability of the data throughout
the application for efficient and consistent usage.
Avoid singleton services to share state
It is important to exercise caution when passing request state in
Blazor applications, as it can potentially introduce security
vulnerabilities. For instance, in Blazor Server apps, where
multiple app sessions coexist within the same server process,
there is a risk of leaking user state across circuits. This is due to
the fact that Blazor Server apps reside in the server’s memory.
If designed appropriately, stateful singleton services can be
utilized in Blazor apps. By following specific design
considerations, these services can effectively maintain their state
across multiple requests and provide consistent functionality
throughout the Blazor application.
The Blazor Server project template includes the following main files:
WeatherForecastService.cs
_Host.cshtml: This page is rendered every time a user requests any page of the app. It defines the location where the root App component (App.razor) is rendered, and it is mapped in Program.cs, in the MapFallbackToPage method.
Counter.razor
FetchData.razor
Index.razor
MainLayout.razor: The Razor MainLayout component and the default layout for the app. It is set as the default layout in App.razor.
NavMenu.razor
SurveyPrompt.razor
_Imports.razor
The file was saved, and we now have the page, as shown in Figure 8.7. The
page is automatically updated to reflect the code changes without having to
rebuild or redeploy it.
Figure 8.12: The Fetch Data page after using the Authorize attribute.
To register a new account, go to the /Identity/Account/Register URL:
Data binding
In the first data binding example, we apply two-way data binding with the
@bind:get/@bind:set modifiers. For this, we update the Counter component.
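The listing below is a sketch of how the Counter component could look with these modifiers (the SetCount method name is an assumption): @bind:get supplies the value to display, and @bind:set receives the new value whenever the input changes:

@page "/counter"

<PageTitle>Counter</PageTitle>

<h1>Counter</h1>

<p>
    <input @bind:get="currentCount" @bind:set="SetCount" />
</p>

<p>
    <code>currentCount</code>: @currentCount
</p>

@code {
    private int currentCount = 0;

    private void SetCount(int value)
    {
        // Callback invoked with the new value whenever the bound input changes.
        currentCount = value;
    }
}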
Now we are data binding using the C# get and set accessors, again using the
Counter component for this example.
This is the Counter component after applying the C# get and set accessors.
Note that we are not using the number typed into the input; we always
increment the previous number, starting from 0. You can see the whole code
block below:
@page "/counter"

<PageTitle>Counter</PageTitle>

<h1>Counter</h1>

<p>
    <input @bind="IncrementCount" />
</p>

<p>
    <code>inputValue</code>: @currentCount
</p>

@code {
    private int currentCount = 0;

    private int IncrementCount
    {
        get => currentCount;
        set => currentCount = currentCount + 1;
    }
}
After running the code, we have this:
Figure 8.20: The counter page before the event occurs. The values come from the C# objects.
Conclusion
In this chapter, we covered building dynamic web applications with Blazor
and ASP.NET. We covered a range of important topics that are fundamental
to creating interactive and responsive web apps.
We explored the convenience and productivity of hot reload, allowing
developers to make real-time code changes and instantly see the impact. This
feature accelerates development cycles and reduces downtime, resulting in a
more efficient and enjoyable development experience.
We emphasized the significance of implementing robust security measures in
web applications. From authentication and authorization to safeguarding
sensitive data, we highlighted the importance of adopting best practices to
ensure our applications remain secure and protected against potential threats.
We discovered the benefits of data binding, which promotes type safety and
enhances code maintainability. By leveraging this feature, we can bind data
between components with confidence, benefiting from compile-time
validation and IntelliSense support.
We explored the differences between Blazor and Razor, two powerful
technologies for web app development. By understanding their unique
strengths and use cases, we gained insights into choosing the most suitable
approach for our specific project requirements.
By mastering these concepts, we are now equipped with the knowledge and
skills necessary to develop dynamic web applications with Blazor and
ASP.NET. The combination of Blazor's client-side capabilities and
ASP.NET's robust web development framework empowers us to create
highly interactive and feature-rich applications.
As you continue your journey in web app development, it is essential to
explore further and stay updated with the evolving features and
advancements in Blazor and ASP.NET. These technologies offer a wealth of
possibilities for building cutting-edge web applications that provide engaging
user experiences.
We hope this chapter has provided you with valuable insights and practical
knowledge to embark on your own dynamic web app projects. Remember to
leverage the concepts and techniques covered here to create exceptional web
applications that meet the needs of your users.
Thank you for joining us on this exciting exploration of building dynamic
web apps with Blazor and ASP.NET. Keep pushing the boundaries of web
development and continue to innovate as you embark on your next projects.
In the upcoming chapter, we immerse ourselves in the world of Real-Time
Communication with SignalR and ASP.NET. This topic marks a pivotal
exploration of how SignalR, a robust library for real-time web functionality,
seamlessly integrates with ASP.NET to enable dynamic, two-way
communication between clients and servers. As we navigate through the
intricacies of real-time communication, readers will uncover the power of
SignalR in creating interactive and responsive web applications. Through
practical demonstrations and insights, this chapter aims to equip readers with
the knowledge to implement real-time features, transforming the traditional
web experience into a dynamic and engaging platform.
Introduction
In today's fast-paced digital world, users expect applications to deliver real-
time updates and interactions. Whether it is a live chat, collaborative editing,
or dynamic data visualization, the ability to provide real-time communication
is crucial for building engaging and interactive applications.
In this chapter, we will explore SignalR, a powerful framework provided by
ASP.NET, which simplifies the process of adding real-time capabilities to
your applications. With SignalR, you can easily establish bidirectional
communication channels between the server and the client, enabling seamless
and instantaneous data exchange.
We will begin by diving into the configuration aspects of SignalR. You will
learn how to set up and configure SignalR in your ASP.NET application,
ensuring you have a solid foundation to build. We will explore various
options and settings that allow you to customize the behavior of SignalR to
suit your specific needs.
Next, we will delve into the crucial aspects of authentication and
authorization in real-time communication scenarios. You will discover how
to secure your SignalR connections, ensuring only authorized users can
access and interact with your real-time features. We will explore different
authentication mechanisms and explore how to implement them effectively in
your application.
Once we have a solid understanding of the fundamentals, we will explore the
Streaming Hub feature of SignalR. Streaming Hubs allow you to build real-
time streaming scenarios where clients can receive continuous streams of data
from the server. We will examine how to implement streaming hubs, handle
large data sets, and optimize performance for efficient streaming.
Structure
This chapter covers the following topics:
Real-time communication with SignalR and ASP.NET
Configuration
Authentication and authorization
Streaming hub
Case study
Objectives
By the chapter's end, you will grasp crucial aspects of SignalR and ASP.NET
for real-time communication, covering configuration intricacies,
authentication, and authorization essentials. Configuration insights include
SignalR setup, exploring customization options. Authentication and
authorization significance are unraveled, guiding the implementation of
secure SignalR connections and control over real-time feature access.
Streaming Hubs are demystified, covering client-to-server and server-to-
client streaming. In a culmination, a comprehensive case study showcases
real-world implementation, offering a step-by-step guide through a practical
application. Readers will gain a profound understanding of SignalR's
capabilities, from initial setup to secure communication and practical
application integration.
Message transports
SignalR supports multiple transportation methods to establish a real-time
connection between the client and the server. The following transport
methods are supported by SignalR:
WebSockets: It provides a full-duplex communication channel between
the client and the server, allowing for efficient real-time data transfer
with low latency.
Server-Sent Events (SSE): It is a unidirectional communication
method where the server sends continuous updates to the client over a
single HTTP connection.
Long polling: Long polling is a technique where the client sends a
request to the server and keeps the connection open until the server has
new data to send. Once the server responds with the data or a timeout
occurs, the client immediately sends a new request.
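The transports enabled for a hub endpoint can be restricted when the hub is mapped. The sketch below assumes a hub named SampleHub mapped at /sampleHubRoutePattern and the minimal hosting model:

using Microsoft.AspNetCore.Http.Connections;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddSignalR();

var app = builder.Build();

// Allow only WebSockets and long polling for this endpoint.
app.MapHub<SampleHub>("/sampleHubRoutePattern", options =>
{
    options.Transports = HttpTransportType.WebSockets |
                         HttpTransportType.LongPolling;
});

app.Run();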
Hubs
Hubs are the core components of SignalR that manage the communication
between the client and the server. When the backend server receives a
message for a hub, it automatically invokes the corresponding client-side
code based on the method name provided in the message.
SignalR supports two built-in Hub Protocols for transporting messages:
Text protocol based on JSON: Messages are serialized in JSON
format before being sent to the clients.
Binary protocol based on MessagePack: Messages are serialized
using the efficient MessagePack format, which reduces the payload
size and improves performance.
SignalR methods
SignalR methods are used to send messages between the server and
connected clients. Here are some commonly used methods:
Clients.All.SendAsync: Invokes the specified method on the hub and
sends the objects to all connected clients.
Clients.Caller.SendAsync: Invokes the specified method on the hub
and sends the objects back to the calling client itself.
Clients.Client({connectionId}).SendAsync: Invokes the specified
method on the hub and sends the objects to the specified client based
on their connection ID.
Groups.AddToGroupAsync: Adds a client to the specified group, allowing
targeted message delivery to clients belonging to that group.
Clients.Group({group}).SendAsync: Invokes the specified method on
the hub and sends the objects to all clients in the specified group.
These methods enable efficient communication and message delivery
between the server and clients in various scenarios.
Overall, SignalR offers flexible message transports, powerful hubs for
managing communication, and convenient methods for sending messages to
specific clients or groups of clients.
Configuration
Configuration plays a vital role in harnessing the full potential of SignalR for
real-time communication. By understanding and utilizing the various
configuration options, developers can fine-tune the behavior of SignalR to
align with their application's specific needs. From establishing connection
settings and managing hub configuration to optimizing performance and
scalability, mastering SignalR's configuration allows for seamless integration
and efficient utilization of real-time capabilities in cloud, web, and desktop
applications. Whether it is adjusting message size limits, enabling transport
protocols, or configuring connection timeouts, a well-configured SignalR
environment ensures reliable and high-performance real-time communication
between server and client, creating engaging and interactive user experiences.
ASP.NET Core SignalR offers support for two protocols when encoding
messages: JSON and MessagePack. Each protocol comes with its own set of
serialization configuration options, providing flexibility in adapting to
specific requirements and optimizing message encoding for efficient
communication.
JSON encoding
Within the SignalR Protocol's JSON Encoding, every message is represented
as a standalone JSON object, serving as the sole content of the underlying
transport message. It is important to note that all property names in the JSON
object are case-sensitive. The responsibility of encoding and decoding the
text lies with the underlying protocol implementation, requiring the JSON
string to be encoded in the format expected by the specific transport used. For
instance, when employing the ASP.NET Sockets transports, UTF-8 encoding
is consistently utilized for text representation.
To configure JSON serialization on the server side in SignalR, you can utilize
the AddJsonProtocol extension method. This method is chained after the
AddSignalR call when registering services, for example in the
ConfigureServices method of the Startup class or in Program.cs when using
the minimal hosting model. When using AddJsonProtocol, you can pass a
delegate that receives an options object, and within that object, you can
access the PayloadSerializerOptions property. This property allows you to
configure the serialization of arguments and return values using a
System.Text.Json JsonSerializerOptions object.
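A minimal sketch of this configuration, assuming the minimal hosting model (the naming-policy choice is only an example):

builder.Services.AddSignalR()
    .AddJsonProtocol(options =>
    {
        // Keep property names exactly as declared in C# instead of camel-casing them.
        options.PayloadSerializerOptions.PropertyNamingPolicy = null;
    });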
MessagePack encoding
MessagePack is a highly efficient and compact binary serialization format,
ideal for scenarios where performance and bandwidth optimization are
crucial. Compared to JSON, MessagePack produces smaller message sizes,
making it beneficial in reducing network traffic. However, due to its binary
nature, the messages appear unreadable when inspecting network traces and
logs unless they are parsed through a MessagePack parser. SignalR
acknowledges the significance of MessagePack and incorporates built-in
support for this format, offering dedicated APIs for both the client and server
to seamlessly utilize MessagePack serialization.
To activate the MessagePack Hub Protocol on the server side, install the
Microsoft.AspNetCore.SignalR.Protocols.MessagePack package within your
application. Then include the AddMessagePackProtocol method alongside the
AddSignalR call when registering SignalR services to enable seamless
MessagePack support on the server.
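A minimal sketch of enabling the protocol, assuming the package is installed and the minimal hosting model is used:

builder.Services.AddSignalR()
    .AddMessagePackProtocol();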
Currently, it is not feasible to configure MessagePack serialization within the
JavaScript client.
WebSockets: Options that apply to the WebSockets transport:
CloseTimeout: When the server closes the connection, there is a specified time interval within which the client should also close it. If the client fails to close within this interval, the connection is terminated by the server. Default value: 5 seconds.
SubProtocolSelector: A delegate that can be utilized to set the Sec-WebSocket-Protocol header to a custom value. This delegate receives the requested values from the client as input and is responsible for returning the desired value for the Sec-WebSocket-Protocol header. By using this delegate, you have the flexibility to customize and control the value of the header based on your specific requirements. Default value: null.
MinimumProtocolVersion: Specifies the minimum version of the negotiate protocol that the server accepts. Default value: 0.
Configure logging
In the .NET Client, logging configuration is achieved through the
ConfigureLogging method. This method allows you to register logging
providers and filters, similar to how they are configured on the server. By
utilizing ConfigureLogging, you can customize the logging behavior of your
client application, enabling comprehensive monitoring, troubleshooting, and
analysis. With the flexibility to choose and configure logging providers and
filters, you can effectively manage and track client-side activities, ensuring
optimal performance and addressing potential issues, as we can see an
example of logging configuration below:
var connection = new HubConnectionBuilder()
    .WithUrl("https://example.com/chathub")
    .ConfigureLogging(logging =>
    {
        logging.SetMinimumLevel(LogLevel.Information);
        logging.AddConsole();
    })
    .Build();
Additional options
SignalR provides a comprehensive set of client options that go beyond the
basic configuration, enabling you to fine-tune and enhance the behavior of
your SignalR clients. These additional client options empower you to
optimize performance, customize connectivity, and adapt to specific
requirements, as you can see the main options below:
ServerTimeout: The interval of server inactivity after which the client considers the connection lost. The default value is 30 seconds.
Cookie authentication
In browser-based applications, cookie authentication enables seamless
integration of existing user credentials with SignalR connections. When using
the browser client, no additional configuration is required. If the user is
logged in to the application, the SignalR connection automatically inherits
this authentication.
Cookies serve as a browser-specific mechanism for transmitting access
tokens, but they can also be used by non-browser clients. In the .NET Client,
you can configure the Cookies property within the WithUrl method to provide
a cookie for authentication. However, utilizing cookie authentication from the
.NET client necessitates the application to offer an API endpoint for
exchanging authentication data in order to obtain the required cookie. This
enables seamless authentication between the client and server components of
the application.
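A minimal sketch of providing such a cookie from the .NET client; the cookie name shown is the default ASP.NET Core Identity cookie, and cookieValueFromLoginEndpoint is a hypothetical value returned by that API endpoint:

var authCookie = new System.Net.Cookie(
    ".AspNetCore.Identity.Application",    // default Identity cookie name
    cookieValueFromLoginEndpoint,          // hypothetical value obtained from a login endpoint
    "/",
    "example.com");

var connection = new HubConnectionBuilder()
    .WithUrl("https://example.com/chathub", options =>
    {
        options.Cookies.Add(authCookie);
    })
    .Build();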
Windows authentication
If Windows authentication is configured in the app, SignalR can utilize that
identity to secure hubs. However, to send messages to individual users, a
custom user ID provider needs to be added. It is important to note that the
Windows authentication system does not provide the name identifier claim,
which SignalR relies on to determine the username.
Please be aware that while Windows authentication is supported in Microsoft
Edge, it may not be supported in all browsers. For example, attempting to use
Windows authentication and WebSockets in Chrome and Safari will result in
failure. In such cases, the client will attempt to fall back to other transports
that might work, as you can see in the following example:
var connection = new HubConnectionBuilder()
    .WithUrl("https://example.com/chathub", options =>
    {
        options.UseDefaultCredentials = true;
    })
    .Build();
Claims
To derive SignalR user IDs from user claims in an app that authenticates
users, you can implement the IUserIdProvider interface and register the
implementation. By implementing this interface, you can specify how
SignalR creates user IDs based on the user's claims, as you can see in the
following example where we get the email value from the user claims:
public class SampleEmailProvider : IUserIdProvider
{
    public virtual string GetUserId(HubConnectionContext connection)
    {
        return connection.User?.FindFirst(ClaimTypes.Email)?.Value!;
    }
}
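The implementation is then registered in the dependency injection container alongside SignalR; with the minimal hosting model, for example:

builder.Services.AddSignalR();
builder.Services.AddSingleton<IUserIdProvider, SampleEmailProvider>();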
Authorization handlers
By default, SignalR provides a robust built-in authorization framework that
allows you to authenticate and authorize users. However, there may be
scenarios where you need to implement custom logic to enforce additional
access restrictions or business rules. This is where custom authorization
policies come in.
Custom authorization policies empower you to define your own rules and
criteria for granting or denying access to specific features or functionality
within your SignalR application. With custom policies, you have the
flexibility to implement fine-grained control over who can perform certain
actions, such as sending messages, joining specific groups, or accessing
sensitive data.
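A minimal sketch of such a policy follows, assuming a hypothetical requirement that only users with an e-mail address in a given domain may connect to a hub (the policy, hub, and route names are all examples):

using System.Security.Claims;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.SignalR;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddSignalR();

// Register the policy backed by the custom requirement.
builder.Services.AddAuthorization(options =>
{
    options.AddPolicy("DomainRestricted", policy =>
        policy.Requirements.Add(new DomainRestrictedRequirement()));
});

var app = builder.Build();
app.MapHub<RestrictedChatHub>("/restrictedChatHub");
app.Run();

// Requirement that also acts as its own authorization handler.
public class DomainRestrictedRequirement :
    AuthorizationHandler<DomainRestrictedRequirement>, IAuthorizationRequirement
{
    protected override Task HandleRequirementAsync(
        AuthorizationHandlerContext context,
        DomainRestrictedRequirement requirement)
    {
        // Succeed only when the authenticated user has an e-mail in the expected domain.
        var email = context.User.FindFirst(ClaimTypes.Email)?.Value;
        if (email is not null && email.EndsWith("@example.com"))
        {
            context.Succeed(requirement);
        }
        return Task.CompletedTask;
    }
}

// The policy is applied to a hub with the Authorize attribute.
[Authorize(Policy = "DomainRestricted")]
public class RestrictedChatHub : Hub
{
}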
Streaming hub
In ASP.NET Core SignalR, you can utilize streaming to enable data
transmission between the client and the server in fragments or chunks.
Streaming is particularly beneficial in scenarios where data arrives gradually
over time, allowing each fragment to be sent to the client or server as soon as
it becomes available.
With streaming, you do not need to wait for the entire data set to be ready
before sending it. Instead, you can start sending fragments of data as they
become available, providing a more responsive and efficient communication
mechanism.
This capability enables real-time data streaming and processing, making it
suitable for various use cases such as live updates, media streaming, and
handling large data sets.
By leveraging the streaming feature in ASP.NET Core SignalR, you can
achieve more efficient data transmission, improved responsiveness, and
enhanced real-time capabilities in your applications.
Server-to-client streaming hub
ASP.NET Core SignalR allows you to perform streaming between the server
and the client. You have the option to return an IAsyncEnumerable<T> or a
ChannelReader<T> from your streaming hub methods.
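A minimal sketch of a server-to-client streaming method using IAsyncEnumerable<T> (the hub name and the counter logic are only an example):

using System.Runtime.CompilerServices;
using Microsoft.AspNetCore.SignalR;

public class ClockHub : Hub
{
    public async IAsyncEnumerable<int> Counter(
        int count,
        int delayMilliseconds,
        [EnumeratorCancellation] CancellationToken cancellationToken)
    {
        for (var i = 0; i < count; i++)
        {
            // Stop streaming if the client cancels.
            cancellationToken.ThrowIfCancellationRequested();
            yield return i;
            await Task.Delay(delayMilliseconds, cancellationToken);
        }
    }
}

On the .NET client, such a stream can be consumed with connection.StreamAsync<int>("Counter", 10, 500), iterating the results with await foreach.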
Case study
Welcome to the case study, where we will apply the concepts of SignalR
configuration, authentication and authorization, and custom policy in the
development of a real-time chat application. This case study will provide you
with a practical demonstration of how these elements come together to create
a secure and efficient real-time communication platform.
Throughout this case study, we will focus on building a robust chat
application that enables users to exchange messages in real-time. We will
start by configuring SignalR in our ASP.NET application, ensuring that we
have a solid foundation for real-time communication. This includes setting up
the necessary hubs, establishing connections, and defining the desired
behavior.
Next, we will address the crucial aspect of authentication and authorization.
We will implement a secure login system and explore the Entity Framework
Identity authentication mechanism available with SignalR. We will also delve
into custom policy creation to control access to the chat application, ensuring
that only authenticated users can participate.
To enhance the user experience and maintain the integrity of the chat, we will
implement a sample custom policy. We will explore the implementation of
such policies using SignalR's built-in authorization framework, empowering
you to tailor the chat application to your specific needs.
So, let us embark on this case study journey, where we will bring together the
power of SignalR, authentication, authorization, and custom policy to create a
secure and feature-rich real-time chat application. Get ready to dive into the
exciting world of real-time communication with SignalR!
First, we create a project of type Web Application and add the client-side
library @microsoft/signalr@latest.
To do this, right-click on the project | Add | Client-side library.
Search for the SignalR library, as shown in the figure below:
Figure 9.1: Adding SignalR client-side library.
After successfully adding the client-side library, your project should look like
the following figure:
Figure 9.2: Project Solution after adding SignalR client-side library.
Now, let us create the Hub, which is responsible for communication among
clients. Add a new class that inherits from Hub. The sample Hub class is as
follows:
public class SampleHub : Hub
{
    public async Task SendMessageToAll(string message)
    {
        await Clients.All.SendAsync("ReceiveMessage", message);
    }

    public async Task SendMessageToCaller(string message)
    {
        await Clients.Caller.SendAsync("ReceiveMessage", message);
    }

    public async Task SendMessageToUser(string connectionId, string message)
    {
        await Clients.Client(connectionId).SendAsync("ReceiveMessage", message);
    }

    public async Task JoinGroup(string group)
    {
        await Groups.AddToGroupAsync(Context.ConnectionId, group);
    }

    public async Task SendMessageToGroup(string group, string message)
    {
        await Clients.Group(group).SendAsync("ReceiveMessage", message);
    }

    public override async Task OnConnectedAsync()
    {
        await Clients.All.SendAsync("UserConnected", Context.ConnectionId);
        await base.OnConnectedAsync();
    }

    public override async Task OnDisconnectedAsync(Exception ex)
    {
        await Clients.All.SendAsync("UserDisconnected", Context.ConnectionId);
        await base.OnDisconnectedAsync(ex);
    }
}
Now that we have our Hub, we need the client-side code that communicates
with it. The sample client-side code here is named Chat.js and has the
following content:
1. "use strict";
2. var connection = new
signalR.HubConnectionBuilder().
3.
withUrl("/sampleHubRoutePattern").build();
4. connection.on("ReceiveMessage", function
(message) {
5. var msg = message.replace(/&/g,
"&").replace(/</g, "<")
6. .replace(/>/g, ">");
7. var div = document.createElement("div");
8. div.innerHTML = msg + "<hr/>";
9.
document.getElementById("messages").appendChild(div);
10. });
11. connection.on("UserConnected", function
(connectionId) {
12. var groupElement =
document.getElementById("group");
13. var option =
document.createElement("option");
14. option.text = connectionId;
15. option.value = connectionId;
16. groupElement.add(option);
17. });
18. connection.on("UserDisconnected", function
(connectionId) {
19. var groupElement =
document.getElementById("group");
20. for (var i = 0; i < groupElement.length;
i++) {
21. if (groupElement.options[i].value ==
connectionId) {
22. groupElement.remove(i);
23. }
24. }
25. });
26. connection.start().catch(function (err) {
27. return console.error(err.toString());
28. });
29. document.getElementById("sendButton").addEventListener(
Figure 9.3: Visual studio debug console with the success message
The following error message commonly appears when the client cannot
connect to SignalR:
To fix the previous error, open the command prompt and execute both
commands:
dotnet dev-certs https --clean
dotnet dev-certs https --trust
Then, install the following NuGet packages:
Microsoft.AspNetCore.Identity.UI
Microsoft.EntityFrameworkCore
Microsoft.EntityFrameworkCore.SqlServer
Microsoft.EntityFrameworkCore.Tools
Add a new scaffolded item of type identity. You can find it in the identity
sub-menu on the left:
Figure 9.5: Adding an Identity Scaffolded item.
Check the Override all files option and set your DbContext class:
Figure 9.6: Setting up Identity scaffolded items.
After successfully completing the registration process and going to the chat,
this is the output:
Figure 9.10: My user sending a “hello” message to everyone.
Conclusion
Throughout this chapter, we have explored the exciting world of real-time
communication and how SignalR, together with ASP.NET, empowers you to
build engaging and interactive applications.
We began by understanding the importance of real-time communication in
today's digital landscape and how it enhances user experiences. SignalR
emerged as a powerful framework that simplifies the process of adding real-
time capabilities to your applications, enabling seamless bidirectional
communication between the server and the client.
We dived into the configuration aspects of SignalR, ensuring that you have a
solid foundation to work with. By understanding the various configuration
options and settings, you can customize SignalR to suit your specific
application requirements, achieving optimal performance and scalability.
Authentication and authorization were essential considerations in real-time
communication scenarios, and we explored how to secure SignalR
connections. By implementing robust authentication mechanisms, you can
control access to your real-time features, ensuring that only authorized users
can interact with your application.
Furthermore, we delved into the Streaming Hub feature of SignalR, which
enables real-time streaming scenarios. You learned how to implement
streaming hubs and optimize performance for efficient streaming of large
data sets. This knowledge equips you to build applications that deliver
continuous streams of data to clients, enriching the user experience with real-
time updates.
To reinforce your understanding, we concluded the chapter with a
comprehensive case study. Step by step, you followed along and witnessed
the practical implementation of SignalR and ASP.NET in a real-world
scenario. This case study provided a hands-on experience, allowing you to
apply the concepts and techniques covered throughout the chapter in a
practical context.
With the knowledge gained in this chapter, you are now equipped to leverage
the power of SignalR and ASP.NET to create dynamic, real-time applications
for cloud, web, and desktop platforms. Real-time communication opens up a
realm of possibilities for engaging user experiences, collaborative
environments, and data-driven interactions.
As you continue your journey in application development, remember to
explore and experiment with the concepts covered in this chapter. Stay
updated with the latest advancements in SignalR and ASP.NET, as new
features and improvements are constantly being introduced. By embracing
real-time communication, you can create applications that captivate and
delight your users.
In the upcoming chapter, we will discuss implementing microservices with
Web APIs. This pivotal topic addresses the contemporary approach of
building scalable and agile systems through microservices architecture. We
delve into the intricacies of designing and deploying microservices, with a
specific focus on Web APIs as the foundation for seamless communication
between services. Readers will explore the principles, benefits, and practical
considerations of microservices, gaining insights into how this architectural
paradigm enhances flexibility, scalability, and overall system resilience. Join
us as we navigate the landscape of microservices, unraveling the key
concepts and practical strategies to effectively implement them using Web
APIs.
Best of luck as you continue building innovative applications with SignalR
and ASP.NET!
CHAPTER 10
Implementing MicroServices with
Web APIs
Introduction
This chapter provides an in-depth guide to building scalable and resilient
microservices using Web APIs. The chapter focuses on scaling, which is one
of the critical challenges of building microservices-based applications.
The chapter begins by providing an overview of microservices architecture
and the benefits of using Web APIs for building microservices. It then
explains how to design a scalable microservices architecture, including topics
like service discovery, load balancing, and fault tolerance.
The chapter then dives into the various scaling techniques that can be used
for microservices-based applications, including horizontal scaling, vertical
scaling, and auto-scaling. It provides step-by-step instructions for
implementing each scaling technique, along with best practices and common
pitfalls to avoid.
To provide a practical case study, the chapter walks through building a
simple but functional microservices-based application that incorporates all
the scaling techniques discussed in the chapter. It demonstrates how to design
a scalable microservices architecture, how to implement scaling techniques,
and how to use Web APIs to communicate between microservices.
Structure
This chapter covers the following topics:
Implementing MicroServices with WebAPIs
Asynchronous communication
RabbitMQ
MicroServices scalability
Horizontal scalability
Vertical scalability
Orchestrators
Most used architectural patterns
Backend for frontend
Command Query Responsibility Segregation
Domain Driven Design
Case study
Objectives
This chapter equips you with fundamental insights into microservices,
including an overview and asynchronous communication. It delves into
scaling, covering horizontal and vertical scaling, along with considerations,
benefits, and nuanced management. Best practices for scaling are outlined,
accompanied by an introduction to key microservices architectural patterns
clarifying their roles in scalability and resilience. A detailed case study guides
you through implementing a microservices-based application, applying the
BFF design pattern and exploring both synchronous and asynchronous
communication. This chapter provides both theoretical insights and practical
skills for constructing scalable and resilient microservices with Web APIs,
spanning scaling strategies, architectural patterns, and real-world application.
Asynchronous communication
Asynchronous communication is extremely important when working with
microservices architectures, making it a must-have when trying to scale those
microservices. With asynchronous communication, we can exchange messages
between microservices, or we can process heavier workloads in the
background while providing an immediate response to the original request.
This improves overall responsiveness.
By processing heavier workloads in the background, we gain greater control
over the workload distributed among the microservices. This approach offers
several benefits:
Increased responsiveness: A microservice does not need to sit idle
waiting for a response; it can continue processing other tasks in the
background while waiting for a response from an asynchronous
operation.
Improved resilience: If a microservice fails or becomes unavailable, it
does not affect other microservices, because they can continue operating
independently and handle messages asynchronously. When the failed
microservice is fixed and goes back online, it processes all the requests
waiting in the pipeline.
Scalability and load balancing: Asynchronous communication
facilitates horizontal scaling of microservices by sharing the exchanged
messages among different processors. This approach allows us to scale
and load balance the workload among all the instances of this
processor.
Loose coupling and flexibility: Asynchronous communication
promotes loose coupling between microservices. They can evolve
independently without impacting others as long as the message
contracts remain compatible.
To implement asynchronous communication, microservices publish messages
to designated channels or queues. Then, we have other microservices acting
as consumers of those messages and processing the request asynchronously.
Let us explore how to handle asynchronous communication using RabbitMQ.
RabbitMQ
RabbitMQ is an open-source message broker that serves as a reliable
intermediary for exchanging messages between publishers and listeners. It
implements the message queue protocol, enabling seamless communication
across various channels and routes. With RabbitMQ, publishers can send
messages to specific queues, and listeners can consume those messages when
they are ready. This decoupling of message producers and consumers allows
for flexible and scalable communication between different components of a
system. RabbitMQ provides robust features such as message persistence,
message routing, and support for multiple messaging patterns. It is widely
used in distributed systems, microservices architectures, and asynchronous
communication scenarios where reliable message exchange is essential.
Among RabbitMQ's main features and keywords, several important ones are
highlighted below, along with a brief explanation of each:
Message broker: RabbitMQ acts as a message broker, facilitating the
exchange of messages between different components of a system. It
ensures reliable delivery and efficient routing of messages.
Message queue protocol: RabbitMQ implements a message queue
protocol, which defines the rules and formats for exchanging messages.
This protocol allows publishers to send messages to specific queues
and listeners to consume messages from those queues.
Message: Messages play a vital role as they traverse channels from
publishers to listeners. These messages can contain a wide range of
information, ranging from basic text to intricate serialized objects.
They act as carriers of data, enabling seamless communication and
exchange of information between different components within the
RabbitMQ system. Whether it is transmitting straightforward textual
content or intricate serialized objects, messages facilitate the efficient
flow of data throughout the RabbitMQ infrastructure.
Channel: Serves as a logical communication line between publishers
and listeners. It operates within an established connection and
facilitates the transfer of messages using various protocols. By utilizing
channels, publishers can efficiently send messages to listeners,
enabling seamless communication and data exchange within the
RabbitMQ system. Channels enhance the flexibility and versatility of
message transfer, providing a reliable and efficient means of
communication between different components.
Queue: A RabbitMQ queue operates on the First-In-First-Out
(FIFO) principle, providing a mechanism for storing messages
between publishers and listeners. Messages are stored in the queue in
the order they are received, and they are retrieved and processed by
listeners in the same order. This ensures that the messages maintain
their original sequence and are processed in a fair and consistent
manner.
Connection: Forms the foundation for communication between the
server and the client. It establishes the link between the two by
leveraging protocols. Once the connection is established, it enables the
creation and operation of channels, which serve as logical
communication pathways within the RabbitMQ system. The
connection acts as a bridge, facilitating the exchange of messages and
data between the server and the client, enabling seamless
communication and collaboration.
Publisher-subscriber model: RabbitMQ supports the publisher-
subscriber model, where publishers send messages to specific queues,
and subscribers (or listeners) consume those messages as needed. This
decoupling enables asynchronous communication and flexible scaling.
Consumer: A consumer is a component connected to a RabbitMQ
client that listens to specific queues for message consumption. It
retrieves and processes messages published on those queues.
Publisher: A publisher is a component connected to a RabbitMQ client
that sends messages to a specific queue for publishing. It is responsible
for producing and delivering messages to the designated queue for
further processing or distribution.
Notification: Notifications in the context of microservices are crucial
for monitoring the health of services and can be customized to trigger
alerts based on specific metrics. These notifications serve as a
mechanism to keep track of the performance, availability, and overall
well-being of microservices. By defining thresholds and conditions,
notifications can be set up to proactively detect and report any
anomalies or deviations from expected behavior. This enables timely
response and intervention, allowing teams to address potential issues
and maintain the smooth operation of their services. Customizable
notifications provide flexibility in tailoring alerts to the specific needs
and requirements of the system, ensuring that the right stakeholders are
promptly notified when critical events or metrics are triggered.
Dead letter: Dead letters are utilized to store messages that were
unable to be consumed by their intended listeners. This can occur if the
message is rejected by the listeners, the queue becomes full, or the
message reaches its expiration time. Dead letter queues serve as a
fallback mechanism, allowing these unprocessed messages to be
redirected and stored for further analysis or alternative processing. By
leveraging dead letters, RabbitMQ provides a way to handle and
manage messages that could not be consumed in their original context,
ensuring message reliability and fault tolerance within the system.
Route: RabbitMQ routes play a crucial role in ensuring the targeted
delivery of messages to their respective queues based on routing keys
and exchanges. These routes act as a mechanism for directing messages
to their intended recipients, enabling precise and efficient message
distribution within the RabbitMQ system. By evaluating the routing
key associated with each message, RabbitMQ routes determine the
appropriate destination queue to which the message should be
delivered. This ensures that messages reach the specific consumers or
services that are interested in processing them, facilitating effective
communication and message handling in a structured and organized
manner.
Virtual host: A RabbitMQ virtual host can be seen as roughly equivalent
to a database in SQL Server. Just like a database, a virtual host is a self-
contained environment with its own set of settings and configurations.
Each virtual host operates independently of others, having its own
channels, bindings, protocols, users, and other relevant attributes. It
provides a logical separation of resources, allowing different
applications or services to operate in isolation within their dedicated
virtual host. This segregation ensures that the settings and entities
within one virtual host do not interfere with or affect those in other
virtual hosts, providing a level of autonomy and control over the
messaging environment.
Exchange: In RabbitMQ, exchanges play a critical role in routing
messages to their respective queues based on their attributes. An
exchange acts as a routing agent, receiving messages from publishers
and determining their destination queues. The routing decision is made
by evaluating attributes such as the message's routing key, headers, or
other specified criteria. The exchange then forwards the message to the
appropriate queues that match the defined routing rules. This
mechanism enables precise and targeted message delivery, ensuring
that messages reach the queues that are specifically interested in
consuming them. By leveraging exchanges, RabbitMQ provides a
flexible and configurable routing mechanism that supports various
message distribution patterns and facilitates efficient communication
between publishers and consumers. RabbitMQ provides several types
of exchanges:
Direct exchange: This exchange delivers messages to queues based
on an exact match between the routing key of the message and the
routing key specified in the binding of the queue.
Topic exchange: Messages sent to a topic exchange are routed to
queues based on patterns defined by the routing key. Wildcard
characters such as "*" (matches a single word) and "#" (matches
zero or more words) allow for flexible routing based on specific
patterns.
Fanout exchange: Fanout exchanges broadcast messages to all the
queues that are bound to them. The routing key is ignored, and the
message is distributed to all the queues.
Headers exchange: Headers exchanges use message headers
instead of the routing key for routing decisions. The headers are
matched against those specified in the bindings to determine the
appropriate destination queues.
Bindings: A RabbitMQ binding links a queue to an exchange, defining
the relationship between exchanges and queues. They determine how
messages are routed from an exchange to specific queue(s) based on
routing rules.
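To illustrate exchanges, routing keys, and bindings with the RabbitMQ.Client package, the following sketch declares a direct exchange, binds a queue to it, and publishes a message (the exchange, queue, and routing-key names are hypothetical, and a local broker is assumed):

using System.Text;
using RabbitMQ.Client;

var factory = new ConnectionFactory() { HostName = "localhost" };
using var connection = factory.CreateConnection();
using var channel = connection.CreateModel();

// Declare a direct exchange and a queue, then bind them with a routing key.
channel.ExchangeDeclare(exchange: "weatherExchange", type: ExchangeType.Direct);
channel.QueueDeclare(queue: "hotWeatherQueue", durable: false, exclusive: false,
                     autoDelete: false, arguments: null);
channel.QueueBind(queue: "hotWeatherQueue", exchange: "weatherExchange",
                  routingKey: "hot");

// Messages published with the routing key "hot" are delivered to hotWeatherQueue.
var body = Encoding.UTF8.GetBytes("32 degrees and sunny");
channel.BasicPublish(exchange: "weatherExchange", routingKey: "hot",
                     basicProperties: null, body: body);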
The main benefits of RabbitMQ usage:
Multi-platform communication: RabbitMQ supports message
serialization and deserialization in common languages like JSON,
enabling seamless communication between different platforms and
technologies.
Asynchronous operations: RabbitMQ allows for asynchronous
messaging, ensuring that services are not locked or blocked while
waiting for a response. This enables efficient and non-blocking
communication between components.
Open-source and community-driven: RabbitMQ is an open-source
message broker with a vibrant community actively working on its
development and improvement. This ensures continuous
enhancements, bug fixes, and the availability of extensive resources
and support.
Multi-language support: RabbitMQ offers compatibility with a wide
range of programming languages, allowing developers to use their
preferred language for message production and consumption. This
flexibility promotes language diversity and enables teams to work with
their preferred tech stack.
Multi-protocol support: RabbitMQ supports multiple protocols for
exchanging messages. It provides compatibility with popular
messaging protocols like Advanced Message Queuing Protocol
(AMQP), Message Queuing Telemetry Transport (MQTT), and
more. This versatility allows for seamless integration with diverse
systems and technologies.
Reliability and fault-tolerance: RabbitMQ ensures reliable message
delivery by providing features such as message persistence, delivery
acknowledgments, and durable queues. It also supports fault-tolerant
setups like clustering and mirrored queues, replicating messages across
nodes for high availability and data redundancy.
Scalability: RabbitMQ is designed to handle high message throughput
and can scale horizontally by adding more nodes to distribute the
message processing load. This allows applications to accommodate
increasing workloads and handle peak traffic efficiently.
Flexible routing and messaging patterns: RabbitMQ offers various
exchange types and routing mechanisms, enabling flexible message
routing and supporting different messaging patterns such as
publish/subscribe, request/reply, and topic-based filtering. This
flexibility allows for the implementation of complex communication
patterns in distributed systems.
Dead-letter queues: RabbitMQ provides dead-letter queues, which
capture messages that cannot be processed successfully. This feature
allows for proper handling and analysis of failed messages, aiding in
troubleshooting and debugging of the messaging system.
Management and monitoring: RabbitMQ provides a user-friendly
management interface and comprehensive monitoring capabilities.
These tools allow administrators to monitor queues, connections,
message rates, and other metrics, helping them gain insights into
system performance, troubleshoot issues, and perform effective
resource management.
Extensibility and integration: RabbitMQ offers a plugin system that
allows its functionality to be extended with additional features and
protocols. It integrates well with other systems and frameworks,
making it compatible with a wide range of tools and technologies
commonly used in modern software development.
By leveraging these benefits, RabbitMQ empowers developers to build robust
and flexible messaging solutions that facilitate efficient communication
between different components and systems.
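As a complement to publishing, the sketch below shows a minimal consumer built with the RabbitMQ.Client package; the queue name is hypothetical and the broker is assumed to run locally:

using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

var factory = new ConnectionFactory() { HostName = "localhost" };
using var connection = factory.CreateConnection();
using var channel = connection.CreateModel();

channel.QueueDeclare(queue: "sampleQueue", durable: false, exclusive: false,
                     autoDelete: false, arguments: null);

var consumer = new EventingBasicConsumer(channel);
consumer.Received += (sender, eventArgs) =>
{
    // Decode and handle each message delivered to the queue.
    var message = Encoding.UTF8.GetString(eventArgs.Body.ToArray());
    Console.WriteLine($"Message received: {message}");
};

channel.BasicConsume(queue: "sampleQueue", autoAck: true, consumer: consumer);

Console.ReadLine(); // keep the consumer alive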
MicroServices scalability
Scalability is a critical aspect of building microservices-based applications
that can handle increased workloads and adapt to changing demands. In this
section, we will delve into the topic of microservices scalability, exploring
three key strategies: Horizontal scalability, vertical scalability, and
orchestrators.
Microservices scalability refers to the ability to efficiently and effectively
handle growing demands by adding resources or redistributing workloads
across the system. It plays a pivotal role in ensuring that microservices can
handle increased traffic, maintain optimal performance, and provide a
seamless user experience.
Horizontal scalability involves scaling out the application horizontally by
adding more instances of microservices to the system. This approach allows
for distributing the workload across multiple instances, enabling improved
performance, higher availability, and easier load balancing.
Vertical scalability, on the other hand, focuses on scaling up individual
microservices by increasing the resources allocated to them. This can involve
upgrading the hardware, increasing memory capacity, or enhancing
processing power. Vertical scalability is particularly useful when a
microservice requires more resources to handle specific tasks efficiently.
In addition to horizontal and vertical scalability, we will also explore the role
of Orchestrators in managing and coordinating the scaling process.
Orchestrators, such as Kubernetes or Docker Swarm, provide
containerization and orchestration capabilities, allowing for efficient
deployment, scaling, and management of microservices across a cluster of
machines.
By understanding and implementing these scalability strategies and utilizing
orchestrators effectively, microservices-based applications can achieve the
necessary flexibility, performance, and resilience to adapt to varying
workloads and ensure a seamless user experience.
Horizontal scalability
Horizontal scalability, also known as scaling out, is a strategy for
increasing the capacity of a microservices-based application by adding more
instances of microservices to the system. Instead of upgrading individual
microservices, horizontal scalability focuses on distributing the workload
across multiple instances, allowing for improved performance, higher
availability, and easier load balancing. You can see how horizontal scalability
works in the figure below:
Figure 10.1: Visual explanation of how the Horizontal Scalability works.
Orchestrators
Orchestrators play a crucial role in the scalability of microservices-based
applications by providing containerization and orchestration capabilities.
They help manage and coordinate the deployment, scaling, and management
of microservices across a cluster of machines. Some popular orchestrators
include Kubernetes, Service Fabric, Docker Swarm, and Apache Mesos.
The main roles of orchestrators when talking about microservices are the
ones as follows:
Deployment and containerization: Orchestrators facilitate the
deployment of microservices by encapsulating them into containers.
Containers provide a lightweight and isolated runtime environment for
microservices, ensuring consistent deployment across different
environments and simplifying the packaging and distribution process.
Scalability and load balancing: Orchestrators enable horizontal
scalability by automatically scaling the number of microservice
instances based on demand. They monitor the resource usage and can
dynamically add or remove instances to balance the workload and
ensure optimal resource utilization. Load balancing techniques, such as
round-robin or least-connection, are employed to distribute incoming
requests across the available microservice instances.
Service discovery: Orchestrators assist in the discovery and
registration of microservices within the system. They provide
mechanisms for microservices to discover and communicate with each
other, allowing for dynamic scaling and seamless communication
between services. Service registries and DNS-based service discovery
are commonly used to facilitate service discovery in orchestrator
environments.
Health monitoring and self-healing: Orchestrators continuously
monitor the health and availability of microservices. They can detect
failures or unresponsive instances and automatically perform self-
healing actions, such as restarting or rescheduling the failed instances
on healthy nodes. This helps ensure high availability and resilience of
the overall system.
Configuration management: Orchestrators provide capabilities for
managing the configuration of microservices across different
environments. They enable centralized configuration management,
allowing for dynamic updates and ensuring consistency in
configurations across instances.
Rolling deployments and versioning: Orchestrators support rolling
deployments, allowing new versions of microservices to be deployed
gradually without causing downtime or disruption to the system. They
enable rolling updates, canary deployments, or blue-green
deployments, ensuring smooth transitions between different versions of
microservices.
Resource management: Orchestrators help optimize resource
allocation by managing the allocation of computing resources, such as
CPU, memory, and storage, to microservices instances. They ensure
that resources are allocated efficiently based on the workload and
prioritize critical services.
By leveraging orchestrators, microservices architectures can achieve
enhanced scalability, flexibility, and manageability. Orchestrators simplify
the management of microservices deployment, scaling, and maintenance
tasks, enabling efficient utilization of resources, seamless communication
between services, and robust fault tolerance mechanisms.
Case study
In this case study, we are implementing the Backend for Frontend (BFF)
design pattern, where a Weather Microservice acts as the frontend service for
two distinct BFFs. The BFFs cater to clients with different weather
preferences, one focusing on hot weather and the other on cold weather.
Additionally, we incorporate an event processor to handle asynchronous
requests efficiently.
For clients living in hot weather locations, one BFF is dedicated to serving
their specific needs. It orchestrates requests and retrieves relevant data from
the underlying microservices, aggregating and transforming it into a format
suitable for the client. Similarly, the BFF for clients favoring cold weather
provides a specialized interface and retrieves data specific to their
preferences.
To handle asynchronous requests and events, we employ an event processor.
This component efficiently processes events in an asynchronous manner,
ensuring that the BFFs can handle concurrent requests and maintain
responsiveness. The event processor plays a vital role in managing the flow
of data, processing events in the background while allowing the BFFs to
remain performant and scalable.
Overall, the combination of the BFF design pattern, weather microservice,
and event processor enables us to deliver customized weather information to
clients based on their preferences. It ensures a seamless user experience by
abstracting complexities, handling asynchronous requests, and providing
tailored responses for hot and cold weather clients as we can see this process
described better in the figure below:
Figure 10.3: Diagram representing the practical study case.
3. Add two new Web API projects; they are going to be our BFFs. We have
named them BFFOne and BFFTwo:
Add the following NuGet packages:
Newtonsoft.Json
RabbitMQ.Client
In the end, this is the project solution that we have. The three Web APIs are
basically the same, plus a console application, as we can see in the figure
below:
Figure 10.4: Solution explorer with all the 4 projects.
Now, let us modify our projects to apply the BFF design pattern with
asynchronous communication. A few modifications are needed to represent
the design pattern.
First, let us adjust the MicroServiceWebAPI project. We have broken the
weather summaries into hot and cold summaries, and those summaries will
work as our database.
This is the WeatherForecastController for the MicroServiceWebAPI project.
[ApiController]
[Route("[controller]")]
public class WeatherForecastController : ControllerBase
{
    private static readonly string[] HotSummaries = new[]
        { "Cool", "Mild", "Warm", "Balmy", "Hot", "Sweltering", "Scorching" };

    private static readonly string[] ColdSummaries = new[]
        { "Freezing", "Bracing", "Chilly", "Cool", "Mild", "Warm", "Balmy" };

    private readonly ILogger<WeatherForecastController> _logger;

    public WeatherForecastController(ILogger<WeatherForecastController> logger)
    {
        _logger = logger;
    }

    [HttpGet(Name = "GetWeatherForecast")]
    public IEnumerable<WeatherForecast> Get([FromQuery] string weather)
    {
        switch (weather.ToLower())
        {
            case "cold":
                return Enumerable.Range(1, 5).Select(index =>
                    new WeatherForecast
                    {
                        Date = DateTime.Now.AddDays(index),
                        TemperatureC = Random.Shared.Next(-20, 15),
                        Summary = ColdSummaries[Random.Shared.Next(ColdSummaries.Length)]
                    }).ToArray();
            case "hot":
                return Enumerable.Range(1, 5).Select(index =>
                    new WeatherForecast
                    {
                        Date = DateTime.Now.AddDays(index),
                        TemperatureC = Random.Shared.Next(15, 55),
                        Summary = HotSummaries[Random.Shared.Next(HotSummaries.Length)]
                    }).ToArray();
            default:
                return Enumerable.Empty<WeatherForecast>();
        }
    }
}
A small change was also made to the WeatherForecast class in all three projects:
public class WeatherForecast
{
    public DateTime Date { get; set; }

    public int TemperatureC { get; set; }

    public int TemperatureF => 32 + (int)(TemperatureC / 0.5556);

    public string? Summary { get; set; }
}
Now, let us update both BFF controllers. Each BFF sends its weather-specific information to the EventProcessor when posting weather data and to the Weather Microservice when getting weather data.
The HTTP GET adds its specific information and makes an HTTP request to the Weather Microservice.
The HTTP POST adds its specific information and publishes a message to the RabbitMQ queue; this message will be handled by the EventProcessor.
Below is the WeatherForecastController from the Hot Weather BFF, specifically designed for handling hot weather scenarios:
[ApiController]
[Route("[controller]")]
public class WeatherForecastController : ControllerBase
{
    private readonly ILogger<WeatherForecastController> _logger;

    public WeatherForecastController(ILogger<WeatherForecastController> logger)
    {
        _logger = logger;
    }

    [HttpGet(Name = "GetWeatherForecast")]
    public async Task<IEnumerable<WeatherForecast>> Get()
    {
        // Forward the GET to the Weather Microservice, asking for hot weather data
        var result = new List<WeatherForecast>();
        string baseURL = "https://localhost:7173/";
        string url = baseURL + "WeatherForecast?weather=hot";
        using (HttpClient client = new HttpClient())
        {
            using (HttpResponseMessage responseMessage = await client.GetAsync(url))
            {
                using (HttpContent content = responseMessage.Content)
                {
                    string data = await content.ReadAsStringAsync();
                    result = JsonConvert.DeserializeObject<List<WeatherForecast>>(data);
                }
            }
        }
        return result;
    }

    [HttpPost]
    public void Post([FromBody] WeatherForecast weatherForecast)
    {
        // Publish the posted forecast to the RabbitMQ queue handled by the EventProcessor
        var factory = new ConnectionFactory() { HostName = "localhost" };
        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            channel.QueueDeclare(queue: "weatherForecastSampleQueue",
                durable: false, exclusive: false, autoDelete: false,
                arguments: null);

            string message = "From BFF One, Date: " + weatherForecast.Date
                + ", Temperature in °C: " + weatherForecast.TemperatureC
                + " and Summary: " + weatherForecast.Summary;

            var body = Encoding.UTF8.GetBytes(message);

            channel.BasicPublish(exchange: "",
                routingKey: "weatherForecastSampleQueue",
                basicProperties: null, body: body);
        }
    }
}
Below is the WeatherForecastController from the Cold Weather BFF, specifically designed for handling cold weather scenarios:
[ApiController]
[Route("[controller]")]
public class WeatherForecastController : ControllerBase
{
    private readonly ILogger<WeatherForecastController> _logger;

    public WeatherForecastController(ILogger<WeatherForecastController> logger)
    {
        _logger = logger;
    }

    [HttpGet(Name = "GetWeatherForecast")]
    public async Task<IEnumerable<WeatherForecast>> Get()
    {
        // Forward the GET to the Weather Microservice, asking for cold weather data
        var result = new List<WeatherForecast>();
        string baseURL = "https://localhost:7173/";
        string url = baseURL + "WeatherForecast?weather=cold";
        using (HttpClient client = new HttpClient())
        {
            using (HttpResponseMessage responseMessage = await client.GetAsync(url))
            {
                using (HttpContent content = responseMessage.Content)
                {
                    string data = await content.ReadAsStringAsync();
                    result = JsonConvert.DeserializeObject<List<WeatherForecast>>(data);
                }
            }
        }
        return result;
    }

    [HttpPost]
    public void Post([FromBody] WeatherForecast weatherForecast)
    {
        // Publish the posted forecast to the RabbitMQ queue handled by the EventProcessor
        var factory = new ConnectionFactory() { HostName = "localhost" };
        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            channel.QueueDeclare(queue: "weatherForecastSampleQueue",
                durable: false, exclusive: false, autoDelete: false,
                arguments: null);

            string message = "From BFF Two, Date: " + weatherForecast.Date
                + ", Temperature in °C: " + weatherForecast.TemperatureC
                + " and Summary: " + weatherForecast.Summary;

            var body = Encoding.UTF8.GetBytes(message);

            channel.BasicPublish(exchange: "",
                routingKey: "weatherForecastSampleQueue",
                basicProperties: null, body: body);
        }
    }
}
The event processor is responsible for subscribing to a queue and processing
its incoming messages.
This is the Program.cs from the event processor:
// See https://aka.ms/new-console-template for more information
using RabbitMQ.Client;
using RabbitMQ.Client.Events;
using System;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

Console.WriteLine("Hello, World!");
var factory = new ConnectionFactory() { HostName = "localhost" };
string queueName = "weatherForecastSampleQueue";
using (var rabbitMqConnection = factory.CreateConnection())
{
    using (var rabbitMqChannel = rabbitMqConnection.CreateModel())
    {
        Thread.Sleep(5000);

        rabbitMqChannel.QueueDeclare(queue: queueName,
            durable: false,
            exclusive: false,
            autoDelete: false,
            arguments: null);

        rabbitMqChannel.BasicQos(prefetchSize: 0, prefetchCount: 1, global: false);

        int messageCount = Convert.ToInt16(rabbitMqChannel.MessageCount(queueName));
        Console.WriteLine(" Listening to the queue. This channel has {0} messages on the queue.", messageCount);

        var consumer = new EventingBasicConsumer(rabbitMqChannel);
        consumer.Received += (model, ea) =>
        {
            var message = Encoding.UTF8.GetString(ea.Body.ToArray());
            Console.WriteLine(" Weather Forecast received: " + message);

            rabbitMqChannel.BasicAck(deliveryTag: ea.DeliveryTag, multiple: false);
            Thread.Sleep(1000);
        };

        rabbitMqChannel.BasicConsume(queue: queueName,
            autoAck: false,
            consumer: consumer);

        // Keep the console application alive so the consumer can keep processing messages
        Console.ReadLine();
    }
}
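Note: All three projects assume a RabbitMQ broker reachable at localhost on the default port. If you do not already have one installed, a common way to get one running locally (a suggestion on our part, not a required step of the original setup) is the official Docker image, which also exposes the management UI on port 15672:
docker run -d --name rabbitmq -p 5672:5672 -p 15672:15672 rabbitmq:3-management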
Before pressing F5 and running the projects, you must configure the solution to debug all the projects at the same time.
Right-click on the solution in Solution Explorer | Properties:
From BFF 1, the hot weather BFF, making a GET request, we have the following output in Swagger, as shown in the figure below:
Figure 10.6: Swagger response from a GET Request for the BFF 1
BFF 1, the hot weather BFF, making a POST request, as we can see in Swagger in the figure below:
Figure 10.7: Swagger request from a POST Request for the BFF 1
From BFF 2, the cold weather BFF, we are making a GET request from Swagger; the response can be seen in the figure below:
Figure 10.9: Swagger response from a GET Request for the BFF 2
In BFF 2, the cold weather BFF, we are making a POST request from Swagger, as you can see in the figure below:
Figure 10.10: Swagger request from a POST Request for the BFF 2
Figure 10.11: Console application displaying the processed message from BFF 2 POST operation
Conclusion
In conclusion, this chapter has provided an in-depth guide to building
scalable and resilient microservices using Web APIs. We began by exploring
the benefits of microservices architecture and the importance of Web APIs in
facilitating communication between microservices. Throughout the chapter,
we focused on the critical challenge of scaling microservices-based
applications.
We discussed the various scaling techniques available, including horizontal
scaling, vertical scaling, and auto-scaling. Step-by-step instructions were
provided for implementing each technique, along with best practices and
common pitfalls to avoid. We also delved into architectural patterns
commonly used in microservices and their role in scalability and resilience.
The practical case study presented a real-world scenario for building a
microservices-based application. By following the case study, readers gained
hands-on experience in designing a scalable microservices architecture,
implementing horizontal scaling, and utilizing Web APIs for inter-
microservice communication.
Additionally, we explored communication patterns between microservices,
including synchronous and asynchronous approaches. We discussed the
importance of choosing the appropriate communication mechanism based on
the specific requirements of the application.
By mastering the concepts and techniques presented in this chapter, readers
are now equipped with the knowledge and skills necessary to tackle the
challenges of building scalable microservices architectures. They have gained
a solid understanding of scaling techniques, architectural patterns, and
effective communication strategies, enabling them to build resilient and
highly scalable microservices-based applications.
As microservices continue to gain popularity in the software development
landscape, the ability to design, implement, and scale microservices
effectively becomes increasingly crucial. The knowledge gained from this
chapter will empower readers to create robust microservices architectures that
can handle growing workloads and adapt to changing demands.
Ultimately, building scalable and resilient microservices using Web APIs
requires careful consideration of architectural design, scaling techniques, and
communication patterns. Armed with the insights and practical guidance
provided in this chapter, readers are well-prepared to embark on their journey
of implementing microservices-based applications with confidence and
success.
In the upcoming chapter, we venture into the dynamic landscape of
Continuous Integration and Continuous Deployment (CI/CD) with
Docker and Azure DevOps. CI/CD form the backbone of modern software
development, and this chapter explores their seamless integration with
Docker technology and Azure DevOps services. We delve into the pivotal
role Docker plays in containerization, ensuring consistency across diverse
environments. The synergy with Azure DevOps amplifies the efficiency of
the CI/CD pipeline, enabling automated testing, deployment, and delivery.
Join us as we unravel the power of this integration, providing practical
insights and strategies to streamline your development workflows and
enhance the agility of your software delivery process.
Introduction
This chapter provides a practical guide to implementing a continuous
integration and continuous deployment (CI/CD) pipeline for containerized
applications using Docker and Azure DevOps. The chapter begins by
providing an overview of Docker and its role in containerization. It then
introduces Azure DevOps and explains how it can be used to automate the
CI/CD process for containerized applications.
The chapter then walks through the various Docker commands required for
building and deploying containerized applications. It explains how to build
Docker images, push them to Docker Hub, and deploy them to an Azure
container registry. The chapter then provides an overview of continuous
integration and continuous deployment and how it can be used to streamline
the application development and deployment process. It explains how Azure
DevOps can be used to automate the CI/CD process, including topics like
configuring build pipelines, release pipelines, and environment management.
To provide a practical case study, the chapter walks through building a
sample containerized application, setting up a CI/CD pipeline using Azure
DevOps, and deploying the application to a production environment.
Structure
This chapter covers the following topics:
Overview
Docker
Docker containers
Container images
Docker images
Container images vs. Docker images
Docker advantages for your apps
Understanding the Dockerfile
Docker commands
Azure DevOps
Continuous integration
Benefits of continuous integration
Continuous deployment
Benefits of continuous deployment
Case study
Creating a Dockerfile
Creating a Docker image
Applying continuous integration
Applying continuous deployment
Objectives
In this chapter, readers will grasp the fundamentals of Docker's role in
containerization and delve into Azure DevOps as a comprehensive DevOps
platform. Exploring Docker commands, they will learn to build and deploy
containerized applications, understanding CI significance in software
development. Configuring Azure DevOps pipelines automates CI processes,
leading to insights into CD benefits. The chapter showcases Azure DevOps'
role in CD pipeline automation, emphasizing release pipelines and
environment management. Through hands-on experience and a practical case
study, readers will acquire skills to set up CI/CD pipelines efficiently using
Docker and Azure DevOps, ensuring a comprehensive understanding of
containerization and DevOps practices.
Overview
CI and CD with Docker in Azure DevOps enable teams to automate the
build, test, and deployment of containerized applications. It leverages the
capabilities of Docker, a popular containerization platform, and Azure
DevOps, a comprehensive DevOps toolset provided by Microsoft.
By integrating Docker into Azure DevOps pipelines, developers can easily
build Docker images, push them to Docker registries, and deploy them to
various environments, including development, staging, and production. This
integration facilitates consistent and reliable application deployments in a
containerized environment.
With CI, code changes are automatically built, tested, and integrated into a
shared repository whenever developers push their code. This ensures that the
application remains in a continuously deployable state, enabling faster
feedback and reduced integration issues.
CD takes CI a step further by automating the deployment process, allowing
for seamless delivery of containerized applications to different environments.
Azure DevOps provides features for configuring release pipelines, defining
deployment strategies, and orchestrating the deployment of Docker
containers to Azure Kubernetes Service (AKS), Azure App Service, or
other target platforms.
The combination of Docker and Azure DevOps also offers benefits such as
scalability, portability, and reproducibility. Docker's containerization
technology enables applications to run consistently across different
environments, reducing deployment and compatibility challenges. Azure
DevOps provides a centralized platform for managing CI and CD pipelines,
fostering collaboration, version control, and monitoring the entire application
lifecycle.
By utilizing CI and CD with Docker in Azure DevOps, development teams
can achieve faster time-to-market, improved software quality, and greater
agility in responding to customer needs. It empowers teams to focus on
delivering value to users while ensuring consistency, reliability, and
efficiency in the deployment process.
Docker
Docker is an open-source platform that revolutionizes the development,
packaging, and execution of applications. Its swift setup process allows for
easy creation and deployment of Docker containers, providing a time
advantage in managing environments efficiently.
One of the key advantages of Docker is its ability to package applications
into Docker images, which can then be run within Docker containers. These
containers operate in isolation from one another, ensuring that they can have
distinct or identical configurations while maintaining consistent behavior
across all instances of the Docker images.
By utilizing Docker, developers can encapsulate their applications into
portable and self-contained units, making it effortless to reproduce and
deploy them across different environments. This eliminates compatibility
issues and guarantees consistency in the behavior of the applications,
regardless of the underlying infrastructure.
Furthermore, Docker simplifies the management of dependencies and ensures
the reproducibility of the application's runtime environment. Developers can
specify the necessary dependencies within the Docker image, making it easier
to maintain and distribute the application across various platforms.
In summary, Docker provides developers with a powerful toolset for
packaging and deploying applications, enhancing efficiency, portability, and
reproducibility. By leveraging Docker's containerization technology,
developers can streamline their development processes and confidently
deploy applications with consistent behavior across diverse environments.
Docker containers
A Docker container functions as a self-contained entity, with its processes
separate from others running on the host machine. Each container operates
within its own environment, allowing for individualized settings without
impacting or being affected by external processes.
Key functionalities of Docker containers include:
Cross-platform compatibility: Docker containers can run on any
operating system that supports Docker, providing flexibility and
portability across different environments.
Isolation and control: Each container operates within its isolated and
independent settings, creating a controlled environment where
applications can run reliably without interference from external factors.
Versatility: Docker containers can be deployed on various machines,
including virtual machines or cloud instances, enabling seamless
deployment across different infrastructures.
Image-based architecture: Containers are created from images, and
many containers, built from the same or from different images, can run
concurrently on a host, facilitating the execution of diverse
applications while maintaining separation and encapsulation.
Ease of management: Docker containers are straightforward to
handle. With a few simple commands, you can effortlessly create, start,
stop, delete, or move containers, allowing for efficient management of
application deployment and resource utilization.
By utilizing Docker containers, developers can leverage the advantages of
isolated environments, simplifying application deployment, enhancing
scalability, and ensuring consistent behavior across different systems. This
approach streamlines development processes, facilitates collaboration, and
promotes reproducibility, making Docker an invaluable tool in modern
software development and deployment workflows.
Container images
A Docker container image contains the software binaries, scripts, configurations, and dependencies needed for it to run identically every time it is instantiated as a Docker container. With a Docker container image, you can easily replicate your software as many times as needed, which is useful when scaling your software vertically or horizontally.
Key features and benefits of Docker container images include:
Configuration management: Docker container images allow you to
define and store all the required configurations for your containers.
This includes environment variables, network settings, file system
mappings, and more, ensuring consistency across container instances.
Dependency management: By bundling dependencies within a
Docker container image, you eliminate the need for manual installation
and ensure that all required components are readily available. This makes it easier to install your applications, with fewer compatibility problems.
Portability and reproducibility: You can easily store and distribute your Docker container image, sharing it in public or private repositories. Every deployment created from the same Docker container image has the same behavior, regardless of differences between environments.
Versioning and rollbacks: Docker container images are versioned, like NuGet packages. This allows you to manage the installed versions of your software, enabling fast rollbacks, helping when tracing issues back across different environments, and letting you organize functionality by version.
Scalability and performance: Docker container images provide a lightweight and efficient runtime environment. Because containerized applications share the same machine hardware, with multiple containers running on the same host, it is easier to manage resource utilization and to scale those Docker container images.
By leveraging Docker container images, developers can streamline the
deployment process, simplify application management, and promote
consistency and reproducibility across different environments. With the
ability to create, share, and replicate container images, Docker facilitates
collaboration and accelerates the software development lifecycle.
Docker images
A Docker image encompasses all the necessary components to package and
deploy your application seamlessly. It encapsulates various tasks, including
restoring NuGet packages, downloading additional Docker images, building,
testing, and packaging your application.
Key features and benefits of Docker images include:
Dependency management: Docker images allow you to define and
manage application dependencies, ensuring that all required packages
and libraries are readily available within the image. This eliminates
manual installation and streamlines the deployment process.
Reproducibility: Docker images provide a consistent and reproducible
environment for running your application. By bundling all the required
tasks and dependencies within the image, you can ensure that the
application behaves consistently across different environments and
deployments.
Portability: Docker images are portable and platform-agnostic. Once
an image is created, it can be shared and deployed on any machine or
platform that supports Docker, making it easy to move and run
applications across various environments without compatibility issues.
Build and test automation: Docker images enable the automation of
build and test processes. You can define the necessary steps and
commands within the image to perform tasks such as building, testing,
and packaging your application, ensuring consistency and reliability
throughout the development and deployment lifecycle.
Versioning and rollbacks: Docker images support versioning,
allowing you to track and manage changes to the image over time. This
enables seamless rollbacks to previous versions if needed, providing a
safety net for managing application updates and maintaining stability.
By utilizing Docker images, developers can streamline the packaging,
deployment, and testing of their applications. The encapsulation of tasks and
dependencies within the image simplifies the development workflow,
promotes reproducibility, and ensures consistent behavior across different
environments. With Docker images, you have a powerful tool to achieve
efficient and reliable application delivery in containerized environments.
Container images vs. Docker images
In the context of Docker, the terms Docker Container images and Docker
images are often used interchangeably. However, if we want to make a
distinction, we can consider the following:
Docker images: The term Docker images refers to the standardized,
read-only templates that are used to create Docker containers. A
Docker image is a standalone, executable package that includes
everything needed to run an application, including the code, runtime,
system tools, libraries, and dependencies. Docker images are created
using a Dockerfile, which contains instructions on how to build the
image.
Docker container images: The term Docker container images can be
used to specifically emphasize that we are referring to images that are
used for configuring and running Docker containers. Docker container
images are essentially the same as Docker images. They are portable,
self-contained units that contain all the necessary components to run an
application within a containerized environment.
To summarize, both Docker container images and Docker images refer to the
same concept: the self-contained templates used to create Docker containers.
The term Docker images is more commonly used, while Docker container
images can be used to emphasize the context of using images for configuring
and running containers.
Docker commands
Docker provides a comprehensive set of commands that are fundamental for
creating, managing, and interacting with containers and images. These
essential Docker commands empower you to effectively manage your
containerized environments:
Docker build: This command builds a Docker image based on the
instructions specified in the Dockerfile. It is used to create a
customized image that includes all the necessary dependencies and
configurations for your application.
Docker run: With this command, you can instantiate a container from
a specific Docker image and start running it. It sets up the container's
runtime environment, network, and other configurations defined in the
image, allowing your application to run in an isolated and portable
manner.
Docker ps: This command lists the running containers on your system.
It provides valuable information such as container IDs, names, status,
and resource usage. The docker ps command allows you to monitor
and manage your running containers effectively.
Docker stop: When you need to stop one or more running containers,
the docker stop command comes in handy. It gracefully stops the
specified container(s), allowing for a controlled shutdown and release
of resources.
Docker rm: This command enables you to remove one or more
containers from your system. It permanently deletes the specified
container(s) and frees up system resources. Properly cleaning up
containers that are no longer needed is important for efficient resource
utilization.
Docker rmi: When you want to remove one or more Docker images
from your local repository, the docker rmi command is used. It deletes
the specified image(s) and frees up disk space. This command helps
manage your image repository and ensures that you only retain
necessary images.
Docker image: The docker image command is a versatile tool for
managing Docker images. It allows you to list, build, tag, inspect, and
perform various operations related to Docker images. This command
provides a range of functionalities to effectively manage your image
collection.
By leveraging these key Docker commands, such as build, run, ps, stop, rm,
rmi, and image, you can efficiently create, manage, and interact with
containers and images, enabling seamless development, deployment, and
maintenance of your applications.
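As a short, hedged example of how these commands fit together (the image name my-web-api, the tag, and the port mapping are placeholders for illustration, not values taken from this chapter's case study):
# Build an image from the Dockerfile in the current directory
docker build -t my-web-api:1.0 .
# Run it in the background, mapping host port 8080 to container port 8080
docker run -d -p 8080:8080 --name my-web-api-container my-web-api:1.0
# Inspect the running containers, then stop and clean up
docker ps
docker stop my-web-api-container
docker rm my-web-api-container
docker rmi my-web-api:1.0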
Azure DevOps
Azure DevOps is a comprehensive set of development tools and services
provided by Microsoft to support the entire DevOps lifecycle. It enables
teams to plan, develop, test, and deploy applications efficiently, fostering
collaboration, automation, and continuous integration and delivery.
Azure DevOps offers a range of features and capabilities that can be used
individually or as an integrated suite. These include:
Project management: Azure Boards provides agile planning and
tracking capabilities, allowing teams to manage work items, track
progress, and visualize workflows.
Source control: Azure Repos offers version control for code
repositories, supporting both Git and Team Foundation Version
Control (TFVC). It enables collaboration, branching, and merging of
code changes.
CI/CD: Azure Pipelines automates the build, test, and deployment
processes, allowing teams to define pipelines as code and achieve
continuous integration and delivery. It supports various programming
languages and platforms.
Artifact management: Azure Artifacts provides a centralized
repository for managing and sharing dependencies, such as NuGet
packages, npm packages, and Maven artifacts. It enables easy
versioning, publishing, and consumption of artifacts.
Test management: Azure Test Plans facilitates test case management,
exploratory testing, and test execution. It helps teams plan, track, and
analyze their testing efforts, ensuring quality throughout the
development lifecycle.
Collaboration: Azure Boards, Azure Repos, and Azure Pipelines
integrate with popular collaboration tools like Microsoft Teams,
enabling seamless communication, visibility, and transparency across
teams.
Analytics and insights: Azure DevOps provides built-in analytics and
reporting capabilities, offering insights into code quality, pipeline
performance, work item tracking, and more. This data-driven approach
helps teams make informed decisions and continuously improve their
processes.
Azure DevOps supports various development scenarios and can be used for
projects of any size, from small teams to enterprise-scale deployments. It
integrates well with other Azure services and popular development tools,
providing flexibility and extensibility.
By leveraging Azure DevOps, development teams can streamline their
workflows, improve collaboration, automate processes, and achieve faster
and more reliable delivery of software applications.
Continuous integration
Continuous integration is a crucial development practice that promotes
frequent integration of code changes into a shared repository, triggered
automatically whenever a developer pushes a commit. It facilitates an
automated build process and allows for the use of tools that analyze code,
promptly detecting and highlighting any issues that arise.
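For instance, a minimal CI definition in Azure Pipelines (a sketch only; the trigger branch and the project globs are placeholder assumptions, and this differs from the Docker pipeline used later in this chapter's case study) could build and test a .NET solution on every push:
trigger:
- main

pool:
  vmImage: 'ubuntu-latest'

steps:
- task: DotNetCoreCLI@2
  displayName: 'Build the solution'
  inputs:
    command: 'build'
    projects: '**/*.csproj'
- task: DotNetCoreCLI@2
  displayName: 'Run unit tests'
  inputs:
    command: 'test'
    projects: '**/*Tests.csproj'
Every successful run gives fast feedback that the latest commit still builds and passes its tests.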
Continuous deployment
Continuous deployment is a software delivery approach that automates the
build and deployment process, enabling rapid and reliable deployment of
software into production. By automating the entire deployment phase, it
eliminates the need for manual and time-consuming steps, streamlining the
release process and saving valuable time and resources.
Case study
In this practical case study, we will apply the concepts and techniques learned
in this chapter to the project created in the previous chapter about SignalR.
Our goal is to demonstrate the step-by-step implementation of the following
actions:
1. Creating a Dockerfile: We will create a Dockerfile, which is a text file
that contains instructions for building a Docker image. The Dockerfile
defines the environment, dependencies, and configurations needed for
our application. We will carefully craft the Dockerfile to ensure the desired runtime environment and include any necessary build steps (a sketch of such a Dockerfile follows this list).
2. Building a Docker image: Using the Dockerfile, we will build a Docker
image. The image is a lightweight, portable, and self-contained package
that includes everything needed to run our application. We will follow
the best practices and leverage Docker commands to build the image
efficiently, considering factors such as image size, caching, and layering.
3. Applying CI and CD: Once we have our Docker image, we will
integrate it into a CI/CD pipeline using Azure DevOps. We will
configure a build pipeline to automatically build the Docker image
whenever changes are pushed to the repository. This will ensure that the
image stays up to date with the latest code changes and dependencies.
Next, we will set up a release pipeline to deploy the Docker image to the
desired target environment, such as a development, staging, or production
environment. The release pipeline will handle the deployment process,
including steps like container registry authentication, image tagging, and
orchestrating the deployment to the target platform.
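Before moving on, here is a minimal sketch of what such a Dockerfile typically looks like for an ASP.NET Core project (the .NET 8 image tags and the SignalRSampleApp assembly name are assumptions for illustration; adjust them to match the project from the previous chapter):
# Build stage: restore, build, and publish the application
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /src
COPY . .
RUN dotnet restore
RUN dotnet publish -c Release -o /app/publish

# Runtime stage: copy only the published output into a smaller base image
FROM mcr.microsoft.com/dotnet/aspnet:8.0 AS runtime
WORKDIR /app
COPY --from=build /app/publish .
EXPOSE 8080
ENTRYPOINT ["dotnet", "SignalRSampleApp.dll"]
The multi-stage build keeps the final image small, since only the published output, and not the SDK or intermediate build artifacts, is copied into the runtime image.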
Throughout this practical case study, we will explore the various
configuration options, settings, and best practices for building Docker
images, implementing CI, and orchestrating CD using Azure DevOps. By
following along with the hands-on examples, you will gain practical
experience in creating Dockerfiles, building Docker images, and automating
the CI/CD process for containerized applications.
The following project solution will serve as our focal point for applying the
concepts and practices covered in this case study, as we can see from the
figure below:
To validate that the image was successfully created, we can execute the following command and check the output in the command prompt:
docker images
Figure 11.3: Command prompt with the command to list all the images from the Docker
Figure 11.5: Command prompt with the output from running the image
From Docker Desktop, we can see the container created with the image
running on the specified ports:
Figure 11.6: Docker desktop with the container created and image running on it
Figure 11.8: Repository with the Web APP project in Azure DevOps
Start with an empty .yml file and paste the following code, which is the .yml that we are using:
# Docker
# Build a Docker image
# https://docs.microsoft.com/azure/devops/pipelines/languages/

trigger:
- main

resources:
- repo: self

variables:
  tag: '$(Build.BuildId)'

stages:
- stage: Build
  displayName: Build image
  jobs:
  - job: Build
    displayName: Build
    pool:
      vmImage: 'ubuntu-latest'
    steps:
    - task: Docker@2
      inputs:
        containerRegistry: 'dockerVivastaa'
        repository: 'vivastaa/devops'
        command: 'build'
        Dockerfile: '**/Dockerfile'
    - task: Docker@2
      inputs:
        containerRegistry: 'dockerVivastaa'
        repository: 'vivastaa/devops'
        command: 'logout'
        Dockerfile: '**/Dockerfile'
    - task: Docker@2
      inputs:
        containerRegistry: 'dockerVivastaa'
        repository: 'vivastaa/devops'
        command: 'login'
        Dockerfile: '**/Dockerfile'
    - task: Docker@2
      inputs:
        containerRegistry: 'dockerVivastaa'
        repository: 'vivastaa/devops'
        command: 'push'
        Dockerfile: '**/Dockerfile'
The .yml above has four tasks:
The first builds the project and creates the Docker image.
The second is a workaround for authentication issues in Azure DevOps: we have to make sure that we have logged out from the container registry.
The third logs in again.
The fourth pushes the image created in the first step.
In the previous .yml there are some references to dockerVivastaa, which is a service connection. It handles the connection between Azure DevOps and the Docker Container Registry used in this case study. We can see the dockerVivastaa service connection in the image below:
Figure 11.11: Azure DevOps Service Connection for Docker Container Registry
After saving the .yml and running the pipeline, we have the output shown in the image below.
To manually start a pipeline from the Pipelines section in Azure DevOps,
follow these steps:
1. Navigate to your Azure DevOps project and select Pipelines from the left
sidebar.
2. In the Pipelines section, you will see a list of all your pipelines. Find the
pipeline you want to run.
3. Click on the name of the pipeline to open its details page.
4. On the pipeline details page, click the Run pipeline button at the top
right corner.
5. A dialog box will appear, allowing you to select the branch and
configure any parameters for the run.
6. After configuring the desired settings, click Run to start the pipeline:
After configuring, we can run the pipeline; the output should look like the one below:
Conclusion
In this chapter, we explored the implementation of a CI/CD pipeline for
containerized applications using Docker and Azure DevOps. We began by
understanding the fundamentals of Docker and its role in containerization,
followed by an overview of Azure DevOps as a comprehensive DevOps
platform. By leveraging Docker and Azure DevOps together, we can
streamline the CI/CD process, ensuring consistent and efficient application
deployment.
We covered essential Docker commands for building and deploying
containerized applications, enabling us to package and distribute our
applications effectively. Additionally, we discussed the concept of CI and its
significance in automating code builds and tests, improving development
efficiency and software quality.
Furthermore, we delved into CD and learned how Azure DevOps automates
the CD pipeline, allowing for seamless application deployment across
different environments. By configuring build pipelines, release pipelines, and
managing environments in Azure DevOps, we achieved a streamlined and
automated CI/CD workflow.
Through a practical case study, we applied our knowledge and skills to build
a sample containerized application, set up a complete CI/CD pipeline using
Azure DevOps, and successfully deployed the application to production
environments. This hands-on experience solidified our understanding of the
CI/CD process with Docker and Azure DevOps.
By mastering the concepts and techniques covered in this chapter, you are
now equipped to implement efficient and scalable CI/CD pipelines for
containerized applications. Leveraging Docker and Azure DevOps, you can
ensure consistent and reliable application deployment, enabling faster
delivery and improved software quality. With CI/CD, you can enhance your
development process, respond to changing demands swiftly, and deliver
value to your users more efficiently. In the upcoming chapter, you will
explore the powerful capabilities of .NET MAUI and the unique features of
Blazor Hybrid. The chapter begins with a comprehensive overview of .NET
MAUI, delving into its fundamental concepts and capabilities to establish a
solid foundation for multi-platform app development. Discover the
distinctions between Blazor and Blazor Hybrid, gaining insights into when to
leverage each technology.
Introduction
Welcome to the exciting world of building multi-platform apps with .NET
Multi-platform App UI (MAUI) and Blazor Hybrid. In this chapter, we
will embark on a journey that combines the power of .NET MAUI with the
innovative approach of Blazor Hybrid to create cutting-edge applications that
run seamlessly across various platforms.
In the first section, we will provide you with a comprehensive overview of
.NET MAUI, the next evolution of Xamarin.Forms. We will explore its
capabilities, advantages, and how it empowers developers to build native
applications for iOS, Android, macOS, and Windows using a single
codebase. Understanding the fundamentals of .NET MAUI is crucial as it sets
the foundation for the rest of our exploration.
Next, we will delve into the key differences between Blazor and Blazor
Hybrid. While Blazor allows developers to build web applications using C#
and .NET, Blazor Hybrid introduces a revolutionary concept that combines
web technologies with native app development. By understanding these
distinctions, we will gain insight into how Blazor Hybrid can enhance our
cross-platform development process.
The heart of this chapter lies in our in-depth case study, where we will take
you through a step-by-step implementation of a multi-platform app using
.NET MAUI and Blazor Hybrid. You will learn how to create a .NET MAUI
project from scratch and set up the Blazor Hybrid UI, where we will leverage
web technologies to build interactive user interfaces. By following this
practical example, you will gain hands-on experience and a solid
understanding of the integration between .NET MAUI and Blazor Hybrid.
So, whether you are a seasoned .NET developer curious about the latest
advancements or someone eager to explore the potential of cross-platform
development, this chapter will equip you with the knowledge and skills to
build versatile and feature-rich applications that cater to diverse platforms
and audiences.
Let us dive into the world of multi-platform apps with .NET MAUI and
Blazor Hybrid and unlock the possibilities of creating groundbreaking
experiences for users worldwide.
Structure
This chapter covers the following topics:
.NET MAUI overview
Differences between Blazor and Blazor Hybrid
Case study with step-by-step implementation
Creating the .NET MAUI project
Using Blazor Hybrid UI from Desktop client
Using Blazor Hybrid UI from mobile client
Objectives
In this chapter, we will discuss .NET MAUI essentials, exploring core
concepts for robust cross-platform apps. Differentiate between Blazor and
Blazor Hybrid, understanding the web framework and its hybrid integration
for diverse platforms. Follow a step-by-step case study to create a multi-
platform app, seamlessly blending .NET MAUI with Blazor Hybrid UI.
Leverage web technologies like HTML, CSS, and C# within native
applications for responsive interfaces. Craft cross-platform apps effortlessly
on iOS, Android, macOS, and Windows from a single codebase. Uncover the
power of .NET MAUI and Blazor Hybrid, enhancing development skills for
innovative, feature-rich applications across varied platforms, fostering
creativity and staying at the forefront of modern development.
Technologies — Blazor: uses HTML, CSS, and Razor syntax. Blazor Hybrid: utilizes web technologies and Razor syntax within native apps.
Server dependencies — Blazor: requires a web server when running server-side. Blazor Hybrid: integrates with native components and APIs.
Code sharing — Blazor: code is specific to the web, with less code sharing. Blazor Hybrid: facilitates code sharing between different platforms.
Table 12.1: Differences between Blazor and Blazor Hybrid.
In summary, Blazor is primarily focused on building web applications that
run in the browser, whereas Blazor Hybrid is designed for creating cross-
platform apps that combine web technologies with native app development.
Blazor Hybrid leverages .NET MAUI to extend its reach to various platforms
and provide a more native-like experience to users.
Visual Studio will provide us with a project ready to run. Our project solution should look like the picture below; now press F5:
Figure 12.2: Visual Studio solution with the items from the template.
Figure 12.3: Visual Studio message saying that you should enable developer mode.
If this is your first time running an Android Emulator, you will need to set it up first by creating an Android device as follows:
Figure 12.7: Creating an Android Device for the Android Emulator.
After successfully setting up the Android Emulator, we can finally run our
project from an Android device:
Figure 12.9: Our Blazor Hybrid project running from a Pixel 5 Android Emulator.
Note: We have not made any changes to the original project created
from Visual Studio’s template.
Working with Blazor Hybrid UI is an exhilarating endeavor that offers a
seamless experience for developers. The beauty of this technology lies in its
ability to fuse web and native app development, creating components that
resemble those from web applications. The best part is that it is remarkably
straightforward.
Blazor Hybrid UI empowers developers to build components that are rich in
interactivity and responsive design, much like those seen in web applications.
By using familiar web technologies, such as HTML, CSS, and C#, you can
effortlessly craft dynamic user interfaces, complete with event handling and
real-time updates.
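As a small illustration (this mirrors the Counter component generated by the default Blazor Hybrid template rather than anything specific to our sample), a Razor component with state and event handling looks like this:
@page "/counter"

<h1>Counter</h1>

<p>Current count: @currentCount</p>

<button class="btn btn-primary" @onclick="IncrementCount">Click me</button>

@code {
    private int currentCount = 0;

    // Runs on the button's click event; Blazor re-renders the component afterwards
    private void IncrementCount()
    {
        currentCount++;
    }
}
The same component renders inside the native .NET MAUI shell through the BlazorWebView control, which is exactly what makes the hybrid approach feel like regular web development.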
So, whether you are accustomed to web development or are a seasoned app
developer, you will find that creating Blazor Hybrid UI components is an
intuitive process. This ease of use, combined with the power of .NET MAUI,
makes it an exciting platform to bring your app development ideas to life, all
while providing your users with a captivating and seamless experience across
multiple platforms. Let us dive into the details and discover just how
accessible and versatile working with Blazor Hybrid UI can be.
Conclusion
Throughout our exploration, we began by understanding the essence of .NET
MAUI and discovering how it simplifies the process of building native apps
for multiple platforms. By learning the key differences between Blazor and
Blazor Hybrid, we gained valuable insights into the potential of integrating
web technologies with native app development.
The highlight of our chapter was the comprehensive case study, where we
took you through a practical implementation of a multi-platform app using
.NET MAUI and Blazor Hybrid. By following the step-by-step instructions,
you've experienced firsthand the power of combining these technologies to
create interactive and versatile user interfaces that run seamlessly on various
devices.
As you continue your journey as a developer, remember that the world of
technology is ever-evolving. Staying curious and continuously learning is
essential to stay ahead in the rapidly changing landscape of cross-platform
app development.
We hope you feel inspired to experiment further with .NET MAUI and
Blazor Hybrid, incorporating these cutting-edge technologies into your own
projects and exploring their potential to revolutionize the way we build
applications.
Thank you for joining us on this adventure! We wish you success and
fulfillment as you create remarkable multi-platform apps that make a positive
impact on the lives of users worldwide.
In the upcoming chapter, Introducing WinUI, the native UX for Windows
Desktop and UWP Apps, we will discuss the fundamentals of WinUI,
Microsoft's native user experience (UX) framework for Windows Desktop
and Universal Windows Platform (UWP) applications. The overview
provides insights into WinUI's role in enhancing the visual and interactive
aspects of applications on the Windows platform. Topics covered include the
core concepts of WinUI, its integration with Windows development, and its
capabilities for creating modern and responsive user interfaces. The chapter
aims to equip developers with a foundational understanding of WinUI,
empowering them to leverage this framework for building immersive and
user-friendly Windows applications.
CHAPTER 13
Windows UI Library: Crafting
Native Windows Experience
Introduction
In this chapter, we will discuss the Windows UI Library (WinUI), the
native User Experience (UX) framework for Windows Desktop and
Universal Windows Platform (UWP) apps. We will explore the powerful
and visually appealing capabilities of WinUI, as well as the elegance of the
Fluent Design System, Microsoft's design language for creating modern
applications.
With WinUI, developers can build native Windows applications that
seamlessly blend into the user's environment, delivering a consistent and
immersive experience across different devices and form factors. Whether you
are developing for traditional desktop computers or cutting-edge UWP
devices, WinUI provides the tools and components you need to create
applications that feel right at home on the Windows platform.
We will begin with an overview of WinUI, understanding its significance as
the native UX for Windows apps and how it enhances the overall user
experience. Then, we will dive into the Fluent Design System, which serves
as the foundation for crafting beautiful and intuitive interfaces. By following
a step-by-step case study, you will witness the transformation of a concept
into a fully functional WinUI-powered application, learning key
implementation techniques along the way.
Let us dive into WinUI, where we will discover the art of designing and
developing stunning Windows applications that captivate users and leave a
lasting impression.
Structure
This chapter covers the following topics:
Introducing WinUI, the native UX for Windows Desktop and UWP
Apps
Fluent Design System
Case study with step-by-step implementation
Creating the project
Design the UI
Implementing the cache
Data transfer between pages
Objectives
In this chapter, you will grasp WinUI's importance in crafting native UX for
Windows apps, exploring the Fluent Design System and its principles.
Follow a step-by-step case study to build a real app, from setup to
deployment. Learn to design engaging UIs and implement app logic
seamlessly. By chapter's end, you will confidently deploy WinUI apps, ready
to create delightful user experiences on Windows Desktop and UWP
platforms, showcasing expertise in modern app development.
Disabled (0) — The page is never cached and a new instance of the page is created on each visit.
Enabled (2) — The page is cached, but the cached instance is discarded when the size of the cache for the frame is exceeded.
Required (1) — The page is cached and the cached instance is reused for every visit regardless of the cache size for the frame.
Table 13.1: Navigation cache mode options
The XAML for the new page, with NavigationCacheMode set on the Page element:
NewPage.xaml:
<Page
    x:Class="WinUISampleProject.NewPage"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:local="using:WinUISampleProject"
    xmlns:muxc="using:Microsoft.UI.Xaml.Controls"
    xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
    xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
    mc:Ignorable="d"
    Background="{ThemeResource ApplicationPageBackgroundThemeBrush}"
    NavigationCacheMode="Enabled">

    <StackPanel VerticalAlignment="Center">
        <TextBlock x:Name="greeting" HorizontalAlignment="Center"/>
        <TextBlock x:Name="dateAndTime" HorizontalAlignment="Center"/>
        <HyperlinkButton Content="Click to go back"
                         Click="HyperlinkButton_Click"
                         HorizontalAlignment="Center"/>
    </StackPanel>
</Page>
The new page's C# code, with the button click event handler:
NewPage.xaml.cs:
public sealed partial class NewPage : Page
{
    public NewPage()
    {
        this.InitializeComponent();
    }

    private void HyperlinkButton_Click(object sender, RoutedEventArgs e)
    {
        Frame.Navigate(typeof(MainPage));
    }

    protected override void OnNavigatedTo(NavigationEventArgs e)
    {
        if (e.Parameter is PayloadDTO &&
            ((PayloadDTO)e.Parameter).Name != null &&
            ((PayloadDTO)e.Parameter).Feel != null)
        {
            var payload = (PayloadDTO)e.Parameter;
            greeting.Text = $"Hello, {payload.Name}. We are happy to know that you are feeling {payload.Feel}";
        }
        else
        {
            greeting.Text = "Hello!";
        }

        dateAndTime.Text = $"Today is {DateTime.UtcNow}";

        base.OnNavigatedTo(e);
    }
}
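For context, OnNavigatedTo receives its PayloadDTO through the Frame.Navigate call made on the source page. The DTO itself is defined elsewhere in the project; based on the usage above it exposes at least Name and Feel. A minimal sketch of the sending side (the handler name and the MainPage control names are assumptions for illustration) could look like this:
// In MainPage.xaml.cs: navigate to NewPage, passing a payload object as the parameter
private void GoToNewPage_Click(object sender, RoutedEventArgs e)
{
    var payload = new PayloadDTO
    {
        Name = nameTextBox.Text,   // assumed TextBox controls on MainPage
        Feel = feelTextBox.Text
    };

    // The object passed here arrives as e.Parameter in NewPage.OnNavigatedTo
    Frame.Navigate(typeof(NewPage), payload);
}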
Below we can see the animation page, with a grid and our image inside it:
DestinationAnimation.xaml:
1. <Page
2.
x:Class="WinUISampleProject.DestinationAnimation"
3.
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/pre
4.
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
5. xmlns:local="using:WinUISampleProject"
6.
xmlns:muxc="using:Microsoft.UI.Xaml.Controls"
7.
xmlns:d="http://schemas.microsoft.com/expression/blend/
8.
xmlns:mc="http://schemas.openxmlformats.org/markup-
compatibility/2006"
9. mc:Ignorable="d"
10. Background="{ThemeResource
ApplicationPageBackgroundThemeBrush}">
11.
12. <Grid>
13. <Image x:Name="DestinationImage"
14. Width="400" Height="400"
15. Stretch="Fill"
16. Source="/Images/DestinationImage.png"/>
17. </Grid>
18. </Page>
Below we can see the animation code in C#, overriding the OnNavigatedTo method:
DestinationAnimation.xaml.cs:
public sealed partial class DestinationAnimation : Page
{
    public DestinationAnimation()
    {
        this.InitializeComponent();
    }

    protected override void OnNavigatedTo(NavigationEventArgs e)
    {
        base.OnNavigatedTo(e);

        ConnectedAnimation animation =
            ConnectedAnimationService.GetForCurrentView()
                .GetAnimation("forwardAnimation");
        if (animation != null)
        {
            animation.TryStart(DestinationImage);
        }
    }
}
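The destination page above looks up an animation registered under the key forwardAnimation. That animation has to be prepared on the source page before navigating; a minimal sketch (assuming the source page contains an Image named SourceImage with a Tapped handler wired up, names invented for illustration) could look like this:
// On the source page: register the connected animation, then navigate
private void SourceImage_Tapped(object sender, TappedRoutedEventArgs e)
{
    // Capture the source element under the same key the destination page looks up
    ConnectedAnimationService.GetForCurrentView()
        .PrepareToAnimate("forwardAnimation", SourceImage);

    Frame.Navigate(typeof(DestinationAnimation));
}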
You may access this project on our GitHub repository.
Conclusion
We have reached the end of our exploration into WinUI, the native UX for
Windows Desktop and UWP Apps. We hope this chapter has provided you
with valuable insights into the power and potential of WinUI, empowering
you to create immersive and visually stunning applications for the Windows
platform.
Throughout this chapter, we covered the foundational concepts of WinUI and
how it acts as the bridge between traditional Windows Desktop applications
and the modern Universal Windows Platform. By leveraging the Fluent
Design System, you now have the tools to design user interfaces that are not
only aesthetically pleasing but also intuitive and user-friendly.
The step-by-step case study allowed you to witness the practical
implementation of WinUI in action, taking a concept from the drawing board
to a fully functional application. Armed with this knowledge, you are well-
equipped to embark on your own projects, delivering delightful experiences
to users on various Windows devices.
As you continue your journey as a Windows app developer, remember to stay
updated with the latest advancements in the WinUI framework and the Fluent
Design System. The landscape of UX design is ever-evolving, and keeping
abreast of new features and best practices will ensure that your applications
remain at the forefront of innovation.
Thank you for joining us in this exploration of WinUI and the Fluent Design
System. We look forward to seeing the incredible applications you will
create, enriching the Windows ecosystem with your creativity and expertise.
In the upcoming chapter, we will discuss essential practices for ensuring the
robustness and reliability of your code. Covering topics such as unit testing
with NUnit and xUnit, you will learn how to systematically validate
individual units of your code for correctness. Additionally, we will explore
the usage of Mocks to simulate dependencies for more effective testing.
Moreover, you will master debugging techniques to efficiently identify and
resolve issues within your codebase. This chapter equips you with the
necessary skills to enhance code quality and streamline the development
process.
Introduction
In the world of software development, testing and debugging are crucial
aspects of ensuring the reliability and stability of our code. Unit testing
allows us to validate the individual units of our code in isolation, ensuring
that each component works as expected. Additionally, debugging is an
essential skill that helps us identify and resolve issues when our code does
not behave as intended. Throughout this chapter, we will explore the
fundamentals of unit testing using xUnit, a popular testing framework that
provides a robust and efficient way to write unit tests. We will learn how to
create test cases, execute them, and interpret the results to ensure our code is
performing as expected. Making use of mocks is another essential concept
that will be covered in this chapter. Mocks help us simulate the behavior of
certain components or dependencies within our code, allowing us to focus
solely on the unit being tested. Understanding how to effectively utilize
mocks is crucial for writing comprehensive and efficient unit tests.
Finally, we will discuss mastering debugging. Even the most skilled
developers encounter bugs from time to time. Being able to identify, isolate,
and resolve these issues efficiently is what sets great developers apart. We
will explore various debugging techniques and tools that will empower you to
become a more proficient problem solver. By the end of this chapter, you will
gain the knowledge and skills necessary to write robust unit tests, use mocks
effectively, and confidently tackle the debugging process.
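As a small illustrative sketch of these two ideas together (the IWeatherProvider interface, the TemperatureReporter class, and the use of the Moq library are assumptions for demonstration, not code from the book's sample projects), a unit test that isolates a class from its dependency with a mock might look like this:
using Moq;
using Xunit;

public interface IWeatherProvider
{
    int GetTemperatureC(string city);
}

public class TemperatureReporter
{
    private readonly IWeatherProvider _provider;

    public TemperatureReporter(IWeatherProvider provider) => _provider = provider;

    public string Report(string city) => $"{city}: {_provider.GetTemperatureC(city)} °C";
}

public class TemperatureReporterTests
{
    [Fact]
    public void Report_UsesTemperatureFromProvider()
    {
        // Arrange: the mock stands in for the real weather dependency
        var mock = new Mock<IWeatherProvider>();
        mock.Setup(p => p.GetTemperatureC("Lisbon")).Returns(21);

        var reporter = new TemperatureReporter(mock.Object);

        // Act
        var result = reporter.Report("Lisbon");

        // Assert: the unit under test is verified in isolation
        Assert.Equal("Lisbon: 21 °C", result);
        mock.Verify(p => p.GetTemperatureC("Lisbon"), Times.Once);
    }
}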
Structure
This chapter covers the following topics:
Unit testing with xUnit
Making usage of mocks
Mastering debugging
Applying xUnit and mocks
Objectives
In this chapter, we will explore a structured journey through software testing
and debugging. Explore unit testing's role, xUnit framework, and crafting
effective tests. Dive into assertions, mocks, and advanced techniques for
scalable test code. Master debugging fundamentals, tools, and best practices.
Apply knowledge to real-world scenarios. Gain proficiency in xUnit for
comprehensive tests. Become skilled in debugging, delivering higher-quality
software. This comprehensive chapter covers everything from the basics to
advanced strategies, ensuring you are equipped to tackle any testing or
debugging challenge in your software development journey.
Mastering debugging
Debugging is the process of identifying, analyzing, and resolving issues or
bugs within software code. It is a crucial skill for software developers to
ensure the reliability, correctness, and performance of their applications.
Debugging involves systematically investigating the source of unexpected
behavior or errors and finding solutions to correct them. Here is an overview
of the debugging process:
Identifying the issue: The first step in debugging is recognizing that
there is a problem. This may be through user-reported issues, error
messages, unexpected behavior, or failing test cases. Understanding the
symptoms and gathering relevant information is vital for effective
debugging.
Reproducing the problem: Once the issue is identified, developers
need to reproduce the problem consistently. This involves identifying
the steps or conditions that trigger the bug and replicating them in a
controlled environment. Reproduction helps ensure that developers can
verify the issue and test potential fixes.
Inspecting the code: With the problem reproducible, developers
examine the code related to the issue. This involves carefully
inspecting the affected code and looking for logical errors, incorrect
assumptions, or unintended consequences.
Using debugging tools: Debugging is often facilitated by using
specialized debugging tools provided by integrated development
environments (IDEs) or language-specific debuggers. These tools
allow developers to set breakpoints, inspect variable values, step
through code execution, and observe the program's state during
runtime.
Setting breakpoints: Breakpoints are markers placed in the code to
pause its execution at specific points. When the program reaches a
breakpoint, developers can examine the current state of variables and
the call stack, helping them understand the flow of the program (a
short sketch of programmatic breakpoints follows this list).
Stepping through code: Debuggers allow developers to step through
the code one line at a time, stepping into function calls (step into) or
over them (step over). This helps them observe how the code behaves
and pinpoint where the issue occurs.
Inspecting variables: During debugging, developers can inspect the
values of variables at runtime to identify unexpected or incorrect data,
which may be causing the problem.
Fixing the issue: Once the root cause of the problem is identified,
developers work on implementing a solution. This may involve
correcting logical errors, adjusting algorithmic approaches, or fixing
implementation mistakes.
Regression testing: After implementing the fix, developers run
regression tests, for example from Test Explorer or with dotnet test, to
ensure that the changes have not introduced new issues and that the
original problem has been resolved.
Continuous improvement: Debugging is an iterative process, and
developers continuously improve their debugging skills. They learn
from previous debugging experiences, adopt best practices, and seek to
write more robust code to minimize future issues.
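As a complement to breakpoints set in the IDE, the following sketch shows two programmatic hooks from System.Diagnostics; the OrderCalculator class and its rules are hypothetical and exist only to illustrate the calls:
using System.Diagnostics;

public class OrderCalculator
{
    public decimal ApplyDiscount(decimal total, decimal discountRate)
    {
        // In Debug builds, halts under the debugger when the condition
        // is false, surfacing bad input exactly where it occurs.
        Debug.Assert(discountRate >= 0m && discountRate <= 1m,
            "Discount rate must be between 0 and 1.");

        if (Debugger.IsAttached && total < 0m)
        {
            // Programmatic breakpoint: pauses execution here only while
            // a debugger is attached, so normal runs are unaffected.
            Debugger.Break();
        }

        return total - (total * discountRate);
    }
}
Once execution is paused, whether at an IDE breakpoint or at Debugger.Break, you can inspect variables, examine the call stack, and step through the surrounding code exactly as described above.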
Debugging is an essential part of the software development lifecycle and
requires a combination of technical skills, analytical thinking, and attention to
detail. Effective debugging leads to improved software quality, reduced
maintenance efforts, and higher customer satisfaction. It is a skill that
developers continually refine and leverage to create reliable and resilient
software systems.
Conclusion
We hope you found this journey through the world of testing and debugging
both insightful and practical. Unit testing with xUnit has equipped you with
the ability to write automated tests that verify the correctness of individual
units in your code. You now understand the importance of testing in the
software development lifecycle and have the tools to build a suite of tests that
will provide you with confidence in your codebase.
The knowledge of using mocks has expanded your testing horizons, enabling
you to isolate components and dependencies when writing unit tests. This
technique empowers you to create more focused and efficient tests that can
adapt to different scenarios.
Mastering debugging is an invaluable skill that you have developed. You can
now skillfully identify, diagnose, and resolve issues within your code,
ensuring that your software functions as intended and delivers the expected
results to users. Remember that testing and debugging are continuous
processes in the life of a developer. Keep honing your skills, exploring new
testing methodologies, and staying up-to-date with the latest tools and
practices in the field.
As you move forward in your software development journey, always
remember that reliable code is the backbone of exceptional software
products. Embrace testing and debugging as integral parts of your
development workflow, and they will serve as the pillars of your success.
B
Backend for Frontend (BFF) 261
benefits 261, 262
case study 265-278
Blazor 175, 176
authorization and authentication 192-201
best practices 184, 185
practical case study 185, 186
versus, Blazor Hybrid 309, 310
versus, Razor 184
Blazor Hybrid 307
Blazor Server App project
creating 186-189
Blazor WebAssembly apps 180
Blob Storage 100, 120
Azure resource, creating 123-125
database, connecting 125-127
examples 121
features 121
scaling 122, 123
usage example 123
C
C# 11 19
C# 11 updates 21
auto-default structs 34-36
extended nameof scope 37
file-local types 30-32
generic attributes 24
generic math support 22
IntPtr 38
list patterns 27-30
method group conversion 40-42
newlines, in string interpolation expressions 25-27
pattern match Span<char> on constant string 36
raw string literals 21
ref fields 38, 39
required members 32-34
scoped ref 38, 39
UIntPtr 38
UTF-8 string literals 25
warning wave 42, 43
C# 12 19
C# 12 updates 43
alias any type 48
collection expression 44
default lambda parameters 45, 46
inline arrays 45
primary constructors 43
ref readonly parameters 47, 48
CI/CD pipeline
case study 294, 295
with Docker, in Azure DevOps 282, 283
client configuration options, SignalR 215
additional options 216-218
allowed transports configuration 215
bearer authentication configuration 216
logging configuration 215
Code First 55
benefits 55
implementation 55-57
Command Query Responsibility Segregation (CQRS) 261, 262
Command Model 262
Query Model 263
Commit Graph 3
Common Language Runtime (CLR) 59
continuous deployment 293
applying 304, 305
benefits 293, 294
continuous integration 292
applying 298-303
benefits 292, 293
Cosmos DB 99, 109
account, creating 113, 114
change feed notifications service 112, 113
Cosmos Containers 110
database, connecting to 115-120
examples 110
features 109, 110
scaling 111
stored procedures 112
triggers 111, 112
usage examples, for NoSQL 113
User-Defined Functions (UDFs) 112
CRUD operations 70
cross-platform application development 310
Blazor Hybrid UI, using from Desktop client 312, 313
Blazor Hybrid UI, using from mobile client 313-316
.NET MAUI project, creating 310, 311
D
Data Annotations 59
applying 59-61
common annotations 59
Database First approach 52, 53
advantages 53
implementation 53, 54
data binding 182, 183
chained data binding 183, 184
example 201-204
two-way data binding 183
Data Management 70
normal usage 70
repository pattern 74-76
unit of work pattern 72
Data Modeling 61
many-to-many relationship 68, 69
one-to-many relationship 65
one-to-one relationship 62
DbContext class 105
debugging 335, 338
overview 339
Denial-of-Service (DoS) attack 213
Discard Pattern 28
Docker 283, 284
advantages, for apps 287
Docker commands 290
docker build 290
docker image 291
docker ps 290
docker rm 291
docker rmi 291
docker run 290
docker stop 290
Docker container 284
functionalities 284
Docker container image 284
benefits 285
Dockerfile 287-289
creating 295, 296
for multi-stage builds 289, 290
Docker images
benefits 285, 286
creating 296, 297
running 297, 298
Document Object Model (DOM) elements 183
Domain Driven Design (DDD) 261, 264
principles 264, 265
E
Entity Framework Core (EF Core) 52
mastering 52
F
First-In-First-Out (FIFO) approach 129
Fluent Design System 319
applications 320
key principles 319, 320
H
horizontal scalability 256
benefits 257
considerations 258
hot reload
working 190, 191
I
integrated development environments (IDEs) 339
IntelliCode 4
Internet of Things (IoT) messaging 132
L
Language Integrated Query (LINQ) 26, 51, 70
LINQ to Entities 58, 59
Live Unit Testing 7
customizing, according to needs 10
supported testing frameworks 9
test methods, excluding and including 9, 10
test projects, excluding and including 9, 10
using 7, 8
M
many-to-many relationship 68, 69
MessagePack 211
Message Queuing Telemetry Transport (MQTT) 255
message sessions 152
microservices
architectural patterns 261
asynchronous communication 251
implementing, with WebAPIs 250, 251
microservices scalability 256
horizontal scalability 256-258
orchestrators 259, 260
vertical scalability 256-259
Microsoft Design Language 2 319
mocks 337
applying 340-342
using, in testing 337, 338
N
.NET hot reload 177
configuring 179, 180
supported frameworks and application types 177, 178
unsupported scenarios 178, 179
.NET MAUI 307
components 309
overview 308
O
object-relational mapping (ORM) tool 51
one-to-many relationship 65
optional one-to-many 66, 67
required one-to-many 65, 66
one-to-one relationship 62
optional one-to-one 63, 64
required one-to-one 62, 63
orchestrators 259, 260
P
Program Synthesis using Examples (PROSE) 4
R
RabbitMQ 251, 252
benefits 254, 255
features 252-254
raw string literals 21
Razor 184
repository pattern 74-76
S
SampleThiagoDb 53
scaling out 256
security, Blazor 180, 181
authorization 182
Authorize attribute 182
AuthorizeView component 182
Blazor server authentication 181
Blazor WebAssembly authentication 181
Session Queues 133
SignalR 208
advanced HTTP configuration 213-215
authentication 219, 232-242
authorization 219, 232-242
authorization handlers 222, 223
case study 224-232
claims 221
client configuration options 215
configuration 210, 211
custom authorization policy 243-247
examples 209
hubs 209, 210
hubs and hubs methods authorization 222
JSON encoding 211
MessagePack encoding 211
message transports 209
methods 210
real-time communication 208
server configuration options 211-213
Snapshot Debugger 11
snapshot debugging 11
required permissions and limitations 15
supported frameworks and environments 14, 15
using 11-14
SQL Server Management Studio (SSMS) 104
streaming hub 223
client-to-server streaming hub 224
server-to-client streaming hub 223
T
Team Foundation Version Control (TFVC) 291
test-driven development (TDD) 337
Time-to-Live (TTL) value 132
Time Travel Debugging (TTD) 15
limitation 17
snapshot, recording 16, 17
snapshot, viewing 17
using 16
U
unit of work pattern 72
unit testing 335
with xUnit 336, 337
Universal Windows Platform (UWP) apps 320
V
Var Pattern 30
vertical scalability 256-258
benefits 259
limitations 259
Visual Studio 2022 2
64-bit support 3, 4
Commit Graph 3
highlights 6, 7
hot reload, with new capabilities 5
interactive staging support 6
interface improvements and customization 6
multi-repository support 5
performance improvements 2, 3
Razor editor improvements 4, 5
Smarter IntelliCode 4
W
Web APIs 249
microservices, implementing with 250
Web apps
with Blazor and .NET 176
WinUI 318
advantages 318, 319
case study 320
WinUI project
cache, implementing 323, 324
creating 321
data transfer between pages 324-333
user interface, designing 322, 323
X
xUnit 336
applying 340-342
unit testing with 336