Developer's Guide to Enterprise Library 5 RC
Copyright
This document is provided as-is. Information and views expressed in this document, including URL
and other Internet Web site references, may change without notice. You bear the risk of using it.
Some examples depicted herein are provided for illustration only and are fictitious. No real association
or connection is intended or should be inferred.
This document does not provide you with any legal rights to any intellectual property in any Microsoft
product. You may copy and use this document for your internal, reference purposes. You may modify
this document for your internal, reference purposes.
Copyright © 2010 Microsoft. All rights reserved.
Microsoft, Visual Studio, and Windows are trademarks of the Microsoft group of companies. All other
trademarks are property of their respective owners.
Foreword
You are holding in your hands a book that will make your life as an enterprise developer a whole lot
easier.
It's a guide to Microsoft Enterprise Library, and it's meant to show you how to apply .NET for
enterprise development. Enterprise Library, developed by the patterns & practices group, is a collection
of reusable components, each addressing a specific crosscutting concern, be it system logging, data
validation, or exception management. Many of these can be taken advantage of easily. These
components are architecture agnostic and can be applied in a multitude of different contexts.
The book walks you through functional blocks of the Enterprise Library, which include data access,
caching, cryptography, exception handling, logging, security, and validation. It contains a large collection
of exercises, tricks and tips.
Developing robust, reusable, and maintainable applications requires knowledge of design patterns,
software architectures and solid coding skills. We can help you develop those skills with Enterprise
Library since it encapsulates proven and recommended practices of developing enterprise applications
on the .NET platform. Though this guide does not go into the depth of discussions of architecture and
patterns, it provides a solid basis for you to discover and implement these patterns from a reusable set
of components. That's why I also encourage you to check out the Enterprise Library source code and
read it.
This guide is not meant to be a complete reference on Enterprise Library. For that, you should go to
MSDN. Instead, the guide covers most commonly used scenarios and illustrates how Enterprise Library
can be applied in implementing those. The powerful message that emerges from the guide is the
importance of code reuse. In today's world of complex large software systems, high-quality pluggable
components are a must. After all, who can afford to write and then maintain dozens of different
frameworks in a system, all to accomplish the same thing? Enterprise Library allows you to take
advantage of proven code components to manage a wide range of tasks and leaves you free to
concentrate on the core business logic and other working parts of your application.
Another important emphasis of the guide is on software designs that are easy to configure,
testable, and maintainable. Enterprise Library has a flexible configuration subsystem that can be driven
by external config files, programmatically, or both. Leading by example, Enterprise Library itself is
designed in a loosely coupled manner. It promotes key design principles: separation of concerns, the
single responsibility principle, the principle of least knowledge, and the DRY principle (Don't Repeat
Yourself). Having said this, don't expect this particular guide to be a comprehensive reference on design
patterns. It is not. It provides just enough to demonstrate how key patterns are used with Enterprise
Library. Once you see and understand them, try to extrapolate them to other problems, contexts, and
scenarios.
The authors succeeded in writing a book that is targeted both at seasoned Enterprise Library
developers who would like to learn about the improvements in version 5.0, and at those who are
brand new to Enterprise Library. Hopefully, for the first group, it will help orient you and provide
a quick refresher of some of the key concepts. For the second group, the book should lower your learning
curve and get you going with Enterprise Library quickly.
Lastly, don't just read this book. It is meant to be a practical tutorial, and learning comes only through
practice. Experience Enterprise Library. Build something with it. Apply the concepts you learn in practice.
And don't forget to share your experience.
In conclusion, I am excited about both the release of Enterprise Library 5.0 and this book, especially
since they ship alongside and support some of our great new releases: Visual Studio 2010, .NET Framework 4.0,
and Silverlight 4, which together will make you, the developer, ever more productive.
Scott Guthrie
Corporate Vice-President
Microsoft .NET Developer Platform
Redmond, Washington
May 18, 2010
Preface
of your applications. By the way, if you are not familiar with the term crosscutting concerns, don't
worry; we'll explain it as we go along.
Enterprise Library is an extensive collection, with a great many moving parts. To the beginner it can
seem overwhelming and confusing, and knowing how to best take advantage of it is not completely
intuitive. Therefore, in this guide we'll help you to quickly understand what Enterprise Library is,
what it contains, how you can select and use just the specific features you require, and how easy it is
to get started using them. You will see how you can quickly and simply add Enterprise Library to your
applications, configure it to do exactly what you need, and then benefit from the simple-to-use, yet
extremely compelling opportunities it provides for writing less code that achieves more.
The first chapter of this guide discusses Enterprise Library in general, and provides details of the
individual parts so that you become familiar with the framework as a whole. The aim is for you to
understand the basic principles of each of the application blocks in Enterprise Library, and how you
can choose exactly which blocks and features you require. Chapter 1 also discusses the
fundamentals of using the blocks, such as how to configure them, how to instantiate the
components, and how to use these components in your code.
The remaining seven chapters discuss in detail the application blocks that provide the basic
crosscutting functionality such as data access, caching, logging, and exception handling. These
chapters explain the concepts that drove development of the blocks, the kinds of tasks they can
accomplish, and how they help you implement many well-known design patterns. And, of course,
they explain, by way of code extracts and sample programs, how you actually use the blocks in
your applications. After you've read each chapter, you should be familiar with the block and be able
to use it to perform a range of functions quickly and easily, in both new and existing applications.
Finally, the appendices present more detailed information on specific topics that you don't need to
know about in detail to use Enterprise Library, but are useful as additional resources and will help
you understand how features such as dependency injection, interception, and encryption fit into the
Enterprise Library world.
You can also download and work through the Hands-On Labs for Enterprise Library, which are
available at http://go.microsoft.com/fwlink/?LinkId=188936.
guide, we provide pointers to how you can do this and explain the kinds of providers that you may
be tempted to create, but it is not a topic that we cover in depth. These topics are discussed more
fully in the documentation installed with Enterprise Library and available online at
http://go.microsoft.com/fwlink/?LinkId=188874, and in the many other resources available from our
community Web site at http://www.codeplex.com/entlib.
For more information about the Dependency Injection (DI) design pattern and the associated
patterns, see "Inversion of Control Containers and the Dependency Injection pattern" at
http://martinfowler.com/articles/injection.html.
Microsoft .NET Framework 3.5 with Service Pack 1 or Microsoft .NET Framework 4.0.
Microsoft Visual Studio 2008 Professional, Visual Studio 2008 Team Edition, Visual
Studio 2010 Premium, Visual Studio 2010 Professional, or Visual Studio 2010
Ultimate Edition.
For the Data Access Application Block, the following is also required:
Microsoft Visual Studio 2008 Development System with Service Pack 1 (any edition)
or Microsoft Visual Studio 2010 Development System (any edition).
For the Logging Application Block, the following are also required:
Stores to maintain log messages. If you are using the MSMQ trace listener to store
log messages, you need the Microsoft Message Queuing (MSMQ) component
installed. If you are using the Database trace listener to store log messages, you
need access to a database server. If you are using the Email trace listener to store
log messages, you need access to an SMTP server.
Other than that, all you require is some spare time to sit and read, and to play with the example
programs. Hopefully you will find the contents interesting (and perhaps even entertaining), as well
as a useful source for learning about Enterprise Library.
Main Author: Grigori Melnik
Contributing Authors: Alex Homer
Reviewers
Graphic Artists
Editors
Architecture/Development
Testing
User Experience: Damon van Vessem, Heidi Adkisson, Jen Amsterlaw, and Kelly Franznick
(Blink Interactive); and Brad Cunningham (InterKnowlogy).
Documentation: Alex Homer (Microsoft Corporation) and Dennis DeWitt (Linda Werner &
Associates Inc.).
Editing/Production
Release Management: Richard Burte (ChannelCatalyst.com, Inc.) and Jennifer Burch (DCB Software
Testing, Inc.).
Administrative Support
Advisory Council
Thank you!
handling configuration and serialization, for example, are exposed and available for you to use in
your own applications.
And, on the grounds that you need to learn how to use any new tool that is more complicated than a
hammer or screwdriver, Enterprise Library includes a range of sample applications, descriptions of
key scenarios for each block, hands-on labs, and comprehensive reference documentation. You even
get all of the source code and the unit tests that the team created when building each block (the
team follows a test-driven design approach by writing tests before writing code). So you can
understand how it works, see how the team followed good practices to create it, and then modify it
if you want it to do something different. Figure 1 shows the big picture for Enterprise Library.
Figure 1
Enterprise Library - the big picture
OK, so Enterprise Library relies on the features of Unity to create objects within the blocks, but that
just shows how generally useful Unity is. In this book we'll be concentrating on the seven functional
blocks. If you want to know more about how you can use Unity and the Policy Injection Application
Block, check out the appendices for this guide. They describe the capabilities of Unity as a
dependency injection mechanism and the use of policy injection in more detail.
The following list describes the crosscutting scenarios you'll learn about in this book:
Caching. The Caching Application Block lets you incorporate a local cache in your
applications that uses an in-memory cache and, optionally, a database or isolated storage
backing store. The block provides all the functionality needed to retrieve, add, and remove
cached data, and supports configurable expiration and scavenging policies. You can also
extend it by creating your own pluggable providers or by using third-party providers, for
example, to support distributed caching and other features. Caching can provide
considerable improvements in performance and efficiency in many application scenarios.
Credential Management. The Security Application Block lets you easily implement common
authorization-related functionality, such as caching the user's authorization and
authentication data and integrating with the Microsoft .NET Framework security features.
Data Access. The Data Access Application Block simplifies many common data access tasks
such as reading data for display, passing data through application layers, and submitting
changed data back to the database system. It includes support for both stored procedures
and in-line SQL, can expose the data as a sequence of objects for client-side querying, and
provides access to the most frequently used features of ADO.NET in simple-to-use classes.
Exception Handling. The Exception Handling Application Block lets you quickly and easily
design and implement a consistent strategy for managing exceptions that occur in various
architectural layers of your application. It can log exception information, hide sensitive
information by replacing the original exception with another exception, and maintain
contextual information for an exception by wrapping the original exception inside another
exception.
Logging. The Logging Application Block simplifies the implementation of common logging
functions such as writing information to the Windows Event Log, an e-mail message, a
database, Windows Message Queuing, a text file, a Windows Management Instrumentation
(WMI) event, or a custom location.
Validation. The Validation Application Block provides a range of features for implementing
structured and easy-to-maintain validation mechanisms using attributes and rule sets, and
integrating with most types of application interface technologies.
the source code section of the site to see what the Enterprise Library team is working on as you read
this guide.
(Table: optional dependencies among the blocks, covering the Caching Block, Logging Block, and
Security Block.)
The configuration tools will automatically add the required block to your application configuration
file with the default configuration when required. For example, when you add a Logging handler to
an Exception Handling block policy, the configuration tool will add the Logging block to the
configuration with the default settings.
The seven application blocks we cover in this guide are the functional blocks that are specifically
designed to help you manage a range of crosscutting concerns. All of these blocks depend on the
core features of Enterprise Library, which in turn depend on the Unity dependency injection and
interception mechanism (the Unity Application Block) to perform object creation and additional
basic functions.
You only need to use those directly connected with your own scenario.
The runtime assemblies you will use in your applications are mostly less than 100 KB in size,
and the largest is only around 500 KB.
In most applications, the total size of all the assemblies you will use will be between 1 and 2
MB.
The assemblies you should add to any application that uses Enterprise Library are the common
(core) assembly, the Unity dependency injection mechanism (if you are using the default Unity
container), and the container service location assembly:
Microsoft.Practices.EnterpriseLibrary.Common.dll
Microsoft.Practices.Unity.dll
Microsoft.Practices.Unity.Interception.dll
Microsoft.Practices.ServiceLocation.dll
You will also need the assembly Microsoft.Practices.Unity.Configuration.dll if you wish to reference
specific Unity configuration classes in your code. However, in the majority of cases, you will not
require this assembly.
In addition to the required assemblies, you must reference the assemblies that implement the
Enterprise Library features you will use in your application. There are several assemblies for each
application block. Generally, these comprise a main assembly that has the same name as the block
(such as Microsoft.Practices.EnterpriseLibrary.Logging.dll), plus additional assemblies that
implement specific handlers or capabilities for the block. You only need these additional assemblies
if you want to use the features they add. For example, in the case of the Logging block, there is a
separate assembly for logging to a database
(Microsoft.Practices.EnterpriseLibrary.Logging.Database.dll). If you do not log to a database, you
do not need to reference this additional assembly.
GAC or Bin, Signed or Unsigned?
All of the assemblies are provided as precompiled signed versions that you can install into the global
assembly cache (GAC) if you wish. However, if you need to run different versions of Enterprise
Library assemblies side by side, this may be problematic and you may prefer to locate them in
folders close to your application.
You can then reference the compiled assemblies in your projects, which automatically copies them
to the bin folder. In a Web application, you can simply copy them directly to your application's bin
folder. This approach gives you simple portability and easy installation.
Alternatively, you can install the source code for Enterprise Library and use the scripts provided to
compile unsigned versions of the assemblies. This is useful if you decide to modify the source code
to suit your own specific requirements. You can strong name and sign the assemblies using your own
credentials afterwards if required.
For more information about side-by-side operation and other deployment issues, see the
documentation installed with Enterprise Library and available online at
http://go.microsoft.com/fwlink/?LinkId=188874.
Importing Namespaces
After you reference the appropriate assemblies in your projects, you will probably want to add using
statements to your project files to simplify your code and avoid specifying objects using the full
namespace names. Start by importing the two core namespaces that you will require in every
project that uses Enterprise Library:
Microsoft.Practices.EnterpriseLibrary.Common
Microsoft.Practices.EnterpriseLibrary.Common.Configuration
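In C#, that means adding the following statements at the top of each class file that uses Enterprise Library:

using Microsoft.Practices.EnterpriseLibrary.Common;
using Microsoft.Practices.EnterpriseLibrary.Common.Configuration;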
Depending on how you decide to work with Enterprise Library in terms of instantiating the objects it
contains, you may need to import two more namespaces. We'll come to this when we look at object
instantiation in Enterprise Library a little later in this chapter.
You will also need to import the namespaces for the specific application blocks you are using. Most
of the Enterprise Library assemblies contain several namespaces to organize the contents. For
example, as you can see in Figure 2, the main assembly for the Logging block (one of the more
complex blocks) contains a dozen subsidiary namespaces. If you use classes from these namespaces,
such as specific filters, listeners, or formatters, you may need to import several of these
namespaces.
Figure 2
Namespaces in the Logging block
This flexibility comes about because Enterprise Library uses configuration sources to expose
configuration information to the application blocks and the core features of the library. The
configuration sources can read configuration from standard .NET configuration files (such as
App.config and Web.config), from other files, from a database (using the example SQL Configuration
Source available from http://entlib.codeplex.com), and can also take into account Group Policy rules
for a machine or a domain.
In addition, you can use the fluent interface or the .NET configuration API to create and populate
configuration sources programmatically, merge parts of your configuration with a central shared
configuration, generate merged configuration files, and generate different configurations for
individual run-time environments. For more information about these more advanced configuration
scenarios, see Appendix D, "Enterprise Library Configuration Scenarios."
The Visual Studio configuration editor displays an interface very similar to that shown in Figure 3,
but allows you to edit your configuration files with a simple right-click in Solution Explorer.
2. Click the Blocks menu and select the block you want to add to the configuration. This adds
the block with the default settings.
If you want to use the configuration console to edit values in the <appSettings>
section of your configuration file, select Add Application Settings.
If you want to use an alternative source for your configuration, such as a custom
XML file, select Add Configuration Settings.
3. To view the configuration settings for each section, block, or provider, click the right-facing
arrow next to the name of that section, block, or provider. Click it again to collapse this
section.
4. To view the properties pane for each main configuration section, click the downward-facing
double arrow. Click it again to close the properties pane.
5. To add a provider to a block, depending on the block or the type of provider, you either
right-click the section in the left column and select the appropriate Add item on the
shortcut menu, or click the plus-sign icon in the appropriate column of the configuration
tool. For example, to add a new exception type to a policy in the Exception Handling block,
right-click the Policy item and click Add Exception Type.
When you rename items, the heading of that item changes to match the name. For
example, if you renamed the default Policy item in the Exception Handling block, the item
will show the new name instead of "Policy."
6. Edit the properties of the section, block, or provider using the controls in that section for
that block. You will see information about the settings required, and what they do, in the
subsequent chapters of this guide. For full details of all of the settings that you can specify,
see the documentation installed with Enterprise Library for that block.
7. To delete a section or provider, right-click the section or provider and click Delete on the
shortcut menu. To change the order of providers when more than one is configured for a
block, right-click the section or provider and click the Move Up or Move Down command on
the shortcut menu.
8. To set the default provider for a block, such as the default Database for the Data Access
block, click the down-pointing double arrow icon next to the block name and select the
default provider name from the drop-down list. In this section you can also specify the type
of provider used to encrypt this section, and whether the block should demand full
permissions.
For more details about encrypting configuration, see the next section of this chapter. For
information about running the block in partial trust environments, which requires you to
turn off the Require Permission setting, see the documentation installed with Enterprise
Library.
9. To use a wizard to simplify configuration for a common task, such as configuring logging to a
database, open the Wizards menu and select the one you require. The wizard will display a
series of dialogs that guide you through setting the required configuration.
10. If you want to configure different settings for an application based on different deployment
scenarios or environments, open the Environments menu and click New Environment. This
adds a drop-down list, Overrides on Environment, to each section. If you select Override
Properties in this list, you can specify the settings for each new environment that you add to
the configuration. This feature is useful if you have multiple environments that share the
same basic configuration but require different property settings. It allows you to create a
base configuration file (.config) and an environment delta file that contains the differences
(.dconfig). See Appendix D, "Enterprise Library Configuration Scenarios" for information on
configuring and using multiple environments.
11. As you edit the configuration, the lower section of the tool displays any warnings or errors
in your configuration. You must resolve all errors before you can save the configuration.
12. When you have finished configuring your application, use the commands on the File menu
to save it as a file in your application folder with the appropriate name; for example, use
Web.config for a Web application and App.config for a Windows Forms application.
You can, of course, edit the configuration files using a text or XML editor, but this is likely to be a
more tedious process compared to using the configuration console. However, it may be a useful
approach for minor changes to the configuration when the application is running on a server where
the configuration console is not installed. Enterprise Library also contains an XML configuration
schema that you can use to enable IntelliSense and simplify hand editing of the configuration files.
To enable the Enterprise Library XML schema in Visual Studio, open the configuration file, open the
XML menu, and click Schemas. In the XML Schemas dialog, locate the Enterprise Library schema and
change the value in the Use column to Use this schema. Then click OK.
Caching: ICacheManager
Cryptography: CryptographyManager
Data Access: Database
Exception Handling: ExceptionManager
Logging: LogWriter, TraceManager
Security: ISecurityCacheProvider, IAuthorizationProvider
Validation: ValidatorFactory, ConfigurationValidatorFactory, AttributeValidatorFactory, ValidationAttributeValidatorFactory
There are also task-specific objects that you can create directly in your code in the traditional way
using the new operator. For example, you can create individual validators from the Validation
Application Block, or log entries from the Logging Application Block. We show how to do this in the
examples for each application block chapter.
To use the features of an application block, all you need to do is create an instance of the
appropriate object, facade, or factory listed in the table above and then call its methods. The
behavior of the block is controlled by the configuration you specified, and often you can carry out
tasks such as exception handling, logging, caching, and encrypting values with just a single line of
code. Even tasks such as accessing data or validating instances of your custom types require only a
few lines of simple code. So, let's look at how you create instances of the Enterprise Library objects
you want to use.
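For example, the following line resolves the default Database object through the Enterprise Library service locator; the same pattern works for any of the types in the preceding table.

// Resolve the default Database object from the container.
var defaultDB = EnterpriseLibraryContainer.Current.GetInstance<Database>();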
Notice that this code uses type inference through the var keyword. The variable will assume the type
returned by the assignment; this technique can make your code more maintainable.
If you configured more than one instance of a type for a block, such as more than one Database for
the Data Access Application Block, you can specify the name when you call the GetInstance method.
For example, you may configure an Enterprise Library Database instance named Customers that
specifies a Microsoft SQL Server database, and a separate Database instance named Products that
specifies another type of database. In this case, you specify the name of the object you want to
resolve when you call the GetInstance method, as shown here.
var customerDb
= EnterpriseLibraryContainer.Current.GetInstance<Database>("Customers");
You don't have to initialize the block, read configuration information, or do anything other than call
the methods of the service locator. For many application scenarios, this simple approach is ideal for
obtaining instances of the Enterprise Library types you want to use.
The Sophisticated Approach: Accessing the Container Directly
If you want to take advantage of design patterns such as Dependency Injection and Inversion of
Control in your application, you will probably already be considering the use of a dependency
injection mechanism to decouple your components and layers, and to resolve types. If this is the
case, the more sophisticated approach to incorporating Enterprise Library into your applications will
fit well with your solution architecture.
Instead of allowing Enterprise Library to create, populate, and expose a default container that holds
just Enterprise Library configuration information, you can create the container and populate it
yourself, and hold onto a reference to the container for use in your application code. This not only
allows you to obtain instances of Enterprise Library objects, it also lets you use the container to
implement dependency injection for your own custom types. Effectively, the container itself
becomes your service locator.
For example, you can create registrations and mappings in the container that specify features such
as the dependencies between the components of your application, mappings between types, the
values of parameters and properties, interception for methods, and deferred object creation.
You may be thinking that all of these wondrous capabilities will require a great deal of code and
effort to achieve; however, they don't. To initialize and populate the default Unity container with the
Enterprise Library configuration information and make it available to your application, only a single
line of code is required. It is shown here:
var theContainer = new UnityContainer()
.AddNewExtension<EnterpriseLibraryCoreExtension>();
Now that you have a reference to the container, you can obtain an instance of any Enterprise Library
type by calling the container methods directly. For example, if you are using the Logging Application
Block, you can obtain a reference to a LogWriter using a single line of code, and then call its Write
method to write your log entry to the configured targets.
var writer = theContainer.Resolve<LogWriter>();
writer.Write("I'm a log entry created by the Logging block!");
And if you configured more than one instance of a type for a block, such as more than one database
for the Data Access Application Block, you can specify the name when you call the Resolve method,
as shown here:
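// Resolve a named Database instance registered in configuration.
var customerDb = theContainer.Resolve<Database>("Customers");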
You may have noticed the similarity in syntax between the Resolve method and the GetInstance
method we used earlier. Effectively, when you are using the default Unity container, the
GetInstance method of the service locator simply calls the Resolve method of the Unity container. It
therefore makes sense that the syntax and parameters are similar. Both the container and the
service locator expose other methods that allow you to get collections of objects, and there are both
generic and non-generic overloads that allow you to use the methods in languages that do not
support generics.
One point to note if you choose this more sophisticated approach to using Enterprise Library in your
applications is that you should import two additional namespaces into your code. These namespaces
include the container and core extension definitions:
Microsoft.Practices.EnterpriseLibrary.Common.Configuration.Unity
Microsoft.Practices.Unity
(Table: advantages and considerations of using the Enterprise Library service locator compared with
accessing the container directly.)
One of the prime advantages of the more sophisticated approach of accessing the container directly
is that you can use it to resolve dependencies of your own custom types. For example, assume you
have a class named TaxCalculator that needs to perform logging and implement a consistent policy
for handling exceptions that you apply across your entire application. Your class will contain a
constructor that accepts an instance of an ExceptionManager and a LogWriter as dependencies.
public class TaxCalculator
{
private ExceptionManager _exceptionManager;
private LogWriter _logWriter;
public TaxCalculator(ExceptionManager em, LogWriter lw)
{
this._exceptionManager = em;
this._logWriter = lw;
}
...
}
If you use the Enterprise Library service locator approach, you could simply obtain these instances
within the class constructor or methods when required, rather than passing them in as parameters.
However, a more commonly used approach is to generate and reuse the instances in your main
application code, and pass them to the TaxCalculator when you create an instance.
var exManager
= EnterpriseLibraryContainer.Current.GetInstance<ExceptionManager>();
var writer
= EnterpriseLibraryContainer.Current.GetInstance<LogWriter>();
TaxCalculator calc = new TaxCalculator(exManager, writer);
Alternatively, if you have created and held a reference to the container, you just need to resolve the
TaxCalculator type through the container. Unity will instantiate the type, examine the constructor
parameters, and automatically inject instances of the ExceptionManager and a LogWriter into them.
It returns your new TaxCalculator instance with all of the dependencies populated.
TaxCalculator calc = theContainer.Resolve<TaxCalculator>();
Resolving your own types through the container in this way also allows you to do the following:
Manage the lifetime of your custom types. They can be resolved by the container as
singletons, with a lifetime based on the lifetime of the object that created them, or as a new
instance per execution thread.
Implement patterns such as plug-in and service locator by mapping interfaces and abstract
types to concrete implementations of your custom types.
Specify dependencies and values for parameters and properties of the resolved instances of
your custom types.
Apply interception to your custom types to modify their behavior, implement management
of crosscutting concerns, or add additional functionality.
When you use the default Unity container, you have a powerful general-purpose dependency
injection mechanism in your arsenal. You can define and modify registrations and mappings in the
container programmatically at run time, or you can define them using configuration files. Appendix
A, "Dependency Injection with Unity," contains more information about using Unity.
To give you a sense of how easy it is to use, the following code registers a mapping between an
interface named IMyService and a concrete type named CustomerService, specifying that it should
be a singleton.
theContainer.RegisterType<IMyService, CustomerService>(
new ContainerControlledLifetimeManager());
Then you can resolve the single instance of the concrete type using the following code.
IMyService myServiceInstance = theContainer.Resolve<IMyService>();
This returns an instance of the CustomerService type, though you can change the actual type
returned at run time by changing the mapping in the container. Alternatively, you can create
multiple registrations or mappings for an interface or base class with different names and specify the
name when you resolve the type.
Unity can also read its configuration from your application's App.config or Web.config file (or any
other configuration file). This means that you can use the sophisticated approach to creating
Enterprise Library objects and your own custom types, while being able to change the behavior of
your application just by editing the configuration file.
If you want to load type registrations and mappings into a Unity container from a configuration file,
you must add the assembly Microsoft.Practices.Unity.Configuration.dll to your project, and
optionally import the namespace Microsoft.Practices.Unity.Configuration into your code. This
assembly and namespace contain the extension to the Unity container for loading configuration
information.
For example, the following extract from a configuration file initializes the container and adds the
same custom mapping to it as the RegisterType example shown above.
<unity>
<alias alias="CoreExtension"
type="Microsoft.Practices.EnterpriseLibrary.Common.Configuration
.Unity.EnterpriseLibraryCoreExtension,
Microsoft.Practices.EnterpriseLibrary.Common" />
<namespace name="Your.Custom.Types.Namespace" />
<assembly name="Your.Custom.Types.Assembly.Name" />
<container>
<extension type="CoreExtension" />
<register type="IMyService" mapTo="CustomerService">
<lifetime type="singleton" />
</register>
</container>
</unity>
Then, all you need to do is load this configuration into a new Unity container. This requires just one
line of code, as shown here.
var theContainer = new UnityContainer().LoadConfiguration();
Other techniques we demonstrate in Appendix A, "Dependency Injection with Unity," include using
attributes to register type mappings and dependencies, defining named registrations, and specifying
dependencies and values for parameters and properties.
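As a brief taste of the attribute-based technique, the following minimal sketch marks a property of your own type with Unity's [Dependency] attribute so that the container injects an instance automatically when it resolves that type. The class name here is illustrative; see Appendix A for the full story.

public class InvoiceFormatter
{
  // Unity populates this property when it resolves InvoiceFormatter.
  [Dependency]
  public LogWriter Writer { get; set; }
}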
The one point to be aware of when you use the more sophisticated technique for creating objects is
that your application is responsible for managing the container, holding a reference to it, and making
that reference available to code that must access the container. In forms-based applications that
automatically maintain global state (for example, applications built using technologies such as
Windows Forms, Windows Presentation Foundation (WPF), and Silverlight), you can use an
application-wide variable for this.
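For example, a Windows Forms or WPF application might expose the container through a static property created once at startup. This is only a minimal sketch; the ContainerHolder class name is illustrative:

public static class ContainerHolder
{
  // Created once at startup and shared across the application.
  public static readonly IUnityContainer Container =
      new UnityContainer().AddNewExtension<EnterpriseLibraryCoreExtension>();
}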
However, in request-based applications built using technologies such as ASP.NET, ASMX, and
Windows Communication Foundation (WCF), you generally require additional code to maintain the
container and make it available for each request. We discuss some of the ways that you can achieve
this in Appendix B, "Dependency Injection in Enterprise Library," and you will find full details in the
documentation installed with Enterprise Library and available online at
http://go.microsoft.com/fwlink/?LinkId=188874.
Summary
This brief introduction to Enterprise Library will help you to get started if you are not familiar with its
capabilities and the basics of using it in applications. This chapter described what Enterprise Library
is, where you can get it, and how it can make it much easier to manage your crosscutting concerns.
This book concentrates on the application blocks in Enterprise Library that "do stuff" (as opposed to
those that "wire up stuff"). The blocks we concentrate on in this book include the Caching,
Cryptography, Data Access, Exception Handling, Logging, Security, and Validation Application Blocks.
The aim of this chapter was also to help you get started with Enterprise Library by explaining how
you deploy and reference the assemblies it contains, how you configure your applications to use
Enterprise Library, how you instantiate Enterprise Library objects, and the example applications we
provide. Some of the more advanced features and configuration options were omitted so that you
may concentrate on the fundamental requirements. However, each appendix in this guide provides
more detailed information, while Enterprise Library contains substantial reference documentation,
samples, and other resources that will guide you as you explore these more advanced features.
In addition to the more common approaches familiar to users of ADO.NET, the Data Access block
also provides techniques for asynchronous data access for databases that support this feature, and
provides the ability to return data as a sequence of objects suitable for client-side querying using
techniques such as Language Integrated Query (LINQ). However, the block is not intended to be an
Object/Relational Mapping (O/RM) solution. It uses mappings to relate parameters and relational
data with the properties of objects, but does not implement an O/RM modeling solution.
The major advantage of using the Data Access block, besides the simplicity achieved through the
encapsulation of the boilerplate code that you would otherwise need to write, is that it provides a
way to create provider-independent applications that can easily be moved to use a different source
database type. In most cases, unless your code takes advantage of methods specific to a particular
database, the only change required is to update the contents of your configuration file with the
appropriate connection string. You don't have to change the way you specify queries (such as SQL
statements or stored procedure names), create and populate parameters, or handle return values.
This also means reduced requirements for testing, and the configuration changes can even be
accomplished through Group Policy.
(Table: Data Access block methods, grouped by task, such as Executing a Command and Creating a
Command.)
You can see from this table that the Data Access block supports almost all of the common scenarios
that you will encounter when working with relational databases. Each data access method also has
multiple overloads, designed to simplify usage and integrate, when necessary, with existing data
transactions. In general, you should choose the overload you use based on the following guidelines
(a short code sketch follows the list):
Overloads that accept an ADO.NET DbCommand object provide the most flexibility and
control for each method.
Overloads that accept a stored procedure name and a collection of values to be used as
parameter values for the stored procedure are convenient when your application calls
stored procedures that require parameters.
Overloads that accept a CommandType value and a string that represents the command are
convenient when your application executes inline SQL statements, or stored procedures
that require no parameters.
Overloads that accept a transaction allow you to execute the method within an existing
transaction.
If you use the SqlDatabase type, you can execute several of the common methods
asynchronously by using the Begin and End versions of the methods.
You can use the Database class to create Accessor instances that execute data access
operations both synchronously and asynchronously, and return the results as a series of
objects suitable for client-side querying using technologies such as LINQ.
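To make these guidelines concrete, the following minimal sketch contrasts two of the overloads of the ExecuteReader method. It assumes a resolved Database instance named db; the stored procedure name GetOrdersByState is illustrative and is not part of the sample database.

// Stored procedure name plus parameter values (illustrative procedure name).
using (IDataReader reader = db.ExecuteReader("GetOrdersByState", "Colorado"))
{
    while (reader.Read()) { /* process the row */ }
}

// CommandType value plus a command string, for inline SQL.
using (IDataReader reader = db.ExecuteReader(CommandType.Text,
                                             "SELECT * FROM OrderList"))
{
    while (reader.Read()) { /* process the row */ }
}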
other configuration file to store the individual database connection strings, with the addition of a
small Enterprise Library-specific section that defines which of the configured databases is the
default. You can configure all of these settings using the Enterprise Library configuration console, as
shown in Figure 1.
Figure 1
Creating a new configuration for the Data Access Application Block
After you configure the databases you need, you must instantiate them in your application code.
Add references to the assemblies you will require, and add using statements to your code for the
namespaces containing the objects you will use. In addition to the Enterprise Library assemblies you
require in every Enterprise Library project (listed in Chapter 1, "Introduction"), you must reference
or add to your bin folder the assembly Microsoft.Practices.EnterpriseLibrary.Data.dll. This assembly
includes the classes for working with SQL Server databases.
If you are working with a SQL Server Compact Edition database, you must also reference or add the
assembly Microsoft.Practices.EnterpriseLibrary.Data.SqlCe.dll. If you are working with an Oracle
database, you can use the Oracle provider included with Enterprise Library and the ADO.NET Oracle
provider, which requires you to reference or add the assembly System.Data.OracleClient.dll.
However, keep in mind that the OracleClient provider is deprecated in version 4.0 of the .NET
Framework, although it is still supported by Enterprise Library. For future development, consider
choosing a different Oracle driver, such as that available from the Enterprise Library Contrib site at
http://codeplex.com/entlibcontrib.
To make it easier to use the objects in the Data Access block, you can add references to the relevant
namespaces, such as Microsoft.Practices.EnterpriseLibrary.Data and
Microsoft.Practices.EnterpriseLibrary.Data.Sql to your project.
the different approaches you can use. The examples you can download for this chapter use the
simplest approach: calling the GetInstance method of the service locator available from the Current
property of the EnterpriseLibraryContainer, as shown here, and storing these instances in
application-wide variables so that they can be accessed from anywhere in the code.
// Resolve the default Database object from the container.
// The actual concrete type is determined by the configuration settings.
Database defaultDB = EnterpriseLibraryContainer.Current.GetInstance<Database>();
// Resolve a Database object from the container using the connection string name.
Database namedDB
= EnterpriseLibraryContainer.Current.GetInstance<Database>("ExampleDatabase");
The code above shows how you can get an instance of the default database and a named instance
(using the name in the connection strings section). Using the default database is a useful approach
because you can change which of the databases defined in your configuration is the default simply
by editing the configuration file, without requiring recompilation or redeployment of the application.
Notice that the code above references the database instances as instances of the Database base
class. This is required for compatibility if you want to be able to change the database type at some
later stage. However, it means that you can only use the features available across all of the possible
database types (the methods and properties defined in the Database class).
Some features are only available in the concrete types for a specific database. For example, the
ExecuteXmlReader method is only available in the SqlDatabase class. If you want to use such
features, you must cast the database type you instantiate to the appropriate concrete type. The
following code creates an instance of the SqlDatabase class.
// Resolve a SqlDatabase object from the container using the default database.
SqlDatabase sqlServerDB
= EnterpriseLibraryContainer.Current.GetInstance<Database>() as SqlDatabase;
In addition to using configuration to define the databases you will use, the Data Access block allows
you to create instances of concrete types that inherit from the Database class directly in your code,
as shown here. All you need to do is provide a connection string that specifies the appropriate
ADO.NET data provider type (such as SqlClient).
// Assume the method GetConnectionString exists in your application and
// returns a valid connection string.
string myConnectionString = GetConnectionString();
SqlDatabase sqlDatabase = new SqlDatabase(myConnectionString);
Data Source=.\SQLEXPRESS;AttachDbFilename=|DataDirectory|\WWPlatform.mdf;Integrated Security=True;User Instance=True;Asynchronous Processing=true;

Data Source=.\SQLEXPRESS;AttachDbFilename=|DataDirectory|\WWPlatform.mdf;Integrated Security=True;User Instance=True
If you have configured a different database using the scripts provided with the example, you may
find that you get an error when you run this example. It is likely that you have an invalid
connection string in your App.config file for your database. In addition, use the Services MMC
snap-in in your Administrative Tools folder to check that the SQL Server (SQLEXPRESS) database
service (the service is named MSSQL$SQLEXPRESS) is running.
In addition, the final example for this block uses the Distributed Transaction Coordinator (DTC)
service. This service may not be set to auto-start on your machine. If you receive an error that the
DTC service is not available, open the Services MMC snap-in from your Administrative Tools menu
and start the service manually; then run the example again.
To use an inline SQL statement, you must specify the appropriate CommandType value, as shown
here.
// Call the ExecuteReader method by specifying the command type
// as a SQL statement, and passing in the SQL statement.
using (IDataReader reader = namedDB.ExecuteReader(CommandType.Text,
"SELECT TOP 1 * FROM OrderList"))
{
// Use the values in the rows as required - here we are just displaying them.
DisplayRowValues(reader);
}
The example named Return rows using a SQL statement with no parameters uses this code to
retrieve a DataReader containing the first order in the sample database, and then displays the
values in this single row. It uses a simple auxiliary routine that iterates through all the rows and
columns, writing the values to the console screen.
void DisplayRowValues(IDataReader reader)
{
while (reader.Read())
{
for (int i = 0; i < reader.FieldCount; i++)
{
Console.WriteLine("{0} = {1}", reader.GetName(i), reader[i].ToString());
}
Console.WriteLine();
}
}
The result is a list of the columns and their values in the DataReader, as shown here.
Id = 1
Status = DRAFT
CreatedOn = 01/02/2009 11:12:06
Name = Adjustable Race
LastName = Abbas
FirstName = Syed
ShipStreet = 123 Elm Street
ShipCity = Denver
ShipZipCode = 12345
ShippingOption = Two-day shipping
State = Colorado
The example named Return rows using a stored procedure with parameters uses this code to query
the sample database, and generates the following output.
Id = 1
Status = DRAFT
CreatedOn = 01/02/2009 11:12:06
Name = Adjustable Race
LastName = Abbas
FirstName = Syed
ShipStreet = 123 Elm Street
ShipCity = Denver
ShipZipCode = 12345
ShippingOption = Two-day shipping
State = Colorado
Id = 2
Status = DRAFT
CreatedOn = 03/02/2009 01:12:06
Name = All-Purpose Bike Stand
LastName = Abel
FirstName = Catherine
ShipStreet = 321 Cedar Court
ShipCity = Denver
ShipZipCode = 12345
ShippingOption = One-day shipping
State = Colorado
The example named Return rows using a SQL statement or stored procedure with named parameters
uses the code you see above to execute a SQL statement and a stored procedure against the sample
database. The code provides the same parameter value to each, and both queries return the same
single row, as shown here.
Id = 4
Status = DRAFT
CreatedOn = 07/02/2009 05:12:06
Name = BB Ball Bearing
LastName = Abel
FirstName = Catherine
ShipStreet = 888 Main Street
ShipCity = New York
ShipZipCode = 54321
ShippingOption = Three-day shipping
State = New York
The accessor will attempt to resolve the parameters automatically using a default mapper if you do
not specify a parameter mapper. However, this feature is only available for stored procedures
executed against SQL Server and Oracle databases. It is not available when using SQL statements, or
for other databases and providers, where you must specify a custom parameter mapper that can
resolve the parameters.
If you do not specify an output mapper, the block uses a default map builder class that maps the
column names of the returned data to properties of the objects it creates. Alternatively, you can
create a custom mapping to specify the relationship between columns in the row set and the
properties of the objects.
Inferring the details required to create the correct mappings means that the default parameter and
output mappers can have an effect on performance. You may prefer to create your own custom
mappers and retain a reference to them for reuse when possible to maximize performance of your
data access processes when using accessors.
For a full description of the techniques for using accessors, see the Enterprise Library documentation
on MSDN at http://go.microsoft.com/fwlink/?LinkId=188874, or installed with Enterprise Library.
This chapter covers only the simplest approach: using the ExecuteSprocAccessor method of the
Database class.
Creating and Executing an Accessor
The following code shows how you can use an accessor to execute a stored procedure and then
manipulate the sequence of objects that is returned. You must specify the object type that you want
the data returned as; in this example, it is a simple class named Product that has three
properties: ID, Name, and Description.
The stored procedure takes a single parameter that is a search string, and returns details of all
products in the database that contain this string. Therefore, the code first creates an array of
parameter values to pass to the accessor, and then calls the ExecuteSprocAccessor method. It
specifies the Product class as the type of object to return, and passes to the method the name of the
stored procedure to execute and the array of parameter values.
// Create an object array and populate it with the required parameter values.
object[] paramArray = new object[] { "%bike%" };
// Create and execute a sproc accessor that uses the default
// parameter and output mappings.
var productData = defaultDB.ExecuteSprocAccessor<Product>("GetProductList",
paramArray);
// Perform a client-side query on the returned data. Be aware that
// the orderby and filtering are happening on the client, not in the database.
var results = from productItem in productData
where productItem.Description != null
orderby productItem.Name
select new { productItem.Name, productItem.Description };
// Display the results
foreach (var item in results)
{
Console.WriteLine("Product Name: {0}", item.Name);
Console.WriteLine("Description: {0}", item.Description);
Console.WriteLine();
}
The accessor returns the data as a sequence that, in this example, the code handles using a LINQ
query to remove all items where the description is empty, sort the list by name, and then create a
new sequence of objects that have just the Name and Description properties. For more information
on using LINQ to query sequences, see http://msdn.microsoft.com/en-us/library/bb397676.
Keep in mind that returning sets of data that you manipulate on the client can have an impact on
performance. In general, you should attempt to return data in the format required by the client,
and minimize client-side data operations.
The example Return data as a sequence of objects using a stored procedure uses the code you see
above to query the sample database and process the resulting rows. The output it generates is
shown here.
For an example of creating an accessor and then calling the Execute method, see the section
"Retrieving Data as Objects Asynchronously" later in this chapter.
Creating and Using Mappers
In some cases, you may need to create a custom parameter mapper to pass your parameters to the
query that the accessor will execute. This typically occurs when you need to execute a SQL
statement to work with a database system that does not support parameter resolution, or when a
default mapping cannot be inferred due to a mismatch in the number or types of the parameters.
The parameter mapper class must implement the IParameterMapper interface and contain a
method named AssignParameters that takes a reference to the current Command instance and the
array of parameters. The method simply needs to add the required parameters to the Command
object's Parameters collection.
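A minimal sketch of such a parameter mapper might look like the following; the class name and the parameter name are illustrative:

public class SearchParameterMapper : IParameterMapper
{
    public void AssignParameters(DbCommand command, object[] parameterValues)
    {
        // Add the single search-string parameter to the command.
        DbParameter parameter = command.CreateParameter();
        parameter.ParameterName = "@SearchString";
        parameter.Value = parameterValues[0];
        command.Parameters.Add(parameter);
    }
}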
More often, you will need to create a custom output mapper. To help you do this, the block provides
a class called MapBuilder that you can use to create the set of mappings you require between the
columns of the data set returned by the query and the properties of the objects you need.
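For example, the following sketch maps all properties of the Product class from the earlier example by matching column names, and then overrides the mapping for the ID property. The ProductID column name is illustrative:

// Build a custom row mapper for the Product class.
IRowMapper<Product> mapper = MapBuilder<Product>.MapAllProperties()
                                 .Map(p => p.ID).ToColumn("ProductID")
                                 .Build();

// Pass the custom mapper to an accessor method such as ExecuteSprocAccessor.
var productData = defaultDB.ExecuteSprocAccessor<Product>("GetProductList",
                                                          mapper, paramArray);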
By default, the accessor will expect to generate a simple sequence of a single type of object (in our
earlier example, this was a sequence of the Product class). However, you can use an accessor to
return a more complex graph of objects if you wish. For example, you might execute a query that
returns a series of Order objects and the related OrderLines objects for all of the selected orders.
Simple output mapping cannot cope with this scenario, and neither can the MapBuilder class. In this
case, you would create a result set mapper by implementing the IResultSetMapper interface. Your
custom row set mapper must contain a method named MapSet that receives a reference to an
object that implements the IDataReader interface. The method should read all of the data available
through the data reader, process it to create the sequence of objects you require, and return this
sequence.
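A skeleton implementation might look like the following sketch; the Order class and the column name are illustrative:

public class OrderResultSetMapper : IResultSetMapper<Order>
{
    public IEnumerable<Order> MapSet(IDataReader reader)
    {
        while (reader.Read())
        {
            // In a real mapper you would also read the related OrderLines
            // rows and attach them to each Order before yielding it.
            yield return new Order
            {
                Id = reader.GetInt32(reader.GetOrdinal("Id"))
            };
        }
    }
}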
way you require. For a description of the capabilities of SQLXML, see http://msdn.microsoft.com/en-us/library/aa286527(v=MSDN.10).aspx.
The Data Access block provides the ExecuteXmlReader method for querying data as XML. It takes a
SQL statement that contains the FOR XML statement and executes it against the database, returning
the result as an XmlReader. You can iterate through the resulting XML elements or work with them
in any of the ways supported by the XML classes in the .NET Framework. However, as SQLXML is
limited to SQL Server (the implementations of this type of query differ in other database systems), it
is only available when you specifically use the SqlDatabase class (rather than the Database class).
The following code shows how you can obtain a SqlDatabase instance, specify a suitable SQLXML
query, and execute it using the ExecuteXmlReader method.
// Resolve a SqlDatabase object from the container using the default database.
SqlDatabase sqlServerDB
= EnterpriseLibraryContainer.Current.GetInstance<Database>() as SqlDatabase;
// Specify a SQL query that returns XML data.
string xmlQuery = "SELECT * FROM OrderList WHERE State = @state FOR XML AUTO";
// Create a suitable command type and add the required parameter
// NB: ExecuteXmlReader is only available for SQL Server databases
using (DbCommand xmlCmd = sqlServerDB.GetSqlStringCommand(xmlQuery))
{
xmlCmd.Parameters.Add(new SqlParameter("state", "Colorado"));
using (XmlReader reader = sqlServerDB.ExecuteXmlReader(xmlCmd))
{
// Iterate through the elements in the XmlReader
while (!reader.EOF)
{
if (reader.IsStartElement())
{
Console.WriteLine(reader.ReadOuterXml());
}
}
}
}
The code above also shows a simple approach to extracting the XML data from the XmlReader
returned from the ExecuteXmlReader method. One point to note is that, by default, the result is an
XML fragment, and not a valid XML document. It is, effectively, a sequence of XML elements that
represent each row in the results set. Therefore, at minimum, you must wrap the output with a
single root element so that it is well-formed. For more information about using an XmlReader, see
"Reading XML with the XmlReader" in the online MSDN documentation at
http://msdn.microsoft.com/en-us/library/9d83k261.aspx.
The example Return data as an XML fragment using a SQL Server XML query uses the code you see
above to query a SQL Server database. It returns two XML elements in the default format for a FOR
XML AUTO query, with the values of each column in the data set represented as attributes, as shown
here.
You might use this approach when you want to populate an XML document, transform the data for
display, or persist it in some other form. You might use an XSLT style sheet to transform the data to
the required format. For more information on XSLT, see "XSLT Transformations" at
http://msdn.microsoft.com/en-us/library/14689742.aspx.
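The Database class provides the ExecuteScalar method for retrieving a single value. The following is a minimal sketch of this kind of query, assuming a Database instance named defaultDB; the SQL statement and the stored procedure name are illustrative.
// Using a SQL statement that returns a single value.
string sqlStatement = "SELECT [Name] FROM States WHERE Id = 1";
using (DbCommand sqlCmd = defaultDB.GetSqlStringCommand(sqlStatement))
{
    Console.WriteLine("Result using a SQL statement: {0}",
                      defaultDB.ExecuteScalar(sqlCmd).ToString());
}

// Using a stored procedure that returns a single value.
using (DbCommand sprocCmd = defaultDB.GetStoredProcCommand("GetStateName"))
{
    Console.WriteLine("Result using a stored procedure: {0}",
                      defaultDB.ExecuteScalar(sprocCmd).ToString());
}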
You can see the code listed above running in the example Return a single scalar value from a SQL
statement or stored procedure. The somewhat unexciting result it produces is shown here.
Result using a SQL statement: Alabama
Result using a stored procedure: Alabama
In addition, asynchronous processing in the Data Access block is only available for SQL Server
databases. The Database class includes a property named SupportsAsync that you can query to see
if the current Database instance does, in fact, support asynchronous operations. The example for
this chapter contains a simple check for this.
One other point to note is that asynchronous data access usually involves the use of a callback that
runs on a different thread from the calling code. A common approach to writing callback code in
modern applications is to use Lambda expressions rather than a separate callback handler routine.
This callback usually cannot directly access the user interface in a Windows Forms or Windows
Presentation Foundation (WPF) application. You will, in most cases, need to use a delegate to call a
method in the original UI class to update the data returned by the callback.
Other points to note about asynchronous data access are the following:
You can use the standard .NET methods and classes from the System.Threading namespace,
such as wait handles and manual reset events, to manage asynchronous execution of the
Data Access block methods. You can also cancel a pending or executing command by calling
the Cancel method of the command you used to initiate the operation. For more
information, see "Asynchronous Command Execution in ADO.NET 2.0" on MSDN at
http://msdn.microsoft.com/en-us/library/ms379553(VS.80).aspx.
Always ensure you call the appropriate EndExecute method when you use asynchronous
data access, even if you do not actually require access to the results, or call the Cancel
method on the connection. Failing to do so can cause memory leaks and consume additional
system resources.
Using asynchronous data access with the Multiple Active Results Set (MARS) feature of
ADO.NET may produce unexpected behavior, and should generally be avoided.
Asynchronous data access is only available if the database is SQL Server 7.0 or later. Also, for
SQL Server 7.0 and SQL Server 2000, the database connection must use TCP. It cannot use
shared memory. To ensure that TCP is used for SQL Server 7.0 and SQL Server 2000, use
localhost, tcp:server_name, or tcp:ip_address for the server name in the connection string.
Asynchronous code is notoriously difficult to write, test, and debug for all edge cases, and you
should only consider using it where it really can provide a performance benefit. For guidance on
performance testing and setting performance goals see "patterns & practices Performance Testing
Guidance for Web Applications" at http://perftestingguide.codeplex.com/.
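For reference, the following is a minimal sketch of the pattern the rest of this discussion refers to, assuming a Database instance named asyncDB for which SupportsAsync is true; the stored procedure name is illustrative.
DbCommand cmd = asyncDB.GetStoredProcCommand("ListOrdersSlowly");

asyncDB.BeginExecuteReader(cmd,
    asyncResult =>
    {
        // Lambda expression executed when the data access completes.
        try
        {
            using (IDataReader reader = asyncDB.EndExecuteReader(asyncResult))
            {
                while (reader.Read())
                {
                    Console.WriteLine(reader["Name"].ToString());
                }
            }
        }
        catch (Exception ex)
        {
            Console.WriteLine("Error after data access completed: {0}",
                              ex.Message);
        }
    }, null);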
The Lambda expression then calls the EndExecuteReader method to obtain the results of the query
execution. At this point you can consume the row set in your application or, as the code above does,
just display the values. Notice that the callback expression should handle any errors that may occur
during the asynchronous operation.
You can also, of course, use the separate callback approach instead of an inline Lambda expression
if you wish.
The AsyncState parameter can be used to pass any required state information into the callback. For
example, when you use a separate callback, you would pass a reference to the current Database
instance as the AsyncState parameter so that the callback code can call the EndExecuteReader (or
other appropriate End method) to obtain the results. When you use a Lambda expression, the
current Database instance is available within the expression and, therefore, you do not need to
populate the AsyncState parameter.
The example Execute a command that retrieves data asynchronously uses the code shown above to
fetch two rows from the database and display the contents. As well as the code above, it uses a
simple routine that displays a "Waiting..." message every second as the code executes. The result is
shown here.
Of course, as we don't have a multi-million-row database handy to query, the example uses a stored
procedure that contains a WAITFOR DELAY statement to simulate a long-running data access operation. It also
uses ManualResetEvent objects to manage the threads so that you can see the results more clearly.
Open the sample in Visual Studio, or view the Program.cs file, to see the way this is done.
Retrieving Data as Objects Asynchronously
You can also execute data accessors asynchronously when you want to return your data as a
sequence of objects rather than as rows and columns. The example Execute a command that
retrieves data as objects asynchronously demonstrates this technique. You can create your accessor
and associated mappers in the same way as shown in the previous section of this chapter, and then
call the BeginExecute method of the accessor. This works in much the same way as when using the
BeginExecuteReader method described in the previous example.
You pass to the BeginExecute method the lambda expression or callback to execute when the
asynchronous data access process completes, along with the AsyncState and an array of Object
instances that represent the parameters to apply to the stored procedure or SQL statement you are
executing. The lambda expression or callback method can obtain a reference to the accessor that
was executed from the AsyncState (casting it to an instance of the DataAccessor base type so that
the code will work with any accessor implementation), and then call the EndExecute method of the
accessor to obtain a reference to the sequence of objects the accessor retrieved from the database.
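The following is a minimal sketch of this pattern, assuming an accessor of type DataAccessor<Product> named accessor created as shown earlier in this chapter; the parameter value is illustrative.
accessor.BeginExecute(
    asyncResult =>
    {
        // Recover the accessor from the AsyncState and obtain the results.
        var completedAccessor = (DataAccessor<Product>)asyncResult.AsyncState;
        foreach (Product product in completedAccessor.EndExecute(asyncResult))
        {
            Console.WriteLine(product.Name);
        }
    },
    accessor,      // AsyncState: a reference to the accessor itself
    "%bike%");     // parameter value(s) for the query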
Updating Data
So far, we've looked at retrieving data from a database using the classes and methods of the Data
Access block. Of course, while this is typically the major focus of many applications, you will often
need to update data in your database. The Data Access block provides features that support data
updates. You can execute update queries (such as INSERT, DELETE, and UPDATE statements) directly
against a database using the ExecuteNonQuery method. In addition, you can use the
ExecuteDataSet, LoadDataSet, and UpdateDataSet methods to populate a DataSet and push
changes to the rows back into the database. We'll look at both of these approaches here.
Executing an Update Query
The Data Access block makes it easy to execute update queries against a database. By update
queries, we mean inline SQL statements, or SQL statements within stored procedures, that use the
UPDATE, DELETE, or INSERT keywords. You can execute these kinds of queries using the
ExecuteNonQuery method of the Database class.
Like the ExecuteReader method we used earlier in this chapter, the ExecuteNonQuery method has a
broad set of overloads. You can specify a CommandType (the default is StoredProcedure) and either
a SQL statement or a stored procedure name. You can also pass in an array of Object instances that
represent the parameters for the query. Alternatively, you can pass to the method a Command
object that contains any parameters you require. There are also Begin and End versions that allow
you to execute update queries asynchronously.
The following code from the example application for this chapter shows how you can use the
ExecuteNonQuery method to update a row in a table in the database. It updates the Description
column of a single row in the Products table, checks that the update succeeded, and then updates it
again to return it to the original value (so that you can run the example again). The first step is to
create the command and add the required parameters, as you've seen in earlier examples, and then
call the ExecuteNonQuery method with the command as the single parameter. Next, the code
changes the value of the command parameter named description to the original value in the
database, and then executes the compensating update.
string oldDescription
    = "Carries 4 bikes securely; steel construction, fits 2\" receiver hitch.";
string newDescription = "Bikes tend to fall off after a few miles.";

// Create command to execute the stored procedure and add the parameters.
DbCommand cmd = defaultDB.GetStoredProcCommand("UpdateProductsTable");
defaultDB.AddInParameter(cmd, "productID", DbType.Int32, 84);
defaultDB.AddInParameter(cmd, "description", DbType.String, newDescription);

// Execute the query and check if one row was updated.
if (defaultDB.ExecuteNonQuery(cmd) == 1)
{
    // Update succeeded.
}
else
{
    Console.WriteLine("ERROR: Could not update just one row.");
}
Notice the pattern used to execute the query and check that it succeeded. The ExecuteNonQuery
method returns an integer value that is the number of rows updated (or, to use the more accurate
term, affected) by the query. In this example, we are specifying a single row as the target for the
update by selecting on the unique ID column. Therefore, we expect only one row to be updated;
any other value means there was a problem.
If you are expecting to update multiple rows, you would check for a non-zero returned value.
Typically, if you need to ensure integrity in the database, you could perform the update within a
connection-based transaction, and roll it back if the result was not what you expected. We look at
how you can use transactions with the Data Access block methods in the section "Working with
Connection-Based Transactions" later in this chapter.
The example Update data using a Command object, which uses the code you see above, produces
the following output.
Contents of row before update:
Id = 84
Name = Hitch Rack - 4-Bike
Description = Carries 4 bikes securely; steel construction, fits 2" receiver
hitch.
Contents of row after first update:
Id = 84
Name = Hitch Rack - 4-Bike
Description = Bikes tend to fall off after a few miles.
Contents of row after second update:
Id = 84
Name = Hitch Rack - 4-Bike
Description = Carries 4 bikes securely; steel construction, fits 2" receiver
hitch.
To fill a DataSet, you use the ExecuteDataSet method, which returns a new instance of the DataSet
class populated with a table containing the data for each row set returned by the query (which may
be a multiple-statement batch query). The tables in this DataSet will have default names such as
Table, Table1, and Table2.
If you want to load data into an existing DataSet, you use the LoadDataSet method. This allows you
to specify the name(s) of the target table(s) in the DataSet, and lets you add additional tables to an
existing DataSet or refresh the contents of specific tables in the DataSet.
Both of these methods, ExecuteDataSet and LoadDataSet, have a similar broad set of overloads to
the ExecuteReader and other methods you've seen earlier in this chapter. You can specify a
CommandType (the default is StoredProcedure) and either a SQL statement or a stored procedure
name. You can also pass in an array of Object instances that represent the parameters for the query.
Alternatively, you can pass to the method a Command object that contains any parameters you
require.
For example, the following code lines show how you can use the ExecuteDataSet method with a SQL
statement; with a stored procedure and a parameter array; and with a command pre-populated with
parameters. The code assumes you have created the Data Access block Database instance named
db.
DataSet productDataSet;

// Using a SQL statement.
string sql = "SELECT CustomerName, CustomerPhone FROM Customers";
productDataSet = db.ExecuteDataSet(CommandType.Text, sql);

// Using a stored procedure and a parameter array.
productDataSet = db.ExecuteDataSet("GetProductsByCategory",
                                   new Object[] { "%bike%" });

// Using a stored procedure and a named parameter.
DbCommand cmd = db.GetStoredProcCommand("GetProductsByCategory");
db.AddInParameter(cmd, "CategoryID", DbType.Int32, 7);
productDataSet = db.ExecuteDataSet(cmd);
When you call the UpdateDataSet method, you can specify an UpdateBehavior value that determines
how the block behaves if an error occurs while applying the updates to the target table:
Standard. If the underlying ADO.NET update process encounters an error, the update stops
and no subsequent updates are applied to the target table.
Continue. If the underlying ADO.NET update process encounters an error, the update will
continue and attempt to apply any subsequent updates.
Transactional. If the underlying ADO.NET update process encounters an error, all the
updates made to all rows will be rolled back.
Finally, you can, if you wish, provide a value for the UpdateBatchSize parameter of the
UpdateDataSet method. This forces the method to attempt to perform updates in batches instead
of sending each one to the database individually. This is more efficient, but the return value for the
method will show only the number of updates made in the final batch, and not the total number for
all batches. Typically, you are likely to use a batch size value between 10 and 100. You should
experiment to find the most appropriate batch size; it depends on the type of database you are
using, the query you are executing, and the number of parameters for the query.
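For illustration, here is a sketch of such a call, assuming the DataSet and the insert, update, and delete commands shown later in this section, and assuming the overload of UpdateDataSet that accepts a batch size; the value of 10 is arbitrary.
int rowsAffected = defaultDB.UpdateDataSet(loadedDS, "Products",
                                           insertCommand, updateCommand,
                                           deleteCommand,
                                           UpdateBehavior.Standard,
                                           10);   // batch size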
The examples for this chapter include one named Fill a DataSet and update the source data, which
demonstrates the ExecuteDataSet and UpdateDataSet methods. It uses the simple overloads of the
ExecuteDataSet and LoadDataSet methods to fill two DataSet instances, using a separate routine
named DisplayTableNames (not shown here) to display the table names and a count of the number
of rows in these tables. This shows one of the differences between these two methods. Note that
the LoadDataSet method requires a reference to an existing DataSet instance, and an array
containing the names of the tables to populate.
string selectSQL = "SELECT Id, Name, Description FROM Products WHERE Id > 90";

// Fill a DataSet from the Products table using the simple approach.
DataSet simpleDS = defaultDB.ExecuteDataSet(CommandType.Text, selectSQL);
DisplayTableNames(simpleDS, "ExecuteDataSet");

// Fill a DataSet from the Products table using the LoadDataSet method.
// This allows you to specify the name(s) for the table(s) in the DataSet.
DataSet loadedDS = new DataSet("ProductsDataSet");
defaultDB.LoadDataSet(CommandType.Text, selectSQL, loadedDS,
                      new string[] { "Products" });
DisplayTableNames(loadedDS, "LoadDataSet");
The example then accesses the rows in the DataSet to delete a row, add a new row, and change the
Description column in another row. After this, it displays the updated contents of the DataSet table.
// Get a reference to the Products table in the DataSet.
DataTable dt = loadedDS.Tables["Products"];

// Delete a row in the DataSet table.
dt.Rows[0].Delete();
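A sketch of the remaining steps, consistent with the output shown below, might look like the following; the literal values and row indexes are illustrative.
// Add a new row to the DataSet table, specifying -1 as the default ID.
dt.Rows.Add(new object[] { -1, "A New Row",
    "Added to the table at " + DateTime.Now.ToShortTimeString() });

// Change the Description column value in another row.
dt.Rows[1]["Description"]
    = "A new description at " + DateTime.Now.ToShortTimeString();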
This produces the following output. To make it easier to see the changes, we've omitted the
unchanged rows from the listing. Of course, the deleted row does not show in the listing, and the
new row has the default ID of -1 that we specified in the code above.
Rows in the table named 'Products':
Id = 91
Name = HL Mountain Frame - Black, 44
Description = A new description at 14:25
...
Id = -1
Name = A New Row
Description = Added to the table at 14:25
The next stage is to create the commands that the UpdateDataSet method will use to update the
target table in the database. The code declares three suitable SQL statements, and then builds the
commands and adds the requisite parameters to them. Note that each parameter may be applied to
multiple rows in the target table, so the actual value must be dynamically set based on the contents
of the DataSet row whose updates are currently being applied to the target table.
This means that you must specify, in addition to the parameter name and data type, the source
column name and the version (Current or Original) of the DataSet row from which to take the
value. For an INSERT
command, you need the current version of the row that contains the new values. For a DELETE
command, you need the original value of the ID to locate the row in the table that will be deleted.
For an UPDATE command, you need the original value of the ID to locate the row in the table that
will be updated, and the current version of the values with which to update the remaining columns
in the target table row.
string addSQL = "INSERT INTO Products (Name, Description) "
              + "VALUES (@name, @description);";
string updateSQL = "UPDATE Products SET Name = @name, "
                 + "Description = @description WHERE Id = @id";
string deleteSQL = "DELETE FROM Products WHERE Id = @id";

// Create the commands to update the original table in the database.
DbCommand insertCommand = defaultDB.GetSqlStringCommand(addSQL);
defaultDB.AddInParameter(insertCommand, "name", DbType.String, "Name",
                         DataRowVersion.Current);
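The example builds the remaining parameter and the update and delete commands in the same way; here is a sketch that follows the pattern above, using the column and parameter names from this example.
defaultDB.AddInParameter(insertCommand, "description", DbType.String,
                         "Description", DataRowVersion.Current);

DbCommand updateCommand = defaultDB.GetSqlStringCommand(updateSQL);
defaultDB.AddInParameter(updateCommand, "name", DbType.String, "Name",
                         DataRowVersion.Current);
defaultDB.AddInParameter(updateCommand, "description", DbType.String,
                         "Description", DataRowVersion.Current);
defaultDB.AddInParameter(updateCommand, "id", DbType.Int32, "Id",
                         DataRowVersion.Original);

DbCommand deleteCommand = defaultDB.GetSqlStringCommand(deleteSQL);
defaultDB.AddInParameter(deleteCommand, "id", DbType.Int32, "Id",
                         DataRowVersion.Original);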
Finally, you can apply the changes by calling the UpdateDataSet method, as shown here.
// Apply the updates in the DataSet to the original table in the database.
int rowsAffected = defaultDB.UpdateDataSet(loadedDS, "Products",
                                           insertCommand, updateCommand,
                                           deleteCommand,
                                           UpdateBehavior.Standard);
Console.WriteLine("Updated a total of {0} rows in the database.", rowsAffected);
The code captures and displays the number of rows affected by the updates. As expected, this is
three, as shown in the final section of the output from the example.
Updated a total of 3 rows in the database.
Managing Connections
For many years, developers have fretted about the ideal way to manage connections in data access
code. Connections are scarce, expensive in terms of resource usage, and can cause a big
performance hit if not managed correctly. You must obviously open a connection before you can
access data, and you should make sure it is closed after you have finished with it. However, if the
operating system actually created a new connection, and then closed and destroyed it every time,
execution in your applications would flow like molasses.
Instead, ADO.NET holds a pool of open connections that it hands out to applications that require
them. Data access code must still go through the motions of calling the methods to create, open,
and close connections, but ADO.NET automatically retrieves connections from the connection pool
when possible, and decides when and whether to actually close the underlying connection and
dispose it. The main issues arise when you have to decide when and how your code should call the
Close method. The Data Access block helps to resolve these issues by automatically managing
connections as far as is reasonably possible.
When you use the Data Access block to retrieve a DataSet, the ExecuteDataSet method
automatically opens and closes the connection to the database. If an error occurs, it will ensure that
the connection is closed. If you want to keep a connection open, perhaps to perform multiple
operations over that connection, you can access the ActiveConnection property of your
DbCommand object and open it before calling the ExecuteDataSet method. The ExecuteDataSet
method will leave the connection open when it completes, so you must ensure that your code closes
it afterwards.
In contrast, when you retrieve a DataReader or an XmlReader, the ExecuteReader method (or, in
the case of the XmlReader, the ExecuteXmlReader method) must leave the connection open so that
you can read the data. The ExecuteReader method sets the CommandBehavior property of the
reader to CloseConnection so that the connection is closed when you dispose the reader.
Commonly, you will use a using construct to ensure that the reader is disposed, as shown here:
using (IDataReader reader = db.ExecuteReader(cmd))
{
    // Use the reader here.
}
This code, and code later in this section, assumes you have created the Data Access block Database
instance named db and a DbCommand instance named cmd.
Typically, when you use the ExecuteXmlReader method, you will explicitly close the connection after
you dispose the reader. This is because the underlying XmlReader class does not expose a
CommandBehavior property. However, you should still use the same approach as with a
DataReader (a using statement) to ensure that the XmlReader is correctly closed and disposed.
using (XmlReader reader = db.ExecuteXmlReader(cmd))
{
    // Use the reader here.
}
Finally, if you want to be able to access the connection your code is using, perhaps to create
connection-based transactions in your code, you can use the Data Access block methods to explicitly
create a connection for your data access methods to use. This means that you must manage the
connection yourself, usually through a using statement as shown below, which automatically closes
and disposes the connection:
using (DbConnection conn = db.CreateConnection())
{
    conn.Open();
    try
    {
        // Perform data access here.
    }
    catch
    {
        // Handle any errors here.
    }
}
Transactions should follow the four ACID principles. These are Atomicity (all of the tasks of a
transaction are performed or none of them are), Consistency (the database remains in a consistent
state before and after the transaction), Isolation (other operations cannot access or see the data in
an intermediate state during a transaction), and Durability (the results of a successful transaction
are persisted and will survive system failure).
You can execute transactions when all of the updates occur in a single database by using the
features of your database system (by including the relevant commands such as BEGIN TRANSACTION
and ROLLBACK TRANSACTION in your stored procedures). ADO.NET also provides features that allow
you to perform connection-based transactions over a single connection. This allows you to perform
multiple actions on different tables in the same database, and manage the commit or rollback in
your data access code.
All of the methods of the Data Access block that retrieve or update data have overloads that accept
a reference to an existing transaction as a DbTransaction type. As an example of their use, the
following code explicitly creates a transaction over a connection. It assumes you have created the
Data Access block Database instance named db and two DbCommand instances named cmdA and
cmdB.
using (DbConnection conn = db.CreateConnection())
{
    conn.Open();
    DbTransaction trans = conn.BeginTransaction();
    try
    {
        // Execute commands, passing in the current transaction to each one.
        db.ExecuteNonQuery(cmdA, trans);
        db.ExecuteNonQuery(cmdB, trans);
        // Commit the transaction.
        trans.Commit();
    }
    catch
    {
        // Roll back the transaction.
        trans.Rollback();
    }
}
The examples for this chapter include one named Use a connection-based transaction, which
demonstrates the approach shown above. It starts by displaying the values of two rows in the
Products table, and then uses the ExecuteNonQuery method twice to update the Description
column of two rows in the database within the context of a connection-based transaction. As it does
so, it displays the new description for these rows. Finally, it rolls back the transaction, which restores
the original values, and then displays these values to prove that it worked.
Contents of rows before update:
Id = 53
Name = Half-Finger Gloves, L
Description = Full padding, improved finger flex, durable palm, adjustable
closure.
Id = 84
Name = Hitch Rack - 4-Bike
Description = Carries 4 bikes securely; steel construction, fits 2" receiver
hitch.
-------------------------------------------------------------------------------
Updated row with ID = 53 to 'Third and little fingers tend to get cold.'.
Updated row with ID = 84 to 'Bikes tend to fall off after a few miles.'.
-------------------------------------------------------------------------------
Contents of row after rolling back transaction:
Id = 53
Name = Half-Finger Gloves, L
Description = Full padding, improved finger flex, durable palm, adjustable
closure.
Id = 84
Name = Hitch Rack - 4-Bike
Description = Carries 4 bikes securely; steel construction, fits 2" receiver
hitch.
For more details about using a DTC and transaction scope, see "Distributed Transactions (ADO.NET)"
at http://msdn.microsoft.com/en-us/library/ms254973.aspx and "System.Transactions Integration
with SQL Server (ADO.NET)" at http://msdn.microsoft.com/en-us/library/ms172070.aspx.
The examples for this chapter contain one named Use a TransactionScope for a distributed
transaction, which demonstrates the use of a TransactionScope with the Data Access block. It
performs the same updates to the Products table in the database as you saw in the previous
example of using a connection-based transaction. However, there are subtle differences in the way
this example works.
In addition, as it uses the Windows Distributed Transaction Coordinator (DTC) service, you must
ensure that this service is running before you execute the example; depending on your operating
system it may not be set to start automatically. To start the service, open the Services MMC snap-in
from your Administrative Tools menu, right-click on the Distributed Transaction Coordinator service,
and click Start. To see the effects of the TransactionScope and the way that it promotes a
transaction, open the Component Services MMC snap-in from your Administrative Tools menu and
expand the Component Services node until you can see the Transaction List in the central pane of
the snap-in.
When you execute the example, it creates a new TransactionScope and executes the
ExecuteNonQuery method twice to update two rows in the database table. At this point, the code
stops until you press a key. This gives you the opportunity to confirm that there is no distributed
transaction, as you can see if you look in the transaction list in the Component Services MMC snap-in.
After you press a key, the application creates a new connection to the database (when we used a
connection-based transaction in the previous example, we just updated the parameter values and
executed the same commands over the same connection). This new connection, which is within the
scope of the existing TransactionScope instance, causes the DTC to start a new distributed
transaction and enroll the existing lightweight transaction into it, as shown in Figure 3.
Figure 3
Viewing DTC transactions
The code then waits until you press a key again, at which point it exits from the using clause that
created the TransactionScope, and the transaction is no longer in scope. As the code did not call the
Complete method of the TransactionScope to preserve the changes in the database, they are rolled
back automatically. To prove that this is the case, the code displays the values of the rows in the
database again. This is the complete output from the example.
Contents of rows before update:
Id = 53
Name = Half-Finger Gloves, L
Description = Full padding, improved finger flex, durable palm, adjustable
closure.
Id = 84
Name = Hitch Rack - 4-Bike
Description = Carries 4 bikes securely; steel construction, fits 2" receiver
hitch.
-------------------------------------------------------------------------------
Updated row with ID = 53 to 'Third and little fingers tend to get cold.'.
No distributed transaction. Press any key to continue...
Updated row with ID = 84 to 'Bikes tend to fall off after a few miles.'.
New distributed transaction created. Press any key to continue...
-------------------------------------------------------------------------------
Contents of row after disposing TransactionScope:
Id = 53
Name = Half-Finger Gloves, L
Description = Full padding, improved finger flex, durable palm, adjustable
closure.
Id = 84
Name = Hitch Rack - 4-Bike
Description = Carries 4 bikes securely; steel construction, fits 2" receiver
hitch.
This default behavior of the TransactionScope ensures that an error or problem that stops the code
from completing the transaction will automatically roll back changes. If your code does not seem to
be updating the database, make sure you remembered to call the Complete method!
Summary
This chapter discussed the Data Access Application Block, one of the most commonly used blocks in
Enterprise Library. The Data Access block provides two key advantages for developers and
administrators. Firstly, it abstracts the database so that developers and administrators can switch
the application from one type of database to another with only changes to the configuration files
required. Secondly, it helps developers by making it easier to write the most commonly used
sections of data access code with less effort, and it hides some of the complexity of working directly
with ADO.NET.
In terms of abstracting the database, the block allows developers to write code in such a way that
(for most functions) they do not need to worry which database (such as SQL Server, SQL Server CE,
or Oracle) their applications will use. They write the same code for all of them, and configure the
application to specify the actual database at run time. This means that administrators and
operations staff can change the targeted database without requiring changes to the code,
recompilation, retesting, and redeployment.
In terms of simplifying data access code, the block provides a small number of methods that
encompass most data access requirements, such as retrieving a DataSet, a DataReader, a scalar
(single) value, one or more values as output parameters, or a series of XML elements. It also
provides methods for updating a database from a DataSet, and integrates with the ADO.NET
TransactionScope class to allow a range of options for working with transactions. However, the
block does not limit your options to use more advanced ADO.NET techniques, as it allows you to
access the underlying objects such as the connection and the DataAdapter.
The chapter also described general issues such as managing connections and integration with
transactions, and explored the actual capabilities of the block in more depth. Finally, we looked
briefly at how you can use the block with other databases, including those supported by third-party
providers.
Assisting support staff in cross-referencing the exception and tracing the cause
So, having decided that you probably should implement some kind of structured exception handling
strategy in your code, how do you go about it? A good starting point, as usual, is to see if there are
any recommendations in the form of well-known patterns that you can implement. In this case,
there are. The primary pattern that helps you to build secure applications is called Exception
Shielding. Exception Shielding is the process of ensuring that your application does not leak sensitive
information, no matter what runtime or system event may occur to interrupt normal operation. And
on a more granular level, it can prevent your assets from being revealed across layer, tier, process,
or service boundaries.
Two more exception handling patterns that you should consider implementing are the Exception
Logging pattern and the Exception Translation pattern. The Exception Logging pattern can help you
diagnose and troubleshoot errors, audit user actions, and track malicious activity and security issues.
The Exception Translation pattern describes wrapping exceptions within other exceptions specific to
a layer to ensure that they actually reflect user or code actions within the layer at that time, and not
some miscellaneous details that may not be useful.
In this chapter, you will see how the Enterprise Library Exception Handling block can help you to
implement these patterns, and become familiar with the other techniques that make up a
comprehensive exception management strategy. You'll see how to replace, wrap, and log
exceptions; and how to modify exception messages to make them more useful. And, as a bonus,
you'll see how you can easily implement exception shielding for Windows Communication
Foundation (WCF) Web services.
B: Replace them
You can, of course, phone a friend or ask the audience if you think it will help. However, unlike most
quiz games, all of the answers are actually correct (which is why we don't offer prizes). If you
answered A, B, or C, you can move on to the section "About Exception Handling Policies." However,
if you answered D: Allow them to propagate, read the following section.
One or more exception handlers that the block will execute when a matching exception
occurs. You can choose from four out-of-the-box handlers: the Replace handler, the Wrap
handler, the Logging handler, and the Fault Contract exception handler. Alternatively, you
can create custom exception handlers and choose these (see "Extending your Exception
Handling" near the end of this chapter for more information).
A post-handling action value that specifies what happens after the Exception Handling block
executes the handlers you specify. Effectively, this setting tells the calling code whether to
continue executing. You can choose from:
NotifyRethrow (the default). Returns true to the calling code to indicate that it
should throw an exception, which may be the one that was actually caught or the
one generated by the policy.
ThrowNewException. The Exception Handling block will throw the exception that
results from executing all of the handlers.
None. Returns false to the calling code to indicate that it should continue executing.
Figure 1
Configuration of the MyTestExceptionPolicy exception handling policy
Notice how you can specify the properties for each type of exception handler. For example, in the
previous screenshot you can see that the Replace Handler has properties for the exception message
and the type of exception you want to use to replace the original exception. Also, notice that you
can localize your policy by specifying the name and type of the resource containing the localized
message string.
Replace the exception with a different one and throw the new exception. This is an
implementation of the Exception Shielding pattern. In your exception handling code, you
can clean up resources or perform any other relevant processing. You use a Replace handler
in your exception handling policy to replace the exception with a different exception
containing sanitized or new information that does not reveal sensitive details about the
source of the error, the application, or the operating system. Add a Logging handler to the
exception policy if you want to log the exception. Place it before the Replace handler to log
the original exception, or after it to log the replacement exception (if you log sensitive
information, make sure your log files are properly secured). Set the post-handling action to
ThrowNewException so that the block will throw the new exception.
Wrap the exception to preserve the content and then throw the new exception. This is an
implementation of the Exception Translation pattern. In your exception handling code, you
can clean up resources or perform any other relevant processing. You use a Wrap handler in
your exception-handling policy to wrap the exception within another exception that is more
relevant to the caller and then throw the new exception so that code higher in the code
stack can handle it. This approach is useful when you want to keep the original exception
and its information intact, and/or provide additional information to the code that will
handle the exception. Add a Logging handler to the exception policy if you want to log the
exception. Place it before the Wrap handler to log the original exception, or after it to log
the enclosing exception. Set the post-handling action to ThrowNewException so that the
block will throw the new exception.
Log and, optionally, re-throw the original exception. In your exception handling code, you
can clean up resources or perform any other relevant processing. You use a Logging handler
in your exception handling policy to write details of the exception to the configured logging
store such as Windows Event Log or a file (an implementation of the Exception Logging
pattern). If the exception does not require any further action by code elsewhere in the
application (for example, if a retry action succeeds), set the post-handling action to None.
Otherwise, set the post-handling action to NotifyRethrow. Your event handler code can
then decide whether to throw the exception. Alternatively, you can set it to
ThrowNewException if you always want the Exception Handling block to throw the
exception for you.
Remember that the whole idea of using the Exception Handling block is to implement a strategy
made up of configurable policies that you can change without having to edit, recompile, and
redeploy the application. For example, the block allows you (or an administrator) to:
Add, remove, and change the types of handlers (such as the Wrap, Replace, and Logging
handlers) that you use for each exception policy, and change the order in which they
execute.
Add, remove, and change the exception types that each policy will handle, and the types
of exceptions used to wrap or replace the original exceptions.
Modify the target and style of logging, including modifying the log messages, for each type
of exception you decide to log. This is useful, for example, when testing and debugging
applications.
Decide what to do after the block handles the exception. Provided that the exception-handling code you write checks the return value from the call to the Exception Handling
block, the post-handling action will allow you or an administrator to specify whether the
exception should be thrown. Again, this is extremely useful when testing and debugging
applications.
Process or HandleException?
The Exception Handling block provides two ways for you to manage exceptions. You can use the
Process method to execute any method in your application, and have the block automatically
perform management and throwing of the exception. Alternatively, if you want to apply more
granular control over the process, you can use the HandleException method. The following will help
you to understand which approach to choose.
The Process method is the most common approach, and is useful in the majority of cases.
You specify either a delegate (the address of a method) or a lambda expression that you
want to execute. The Exception Handling block executes the method or expression, and
automatically manages any exception that occurs. You will generally specify a
PostHandlingAction of ThrowNewException so that the block automatically throws the
exception that results from executing the exception handling policy. However, if you want
the code to continue to execute (instead of throwing the exception), you can set the
PostHandlingAction of your exception handling policy to None.
The HandleException method is useful if you want to be able to detect the result of
executing the exception handling policy. For example, if you set the PostHandlingAction of a
policy to NotifyRethrow, you can use the return value of the HandleException method to
determine whether or not to throw the exception. You can also use the HandleException
method to pass an exception to the block and have it return the exception that results from
executing the policy, which might be the original exception, a replacement exception, or
the original exception wrapped inside a new exception.
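The following is a minimal sketch of this pattern, assuming an ExceptionManager instance named exManager (resolved as shown later in this chapter) and an exception handling policy named "NotifyingRethrow"; the policy name is illustrative.
try
{
    // Code that may throw goes here...
}
catch (Exception ex)
{
    Exception exceptionToThrow;
    // Returns true when the policy indicates the exception should be thrown.
    if (exManager.HandleException(ex, "NotifyingRethrow", out exceptionToThrow))
    {
        if (exceptionToThrow == null)
        {
            throw;                     // rethrow the original exception
        }
        else
        {
            throw exceptionToThrow;    // throw the exception the policy returned
        }
    }
}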
You will see both the Process and the HandleException techniques described in the following
examples, although most of them use the Process method.
Using the Process Method
The Process method has several overloads that make it easy to execute functions that return a
value, and methods that do not. Typically, you will use the Process method in one of the following
ways:
To execute a routine or method that does not accept parameters and does not return a
value:
exManager.Process(method_name, "Exception Policy Name");
To execute a routine that does accept parameters but does not return a value:
exManager.Process(() => method_name(param1, param2),
"Exception Policy Name");
To execute a routine that accepts parameters and returns a value, and to also supply a
default value to be returned should an exception occur and the policy that executes does
not throw the exception. If you do not specify a default value and the PostHandlingAction is
set to None, the Process method will return null for reference types, zero for numeric types,
or the default empty value for other types should an exception occur.
var result = exManager.Process(() => method_name(param1, param2),
                               default_result_value,
                               "Exception Policy Name");
The Process method is optimized for use with lambda expressions, which are supported in C# 3.0
and version 3.5 of the .NET Framework, and in Microsoft Visual Studio 2008 onwards. If you are
not familiar with lambda expressions or their syntax, see http://msdn.microsoft.com/en-us/library/bb397687.aspx. For a full explanation of using the HandleException method, see the
"Key Scenarios" topic in the online documentation for Enterprise Library 4.1 at
http://msdn.microsoft.com/en-us/library/dd203198.aspx.
To see how you can apply exception handling strategies and configure exception handling policies,
we'll start with a simple example that causes an exception when it executes. First, we need a class
that contains a method that we can call from our main routine, such as the following in the
SalaryCalculator class of the example application.
public Decimal GetWeeklySalary(string employeeId, int weeks)
{
    String connString = string.Empty;
    String employeeName = String.Empty;
    Decimal salary = 0;
    try
    {
        connString = ConfigurationManager.ConnectionStrings
                         ["EmployeeDatabase"].ConnectionString;
        // Access database to get salary for employee here...
        // In this example, just assume it's some large number.
        employeeName = "John Smith";
        salary = 1000000;
        return salary / weeks;
    }
    catch (Exception ex)
    {
        // Provide error information for debugging.
        string template = "Error calculating salary for {0}."
                        + " Salary: {1}. Weeks: {2}\n"
                        + "Connection: {3}\n{4}";
        Exception informationException = new Exception(
            string.Format(template, employeeName, salary, weeks,
                          connString, ex.Message));
        throw informationException;
    }
}
You can see that a call to the GetWeeklySalary method will cause an exception of type
DivideByZeroException when called with a value of zero for the number of weeks parameter. The
exception message contains the values of the variables used in the calculation, and other
information useful to administrators when debugging the application. Unfortunately, the current
code has several issues. It trashes the original exception and loses the stack trace, preventing
meaningful debugging. Even worse, the global exception handler for the application presents any
user of the application with all of the sensitive information when an error occurs.
If you run the example for this chapter, and select option Typical Default Behavior without Exception
Shielding, you will see this result generated by the code in the catch statement:
Exception type System.Exception was thrown.
Message: 'Error calculating salary for John Smith.
Salary: 1000000. Weeks: 0
Connection: Database=Employees;Server=CorpHQ;
User ID=admin;Password=2g$tXD76qr Attempted to divide by zero.'
Source: 'ExceptionHandlingExample'
No inner exception
Wrapping an Exception
If you want to retain the original exception and the information it contains, you can wrap the
exception in another exception and specify a sanitized user-friendly error message for the containing
exception. This is the error message that the global error handler will display. However, code
elsewhere in the application (such as code in a calling layer that needs to access and log the
exception details) can access the contained exception and retrieve the information it requires before
passing the exception on to another layer or to the global exception handler. This intermediate code
could alternatively remove the contained exception, or use an Exception Handling block policy to
replace it at that point in the application.
Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.dll
Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.WCF.dll
Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.Logging.dll
Microsoft.Practices.EnterpriseLibrary.Logging.dll
If you are only wrapping and replacing exceptions in your application but not logging them, you
don't need to add the assemblies and references for logging. If you are not using the block to shield
WCF services, you don't need to add the assemblies and references for WCF.
To make it easier to use the objects in the Exception Handling block, you can add using statements
for the relevant namespaces to your project.
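For example, the following using statements cover the core exception handling types and the container used to resolve them; add the logging namespace as well if you use the Logging handler.
using Microsoft.Practices.EnterpriseLibrary.Common.Configuration;
using Microsoft.Practices.EnterpriseLibrary.ExceptionHandling;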
Now you can resolve an instance of the ExceptionManager class you'll use to perform exception
management. You can use the dependency injection approach described in Chapter 1,
"Introduction" and Appendices A and B, or the GetInstance method. This example uses the simple
GetInstance approach.
// Global variable to store the ExceptionManager instance.
ExceptionManager exManager;
// Resolve the default ExceptionManager object from the container.
exManager = EnterpriseLibraryContainer.Current.GetInstance<ExceptionManager>();
The body of your logic is placed inside a lambda function and passed to the Process method. If an
exception occurs during the execution of the expression, it is caught and handled according to the
configured policy. The name of the policy to execute is specified in the second parameter of the
Process method.
Alternatively, you can use the Process method in your main code to call the method of your class.
This is a useful approach if you want to perform exception shielding at the boundary of other classes
or objects. If you do not need to return a value from the function or routine you execute, you can
create any instance you need and work with it inside the lambda expression, as shown here.
exManager.Process(() =>
    {
        SalaryCalculator calc = new SalaryCalculator();
        Console.WriteLine("Result is: {0}", calc.GetWeeklySalary("jsmith", 0));
    },
    "ExceptionShielding");
If you want to be able to return a value from the method or routine, you can use the overload of the
Process method that returns the lambda expression value, like this.
SalaryCalculator calc = new SalaryCalculator();
var result = exManager.Process(() => calc.GetWeeklySalary("jsmith", 0),
                               "ExceptionShielding");
Console.WriteLine("Result is: {0}", result);
Notice that this approach creates the instance of the SalaryCalculator class outside of the Process
method, and therefore it will not pass any exception that occurs in the constructor of that class to
the exception handling policy. But when any other error occurs, the global application exception
handler sees the wrapped exception instead of the original informational exception. If you run the
example Behavior After Applying Exception Shielding with a Wrap Handler, the catch section now
displays the following. You can see that the original exception is hidden in the Inner Exception, and
the exception that wraps it contains the generic error message.
Exception type System.Exception was thrown.
Message: 'Application Error. Please contact your administrator.'
Source: 'Microsoft.Practices.EnterpriseLibrary.ExceptionHandling'
Inner Exception: System.Exception: Error calculating salary for John Smith.
Salary: 1000000. Weeks: 0
Connection: Database=Employees;Server=CorpHQ;User ID=admin;Password=2g$tXD76qr
Attempted to divide by zero.
at ExceptionHandlingExample.SalaryCalculator.GetWeeklySalary(String employeeI
d, Int32 weeks) in ...\ExceptionHandling\ExceptionHandling\SalaryCalculator.cs:
line 34
at ExceptionHandlingExample.Program.<WithWrapExceptionShielding>b__0() in
...\ExceptionHandling\ExceptionHandling\Program.cs:line 109
at Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.ExceptionManagerIm
pl.Process(Action action, String policyName)
This means that developers and administrators can examine the wrapped (inner) exception to get
more information. However, bear in mind that the sensitive information is still available in the
exception, which could lead to an information leak if the exception propagates beyond your secure
perimeter. While this approach may be suitable for highly technical, specific errors, for complete
security and exception shielding, you should use the technique shown in the next section to replace
the exception with one that does not contain any sensitive information.
For simplicity, this example shows the principles of exception shielding at the level of the UI view.
The business functionality it uses may be in the same layer, in a separate business layer, or even on
a separate physical tier. Remember that you should design and implement an exception handling
strategy for individual layers or tiers in order to shield exceptions on the layer or service
boundaries.
Replacing an Exception
Having seen how easy it is to use exception handling policies, we'll now look at how you can
implement exception shielding by replacing an exception with a different exception. This approach is
also useful if you need to perform cleanup operations in your code, and then use the exception to
expose only what is relevant. To configure this scenario, simply create a policy in the same way as
the previous example, but with a Replace handler instead of a Wrap handler, as shown in Figure 3.
Figure 3
Configuring a Replace handler
When you call the method that generates an exception, you see the same generic exception
message as in the previous example. However, there is no inner exception this time. If you run the
example Behavior After Applying Exception Shielding with a Replace Handler, the Exception Handling
block replaces the original exception with the new one specified in the exception handling policy.
This is the result:
Exception type System.Exception was thrown.
Message: 'Application Error. Please contact your administrator.'
Source: 'Microsoft.Practices.EnterpriseLibrary.ExceptionHandling'
No Inner Exception
Logging an Exception
The previous section shows how you can perform exception shielding by replacing an exception with
a new sanitized version. However, you now lose all the valuable debugging and testing information
that was available in the original exception. Of course, the Librarian (remember him?) realized that
you would need to retain this information and make it available in some way when implementing
the Exception Shielding pattern. You preserve this information by chaining exception handlers within
your exception handling policy. In other words, you add a Logging handler to the policy.
That doesn't mean that the Logging handler is only useful as part of a chain of handlers. If you only
want to log details of an exception (and then throw it or ignore it, depending on the requirements of
the application), you can define a policy that contains just a Logging handler. However, in most
cases, you will use a Logging handler with other handlers that wrap or replace exceptions.
Figure 4 shows what happens when you add a Logging handler to your exception handling policy.
The configuration tool automatically adds the Logging Application block to the configuration with a
set of default properties that will write log entries to the Windows Application Event Log. You do,
however, need to set a few properties of the Logging exception handler in the Exception Handling
Settings section:
Specify the ID for the log event your code will generate as the Event ID property.
Specify the TextExceptionFormatter as the type of formatter the Exception Handling block
will use. Click the ellipsis (...) button in the Formatter Type property and select
TextExceptionFormatter in the type selector dialog that appears.
Set the category for the log event. The Logging block contains a default category named
General, and this is the default for the Logging exception handler. However, if you configure
other categories for the Logging block, you can select one of these from the drop-down list
that is available when you click on the Logging Category property of the Logging handler.
Figure 4
Adding a logging handler
The configuration tool adds new exception handlers to the end of the handler chain by default.
However, you will obviously want to log the details of the original exception rather than the new
exception that replaces it. You can right-click on the Logging handler and use the shortcut menu to
move it up to the first position in the chain of handlers if required.
In addition, if you did not already do so, you must add a reference to the Logging Application block
assembly to your project and (optionally) add a using statement to your class, as shown here.
using Microsoft.Practices.EnterpriseLibrary.Logging;
Now, when the application causes an exception, the global exception handler continues to display
the same sanitized error message. However, the Logging handler captures details of the original
exception before the Exception Handling block policy replaces it, and writes the details to whichever
logging sink you specify in the configuration for the Logging block. The default in this example is
Windows Application Event Log. If you run the example Logging an Exception to Preserve the
Information it Contains, you will see an exception like the one in Figure 5.
Figure 5
Details of the logged exception
This example shows the Exception Handling block using the default settings for the Logging block.
However, as you can see in Chapter 4, "As Easy As Falling Off a Log," the Logging block is extremely
configurable. So you can arrange for the Logging handler in your exception handling policy to write
the information to any Windows Event Log, an e-mail message, a database, a message queue, a text
file, a Windows Management Instrumentation (WMI) event, or a custom location using classes you
create that take advantage of the application block extension points.
Notice that we specified Property Mappings for the handler that map the Message property of the
exception generated within the service to the FaultMessage property of the SalaryCalculationFault
class, and map the unique Handling Instance ID of the exception (specified by setting the Source to
"{Guid}") to the FaultID property, as shown in Figure 6.
You can now call the Process method of the ExceptionManager class from code in your service in
exactly the same way as shown in the previous examples of wrapping and replacing exceptions in a
Windows Forms application. Alternatively, you can add attributes to the methods in your service
class to specify the policy they should use when an exception occurs, as shown in this code:
[ServiceContract]
public interface ISalaryService
{
    [OperationContract]
    [FaultContract(typeof(SalaryCalculationFault))]
    decimal GetWeeklySalary(string employeeId, int weeks);
}

[ExceptionShielding("SalaryServicePolicy")]
public class SalaryService : ISalaryService
{
    public decimal GetWeeklySalary(string employeeId, int weeks)
    {
        SalaryCalculator calc = new SalaryCalculator();
        return calc.GetWeeklySalary(employeeId, weeks);
    }
}
You add the ExceptionShielding attribute to a service implementation class or to a service contract
interface, and use it to specify the name of the exception policy to use. If you do not specify the
name of a policy in the constructor parameter, or if the specified policy is not defined in the
configuration, the Exception Handling block will automatically look for a policy named WCF
Exception Shielding.
When an exception occurs in the service, the Fault Contract exception handler does the following:
It generates a new instance of the fault contract class you specify for the FaultContractType
property of the Fault Contract exception handler.
It extracts the values from the properties of the exception that you pass to the method.
It sets the values of the new fault contract instance to the values extracted from the original
exception. It uses mappings between the exception property names and the names of the
properties exposed by the fault contract to assign the exception values to the appropriate
properties. If you do not specify a mapping, it matches the source and target properties that
have the same name.
The result is that, instead of a general service failure message, the client receives a fault message
containing the appropriate information about the exception.
The example Applying Exception Shielding at WCF Application Boundaries uses the service described
above and the Exception Handling block WCF Fault Contract handler to demonstrate exception
shielding. You can run this example in one of three ways:
- Inside Visual Studio, by starting it with F5 (debugging mode) and then pressing F5 again when the debugger halts at the exception in the SalaryCalculator class.
- By starting the SalaryService in Visual Studio (as described in the previous bullet) and then running the executable file ExceptionHandlingExample.exe in the bin\debug folder directly.
The result is shown below. You can see that the exception raised by the SalaryCalculator class
causes the service to return an instance of the SalaryCalculationFault type that contains the fault ID
and fault message. However, the Exception Handling block captures this exception and replaces the
sensitive information in the message with text that suggests the user contact their administrator.
Getting salary for 'jsmith' from WCF Salary Service...
Exception type System.ServiceModel.FaultException`1[ExceptionHandlingExample.Sal
aryService.SalaryCalculationFault] was thrown.
Message: 'Service error. Please contact your administrator.'
Source: 'mscorlib'
No Inner Exception
Fault contract detail:
Fault ID: bafb7ec2-ed05-4036-b4d5-56d6af9046a5
Message: Error calculating salary for John Smith. Salary: 1000000. Weeks: 0
Connection: Database=Employees;Server=CorpHQ;User ID=admin;Password=2g$tXD76qr
Attempted to divide by zero.
You can also see, below the details of the exception, the contents of the original fault contract,
which are obtained by casting the exception to the type FaultException<SalaryCalculationFault> and
querying the properties. You can see that this contains the original exception message generated
within the service. Look at the code in the example file, and run it, to see more details.
The advantage of this capability should be obvious. You can create policies that will handle different
types of exceptions in different ways and, for each exception type, can have different messages and
post-handling actions as well as different handler combinations. And, best of all, administrators can
modify the policies post deployment to change the behavior of the exception handling as required.
They can add new exception types, modify the types specified, change the properties for each
exception type and the associated handlers, and generally fine-tune the strategy to suit day-to-day
operational requirements.
Of course, this will only work if your application code throws the appropriate exception types. If you
generate informational exceptions that are all of the base type Exception, as we did in earlier
examples in this chapter, only the handlers for that exception type will execute.
However, as you saw earlier in this chapter, the Process method does not allow you to detect the
return value from the exception handling policy executed by the Exception Handling block (it returns
the value of the method or function it executes). In some cases, though perhaps rarely, you may
want to detect the return value from the exception handling policy and perform some processing
based on this value, and perhaps even capture the exception returned by the Exception Handling
block to manipulate it or decide whether or not to throw it in your code.
In this case, you can use the HandleException method to pass an exception to the block as an out
parameter to be populated by the policy, and retrieve the Boolean result that indicates if the policy
determined that the exception should be thrown or ignored.
The example Executing Custom Code Before and After Handling an Exception, demonstrates this
approach. The SalaryCalculator class contains two methods in addition to the GetWeeklySalary
method we've used so far in this chapter. These two methods, named RaiseDivideByZeroException
and RaiseArgumentOutOfRangeException, will cause an exception of the type indicated by the
method name when called.
The sample first attempts to execute the RaiseDivideByZeroException method, like this.
SalaryCalculator calc = new SalaryCalculator();
Console.WriteLine("Result is: {0}", calc.RaiseDivideByZeroException("jsmith", 0));
This exception is caught in the main routine using the exception handling code shown below. This
creates a new Exception instance and passes it to the Exception Handling block as the out
parameter, specifying that the block should use the NotifyingRethrow policy. This policy specifies
that the block should log DivideByZero exceptions, and replace the message with a sanitized one.
However, it also has the PostHandlingAction set to None, which means that the HandleException
method will return false. The sample code simply displays a message and continues.
...
catch (Exception ex)
{
  Exception newException;
  bool rethrow = exManager.HandleException(ex, "NotifyingRethrow",
                                           out newException);
  if (rethrow)
  {
    // Exception policy setting is "ThrowNewException".
    // Code here to perform any clean up tasks required.
    // Then throw the exception returned by the exception handling policy.
    throw newException;
  }
  else
  {
    // Exception policy setting is "None" so exception is not thrown.
    // Code here to perform any other processing required.
    // In this example, just ignore the exception and do nothing.
    Console.WriteLine("Detected and ignored Divide By Zero Error "
                      + "- no value returned.");
  }
}
Therefore, when you execute this sample, the following message is displayed.
Getting salary for 'jsmith' ... this will raise a DivideByZero exception.
Detected and ignored Divide By Zero Error - no value returned.
This section of the sample also contains a catch section that is identical to the one shown earlier,
apart from the message displayed on the screen. However, the NotifyingRethrow policy specifies that
exceptions of type Exception (or any exceptions that are not of type DivideByZeroException) should
simply be wrapped in a new exception that has a sanitized error message. The PostHandlingAction
for the Exception type is set to ThrowNewException, which means that the HandleException
method will return true. Therefore the code in the catch block will throw the exception returned
from the block, resulting in the output shown here.
Getting salary for 'jsmith' ... this will raise an ArgumentOutOfRange exception.
Exception type System.Exception was thrown.
Message: 'An application error has occurred.'
Source: 'ExceptionHandlingExample'
Inner Exception: System.ArgumentOutOfRangeException: startIndex cannot be larger
than length of string.
Parameter name: startIndex
at System.String.InternalSubStringWithChecks(Int32 startIndex, Int32 length,
Boolean fAlwaysCopy)
at System.String.Substring(Int32 startIndex, Int32 length)
at ExceptionHandlingExample.SalaryCalculator.RaiseArgumentOutOfRangeException
(String employeeId, Int32 weeks) in ...\ExceptionHandling\ExceptionHandling\Sala
ryCalculator.cs:line 57
at ExceptionHandlingExample.Program.ExecutingCodeAroundException(Int32 positi
onInTitleArray) in ...\ExceptionHandling\ExceptionHandling\Program.cs:line 222
Assisting Administrators
Some would say that the Exception Handling block already does plenty to make an administrator's
life easy. However, it also contains features that allow you to exert extra control over the way that
exception information is made available, and the way that it can be used by administrators and
operations staff. If you have ever worked in a technical support role, you'll recognize the scenario. A
user calls to tell you that an error has occurred in the application. If you are lucky, the user will be
able to tell you exactly what they were doing at the time, and the exact text of the error message.
More likely, they will tell you that they weren't really doing anything, and that the message said
something about contacting the administrator.
To resolve this regularly occurring problem, you can make use of the HandlingInstanceID value
generated by the block to associate logged exception details with specific exceptions, and with
related exceptions. The Exception Handling block creates a unique GUID value for the
HandlingInstanceID of every execution of a policy. The value is available to all of the handlers in the
policy while that policy is executing. The Logging handler automatically writes the
HandlingInstanceID value into every log message it creates. The Wrap and Replace handlers can
access the HandlingInstanceID value and include it in a message using the special token
{handlingInstanceID}.
Figure 8 shows how you can configure a Logging handler and a Replace handler in a policy, and
include the {handlingInstanceID} token in the Exception Message property of the Replace handler.
Figure 8
Configuring a unique exception handling instance identifier
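For example, the Exception Message property of the Replace handler could be set to a template such as the following, which produces the sanitized message shown in the output later in this section:

Application error. Please advise your administrator and provide them
with this error code: {handlingInstanceID}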
Now your application can display the unique exception identifier to the user, and they can pass it to
the administrator who can use it to identify the matching logged exception information. This logged
information will include the information from the original exception, before the Replace handler
replaced it with the sanitized exception. If you select the option Providing Assistance to
Administrators for Locating Exception Details in the example application, you can see this in
operation. The example displays the following details of the exception returned from the exception
handling policy:
Exception type System.Exception was thrown.
Message: 'Application error. Please advise your administrator and provide them
with this error code: 22f759d3-8f58-43dc-9adc-93b953a4f733'
Source: 'Microsoft.Practices.EnterpriseLibrary.ExceptionHandling'
No Inner Exception
In a production application, you will probably show this message in a dialog of some type. One issue,
however, is that users may not copy the GUID correctly from a standard error dialog (such as a
message box). If you decide to use the HandlingInstanceID value to assist administrators, consider
using a form containing a read-only text box or an error page in a Web application to display the
GUID value in a way that allows users to copy it to the clipboard and paste it into a document or e-mail
message. Figure 9 shows a simple Windows Form displayed as a modal dialog. It contains a read-only
TextBox control that displays the Message property of the exception, which contains the
HandlingInstanceID GUID value.
Figure 9
Displaying and correlating the handling instance identifier
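A minimal sketch of such a dialog is shown below. The form and control arrangement are assumptions for illustration; they are not part of the example application.

using System;
using System.Windows.Forms;

public class ErrorDialog : Form
{
  public ErrorDialog(Exception ex)
  {
    Text = "Application Error";
    Width = 450;
    Height = 200;
    // A read-only, multiline TextBox lets the user select the message
    // (including the handling instance GUID) and copy it to the clipboard.
    TextBox messageBox = new TextBox
    {
      Multiline = true,
      ReadOnly = true,
      Dock = DockStyle.Fill,
      Text = ex.Message
    };
    Controls.Add(messageBox);
  }
}

// Usage: display the dialog modally after the policy returns the exception.
// using (ErrorDialog dialog = new ErrorDialog(newException))
// {
//   dialog.ShowDialog();
// }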
To change the way that the block formats exception information, you can create a
new class that derives from the ExceptionFormatter base class in the Exception Handling block, and
override the methods it contains to format the exception information as required.
Summary
In this chapter you have seen why, when, and how you can use the Enterprise Library Exception
Handling block to create and implement exception handling strategies. Poor error handling can make
your application difficult to manage and maintain, hard to debug and test, and may allow it to
expose sensitive information that would be useful to attackers and malicious users.
A good practice for exception management is to implement strategies that provide a controlled and
decoupled approach to exception handling through configurable policies. The Exception Handling
block makes it easy to implement such strategies for your applications, irrespective of their type and
complexity. You can use the Exception Handling block in Web and Windows Forms applications, Web
services, console-based applications and utilities, and even in administration scripts and applications
hosted in environments such as SharePoint, Microsoft Office applications, other enterprise
systems.
This chapter demonstrated how you can implement common exception handling patterns, such as
Exception Shielding, using techniques such as wrapping, replacing, and logging exceptions. It also
demonstrated how you can handle different types of exceptions, assist administrators by using
unique exception identifiers, and extend the Exception Handling block to perform tasks that are
specific to your own requirements.
meets your requirements, you can create a provider that sends the log entry to any other custom
location or executes some other action.
In your application, you simply generate a log entry using a suitable logging object, such as the
LogWriter class, and then call a method to write the information it contains to the logging system.
The Logging block routes the log message through any filters you define in your configuration, and
on to the listeners that you configure. Each listener defines the target of the log entry, such as
Windows Event Log or an e-mail message, and uses a formatter to generate suitably formatted
content for that logging target.
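In its simplest form, that code can look like the following sketch, which resolves the default LogWriter from the Enterprise Library container in the same way as the examples later in this chapter:

using Microsoft.Practices.EnterpriseLibrary.Common.Configuration;
using Microsoft.Practices.EnterpriseLibrary.Logging;

// Resolve the default LogWriter from the Enterprise Library container.
LogWriter defaultWriter
    = EnterpriseLibraryContainer.Current.GetInstance<LogWriter>();

// Write a simple entry; filtering, routing, and formatting are all
// driven by the Logging block configuration, not by this code.
defaultWriter.Write("Application started.");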
You can see from this that there are many objects involved in this multi-step process, and it is
important to understand how they interact and how the log message flows through the pipeline of
processes. Figure 1 shows the overall process in more detail, and provides an explanation of each
stage.
Figure 1
An overview of the logging process and the objects in the Logging block
Stage 1: The user creates a LogWriter instance, uses it to create a new LogEntry, and passes it to the Logging block for processing. Alternatively, the user can create a new LogEntry explicitly, populate it with the required information, and use a LogWriter to pass it to the Logging block for processing.

Stage 2: The Logging block filters the LogEntry (based on your configuration settings) for message priority, or categories you added to the LogEntry when you created it. It also checks to see if logging is enabled. These filters can prevent any further processing of the log entries. This is useful, for example, when you want to allow administrators to enable and disable additional debug information logging without requiring them to restart the application.

Stage 3: Trace sources act as the link between the log entries and the log targets. There is a trace source for each category you define in the Logging block configuration; plus, there are three built-in trace sources that capture all log entries, unprocessed entries that do not match any category, and entries that cannot be processed due to an error while logging (such as an error while writing to the target log).

Stage 4: Each trace source has one or more trace listeners defined. These listeners are responsible for taking the log entry, passing it through a separate log formatter that translates the content into a suitable format, and passing it to the target log. Several trace listeners are provided with the block, and you can create your own if required.

Stage 5: Each trace listener can use a log formatter to format the information contained in the log entry. The block contains log message formatters, and you can create your own formatter if required. The text formatter uses a template containing placeholders that makes it easy to generate the required format for log entries.
Logging Categories
Categories allow you to specify the target(s) for log entries processed by the block. You can define
categories that relate to one or more targets. For example, you might create a category named
General containing trace listeners that write to text files and XML files, and a category named
Auditing for administrative information that is configured to use trace listeners that write to one or
more databases. Then you can assign a log entry to one or more categories, effectively mapping it to
multiple targets. The three log sources shown in the schematic in Figure 1 (all events log source, not
processed log source, and errors log source) are themselves categories for which you can define
trace listeners.
Logging is an added-value service for applications, and so any failures in the logging process must
be handled gracefully without raising an exception to the main business processes. The Logging
block achieves this by sending all logging failures to a special category (the errors log source) which
is named Logging Errors & Warnings. By default, these error messages are written to Windows
Event Log, though you can configure this category to write to other targets using different trace
listeners if you wish.
minimize performance impact. However, you should be aware of this impact, and consider how your
own logging strategy will affect it. For example, a complex configuration that writes log entries to
multiple logs and uses multiple filters is likely to have more impact than simple configurations. You
must balance your requirements for logging against performance and scalability needs.
To maximize performance, the LogWriter class by default exposes properties only for the commonly
required information. This includes the event ID, message, priority, and categories you specify in the
configuration. The LogWriter also automatically collects some context information such as the time,
the application domain, the machine name, and the process ID, using cached values where possible
in order to minimize the performance impact.
However, collecting additional context information can be expensive in processing terms and, if you
are not going to use the information, wastes precious resources and may affect performance.
Therefore, the Logging block only collects other less commonly used information from the
environment, which you might require only occasionally, if you specify that you want this
information when you create the LogEntry instance. Four classes within the Logging block can collect
specific sets of context information that you can add to your log entry. This includes COM+
diagnostic information, the current stack trace, the security-related information from the managed
runtime, and security-related information from the operating system. There is also a dictionary
property for the log entry where you can add any additional custom information you require and
want to appear in your logs.
The easiest way to learn about how the Logging block configuration works is to run the configuration
tool yourself and open the App.config file from the example application. You can expand each of the
sections to see the property settings, and to relate each item to the others.
To use the Logging block in your code, you must reference the following assemblies in your project:
Microsoft.Practices.EnterpriseLibrary.Logging.dll
Microsoft.Practices.EnterpriseLibrary.Logging.Database.dll
Microsoft.Practices.EnterpriseLibrary.Data.dll
However, if you do not intend to send log entries to a database, you will not require the last two
assemblies on this list.
Now you are ready to write some code.
Now you can call the Write method and pass in any parameter values you require. There are many
overloads of the Write method. They allow you to specify the message text, the category, the
priority (a numeric value), the event ID, the severity (a value from the TraceEventType
enumeration), and a title for the event. There is also an overload that allows you to add custom
values to the log entry by populating a Dictionary with name and value pairs (you will see this used
in a later example). Our example code uses several of these overloads. We've removed some of the
Console.WriteLine statements from the code listed here to make it easier to see what it actually
does.
// Check if logging is enabled before creating log entries.
if (defaultWriter.IsLoggingEnabled())
{
  defaultWriter.Write("Log entry created using the simplest overload.");
  defaultWriter.Write("Log entry with a single category.", "General");
  defaultWriter.Write("Log entry with a category, priority, and event ID.",
                      "General", 6, 9001);
  // The last two calls were truncated at a page break in the original;
  // these parameter values are illustrative reconstructions based on the
  // output messages shown below.
  defaultWriter.Write("Log entry with a category, priority, event ID, "
                      + "and severity.", "General", 5, 9002,
                      TraceEventType.Warning);
  defaultWriter.Write("Log entry with a category, priority, event ID, "
                      + "severity, and title.", "General", 8, 9003,
                      TraceEventType.Warning, "Logging Block Examples");
}
Notice how the code first checks to see if logging is enabled. There is no point using valuable
processor cycles and memory generating log entries if they aren't going anywhere. The Filters
section of the Logging block configuration can contain a special filter named the Log Enabled Filter
(we have configured one in our example application). This filter has the single property, Enabled,
that allows administrators to enable and disable all logging for the block. When it is set to False, the
IsLoggingEnabled property of the LogWriter will return false as well.
The example produces the following result. All of the events are sent to the General category, which
is configured to write events to the Windows Application Event Log (this is the default configuration
for the block).
Created a Log Entry using the simplest overload.
Created a Log Entry with a single category.
Created a Log Entry with a category, priority, and event ID.
Created a Log Entry with a category, priority, event ID, and severity.
Created a Log Entry with a category, priority, event ID, severity, and title.
Open Windows Event Viewer 'Application' Log to see the results.
You can open Windows Event Viewer to see the results. Figure 3 shows the event generated by the
last of the Write statements in this example.
Figure 3
The logged event
If you do not specify a value for one of the parameters of the Write method, the Logging block uses
the default value for that parameter. The defaults are Category = General, Priority = -1, Event ID =
1, Severity = Information, and an empty string for Title.
About Logging Categories
Categories are the way that Enterprise Library routes events sent to the block to the appropriate
target, such as a database, the event log, an e-mail message, and more. The previous example makes
use of the default configuration for the Logging block. When you add the Logging block to your
application configuration using the Enterprise Library configuration tools, it contains the single
category named General that is configured to write events to the Windows Application Event Log.
You can change the behavior of logging for any category. For example, you can change the behavior
of the previous example by reconfiguring the event log trace listener specified for the General
category, or by reconfiguring the text formatter that this trace listener uses. You can change the
event log to which the event log trace listener sends events; edit the template used by the text
formatter; or add other trace listeners.
However, it's likely that your application will need to perform different types of logging for different
tasks. The typical way to achieve this is to define additional categories, and then specify the type of
trace listener you need for each category. For example, you may want to send audit information to a
text file or an XML file, to a database, or both, instead of to the Windows Event Log. Or you may want to
send indications of catastrophic failures to administrators as e-mail messages. If you are using an
enterprise-level monitoring system, you may instead prefer to write events to the WMI subsystem,
or send them to another system through Windows Message Queuing.
You can easily add categories to your application configuration. The approach is to add the trace
listeners for the logging targets you require (such as the flat file trace listener or the database trace
listener) to the Logging Target Listeners section, and then add the categories you require to the
Category Filters section. Finally, you link them together in any combination by adding each of the
required trace listeners to the category filter in the Category Filters section. Figure 4 shows this
type of configuration, where the General category will output log messages to an event log listener
and a database trace listener.
Figure 4
Configuring trace listeners for different categories
You can specify two properties for each category (source) you add, and for the default General
category. You can set the Auto Flush property to specify that the block should flush log entries to
their configured target trace listeners as soon as they are written to the block, or only
when you call the FlushContextItems method of the LogWriter. If you set the Auto Flush property to
False, ensure that your code calls this method when an exception or failure occurs to avoid losing
any cached logging information.
The other property you can set for each category is the Minimum Severity (which sets the Source
Levels property of each listener). This specifies the minimum severity (such as Warning or Critical)
for the log entries that the category filter will pass to its configured trace listeners. Any log entries
with a lower severity will be blocked. The default severity is All, and so no log entries will be blocked
unless you change this value. You can also configure a Severity Filter (which sets the Filter property)
for each individual trace listener, and these values can be different for trace listeners in the same
category. You will see how to use the Filter property of a trace listener in the next example in this
chapter.
Filtering by Category
The Logging Filters section of the Logging block configuration can contain a filter that you can use to
filter log entries sent to the block based on their membership in specified categories. You can add
multiple categories to your configuration to manage filtering, though overuse of this capability can
make it difficult to manage logging.
To help you define filters, the configuration tool contains a filter editor dialog that allows you to
specify the filter mode (Allow all except..., or Deny all except...) and then build a list of categories to
which this filter will apply. The example application contains only a single filter that is configured to
allow logging to all categories except for the category named (rather appropriately) BlockedByFilter.
You will see the BlockedByFilter category used in the section "Capturing Unprocessed Events and
Logging Errors" later in this chapter.
Writing Log Entries to Multiple Categories
In addition to being able to define multiple categories, you can send a log entry to more than one
category in a single operation. This approach often means you can define fewer categories, and it
simplifies the configuration because each category can focus on a specific task. You don't need to
have multiple categories with similar sets of trace listeners.
The second example, Logging to multiple categories with the Write method of a LogWriter, shows
how to write to multiple categories. The example has two categories, named DiskFiles and
Important, defined in the configuration. The DiskFiles category contains references to a flat file trace
listener and an XML trace listener. The Important category contains references to an event log trace
listener and a rolling flat file trace listener.
The example uses the following code to create an array of the two category names, DiskFiles and
Important, and then it writes three log messages to these two categories using the Write method of
the LogWriter in the same way as in the previous example. Again, we've removed some of the
Console.WriteLine statements to make it easier to see what the code actually does.
// Check if logging is enabled before creating log entries.
if (defaultWriter.IsLoggingEnabled())
{
  // Create a string array (or List<>) containing the categories.
  string[] logCategories = new string[] {"DiskFiles", "Important"};

  // Write the log entries using these categories.
  defaultWriter.Write("Log entry with multiple categories.", logCategories);
  defaultWriter.Write("Log entry with multiple categories, a priority, "
                      + "and an event ID.", logCategories, 7, 9004);
  defaultWriter.Write("Log entry with multiple categories, a priority, "
                      + "event ID, severity, and title.", logCategories, 10,
                      9005, TraceEventType.Critical, "Logging Block Examples");
}
else
{
  Console.WriteLine("Logging is disabled in the configuration.");
}
The reason is that the flat file trace listener is configured to use a different text formatter, in this
case one named Brief Format Text (listed in the Formatters section of the configuration tool). All
trace listeners use a formatter to translate the contents of the log entry properties into the
appropriate format for the target of that trace listener. Trace listeners that create text output, such
as a text file or an e-mail message, use a text formatter defined within the configuration of the block.
If you examine the configured text formatter, you will see that it has a Template property. You can
use the Template Editor dialog available for editing this property to change the format of the output
by adding tokens (using the drop-down list of available tokens) and text, or by removing tokens and
text. Figure 5 shows the default template for a text formatter, and how you can edit this template. A
full list of tokens and their meaning is available in the online documentation for Enterprise Library,
although most are fairly self-explanatory.
Figure 5
Editing the template for a text formatter
The template we used in the Brief Format text formatter is shown here.
Timestamp: {timestamp(local)}{newline}Message: {message}{newline}Category:
{category}{newline}Priority: {priority}{newline}EventId:
{eventid}{newline}ActivityId: {property(ActivityId)}{newline}Severity:
{severity}{newline}Title:{title}{newline}
should open it in Microsoft Internet Explorer (or another Web browser or text editor) to see the
structure.
You will see that the file contains only one event from the previous example, not the three that the
code in the example generated. This is because the XML trace listener has the Filter property in its
configuration set to Error. Therefore, it will log only events with a severity of Error or higher. If you
look back at the example code, you will see that only the last of the three calls to the Write method
specified a value for the severity (TraceEventType.Critical in this case), and so the default value
Information was used for the other two events.
If you get an error indicating that the XML document created by the XML trace listener is invalid,
it's probably because you have more than one log entry in the file. This means that it is not a valid
XML document; it contains separate event log entries added to the file each time you ran this
example. To view it as XML, you must open the file in a text editor and add an opening and closing
element (such as <root> and </root>) around the content. Or, just delete it and run the example
once more.
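If you prefer to load the file programmatically rather than edit it by hand, a sketch such as the following works; the file path is hypothetical and depends on your trace listener configuration.

using System.IO;
using System.Xml.Linq;

// Wrap the accumulated trace entries in a single root element so that
// the content parses as one well-formed XML document.
string logPath = @"C:\Temp\XmlLogFile.xml";  // hypothetical location
XDocument doc = XDocument.Parse(
    "<root>" + File.ReadAllText(logPath) + "</root>");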
All of the trace listeners provided with Enterprise Library expose the Filter property, and you can use
this to limit the log entries written to the logging target to only those that are important to you. If
your code generates many information events that you use for monitoring and debugging only
under specific circumstances, you can filter these to reduce the growth and size of the log when they
are not required.
Alternatively, (as in the example) you can use the Filter property to differentiate the granularity of
logging for different listeners in the same category. It may be that a flat file trace listener will log all
entries to an audit log file for some particular event, but an Email trace listener in the same category
will send e-mail messages to administrators only when an Error or Critical event occurs.
Filtering All Log Entries by Priority
As well as being able to filter log entries in individual trace listeners based on their severity, you can
set the Logging block to filter all log entries sent to it based on their priority. Alongside the log-enabled filter and category filter in the Filters section of the configuration (which we discussed
earlier in this chapter), you can add a filter named Priority Filter.
This filter has two properties that you can set: Minimum Priority and Maximum Priority. The default
setting for the priority of a log entry is -1, which is the same as the default setting of the Minimum
Priority property of the filter, and there is no maximum priority set. Therefore, this filter will not
block any log entries. However, if you change the defaults for these properties, only log entries with
a priority between the configured values (including the specified maximum and minimum values)
will be logged. The exception is log entries that have the default priority of -1. These are never
filtered.
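For example, if you set Minimum Priority to 5 and leave Maximum Priority unset, the first of the two entries in this sketch would be blocked and the second logged; the priority and event ID values are illustrative only.

// Priority 2 is below the configured minimum of 5, so this entry is blocked.
defaultWriter.Write("Low-priority diagnostic entry.", "General", 2, 9010);

// Priority 7 falls within the configured range, so this entry is logged.
defaultWriter.Write("Higher-priority entry.", "General", 7, 9011);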
The example, Creating and writing log entries with a LogEntry object, demonstrates this approach. It
creates two LogEntry instances. The code first calls the most complex constructor of the LogEntry
class that accepts all of the possible values. This includes a Dictionary of objects with a string key (in
this example, the single item Extra Information) that will be included in the output of the trace
listener and formatter. Then it writes this log entry using an overload of the Write method of the
LogWriter that accepts a LogEntry instance.
Next, the code creates a new empty LogEntry using the default constructor and populates this by
setting individual properties, before writing it using the same Write method of the LogWriter.
// Check if logging is enabled before creating log entries.
if (defaultWriter.IsLoggingEnabled())
{
// Create a Dictionary of extended properties
Dictionary<string, object> exProperties = new Dictionary<string, object>();
exProperties.Add("Extra Information", "Some Special Value");
// Create a LogEntry using the constructor parameters.
LogEntry entry1 = new LogEntry("LogEntry with category, priority, event ID, "
+ "severity, and title.", "General", 8, 9006,
TraceEventType.Error, "Logging Block Examples",
exProperties);
defaultWriter.Write(entry1);
// Create a LogEntry and populate the individual properties.
LogEntry entry2 = new LogEntry();
entry2.Categories = new string[] {"General"};
entry2.EventId = 9007;
entry2.Message = "LogEntry with individual properties specified.";
entry2.Priority = 9;
entry2.Severity = TraceEventType.Warning;
entry2.Title = "Logging Block Examples";
entry2.ExtendedProperties = exProperties;
defaultWriter.Write(entry2);
}
else
{
Console.WriteLine("Logging is disabled in the configuration.");
}
This example writes the log entries to the Windows Application Event Log by using the General
category. If you view the events this example generates, you will see the values set in the code
above including (at the end of the list) the extended property we specified using a Dictionary. You
can see this in Figure 6.
Figure 6
A log entry written to the General category
You might expect that neither of these log entries would actually make it to their target. However,
the example generates the following messages that indicate where to look for the log entries that
are generated.
Created a Log Entry with a category name not defined in the configuration.
The Log Entry will appear in the Unprocessed.log file in the C:\Temp folder.
Created a Log Entry that causes a logging error.
The Log Entry will appear in the Windows Application Event Log.
This occurs because we configured the Unprocessed Category in the Special Sources section with a
reference to a flat file trace listener that writes log entries to a file named Unprocessed.log. If you
open this file, you will see the log entry that was sent to the InvalidCategory category.
The example uses the default configuration for the Logging Errors & Warnings special source. This
means that the log entry that caused a logging error will be sent to the formatted event log trace
listener referenced in this category. If you open the application event log, you will see this log entry.
The listing below shows some of the content.
Timestamp: 24/11/2009 15:14:30
Message: Tracing to LogSource 'CauseLoggingError' failed. Processing for other
sources will continue. See summary information below for more information. Should
this problem persist, stop the service and check the configuration file(s) for
possible error(s) in the configuration of the categories and sinks.
Summary for Enterprise Library Distributor Service:
======================================
-->
Message:
Timestamp: 24/11/2009 15:14:30
Message: Entry that causes a logging error.
Category: CauseLoggingError
...
...
Exception Information Details:
======================================
Exception Type: System.Data.SqlClient.SqlException
Errors: System.Data.SqlClient.SqlErrorCollection
Class: 11
LineNumber: 65536
Number: 4060
Procedure:
Server: (local)\SQLEXPRESS
State: 1
Source: .Net SqlClient Data Provider
ErrorCode: -2146232060
Message: Cannot open database "DoesNotExist" requested by the login. The login
failed.
Login failed for user 'xxxxxxx\xxx'.
...
...
StackTrace Information Details:
======================================
...
...
In addition to the log entry itself, you can see that the event contains a wealth of information to help
you to debug the error. It contains a message indicating that a logging error occurred, followed by
the log entry itself. However, after that is a section containing details of the exception raised by the
logging mechanism (you can see the error message generated by the SqlClient data access code),
and after this is the full stack trace.
One point to be aware of is that logging database and security exceptions should always be done in
such a way as to protect sensitive information that may be contained in the logs. You must ensure
that you appropriately restrict access to the logs, and only expose non-sensitive information to
other users. You may want to consider applying exception shielding, as described in Chapter 3,
"Error Management Made Exceptionally Easy."
Logging to a Database
One of the most common requirements for logging, after Windows Event Log and text files, is to
store log entries in a database. The Logging block contains the database trace listener that makes
this easy. You configure the database using a script provided with Enterprise Library, located in the
\Blocks\Logging\Src\DatabaseTraceListener\Scripts folder of the source code. We also include these
scripts with the example for this chapter.
The scripts assume that you will use the locally installed SQL Server Express database, but you can
edit the CreateLoggingDb.cmd file to change the target to a different database server. The SQL
script that the command file executes creates a database named Logging, and adds the required
tables and stored procedures to it.
However, if you only want to run the example application we provide for this chapter, you do not
need to create a database. The project contains a preconfigured database file named Logging.mdf
(located in the bin\Debug folder) that is auto-attached to your local SQL Server Express instance. You
can connect to this database using Visual Studio Server Explorer to see the contents. The
configuration of the database trace listener contains the Database Instance property, which is a
reference to this database as configured in the settings section for the Data Access application block
(see Figure 7).
Figure 7
The database trace listener uses a text formatter to format the output, and so you can edit the
template used to generate the log message to suit your requirements. You can also add extended
properties to the log entry if you wish. In addition, as with all trace listeners, you can filter log entries
based on their severity if you like.
The Log table in the database contains columns for only the commonly required values, such as the
message, event ID, priority, severity, title, timestamp, machine and process details, and more. It also
contains a column named FormattedMessage that contains the message generated by the text
formatter.
Using the Database Trace Listener
The example, Sending log entries to a database, demonstrates the use of the database trace listener.
The code is relatively simple, following the same style as the earlier example of creating a Dictionary
of extended properties, and then using the Write method of the LogWriter to write two log entries.
The first log entry is created by the LogWriter from the parameter values provided to the Write
method. The second is generated in code as a new LogEntry instance by specifying the values for the
constructor parameters. Also notice how easy it is to add additional information to a log entry using
a simple Dictionary as the ExtendedProperties of the log entry.
// Check if logging is enabled before creating log entries.
if (defaultWriter.IsLoggingEnabled())
{
// Create a Dictionary of extended properties
Dictionary<string, object> exProperties = new Dictionary<string, object>();
exProperties.Add("Extra Information", "Some Special Value");
// Create a LogEntry using the constructor parameters.
defaultWriter.Write("Log entry with category, priority, event ID, severity, "
+ "title, and extended properties.", "Database",
5, 9008, TraceEventType.Warning,
"Logging Block Examples", exProperties);
// Create a LogEntry using the constructor parameters.
LogEntry entry = new LogEntry("LogEntry with category, priority, event ID, "
+ "severity, title, and extended properties.",
"Database", 8, 9009, TraceEventType.Error,
"Logging Block Examples", exProperties);
defaultWriter.Write(entry);
}
else
{
Console.WriteLine("Logging is disabled in the configuration.");
}
To see the two log messages created by this example, you can open the Logging.mdf database from
the bin\Debug folder using Visual Studio Server Explorer. You will find that the FormattedMessage
column of the second message contains the following. You can see the extended property
information we added using a Dictionary at the end of the message.
Timestamp: 03/12/2009 17:14:02
Message: LogEntry with category, priority, event ID, severity, title, and extended
properties.
Category: Database
Priority: 8
EventId: 9009
Severity: Error
Title: Logging Block Examples
Activity ID: 00000000-0000-0000-0000-000000000000
Machine: BIGFOOT
App Domain: LoggingExample.vshost.exe
ProcessId: 5860
Process Name: E:\Logging\Logging\bin\Debug\LoggingExample.vshost.exe
Thread Name:
Win32 ThreadId:3208
Extended Properties: Extra Information - Some Special Value
Note that you cannot simply delete logged information due to the references between the Log and
CategoryLog tables. However, the database contains a stored procedure named ClearLogs that you
can execute to remove all log entries.
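If you want to remove the example log entries from your own code rather than from Server Explorer, a sketch along these lines would work. It uses the connection string shown below and standard ADO.NET calls; this helper is not part of the example application.

using System.Data;
using System.Data.SqlClient;

string connectionString =
    @"Data Source=.\SQLEXPRESS;"
    + @"AttachDbFilename=|DataDirectory|\Logging.mdf;"
    + "Integrated Security=True;User Instance=True";

// Execute the ClearLogs stored procedure to remove all log entries.
using (SqlConnection conn = new SqlConnection(connectionString))
using (SqlCommand cmd = new SqlCommand("ClearLogs", conn))
{
  cmd.CommandType = CommandType.StoredProcedure;
  conn.Open();
  cmd.ExecuteNonQuery();
}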
The connection string for the database we provide with this example is:
Data Source=.\SQLEXPRESS;AttachDbFilename=|DataDirectory|\Logging.mdf;Integrated
Security=True;User Instance=True
If you have configured a different database using the scripts provided with Enterprise Library, you
may find that you get an error when you run this example. The most likely cause is an invalid
connection string in your App.config file. In addition, use the Services applet in
your Administrative Tools folder to check that the SQL Server (SQLEXPRESS) database service (the
service is named MSSQL$SQLEXPRESS) is running.
The ShowDetailsAndAddExtraInfo method takes a LogEntry instance and does two different things.
Firstly, it shows how you can obtain information about the way that the Logging block will handle
the log entry. This may be useful in advanced scenarios where you need to be able to
programmatically determine if a specific log entry was detected by a specific trace source, or will be
written to a specific target. Secondly, it demonstrates how you can check if specific filters, or all
filters, will block a log entry from being written to its target.
Obtaining Information about Trace Sources and Trace Listeners
The first section of the ShowDetailsAndAddExtraInfo method iterates through the collection of
trace sources (LogSource instances) exposed by the GetMatchingTraceSources method of the
LogWriter class. Each LogSource instance exposes a Listeners collection that contains information
about the listeners (which specify the targets to which the log entry will be sent).
void ShowDetailsAndAddExtraInfo(LogEntry entry)
{
// Display information about the Trace Sources and Listeners for this LogEntry.
IEnumerable<LogSource> sources = defaultWriter.GetMatchingTraceSources(entry);
foreach (LogSource source in sources)
{
Console.WriteLine("Log Source name: '{0}'", source.Name);
foreach (TraceListener listener in source.Listeners)
{
Console.WriteLine(" - Listener name: '{0}'", listener.Name);
}
}
...
After you determine that logging will succeed, you can add extra context information and write the
log entry. You'll see the code to achieve this shortly. In the meantime, this is the output generated
by the example. You can see that it contains details of the log (trace) sources and listeners for each
of the two log entries created by the earlier code, and the result of checking if any category filters
will block each log entry.
Created a LogEntry with categories 'General' and 'DiskFiles'.
Log Source name: 'General'
- Listener name: 'Formatted EventLog TraceListener'
Log Source name: 'DiskFiles'
- Listener name: 'FlatFile TraceListener'
- Listener name: 'XML Trace Listener'
Category Filter(s) will not block this LogEntry.
Priority Filter(s) will not block this LogEntry.
This LogEntry will not be blocked due to configuration settings.
...
Created a LogEntry with category 'BlockedByFilter', and Priority 1.
Log Source name: 'BlockedByFilter'
- Listener name: 'Formatted EventLog TraceListener'
A Category Filter will block this LogEntry.
A Priority Filter will block this LogEntry.
This LogEntry will be blocked due to configuration settings.
- The DebugInformationProvider, which adds the current stack trace to the Dictionary.
- The ComPlusInformationProvider, which adds the current activity ID, application ID, transaction ID (if any), direct caller account name, and original caller account name to the Dictionary.
The following code shows how you can use these helper classes to create additional information for
a log entry. It also demonstrates how you can add custom information to the log entryin this case
by reading the contents of the application configuration file into the Dictionary. After populating the
Dictionary, you simply set it as the value of the ExtendedProperties property of the log entry before
writing that log entry.
...
// Create additional context information to add to the LogEntry.
Dictionary<string, object> dict = new Dictionary<string, object>();
// Use the information helper classes to get information about
// the environment and add it to the dictionary.
DebugInformationProvider debugHelper = new DebugInformationProvider();
debugHelper.PopulateDictionary(dict);
ManagedSecurityContextInformationProvider infoHelper
= new ManagedSecurityContextInformationProvider();
infoHelper.PopulateDictionary(dict);
UnmanagedSecurityContextInformationProvider secHelper
= new UnmanagedSecurityContextInformationProvider();
secHelper.PopulateDictionary(dict);
ComPlusInformationProvider comHelper = new ComPlusInformationProvider();
comHelper.PopulateDictionary(dict);
// Get any other information you require and add it to the dictionary.
string configInfo = File.ReadAllText(@"..\..\App.config");
dict.Add("Config information", configInfo);
// Set dictionary in the LogEntry and write it using the default LogWriter.
entry.ExtendedProperties = dict;
defaultWriter.Write(entry);
...
To see the additional information added to the log entry, open Windows Event Viewer and locate
the new log entry. We haven't shown the contents of this log entry here as it runs to more than 350
lines and contains just about all of the information about an event occurring in your application that
you could possibly require!
the specified activity ID. If you start a new nested tracer instance within the scope of a previous one,
it will have the same activity ID as the parent tracer unless you specify a different one when you
create and start the nested tracer; in that case, this new activity ID will be used in subsequent log
entries within the scope of this tracer.
Although the Logging block automatically adds the activity ID to each log entry, this does not
appear in the resulting message when you use the text formatter with the default template. To
include the activity ID in the logged message that uses a text formatter, you must edit the template
property in the configuration tools to include the token {property(ActivityId)}. Note that property
names are case-sensitive in the template definition.
An Example of Tracing Activities
The example, Tracing activities and publishing activity information to categories, should help to
make this clear. At the start of the application, the code resolves a TraceManager instance from the
Enterprise Library container in the same way as we resolved the LogWriter we've been using so far.
// Resolve a TraceManager object from the container.
TraceManager traceMgr
= EnterpriseLibraryContainer.Current.GetInstance<TraceManager>();
Next, the code creates and starts a new Tracer instance using the StartTrace method of the
TraceManager, specifying the category named General. As it does not specify an Activity ID value,
the TraceManager creates one automatically. This is the preferred approach, because each separate
process running an instance of this code will generate a different GUID value. This means you can
isolate individual events for each process.
The code then creates and writes a log entry within the context of this tracer, specifying that it
belongs to the DiskFiles category in addition to the General category defined by the tracer. Next, it
creates a nested Tracer instance that specifies the category named Database, and writes another log
entry that itself specifies the category named Important. This log entry will therefore belong to the
General, Database, and Important categories. Then, after the Database tracer goes out of scope,
the code creates a new Tracer that again specifies the Database category, but this time it also
specifies the Activity ID to use in the context of this new tracer. Finally, it writes another log entry
within the context of the new Database tracer scope.
// Start tracing for category 'General'. All log entries within trace context
// will be included in this category and use any specified Activity ID (GUID).
// If you do not specify an Activity ID, the TraceManager will create a new one.
using (traceMgr.StartTrace("General"))
{
  // Write a log entry with another category, will be assigned to both.
  defaultWriter.Write("LogEntry with category 'DiskFiles' created within "
                    + "context of 'General' category tracer.", "DiskFiles");

  // Start tracing for category 'Database' within context of 'General' tracer.
  // Do not specify a GUID to use so that the existing one is used.
  using (traceMgr.StartTrace("Database"))
  {
    // Write a log entry with another category, will be assigned to all three.
    defaultWriter.Write("LogEntry with category 'Important' created within "
                      + "context of 'Database' and 'General' tracers.",
                        "Important");
  }

  // The rest of this listing was truncated at a page break in the original;
  // the following reconstruction is based on the description above. Start a
  // second 'Database' tracer, this time specifying the Activity ID to use.
  using (traceMgr.StartTrace("Database",
         new Guid("12345678-1234-1234-1234-123456789abc")))
  {
    defaultWriter.Write("LogEntry with category 'Important' created within "
                      + "context of second 'Database' category tracer.",
                        "Important");
  }
}
Not shown above are the lines of code that, at each stage, write the current Activity ID to the screen.
The output generated by the example is shown here. You can see that, initially, there is no Activity
ID. The first tracer instance then sets the Activity ID to a random value (you will get a different value
if you run the example yourself), which is also applied to the nested tracer.
However, the second tracer for the Database category changes the Activity ID to the value we
specified in the StartTrace method. When this tracer goes out of scope, the Activity ID is reset to
that for the parent tracer. When all tracers go out of scope, the Activity ID is reset to the original
(empty) value.
- Current Activity ID is: 00000000-0000-0000-0000-000000000000
Written LogEntry with category 'DiskFiles' created within context of 'General'
category tracer.
- Current Activity ID is: a246ada3-e4d5-404a-bc28-4146a190731d
Written LogEntry with category 'Important' created within context of first
'Database' category tracer nested within 'DiskFiles' category TraceManager.
- Current Activity ID is: a246ada3-e4d5-404a-bc28-4146a190731d
Leaving the context of the first Database tracer
- Current Activity ID is: a246ada3-e4d5-404a-bc28-4146a190731d
Written LogEntry with category 'Important' created within context of second
'Database' category tracer nested within 'DiskFiles' category TraceManager.
- Current Activity ID is: 12345678-1234-1234-1234-123456789abc
Leaving the context of the second Database tracer
- Current Activity ID is: a246ada3-e4d5-404a-bc28-4146a190731d
Leaving the context of the General tracer
If you open the RollingFlatFile.log file you will see the two log entries generated within the context
of the nested tracers. These belong to the categories Important, Database, and General. You will also
see the Activity ID for each one, and can confirm that it is different for these two entries. For
example, this is the first part of the log message for the second nested tracer, which specifies the
Activity ID GUID in the StartTrace method.
Timestamp: 01/12/2009 12:12:00
Message: LogEntry with category 'Important' created within context of second
nested 'Database' category tracer.
Category: Important, Database, General
Priority: -1
EventId: 1
Severity: Information
Title:
Activity ID: 12345678-1234-1234-1234-123456789abc
Be aware that other software and services may use the Activity ID of the Correlation Manager to
provide information and monitoring facilities. An example is Windows Communication Foundation
(WCF), which uses the Activity ID to implement tracing.
You must also ensure that you correctly dispose Tracer instances. If you do not take advantage of
the using construct to automatically dispose instances, you must ensure that you dispose nested
instances in the reverse order you created them, by disposing the child instance before you
dispose the parent instance. You must also ensure that you dispose Tracer instances on the same
thread that created them.
For more information about extending the Logging block, see the online documentation at
http://go.microsoft.com/fwlink/?LinkId=188874 or consult the installed help files.
Summary
This chapter described the Enterprise Library Logging Application Block. This block is extremely
useful for logging activities, events, messages, and other information that your application must
persist or expose, both to monitor performance and to generate auditing information. The Logging
block is, like all of the other Enterprise Library blocks, highly customizable and driven through
configuration so that you (or administrators and operations staff) can modify the behavior to suit
your requirements exactly. Chances are that you will often be executing enterprise-level or other vital tasks for which accurate and reliable monitoring is a necessity.
You can use the Logging block to categorize, filter, and write logging information to a wide variety of
targets, including Windows event logs, e-mail messages, disk files, Windows Message Queuing, and a
database. You can even collect additional context information and add it to the log entries
automatically, and add activity IDs to help you correlate related messages and activities. And, if none
of the built-in features meets your requirements, you can create and integrate custom listeners,
filters, and formatters.
This chapter explained why you should consider decoupling your logging features from your
application code, what the Logging block can do to help you implement flexible and configurable
logging, and how you actually perform the common tasks related to logging. For more information
about using the Logging block, see the online documentation at
http://go.microsoft.com/fwlink/?LinkId=188874 or consult the installed help files.
What?
Data that applies to all users of the application and does not change frequently, or data
that you can use to optimize reference data lookups, avoid network round-trips, and
avoid unnecessary and duplicate processing. Examples are data such as product lists,
constant values, and values read from configuration or a database. Where possible,
cache data in a ready-to-use format. Do not cache volatile data, and do not cache
sensitive data unless you encrypt it.
When?
You can cache data when the application starts if you know it will be required and it is
unlikely to change. However, you should cache data that may or may not be used, or
data that is relatively volatile, only when your application first accesses it.
Where?
Ideally, you should cache data as near as possible to the code that will use it, especially
in a layered application that is distributed across physical tiers. For example, cache data
you use for controls in your user interface in the presentation layer, cache business data
in the business layer, and cache parameters for stored procedures in your data layer. If
your application runs on multiple servers and the data may change as the application
runs, you will usually need to use a distributed cache accessible from all servers. If you
are caching data for a user interface, you can usually cache the data on the client.
How?
This chapter concentrates (obviously) on the patterns & practices Caching Application Block, which is
designed for use as a non-distributed cache on a client machine. It is ideal for caching data in
Windows Forms, Windows Presentation Foundation (WPF), and console-based applications. You
can use it in server-based roles such as ASP.NET applications, services, business layer code, or data
layer code; but only where you have a single instance of the code running.
Out of the box, the Caching Application Block does not provide the features required for distributed
caching across multiple servers. Other solutions you may consider for caching are the ASP.NET cache
mechanism, which can be used on a single server (in-process) and on multiple servers (using a state
server or a SQL Server database), or a third party solution that uses the Caching Application Block
extension points.
Also keep in mind that version 4.0 of the .NET Framework includes the System.Runtime.Caching
namespace, which provides features to support in-memory caching. The current version of the
Caching block is likely to be deprecated after this release, and Enterprise Library will instead make
use of the caching features of the .NET Framework.
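For comparison, this is a minimal sketch of the .NET Framework 4.0 caching API mentioned above (the key name and the five-minute policy are illustrative assumptions):
using System;
using System.Runtime.Caching;
...
// Obtain the default in-memory cache provided by the .NET Framework 4.0.
ObjectCache netCache = MemoryCache.Default;
// Cache an item with an absolute expiration of five minutes.
netCache.Add("ProductList", new[] { "Exciting Thing", "Useful Thing" },
    new CacheItemPolicy { AbsoluteExpiration = DateTimeOffset.Now.AddMinutes(5) });
// As with the Caching block, always check for null on retrieval.
string[] products = netCache.Get("ProductList") as string[];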
Flushed or Expired?
One of the main factors that can affect application performance is memory availability. While
caching data can improve performance, caching too much data can (as you saw earlier) reduce
performance if the cache uses too much of the available memory. To counter this, the Caching block
performs scavenging on a fixed cycle in order to remove items when memory is in short supply. Items
may be removed from the cache in two ways:
When they expire. If you specify an expiration setting, an item is removed from the cache
during the next scavenging cycle after it has expired. You can specify a combination of
settings based on the absolute time, sliding time, extended time format (for example, every
evening at midnight), file dependency, or never. You can also specify a priority, so that
lower priority items are scavenged first. The scavenging interval and the maximum number
of items to scavenge on each pass are configurable.
When they are flushed. You can explicitly expire (mark for removal) individual items in the
cache, or explicitly expire all items, using methods exposed by the Caching block. This allows
you to control which items are available from the cache. The scavenging mechanism
removes items that it detects have expired and are no longer valid. However, until the
scavenging cycle occurs, the items remain in the cache but are marked as expired, and you
cannot retrieve them.
The difference is that flushing might remove valid cache items to make space for more frequently
used items, whereas expiration removes invalid and expired items. Remember that items may have
been removed from the cache by the scavenging mechanism even if they haven't expired, and you
should always check that the cached item exists when you try to retrieve and use it. You may choose
to recreate the item and re-cache it at this point.
Figure 1 shows the configuration for the examples in this chapter of the guide. You can see the four
cache managers we use, with the section for the EncryptedCacheManager expanded to show its
property settings.
Figure 1
Configuring caching in Enterprise Library
For each cache manager, you can specify the expiration poll frequency (the interval in seconds at
which the block will check for expired items and remove them), the maximum number of items in
the cache before scavenging will occur irrespective of the polling frequency, and the number of
items to remove when scavenging the cache.
You can also specify, in the configuration properties of the Caching Application Block root node,
which of the cache managers you configure should be the default. The Caching block will use the
default cache manager if you instantiate a cache manager without specifying its name.
Persistent Caching
The cache manager caches items in memory only. If you want to persist cached items across
application and system restarts, you can add a persistent backing store to your configuration. You
can specify only a single backing store for each cache manager (obviously, or it would get extremely
confused), and the Caching block contains providers for caching in both a database and isolated
storage. You can specify a partition name for each persistent backing store, which allows you to
target multiple cache storage providers at isolated storage or at the same database.
If you add a data cache store to your configuration, the configuration tool automatically adds the
Data Access Application Block to the configuration. You configure a database connection in the Data
Access block configuration section, and then select this connection in the properties of the data
cache store provider. For details of how you configure the Data Access Application Block, see
Chapter 2 "Much ADO about Data Access."
To use the Caching block, you may need to reference the following assemblies, depending on the features you use:
Microsoft.Practices.EnterpriseLibrary.Caching.dll
Microsoft.Practices.EnterpriseLibrary.Caching.Cryptography.dll
Microsoft.Practices.EnterpriseLibrary.Caching.Database.dll
Microsoft.Practices.EnterpriseLibrary.Data.dll
Microsoft.Practices.EnterpriseLibrary.Security.Cryptography.dll
If you do not wish to cache items in a database, you don't need to add the Database and Data
assemblies. If you do not wish to encrypt cached items, you don't need to add the two Cryptography
assemblies.
To make it easier to use the objects in the Caching block, you can add references to the relevant
namespaces to your project. Then you are ready to write some code.
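For example, the code in this chapter assumes directives such as the following (a sketch; the exact set depends on which features you use):
using Microsoft.Practices.EnterpriseLibrary.Common.Configuration;
using Microsoft.Practices.EnterpriseLibrary.Caching;
using Microsoft.Practices.EnterpriseLibrary.Caching.Expirations;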
NullBackingStore class defined) and so uses only the in-memory cache. It stores this reference as the
interface type ICacheManager.
Next, it calls a separate routine that adds items to the cache and then displays the contents of the
cache. This routine is reused in many of the examples in this chapter.
// Resolve the default CacheManager object from the container.
// The actual concrete type is determined by the configuration settings.
// In this example, the default is the InMemoryCacheManager instance.
ICacheManager defaultCache
= EnterpriseLibraryContainer.Current.GetInstance<ICacheManager>();
// Store some items in the cache and show the contents using a separate routine.
CacheItemsAndShowCacheContents(defaultCache);
The CacheItemsAndShowCacheContents routine uses the cache manager passed to it; in this first
example, this is the in-memory only cache manager. However, the code to add items to the cache
and manipulate the cache is (as you would expect) identical for all configurations of cache managers.
Notice that the code defines a set of string values that it uses as the cache keys. This makes it easier
for the code later on to examine the contents of the cache. This is the declaration of the cache keys
array and the first part of the code in the CacheItemsAndShowCacheContents routine.
// Declare an array of string values to use as the keys of the cached items.
string[] DemoCacheKeys
= {"ItemOne", "ItemTwo", "ItemThree", "ItemFour", "ItemFive"};
void CacheItemsAndShowCacheContents(ICacheManager theCache)
{
// Add some items to the cache using the key names in the DemoCacheKeys array.
theCache.Add(DemoCacheKeys[0], "Some Text");
theCache.Add(DemoCacheKeys[1],
new StringBuilder("Some text in a StringBuilder"));
theCache.Add(DemoCacheKeys[2], 42, CacheItemPriority.High, null,
new NeverExpired());
theCache.Add(DemoCacheKeys[3], new DataSet(), CacheItemPriority.Normal,
null, new AbsoluteTime(new DateTime(2099, 12, 31)));
// Note that the next item will expire after three seconds
theCache.Add(DemoCacheKeys[4],
new Product(10, "Exciting Thing", "Useful for everything"),
CacheItemPriority.Low, null,
new SlidingTime(new TimeSpan(0, 0, 3)));
// Display the contents of the cache.
ShowCacheContents(theCache);
...
In the code shown above, you can see that the CacheItemsAndShowCacheContents routine uses the
simplest overload to cache the first two items; a String value and an instance of the StringBuilder
class. For the third item, the code specifies the item to cache as the Integer value 42 and indicates
that it should have high priority (it will remain in the cache after lower priority items when the cache
has to be minimized due to memory or other constraints). There is no callback required, and the
item will never expire.
The fourth item cached by the code is a new instance of the DataSet class, with normal priority and
no callback. However, the expiry of the cached item is set to an absolute date and time (which
should be well after the time that you run the example).
The final item added to the cache is a new instance of a custom class defined within the application.
The Product class is a simple class with just three properties: ID, Name, and Description. The class
has a constructor that accepts these three values and sets the properties in the usual way. It is
cached with low priority, and a sliding time expiration set to three seconds.
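The guide does not list the Product class at this point; as a sketch that follows the description above, it looks something like this:
public class Product
{
    public int ID { get; private set; }
    public string Name { get; private set; }
    public string Description { get; private set; }
    // The constructor accepts the three values and sets the properties.
    public Product(int id, string name, string description)
    {
        ID = id;
        Name = name;
        Description = description;
    }
}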
The final line of the CacheItemsAndShowCacheContents listing calls another routine named ShowCacheContents that displays the
contents of the cache. Not shown here is code that forces execution of the main application to halt
for five seconds, redisplay the contents of the cache, and repeat this process again. This is the
output you see when you run this example.
The cache contains the following 5 item(s):
Item key 'ItemOne' (System.String) = Some Text
Item key 'ItemTwo' (System.Text.StringBuilder) = Some text in a StringBuilder
Item key 'ItemThree' (System.Int32) = 42
Item key 'ItemFour' (System.Data.DataSet) = System.Data.DataSet
Item key 'ItemFive' (CachingExample.Product) = CachingExample.Product
Waiting for last item to expire...
Waiting... Waiting... Waiting... Waiting... Waiting...
The cache contains the following 5 item(s):
Item key 'ItemOne' (System.String) = Some Text
Item key 'ItemTwo' (System.Text.StringBuilder) = Some text in a StringBuilder
Item key 'ItemThree' (System.Int32) = 42
Item key 'ItemFour' (System.Data.DataSet) = System.Data.DataSet
Item with key 'ItemFive' has been invalidated.
Waiting for the cache to be scavenged...
Waiting... Waiting... Waiting... Waiting... Waiting...
The cache contains the following 4 item(s):
Item key 'ItemOne' (System.String) = Some Text
Item key 'ItemTwo' (System.Text.StringBuilder) = Some text in a StringBuilder
Item key 'ItemThree' (System.Int32) = 42
Item key 'ItemFour' (System.Data.DataSet) = System.Data.DataSet
You can see in this output that the cache initially contains the five items we added to it. However,
after a few seconds, the last one expires. When the code examines the contents of the cache again,
the last item (with key ItemFive) has expired but is still in the cache. However, the code detects this
and shows it as invalidated. After a further five seconds, the code checks the contents of the cache
again, and you can see that the invalidated item has been removed.
Depending on the performance of your machine, you may need to change the value configured for
the expiration poll frequency of the cache manager in order to see the invalidated item in the cache
and the contents after the scavenging cycle completes.
What's In My Cache?
The example you've just seen displays the contents of the cache, indicating which items are still
available in the cache, and which (if any) are in the cache but not available because they are waiting
to be scavenged. So how can you tell what is actually in the cache and available for use? In the time-honored way, you might like to answer "Yes" or "No" to the following questions:
Can I use the Contains method to check if an item with the key I specify is available in the
cache?
Can I query the Count property and retrieve each item using its index?
Can I iterate over the collection of cached items, reading each one in turn?
If you answered "Yes" to any of these, the bad news is that you are wrong. All of these are false.
Why? Because the cache is managed by more than one process. The cache manager you are using is
responsible for adding items to the cache and retrieving them through the public methods available
to your code. However, a background process also manages the cache, checking for any items that
have expired and removing (scavenging) those that are no longer valid. Cached items may be
removed when memory is scarce, or in response to dependencies on other items, as well as when
the expiry date and time you specified when you added an item to the cache has passed.
So, even if the Contains method returns true for a specified cache key, that item might have been
invalidated and is only in the cache until the next scavenging operation. You can see this in the
output for the previous example, where the two waits force the code to halt until the item has been
flagged as expired, and then halt again until it is scavenged. The actual delay before scavenging takes
place is determined by the expiration poll frequency configuration setting of the cache manager. In
the previous example, this is 10 seconds.
The correct approach to extracting cached items is to simply call the GetData method and check that
it did not return null. However, you can use the Contains method to see if an item was previously
cached and will (in most cases) still be available in the cache. This is efficient, but you must still (and
always) check that the returned item is not null after you attempt to retrieve it from the cache.
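As a minimal sketch of this retrieval pattern (the generic helper and its name are our own illustration, not part of the block's API):
// GetData returning null is the only reliable test: Contains may return
// true for an item that has expired but has not yet been scavenged.
T GetCachedItem<T>(ICacheManager cache, string key) where T : class
{
    return cache.GetData(key) as T;
}
// Usage: a null result means "recreate the item and consider re-caching it".
Product cachedProduct = GetCachedItem<Product>(theCache, DemoCacheKeys[4]);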
The code used in the examples to read the cached items depends on the fact that we use an array of
cache keys throughout the examples, and we can therefore check if any of these items are in the
cache. The code we use is shown here.
void ShowCacheContents(ICacheManager theCache)
{
if (theCache.Count > 0)
{
Console.WriteLine("Cache contains the following {0} item(s):",
theCache.Count);
// Cannot iterate the cache, so use the five known keys
foreach (string key in DemoCacheKeys)
{
if (theCache.Contains(key))
{
// Try and get the item from the cache
object theData = theCache.GetData(key);
// If item has expired but not yet been scavenged, it will still show
// in the count of the number of cached items, but the GetData method
// will return null.
if (null != theData)
Console.WriteLine("Item key '{0}' ({1}) = {2}", key,
theData.GetType().ToString(), theData.ToString());
else
Console.WriteLine("Item with key '{0}' has been invalidated.", key);
}
}
}
else
{
Console.WriteLine("The cache is empty.");
}
}
Notice that you can specify a partition name for your cache. This allows you to separate the cached
data for different applications (or different cache managers) for the same user by effectively
segregating each one in a different partition within that user's isolated storage area.
Other than the configuration of the cache manager to use the isolated storage backing store, the
code you use to cache and retrieve data is identical. The example, Cache data locally in the isolated
storage backing store, uses a cache manager named IsoStorageCacheManager that is configured
with an isolated storage backing store. It retrieves a reference to this cache manager by specifying
the name when calling the GetInstance method of the current Enterprise Library container.
// Resolve a named CacheManager object from the container.
// In this example, this one uses the Isolated Storage Backing Store.
ICacheManager isoStorageCache
= EnterpriseLibraryContainer.Current.GetInstance<ICacheManager>(
"IsoStorageCacheManager");
...
CacheItemsAndShowCacheContents(isoStorageCache);
The code then executes the same CacheItemsAndShowCacheContents routine you saw in the first
example, and passes to it the reference to the isoStorageCache cache manager. The result you see
when you run this example is the same as you saw in the first example in this chapter.
If you find that you get an error when you re-run this example, it may be because the backing store
provider cannot correctly access your local isolated storage store. In most cases, you can resolve this
by deleting the previously cached contents. Open the folder Users\<your-username>\AppData\Local\IsolatedStorage, and expand each of the subfolders until you find the
Files\CachingExample subfolder. Then delete this entire folder tree. You should avoid deleting all of
the folders in your IsolatedStorage folder as these may contain data used by other applications.
To use encryption, you simply add an encryption provider to the configuration of the backing store.
When you first add an encryption provider, the configuration tool automatically adds the
Cryptography block to your configuration. Therefore, you must ensure that the relevant assembly,
Microsoft.Practices.EnterpriseLibrary.Security.Cryptography.dll, is referenced in your project.
After you add the encryption provider to the configuration of the backing store, configure the
Cryptography section by adding a new symmetric provider, and use the Key wizard to generate a
new encryption key file or import an existing key. Then, back in the configuration for the Caching
block, select the new symmetric provider you added for the symmetric encryption property of the
backing store. For more information about configuring the Cryptography block, see Chapter 7,
"Relieving Cryptography Complexity."
The examples provided for this chapter include one named Encrypt cached data in a backing store,
which demonstrates how you can encrypt the persisted data. It instantiates the cache manager
defined in the configuration of the application with the name EncryptedCacheManager:
// Resolve a CacheManager instance that encrypts the cached data.
ICacheManager encryptedCache
= EnterpriseLibraryContainer.Current.GetInstance<ICacheManager>(
"EncryptedCacheManager");
...
CacheItemsAndShowCacheContents(encryptedCache);
The code then executes the same CacheItemsAndShowCacheContents routine you saw in the first
example, and passes to it the reference to the encryptedCache cache manager. And, again, the
result you see when you run this example is the same as you saw in the first example in this chapter.
If you find that you get an error when you run this example, it is likely to be that you have not
created a suitable encryption key that the Cryptography block can use, or the absolute path to the
key file in the App.config file is not correct. To resolve this, open the configuration console, navigate
to the Symmetric Providers section of the Cryptography Application Block Settings, and select the
RijndaelManaged provider. Click the "..." button in the Key property to start the Cryptographic Key
Wizard. Use this wizard to generate a new key, save the key file, and automatically update the
contents of App.config.
to this database using the Microsoft Visual Studio Server Explorer to see the contents, as shown in
Figure 3.
Figure 3
Viewing the contents of the cache in the database table
To configure caching to a database, you simply add the database cache storage provider to the cache
manager using the configuration console, and specify the connection string and ADO.NET data
provider type (the default is System.Data.SqlClient, though you can change this if you are using a
different database system).
You can also specify a partition name for your cache, in the same way as you can for the isolated
storage backing store provider. This allows you to separate the cached data for different applications
(or different cache managers) for the same user by effectively segregating each one in a different
partition within the database table.
Other than the configuration of the cache manager to use the database backing store, the code you
use to cache and retrieve data is identical. The example, Cache data in a database backing store,
uses a cache manager named DatabaseCacheManager that is configured with a data cache storage
backing store. As with the earlier example, the code retrieves a reference to this cache manager by
specifying the name when calling the GetInstance method of the current Enterprise Library
container.
// Resolve a CacheManager instance that uses a Database Backing Store.
ICacheManager databaseCache
= EnterpriseLibraryContainer.Current.GetInstance<ICacheManager>(
"DatabaseCacheManager");
...
CacheItemsAndShowCacheContents(databaseCache);
The code then executes the same CacheItemsAndShowCacheContents routine you saw in the first
example, and passes to it the reference to the databaseCache cache manager. As you will be
expecting by now, the result you see when you run this example is the same as you saw in the first
example in this chapter.
The connection string for the database we provide with this example is:
Data Source=.\SQLEXPRESS; AttachDbFilename=|DataDirectory|\Caching.mdf; Integrated
Security=True; User Instance=True
If you have configured a different database using the scripts provided with the example, you may
find that you get an error when you run this example. It is likely to be that you have an invalid
connection string in your App.config file for your database. In addition, use the Services applet in
your Administrative Tools folder to check that the SQL Server (SQLEXPRESS) database service (the
service is named MSSQL$SQLEXPRESS) is running.
Next, the code creates an instance of the ExtendedFormatTime class. This class allows you to specify
expiration times for the cached item based on a repeating schedule. It provides additional
opportunities compared to the more common SlidingTime and AbsoluteTime expiration types you
have seen so far.
The constructor of the ExtendedFormatTime class accepts a string value that it parses into individual
values for the minute, hour, day, month, and weekday (where zero is Sunday) that together specify
the frequency with which the cached item will expire. Each value is delimited by a space. An asterisk
indicates that there is no value for that part of the format string, and effectively means that
expiration will occur for every occurrence of that item. It all sounds very complicated, so some
examples will no doubt be useful (see Table 2).
Table 2
Expiration examples using the extended time format

Extended Format String   Meaning
* * * * *                The cached item will expire every minute.
5 * * * *                The cached item will expire at 5 minutes past every hour.
* 21 * * *               The cached item will expire every minute of the hour starting at 9:00 PM (21:00) each day.
31 15 * * *              The cached item will expire at 3:31 PM (15:31) each day.
7 4 * * 6                The cached item will expire at 4:07 AM every Saturday.
15 21 4 7 *              The cached item will expire at 9:15 PM (21:15) on July 4 each year.
The example generates an ExtendedFormatTime that expires at 30 minutes past every hour. Then it
creates an array of type ICacheItemExpiration that contains the FileDependency created earlier and
the new ExtendedFormatTime instance.
// Create an extended expiration for 30 minutes past every hour
ExtendedFormatTime extTime = new ExtendedFormatTime("30 * * * *");
// Create array of expirations containing the file dependency and extended format
ICacheItemExpiration[] expirations
= new ICacheItemExpiration[] { fileDep, extTime };
The following is the output you see at this point in the execution.
Created a 'never expired' dependency.
Created a text file named ATextFile.txt to use as a dependency.
Created an expiration for 30 minutes past every hour.
When you press a key, the code continues by deleting the text file, and then re-displaying the
contents of the cache. Then, as in earlier examples, it waits for the items to be scavenged from the
cache. The output you see is shown here.
Cache contains the following 4 item(s):
Item key 'ItemOne' (System.String) = A cached item that never expires
Item with key 'ItemTwo' has been invalidated.
Item key 'ItemThree' (System.String) = Another cached item
Item key 'ItemFour' (System.String) = And yet another cached item.
Waiting for the dependent item to be scavenged from the cache...
Waiting... Waiting... Waiting... Waiting...
Cache contains the following 3 item(s):
Item key 'ItemOne' (System.String) = A cached item that never expires
Item key 'ItemThree' (System.String) = Another cached item
Item key 'ItemFour' (System.String) = And yet another cached item.
You can see that deleting the text file caused the item with key ItemTwo that depended on it to be
invalidated and removed during the next scavenging cycle.
At this point, the code is again waiting for you to press a key. When you do, it continues by calling
the Remove method of the cache manager to remove the item having the key ItemOne, and displays
the cache contents again. Then, after you press a key for the third time, it calls the Flush method of
the cache manager to remove all the items from the cache, and again calls the method that displays
the contents of the cache. This is the code for this part of the example.
Console.Write("Press any key to remove {0} from the cache...", DemoCacheKeys[0]);
Console.ReadKey(true);
defaultCache.Remove(DemoCacheKeys[0]);
ShowCacheContents(defaultCache);
Console.Write("Press any key to flush the cache...");
Console.ReadKey(true);
defaultCache.Flush();
ShowCacheContents(defaultCache);
new MyCacheRefreshAction(),
new SlidingTime(new TimeSpan(0, 0, 10)));
Console.WriteLine("Refreshed the item by adding it to the cache again.");
}
}
}
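The opening of that class definition falls outside the listing above. As a hedged sketch, a complete implementation of the ICacheItemRefreshAction interface looks something like the following; the message text mirrors the example output shown later, and the re-caching step is the part visible in the listing above:
[Serializable]
public class MyCacheRefreshAction : ICacheItemRefreshAction
{
    // Called by the Caching block after the item is removed from the cache.
    public void Refresh(string removedKey, object expiredValue,
                        CacheItemRemovedReason removalReason)
    {
        Console.WriteLine("Cached item {0} was expired in the cache with the "
                          + "reason '{1}'", removedKey, removalReason);
        // The example then adds the expired item back into the cache with a
        // new sliding expiration, as shown in the listing above.
    }
}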
To use the implementation of the ICacheItemRefreshAction interface, you simply specify it as the
refreshAction parameter of the Add method when you add an item to the cache. The example uses
the following code to cache an instance of the Product class that will expire after three seconds.
defaultCache.Add(DemoCacheKeys[0], new Product(10, "Exciting Thing",
"Useful for everything"),
CacheItemPriority.Low, new MyCacheRefreshAction(),
new SlidingTime(new TimeSpan(0, 0, 3)));
The code then does the same as the earlier examples: it displays the contents of the cache, waits five
seconds for the item to expire, displays the contents again, waits five more seconds until the item is
scavenged, and then displays the contents for the third time. However, this time the Caching block
executes the Refresh method of our ICacheItemRefreshAction callback as soon as the item is
removed from the cache. This callback displays a message indicating that the cached item was
removed because it had expired, and that it has been added back into the cache. You can see it in
the final listing of the cache contents shown here.
The cache contains the following 1 item(s):
Item key 'ItemOne' (CachingExample.Product) = CachingExample.Product
Waiting... Waiting... Waiting... Waiting... Waiting...
The cache contains the following 1 item(s):
Item with key 'ItemOne' has been invalidated.
Cached item ItemOne was expired in the cache with the reason 'Expired'
Item values were: ID = 10, Name = 'Exciting Thing', Description = Useful for
everything
Refreshed the item by adding it to the cache again.
Waiting... Waiting... Waiting...
The cache contains the following 1 item(s):
Item key 'ItemOne' (CachingExample.Product) = CachingExample.Product
Alternatively, you may prefer to use reactive cache loading. This approach is useful for data that
may or may not be used, or data that is relatively volatile. In this case (if you are using a persistent
backing store), you may choose to instantiate the cache manager only when you need to load the
data. Alternatively, you can flush the cache (probably when your application ends) and then load
specific items into it as required and when required. For example, you might find that you need to
retrieve the details of a specific product from your corporate data store for display in your
application. At this point, you could choose to cache it if it may be used again within a reasonable
period and is unlikely to change during that period.
Proactive Cache Loading
The example, Load the cache proactively on application startup, provides a simple demonstration of
proactive cache loading. In the startup code of your application you add code to load the cache with
the items your application will require. The example creates a list of Product items, and then iterates
through the list calling the Add method of the cache manager for each one. You would, of course,
fetch the items to cache from the location (such as a database) appropriate for your own
application. It may be that the items are available as a list or, for example, by iterating through
the rows in a DataSet or a DataReader.
// Create a list of products - may come from a database or other repository
List<Product> products = new List<Product>();
products.Add(new Product(42, "Exciting Thing",
"Something that will change your view of life."));
products.Add(new Product(79, "Useful Thing",
"Something that is useful for everything."));
products.Add(new Product(412, "Fun Thing",
"Something that will keep the grandchildren quiet."));
// Iterate the list loading each one into the cache
for (int i = 0; i < products.Count; i++)
{
theCache.Add(DemoCacheKeys[i], products[i]);
}
If the item is in the cache, the code displays the values of its properties. If it is not in the cache, the
code executes a routine to load the cache with all of the products. This routine is the same as you
saw in the previous example of loading the cache proactively.
Console.WriteLine("Getting an item from the cache...");
Product theItem = (Product)defaultCache.GetData(DemoCacheKeys[1]);
// You could test for the item in the cache using the CacheManager.Contains(key)
// method, but you still must check if the retrieved item is null even
// if the Contains method indicates that the item is in the cache:
if (null != theItem)
{
Console.WriteLine("Cached item values are: ID = {0}, Name = '{1}', "
+ "Description = {2}", theItem.ID, theItem.Name,
theItem.Description);
}
else
{
Console.WriteLine("The item could not be obtained from the cache.");
// Item not found, so reactively load the cache
LoadCacheWithProductList(defaultCache);
Console.WriteLine("Loaded the cache with the list of products.");
ShowCacheContents(defaultCache);
}
After loading the list of products and displaying the contents of the cache, the example code
continues by attempting once again to retrieve the value and display its properties. You can see the
entire output from this example here.
The cache is empty.
Getting an item from the cache...
The item could not be obtained from the cache.
Loaded the cache with the list of products.
The cache contains the following 3 item(s):
Item key 'ItemOne' (CachingExample.Product) = CachingExample.Product
Item key 'ItemTwo' (CachingExample.Product) = CachingExample.Product
Item key 'ItemThree' (CachingExample.Product) = CachingExample.Product
Getting an item from the cache...
Cached item values are: ID = 79, Name = 'Useful Thing', Description = Something
that is useful for everything.
In general, the pattern for a function that performs reactive cache loading is as follows (a minimal sketch appears after the list):
1. Check if the item is in the cache and the value returned is not null.
2. If it is found in the cache, return it to the calling code.
3. If it is not found in the cache, create or obtain the object or value and cache it.
4. Return this new value or object to the calling code.
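Here is a minimal sketch of that pattern; GetProductFromDataStore is a hypothetical data access routine that creates or obtains the value when the cache cannot supply it:
Product GetProduct(ICacheManager cache, string key)
{
    // 1. Check if the item is in the cache and the value returned is not null.
    Product product = cache.GetData(key) as Product;
    // 2. If it is found in the cache, return it to the calling code.
    if (product != null)
    {
        return product;
    }
    // 3. If it is not found, create or obtain the object or value and cache it.
    product = GetProductFromDataStore(key);
    cache.Add(key, product);
    // 4. Return this new value or object to the calling code.
    return product;
}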
Summary
This chapter looked at the ways that you can implement caching across your application and your
enterprise in a consistent and configurable way by using the Caching Application Block. The block
provides a non-distributed cache that can cache items in memory, and optionally in a persistent
backing store such as isolated storage or a database. You can also easily add new backing stores if
required, and even replace the cache manager if you want to create a mechanism that does support
other features, such as distributed caching.
The Caching block is flexible in order to meet most requirements for most types of applications. You
can define multiple caches and partition each one, which is useful if you want to use a single
database for multiple caches. And you can easily add encryption to the caching mechanism for items
stored in a persistent backing store.
The block also provides a wide range of expiration mechanisms, including several time-based
expirations as well as file-based expiration. Unlike some caching mechanisms, you can specify
multiple expirations for each cached item, and even create your own custom expiration policies.
On top of all of this flexibility, the block makes it easy for administrators and operators to change the
behavior through configuration using the configuration tools provided with Enterprise Library. They
can change the settings for the cache, such as the polling frequency, change the backing stores that
the block uses, and change the algorithms that it uses to encrypt cached data.
This chapter discussed all of these features, and contained detailed examples of how you can use the
block in your own applications. For more information about the Caching block, see the online
documentation and the help files installed with Enterprise Library.
to create a complete list of criteria for all known invalid input unless there is a limited range of
invalid values.
When you use a rule set to validate an instance of a specific type or object, the block can apply the
rules to:
Notice that you can validate the values of method parameters and the return type of methods that
take parameters when that method is invoked, only by using the validation call handler (which is
part of the Validation block) in conjunction with the Unity dependency injection and interception
mechanism. The validation call handler will validate the parameter values based on the rules for
each parameter type and any validation attributes applied to the parameters. We don't cover the
use of the validation call handler in this guide, as it requires you to be familiar with Unity
interception techniques. For more information about interception and the validation call handler,
see the Unity interception documentation installed with Enterprise Library or available online at
http://go.microsoft.com/fwlink/?LinkId=188875.
Alternatively, you can create individual validators programmatically to validate specific values, such
as strings or numeric values. However, this is not the main focus of the block, though we do include
samples in this chapter that show how you can use individual validators.
In addition, the Validation block contains features that integrate with Windows Forms, Windows
Presentation Foundation (WPF), ASP.NET, and Windows Communication Foundation (WCF)
applications. These features use a range of different techniques to connect to the UI, such as a proxy
validator class based on the standard ASP.NET Validator control that you can add to a Web page, a
ValidationProvider class that you can specify in the properties of Windows Forms controls, a
ValidatorRule class that you can specify in the definition of WPF controls, and a behavior extension
that you can specify in the <system.ServiceModel> section of your WCF configuration. You'll see
more details of these features later in this chapter.
The length of a string, or the occurrence of a specified set of characters within it.
Whether a value lies within a specified range, including tests for dates and times relative to
a specified date/time.
Whether a value is one of a specified set of values, or can be converted to a specific data
type or enumeration value.
Whether a value is null, or is the same as the value of a specific property of an object.
The composite validators are used to combine other validators when you need to apply more
complex validation rules. The Validation block includes an AND validator and an OR validator, each
of which acts as a container for other validators. By nesting these composite validators in any
combination and populating them with other validators, you can create very comprehensive and
very specific validation rules.
Table 1 describes the complete set of validators provided with the Validation block.
Table 1
The validators provided with the Validation block
Value Validators
Contains Characters Validator. Checks that the string contains any or all of the characters in a specified set.
Date Time Range Validator. Checks that the DateTime value falls within a specified range.
Domain Validator. Checks that the value is one of a specified set of values.
Enum Conversion Validator. Checks that the string can be converted to a value of a specified enumeration type.
Not Null Validator. Checks that the value is not null.
Property Comparison Validator. Compares the value with the value of a specified property of the object.
Range Validator. Checks that the value falls within a specified range.
Regular Expression Validator. Checks that the string matches a specified regular expression.
Relative Date Time Validator. Checks that the DateTime value falls within a specified range using relative times and dates.
String Length Validator. Checks that the length of the string is within the specified range.
Type Conversion Validator. Checks that the string can be converted to a specified data type.

Object Validators
Object Validator. Invokes validation of an object reference using all the rules defined for the object's type.
Object Collection Validator. Checks that the object is a collection of the specified type and then invokes validation on each element of the collection.

Composite Validators
And Composite Validator. Requires that all the validators it contains return valid results.
Or Composite Validator. Requires that at least one of the validators it contains returns a valid result.

Single Member Validators
Field Value Validator. Validates the value of a field of a type.
Method Return Value Validator. Validates the return value of a method of a type.
Property Value Validator. Validates the value of the get accessor of a property.
For more details on each validator, see the documentation installed with Enterprise Library or
available online at http://go.microsoft.com/fwlink/?LinkId=188874. You will see examples that use
many of these validators throughout this chapter.
To a field. The Validation block will check that the field value satisfies all validation rules
defined in validators applied to the field.
To a property. The Validation block will check that the value of the get property satisfies all
validation rules defined in validators applied to the property.
To a method that takes no parameters. The Validation block will check that the return value
of the method satisfies all validation rules defined in validators applied to the method.
To a parameter in a WCF Service Contract. The Validation block will check that the
parameter value satisfies all validation rules defined in validators applied to the parameter.
To parameters of methods that are intercepted, by using the validation call handler in
conjunction with the Policy Injection application block. For more information on using
interception, see Appendix C, "Policy Injection in Enterprise Library."
Each of the validators described in the previous section has a related attribute that you apply in your
code, specifying the values for validation (such as the range or comparison value) as parameters to
the attribute. For example, you can validate a property that must have a value between 0 and 10
inclusive by applying the following attribute to the property definition, as seen in the following code.
[RangeValidator(0, RangeBoundaryType.Inclusive, 10, RangeBoundaryType.Inclusive)]
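As a slightly fuller sketch (the class and property names here are illustrative, loosely modeled on the AttributedProduct class used later in this chapter):
public class AttributedProduct
{
    [StringLengthValidator(6, RangeBoundaryType.Inclusive,
                           6, RangeBoundaryType.Inclusive,
                           MessageTemplate = "Product ID must be 6 characters.")]
    public string ID { get; set; }

    [RangeValidator(0, RangeBoundaryType.Inclusive, 10, RangeBoundaryType.Inclusive)]
    public int Rating { get; set; }

    [NotNullValidator(MessageTemplate = "You must specify a product name.")]
    public string Name { get; set; }
}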
DataAnnotations Attributes
In addition to using the built-in validation attributes, the Validation block will perform validation
defined in the vast majority of the validation attributes in the
System.ComponentModel.DataAnnotations namespace. These attributes are typically used by
frameworks and object/relational mapping (O/RM) solutions that auto-generate classes that
represent data items. They are also generated by the ASP.NET validation controls that perform both
client-side and server-side validation. While the set of validation attributes provided by the
Validation block does not map exactly to those in the DataAnnotations namespace, the most
common types of validation are supported. A typical use of data annotations is shown here.
[System.ComponentModel.DataAnnotations.Required(
ErrorMessage = "You must specify a value for the product ID.")]
[System.ComponentModel.DataAnnotations.StringLength(6,
ErrorMessage = "Product ID must be 6 characters.")]
[System.ComponentModel.DataAnnotations.RegularExpression("[A-Z]{2}[0-9]{4}",
ErrorMessage = "Product ID must be 2 capital letters and 4 numbers.")]
public string ID { get; set; }
In reality, the Validation block validation attributes are data annotation attributes, and can be used
(with some limitations) whenever you can use data annotations attributes, for example, with
ASP.NET Dynamic Data applications. The main difference is that the Validation block attribute
validation occurs only on the server, and not on the client.
Also keep in mind that, while DataAnnotations supports most of the Validation block attributes, not
all of the validation attributes provided with the Validation block are supported by the built-in .NET
validation mechanism. For more information, see the documentation installed with Enterprise
Library, and the topic "System.ComponentModel.DataAnnotations Namespace" at
http://msdn.microsoft.com/en-us/library/system.componentmodel.dataannotations.aspx.
Self-Validation
Self-validation might sound as though you should be congratulating yourself on your attractiveness
and wisdom, and your status as a fine and upstanding citizen. However, in Enterprise Library terms,
self-validation is concerned with the use of classes that contain their own validation logic.
For example, a class that stores spare parts for aircraft might contain a function that checks if the
part ID matches a specific format containing letters and numbers. You add the HasSelfValidation
attribute to the class, add the SelfValidation attribute to any validation functions it contains, and
optionally add attributes for the built-in Validation block validators to any relevant properties. Then
you can validate an instance of the class using the Validation block. The block will execute the self-validation method.
Self-validation cannot be used with the UI validation integration features for Windows Forms, WPF,
or ASP.NET.
Self-validation is typically used where the validation rule you want to apply involves values from
different parts of your class or values that are not publicly exposed by the class, or when the
validation scenario requires complex rules that even a combination of composed validators cannot
achieve. For example, you may want to check if the sum of the number of products on order and the
number already in stock is less than a certain value before allowing a user to order more. The
following extract from one of the examples you'll see later in this chapter shows how self-validation
can be used in this case.
[HasSelfValidation]
public class AnnotatedProduct : IProduct
...
... code to implement constructor and properties goes here
...
[SelfValidation]
public void Validate(ValidationResults results)
{
string msg = string.Empty;
if (InStock + OnOrder > 100)
{
msg = "Total inventory (in stock and on order) cannot exceed 100 items.";
results.AddResult(new ValidationResult(msg, this, "ProductSelfValidation",
"", null));
}
}
The Validation block calls the self-validation method when you validate this class instance, passing to
it a reference to the ValidationResults collection, which it populates with any validation errors
found. The code above simply adds one or more new ValidationResult instances to the collection if
the self-validation method detects an invalid condition. The parameters of the ValidationResult
constructor are:
The validation error message to display to the user or write to a log. The ValidationResult
class exposes this as the Message property.
A reference to the class instance where the validation error was discovered (usually the
current instance). The ValidationResult class exposes this as the Target property.
A string value that describes the location of the error (usually the name of the class
member, or some other value that helps locate the error). The ValidationResult class
exposes this as the Key property.
An optional string tag value that can be used to categorize or filter the results. The
ValidationResult class exposes this as the Tag property.
A reference to the validator that performed the validation. This is not used in self-validation,
though it will be populated by other validators that validate individual members of the type.
The ValidationResult class exposes this as the Validator property.
In configuration. You define a type that you want to apply rules to, and then define one or
more rule sets for that type. To each rule set you add the required combination of
validators, each one representing a validation rule within that rule set. You can specify one
rule set for each type as the default rule set for that type. The rules within this rule set are
then treated as members of the default (unnamed) rule set, as well as that named rule set.
In Validation block validator attributes applied to classes and their members. Every
validation attribute will accept a rule set name as a parameter. For example, you specify
that a NotNullValidator is a member of a rule set named MyRuleset, like this.
[NotNullValidator(MessageTemplate = "Cannot be null",
Ruleset = "MyRulesetName")]
In SelfValidation attributes within a class. You add the Ruleset parameter to the attribute
to indicate which rule set this self-validation rule belongs to. You can define multiple self-validation methods in a class, and add them to different rule sets if required.
[SelfValidation(Ruleset = "MyRulesetName")]
Figure 2 shows the configuration console with the configuration used in the example application for
this chapter. It defines a rule set named MyRuleset for the validated type (the Product class).
MyRuleset is configured as the default rule set, and contains a series of validators for all of the
properties of the Product type. These validators include two Or Composite Validators (which contain
other validators) for the DateDue and Description properties, three validators that will be combined
with the And operation for the ID property, and individual validators for the remaining properties.
When you highlight a rule, member, or validator in the configuration console, it shows connection
lines between the configured items to help you see the relationships between them.
Specifying Rule Sets When Validating
You can specify a rule set name when you create a type validator that will validate an instance of a
type. If you use the ValidatorFactory facade to create a type validator for a type, you can specify a
rule set name as a parameter of the CreateValidator method. If you create an Object Validator or an
Object Collection Validator programmatically by calling the constructor, you can specify a rule set
name as a parameter of the constructor. Finally, if you resolve a validator for a type through the
Enterprise Library Container, you can specify a rule set name as the string key value. We look in
more detail at the options for creating validators later in this chapter.
If you specify a rule set name when you create a validator for an object, the Validation
block will apply only those validation rules that are part of the specified rule set. It will, by
default, apply all rules with the specified name that it can find in configuration, attributes,
and self-validation.
If you do not specify a rule set name when you create a validator for an object, the
Validation block will, by default, apply all rules that have no name (effectively, rules with an
empty string as the name) that it can find in configuration, attributes, and self-validation. If
you have specified one rule set in configuration as the default rule set for the type you are
validating (by setting the DefaultRule property for that type to the rule set name), rules
within this rule set are also treated as being members of the default (unnamed) rule set.
The one time that this default mechanism changes is if you create a validator for a type using a
facade other than ValidatorFactory. As you'll see later in this chapter you can use the
ConfigurationValidatorFactory, AttributeValidatorFactory, or ValidationAttributeValidatorFactory
to generate type validators. In this case, the validator will only apply rules that have the specified
name and exist in the specified location.
For example, when you use a ConfigurationValidatorFactory and specify the name MyRuleset as the
rule set name when you call the CreateValidator method, the validator you obtain will only process
rules it finds in configuration that are defined within a rule set named MyRuleset for the target
object type. If you use an AttributeValidatorFactory, the validator will only apply Validation block
rules located in attributes and self-validation methods of the target class that have the name
MyRuleset.
Configuring multiple rule sets for the same type is useful when the type you need to validate is a
primitive type such as a String. A single application may have dozens of different rule sets that all
target String.
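For instance (a hedged sketch; the rule set name here is hypothetical), you might validate a raw string against one of those rule sets like this:
// Resolve the ValidatorFactory facade from the Enterprise Library container.
ValidatorFactory valFactory
    = EnterpriseLibraryContainer.Current.GetInstance<ValidatorFactory>();
// Create a validator for the String type using a named rule set.
Validator<string> zipValidator
    = valFactory.CreateValidator<string>("USZipCodeRuleset");
ValidationResults zipResults = zipValidator.Validate("90210-1234");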
Then you can edit your code to specify the namespaces used by the Validation block and, optionally,
the integration features if you need to integrate with WCF or a UI technology.
If you are using WCF integration, you should add a reference to the System.ServiceModel
namespace.
(Table comparing the advantages and considerations of the four approaches: rule sets in configuration, Validation block attributes, data annotation attributes, and validators created programmatically.)
If you decide to use attributes to define your validation rules within classes but are finding it difficult
to choose between using the Validation block attributes and the Microsoft .NET data annotation
attributes, you should consider using the Validation block attributes approach as this provides more
powerful capabilities and supports a far wider range of validation operations. However, you should
consider the data annotations approach in the following scenarios:
When you are working with existing applications that already use data annotations.
When you are building a Web application where you will use the ASP.NET Data Annotation
Model Binder, or you are using ASP.NET Dynamic Data to create data-driven user interfaces.
When you are using a framework such as the Microsoft Entity Framework, or another
object/relational mapping (O/RM) technology that auto-generates classes that include data
annotations.
Use the ValidatorFactory facade to create validators. This approach makes it easy to create
type validators that you can use, in conjunction with rule sets, to validate multiple members
of an object instance. This is generally the recommended approach. You also use this
approach to create validators that use only validation attributes or data annotations within
the classes you want to validate, or only rule sets defined in configuration. You can resolve
an instance of the ValidatorFactory using a single line of code, as you will see later in this
chapter.
Resolve individual validators through the Enterprise Library Container. This approach
allows you to obtain a validator instance using dependency injection; for example, by simply
specifying the type of validator you require in the constructor of a class that you resolve
through the container. If you specify a name when you resolve the instance, this is
interpreted as the name of the rule set for that validator to use when validating objects. See
Alternatively, you can extract more information about the validation result for each individual
validator where an error occurred. The example application we provide demonstrates how you can
do this, and you'll see more details later in this chapter.
The Message property of a validator is actually a template, not just a simple text string that is
displayable. When the block adds an individual ValidationResult to the ValidationResults instance
for each validation error it detects, it parses the value of the Message property looking for tokens
that it will replace with the value of specific properties of the validator that detected the error.
The value injected into the placeholder tokens, and the number of tokens used, depends on the type
of validatoralthough there are three tokens that are common to all validators. The token {0} will
be replaced by the value of the object being validated (ensure that you escape this value before you
display or use it in order to guard against injection attacks). The token {1} will contain the name of
the member that was being validated, if available, and is equivalent to the Key property of the
validator. The token {2} will contain the value of the Tag property of the validator.
The remaining tokens depend on the individual validator type. For example, in the case of the
Contains Characters validator, the tokens {3} and {4} will contain the characters to check for and the
ContainsCharacters value (All or Any). In the case of a range validator, such as the String Length
validator, the tokens {3} to {6} will contain the values and bound types (Inclusive, Exclusive, or
Ignore) for the lower and upper bounds you specify for the validator. For example, you may define a
String Length validator like this:
[StringLengthValidator(5, RangeBoundaryType.Inclusive, 20,
RangeBoundaryType.Inclusive,
MessageTemplate = "[{0}]Name must be between {3} and {5} characters.")]
If this validator is attached to the Name property, and the value of this property is invalid, the
ValidationResults instance will contain the error message Name must be between 5 and
20 characters.
Other validators use tokens that are appropriate for the type of validation they perform. The
documentation installed with Enterprise Library lists the tokens for each of the Validation block
validators. You will also see the range of tokens used in the examples that follow.
The Product class is used primarily with the example that demonstrates using a configured rule set,
and contains no validation attributes. The AttributedProduct class contains Validation block
attributes, while the AnnotatedProduct class contains .NET Data Annotation attributes. The latter
two classes also contain self-validation routines, the extent depending on the capabilities of the
type of validation attributes they contain. You'll see more on this topic when we look at the use of
validation attributes later in this chapter.
The application provides the following individual examples you can use to help you understand in
detail the different ways that you can use the Validation block:
Validating Objects and Collections of Objects. This is the core topic for using the Validation
block, and is likely to be the most common scenario in your applications. It shows how you
can create type validators to validate instances of your custom classes, how you can dive
deeper into the ValidationResults instance that is returned, how you can use the Object
Validator, and how you can validate collections of objects.
Using Validation Attributes. This section describes how you can use attributes applied to
your classes to enable validation of members of these classes. These attributes use the
Validation block validators and the .NET Data Annotation attributes.
Creating and Using Individual Validators. This section shows how you can create and use
the validators provided with the block to validate individual values and members of objects.
WCF Service Validation Integration. This section describes how you can use the block to
validate parameters within a WCF service.
Finally, we'll round off the chapter by looking briefly at how you can integrate the Validation block
with user interface technologies such as Windows Forms, WPF, and ASP.NET.
You can then create a validator for any type you want to validate. For example, this code creates a
validator for the Product class and then validates an instance of that class named myProduct.
Validator<Product> pValidator = valFactory.CreateValidator<Product>();
ValidationResults valResults = pValidator.Validate(myProduct);
By default, the validator will use the default rule set defined for the target type (you can define
multiple rule sets for a type, and specify one of these as the default for this type). If you want the
validator to use a specific rule set, you specify this as the single parameter to the CreateValidator
method, as shown here.
Validator<Product> productValidator
= valFactory.CreateValidator<Product>("RuleSetName");
ValidationResults valResults = productValidator.Validate(myProduct);
The example named Using a Validation Rule Set to Validate an Object creates an instance of the
Product class that contains invalid values for all of the properties, and then uses the code shown
above to create a type validator for this type and validate it. It then displays details of the validation
errors contained in the returned ValidationResults instance. However, rather than using the simple
technique of iterating over the ValidationResults instance displaying the top-level errors, it uses
code to dive deeper into the results to show more information about each validation error, as you
will see in the next section.
Delving Deeper into ValidationResults
You can check whether validation succeeded, or whether any validation errors were detected, by
examining the IsValid property of a ValidationResults instance and displaying details of any
validation errors that occurred. However, simply iterating over a ValidationResults instance (as we
demonstrated in the section "Performing Validation and Displaying Validation Errors" earlier in this
chapter) displays just the top-level errors. In many cases, this is all you will require. If a validation
error occurs due to a validation failure in a composite (And or Or) validator, the error this approach
displays is the message and details of the composite validator.
However, sometimes you may wish to delve deeper into the contents of a ValidationResults
instance to learn more about the errors that occurred. This is especially the case when you use
nested validators inside a composite validator. The code we use in the example provides richer
information about the errors. When you run the example, it displays the following results (we've
removed some repeated content for clarity).
The following 6 validation errors were detected:
+ Target object: Product, Member: DateDue
- Detected by: OrCompositeValidator
You can see that this shows the target object type and the name of the member of the target object
that was being validated. It also shows the type of the validator that performed the operation, the
Tag property values, and the validation error message. Notice also that the output includes the
validation results from the validators nested within the two OrCompositeValidator validators. To
achieve this, you must iterate recursively through the ValidationResults instance because it contains
nested entries for the composite validators.
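Although we don't list the full routine here, a minimal sketch of such a recursive walk might look like
the following. The routine name is ours; it relies only on the Key, Message, and
NestedValidationResults properties of each ValidationResult.
void ShowResultsRecursively(IEnumerable<ValidationResult> results, int indent)
{
  foreach (ValidationResult result in results)
  {
    // Display this result, indented to reflect its nesting depth.
    Console.WriteLine("{0}Member: {1} - {2}",
      new string(' ', indent), result.Key, result.Message);
    // Composite (And/Or) validators expose their children's results here.
    ShowResultsRecursively(result.NestedValidationResults, indent + 2);
  }
}
You start the walk by passing in the ValidationResults instance itself, which is enumerable as a
collection of ValidationResult instances, and an indent of zero.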
The code we used also contains a somewhat contrived feature: to be able to show the value being
validated, some examples that use this routine include the validated value at the start of the
message using the {0} token in the form: [{0}] validation error message. The example code parses
the Message property to extract the value and the message when it detects that this message string
contains such a value. It also encodes this value for display in case it contains malicious content.
While this may not represent a requirement in real-world application scenarios, it is useful here as it
allows the example to display the invalid values that caused the validation errors and help you
understand how each of the validators works. We haven't listed the code here, but you can examine
it in the example application to see how it works, and adapt it to meet your own requirements. You'll
find it in the ShowValidationResults, ShowValidatorDetails, and GetTypeNameOnly routines
located in the region named Auxiliary routines at the end of the main program file.
Alternatively, you can call the default constructor of the Object validator. In this case, it will create a
type-specific validator for the type of the target instance you pass to the Validate method. If you do
not specify a rule set name in the constructor, the validation will use the default rule set defined for
the type it is validating.
Validator pValidator = new ObjectValidator("RuleSetName");
ValidationResults valResults = pValidator.Validate(myProduct);
The validation will take into account any applicable rule sets, and any attributes and self-validation
methods found within the target object.
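For comparison, a minimal sketch using the parameterless constructor follows; myProduct is the
same instance validated in the earlier examples.
// The target type is determined when Validate is called, and the
// default rule set for that type is used.
Validator anyValidator = new ObjectValidator();
ValidationResults anyResults = anyValidator.Validate(myProduct);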
Differences Between the Object Validator and the Factory-Created Type Validators
While the two approaches you've just seen to creating or obtaining a validator for an object achieve
the same result, there are some differences in their behavior:
If you do not specify a target type when you create an Object Validator programmatically,
you can use it to validate any type. When you call the Validate method, you specify the
target instance, and the Object validator creates a type-specific validator for the type of the
target instance. In contrast, the validator you obtain from a factory can only be used to
validate instances of the type you specify when you obtain the validator. However, it can
also be used to validate subclasses of the specified type, but it will use the rules defined for
the specified target type.
The Object Validator will always use rules in configuration for the type of the target object,
and attributes and self-validation methods within the target instance. In contrast, you can
use a specific factory class type to obtain validators that only validate the target instance
using one type of rule source (in other words, just configuration rule sets, or just one type of
attributes).
The Object Validator will acquire a type-specific validator of the appropriate type each time
you call the Validate method, even if you use the same instance of the Object validator
every time. In contrast, a validator obtained from one of the factory classes does not need
to do this, and will offer improved performance.
As you can see from the flexibility and performance advantages listed above, you should generally
consider using the ValidatorFactory approach for creating validators to validate objects rather than
creating individual Object Validator instances.
You can also create an Object Collection validator programmatically, and use it to validate a
collection held in a variable. The example named Validating a Collection of Objects demonstrates
this approach. It creates a List named productList that contains two instances of the Product class,
one of which contains all valid values, and one that contains invalid values for some of its properties.
Next, the code creates an Object Collection validator for the Product type and then calls the Validate
method.
// Create an Object Collection Validator for the collection type.
Validator collValidator
= new ObjectCollectionValidator(typeof(Product));
// Validate all of the objects in the collection.
ValidationResults results = collValidator.Validate(productList);
Finally, the code displays the validation errors using the same routine as in earlier examples. As the
invalid Product instance contains the same values as the previous example, the result is the same.
You can run the example and view the code to verify that this is the case.
[StringLengthValidator(6, RangeBoundaryType.Inclusive,
6, RangeBoundaryType.Inclusive,
MessageTemplate = "Product ID must be {3} characters.")]
[RegexValidator("[A-Z]{2}[0-9]{4}",
MessageTemplate = "Product ID must be 2 letters and 4 numbers.")]
public string ID { get; set; }
Other validation attributes used within the AttributedProduct class include an Enum Conversion
validator that ensures that the value of the ProductType property is a member of the ProductType
enumeration, shown here. Note that the token {3} for the String Length validator used in the
previous section of code is the lower bound value, while the token {3} for the Enum Conversion
validator is the name of the enumeration it is comparing the validated value against.
[EnumConversionValidator(typeof(ProductType),
MessageTemplate = "Product type must be a value from the '{3}' enumeration.")]
public string ProductType { get; set; }
If you want to allow null values for a member of a class, you can apply the IgnoreNulls attribute.
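For example, the following sketch (using the same attribute combination you will see in the WCF
service contract later in this chapter) allows a Description property to be either null or a string of
between 5 and 100 characters.
[IgnoreNulls(MessageTemplate = "Description can be NULL or a string value.")]
[StringLengthValidator(5, RangeBoundaryType.Inclusive,
100, RangeBoundaryType.Inclusive,
MessageTemplate = "Description must be between {3} and {5} characters.")]
public string Description { get; set; }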
Applying Self-Validation
Some validation rules are too complex to apply using the validators provided with the Validation
block or the .NET Data Annotation validation attributes. It may be that the values you need to
perform validation come from different places, such as properties, fields, and internal variables, or
involve complex calculations.
In this case, you can define self-validation rules as methods within your class (the method names are
irrelevant), as described earlier in this chapter in the section "Self-Validation." We've implemented a
self-validation routine in the AttributedProduct class in the example application. The method simply
checks that the combination of the values of the InStock, OnOrder, and DateDue properties meets
predefined rules. You can examine the code within the AttributedProduct class to see the
implementation.
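We haven't reproduced the class here, but a minimal sketch of what such a method might look like
follows; the method name and the simplified rule are our assumptions, based on the validation
message shown in the results below. Remember that the class itself must also carry the
HasSelfValidation attribute, as described in the earlier section "Self-Validation."
[SelfValidation]
public void CheckInventoryLevels(ValidationResults results)
{
  // A rule that spans several properties: the combined stock and
  // on-order quantities must not exceed 100 items.
  if (InStock + OnOrder > 100)
  {
    results.AddResult(new ValidationResult(
      "Total inventory (in stock and on order) cannot exceed 100 items.",
      this, "ProductSelfValidation", null, null));
  }
}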
Results of the Validation Operation
The example creates an invalid instance of the AttributedProduct class shown above, validates it,
and then displays the results of the validation process. It creates the following output, though we
have removed some of the repeated output here for clarity. You can run the example yourself to see
the full results.
Created and populated a valid instance of the AttributedProduct class.
There were no validation errors.
Created and populated an invalid instance of the AttributedProduct class.
The following 7 validation errors were detected:
+ Target object: AttributedProduct, Member: ID
- Detected by: RegexValidator
- Validated value was: '12075'
- Message: 'Product ID must be 2 capital letters and 4 numbers.'
...
...
+ Target object: AttributedProduct, Member: ProductType
- Detected by: EnumConversionValidator
- Validated value was: 'FurryThings'
- Message: 'Product type must be a value from the 'ProductType' enumeration.'
...
...
+ Target object: AttributedProduct, Member: DateDue
- Detected by: OrCompositeValidator
- Validated value was: '19/08/2010 15:55:16'
- Message: 'Date due must be between today and six months time.'
+ Nested validators:
- Detected by: RelativeDateTimeValidator
- Validated value was: '18/11/2010 13:36:02'
- Message: 'Value can be NULL or a date.'
- Detected by: NotNullValidator
- Validated value was: '18/11/2010 13:36:02'
+ Target object: AttributedProduct, Member: ProductSelfValidation
- Detected by: [none]
- Tag value:
- Message: 'Total inventory (in stock and on order) cannot exceed 100 items.'
Notice that the output includes the name of the type and the name of the member (property) that
was validated, as well as the type of validator that detected the error, the current value of the
member, and the message. For the DateDue property, the output shows the two validators nested
within the Or Composite validator. Finally, it shows the result from the self-validation method. The
values you see for the self-validation are those the code in the self-validation method specifically
added to the ValidationResults instance.
Validating Subclass Types
While discussing validation through attributes, we should briefly touch on the factors involved when
you validate a class that inherits from the type you specified when creating the validator you use to
validate it. For example, if you have a class named SaleProduct that derives from Product, you can
use a validator defined for the Product class to validate instances of the SaleProduct class. The
Validate method will also apply any relevant rules defined in attributes in both the SaleProduct class
and the Product base class.
If the derived class inherits a member from the base class and does not override it, the validators for
that member defined in the base class apply to the derived class. If the derived class inherits a
member but overrides it, the validators defined in the base class for that member do not apply to
the derived class.
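As a brief sketch of what this means in code, a validator created for the base type will validate an
instance of the derived type, applying the rules defined for the base type:
// valFactory is the ValidatorFactory resolved in the earlier examples.
Validator<Product> productValidator = valFactory.CreateValidator<Product>();
ValidationResults results = productValidator.Validate(new SaleProduct());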
Validating Properties that are Objects
In many cases, you may have a property of your class defined as the type of another class. For
example, your OrderLine class is likely to have a property that is a reference to an instance of the
Product class. It's common for this property to be defined as a base type or interface type, allowing
you to set it to an instance of any class that inherits or implements the type specified for the
property.
You can validate such a property using an ObjectValidator attribute within the class. However, by
default, the validator will validate the property using rules defined for the type of the property; in
this example, the type IProduct. If you want the validation to take place based on the actual type of
the object that is currently set as the value of the property, you can add the ValidateActualType
parameter to the ObjectValidator attribute, as shown here.
public class OrderLine
{
[ObjectValidator(ValidateActualType=true)]
public IProduct OrderProduct { get; set; }
...
}
Compared to the validation attributes provided with the Validation block, there are some limitations
when using the validation attributes from the DataAnnotations namespace:
The range of supported validation operations is less comprehensive, though there are some
new validation types available in .NET Framework 4.0 that extend the range. However, some
validation operations such as property value comparison, enumeration membership
checking, and relative date and time comparison are not available when using data
annotation validation attributes.
You cannot specify rule set names, and so all rules implemented with data annotation
validation attributes belong to the default rule set. You will see more details about rule sets
later in this chapter.
There is no simple built-in support for self-validation, as there is in the Validation block.
You can, of course, include both data annotation and Validation block attributes in the same class if
you wish, and implement self-validation using the Validation block mechanism in a class that
contains data annotation validation attributes. The validation methods in the Validation block will
process both types of attributes.
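As a minimal sketch of this mixed approach (the class and member names here are illustrative, not
taken from the example):
[HasSelfValidation]
public class MixedProduct
{
  [Required(ErrorMessage = "ID is required.")] // data annotation
  [StringLengthValidator(6, RangeBoundaryType.Inclusive,
  6, RangeBoundaryType.Inclusive,
  MessageTemplate = "Product ID must be {3} characters.")] // Validation block
  public string ID { get; set; }

  [SelfValidation]
  public void CheckCustomRules(ValidationResults results)
  {
    // Validation block self-validation runs alongside the annotations.
  }
}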
For more information about data annotations, see
http://msdn.microsoft.com/en-us/library/system.componentmodel.dataannotations.aspx (.NET
Framework 3.5) and
http://msdn.microsoft.com/en-us/library/system.componentmodel.dataannotations(VS.100).aspx
(.NET Framework 4.0).
An Example of Using Data Annotations
The examples we provide for this chapter include one named Using Data Annotation Attributes and
Self-Validation. This uses only the range of data annotation attributes in version 3.5 of the .NET
Framework, so you can run it on machines that do not have Visual Studio 2010 or version 4.0 of the
.NET Framework installed.
The class named AnnotatedProduct contains data annotation attributes to implement the same
rules as those applied by Validation block attributes in the AttributedProduct class (which you saw in
the previous example). However, due to the limitations with data annotations, the self-validation
method within the class has to do more work to apply the same validation rules.
For example, it has to check the minimum value of some properties as the data annotation
attributes in version 3.5 of the .NET Framework only support validation of the maximum value (in
version 4.0, they do support minimum value validation). It also has to check the value of the
DateDue property to ensure it is not more than six months in the future, and that the value of the
ProductType property is a member of the ProductType enumeration.
To perform the enumeration check, the self-validation method creates an instance of the Validation
block Enum Conversion validator programmatically, and then calls its DoValidate method (which
allows you to pass in all of the values required to perform the validation). The code passes to this
method the value of the ProductType property, a reference to the current object, the name of the
enumeration, and a reference to the ValidationResults instance being used to hold all of the
validation errors.
var enumConverterValidator = new EnumConversionValidator(typeof(ProductType),
"Product type must be a value from the '{3}' enumeration.");
enumConverterValidator.DoValidate(ProductType, this, "ProductType", results);
The code that creates the object to validate, validates it, and then displays the results in the same
way as you saw in the previous example, with the exception that it creates an invalid instance of the
AnnotatedProduct class, rather than the AttributedProduct class. The result when you run this
example is also similar to that of the previous example, but with a few exceptions. We've listed some
of the output here.
Created and populated an invalid instance of the AnnotatedProduct class.
The following 7 validation errors were detected:
+ Target object: AnnotatedProduct, Member: ID
- Detected by: [none]
- Tag value:
- Message: 'Product ID must be 6 characters.'
...
+ Target object: AnnotatedProduct, Member: ProductSelfValidation
- Detected by: [none]
- Tag value:
- Message: 'Total inventory (in stock and on order) cannot exceed 100 items.'
+ Target object: AnnotatedProduct, Member: ID
- Detected by: ValidationAttributeValidator
- Message: 'Product ID must be 2 capital letters and 4 numbers.'
+ Target object: AnnotatedProduct, Member: InStock
- Detected by: ValidationAttributeValidator
- Message: 'Quantity in stock cannot be less than 0.'
You can see that validation failures detected for data annotations contain less information than
those detected for the Validation block attributes, and validation errors are shown as being detected
by the ValidationAttributeValidator classthe base class for data annotation validation attributes.
However, where we performed additional validation using the self-validation method, there is extra
information available.
Defining Attributes in Metadata Classes
In some cases, you may want to locate your validation attributes (both Validation block attributes
and .NET Data Annotation validation attributes) in a file separate from the one that defines the class
that you will validate. This is a common scenario when you are using tools that generate the class
files, and would therefore overwrite your validation attributes. To avoid this you can locate your
validation attributes in a separate file that forms a partial class along with the main class file. This
approach makes use of the MetadataType attribute from the
System.ComponentModel.DataAnnotations namespace.
You apply the MetadataType attribute to your main class file, specifying the type of the class that
stores the validation attributes you want to apply to your main class members. You must define this
as a partial class, as shown here. The only change to the content of this class compared to the
attributed versions you saw in the previous sections of this chapter is that it contains no validation
attributes.
[MetadataType(typeof(ProductMetadata))]
public partial class Product
{
... Existing members defined here, but without attributes or annotations ...
}
You then define the metadata type as a normal class, except that you declare simple properties for
each of the members to which you want to apply validation attributes. The actual type of these
properties is not important, and is ignored by the compiler. The accepted approach is to declare
them all as type Object. As an example, if your Product class contains the ID and Description
properties, you can define the metadata class for it, as shown here.
public class ProductMetadata
{
[Required(ErrorMessage = "ID is required.")]
[RegularExpression("[A-Z]{2}[0-9]{4}",
ErrorMessage = "Product ID must be 2 capital letters and 4 numbers.")]
public object ID;
[StringLength(100, ErrorMessage = "Description must be less than 100 chars.")]
public object Description;
}
ConfigurationValidatorFactory. This factory creates validators that only apply rules defined
in a configuration file, or in a configuration source you provide. By default it looks for
configuration in the default configuration file (App.config or Web.config). However, you can
create an instance of a class that implements the IConfigurationSource interface, populate
it with configuration data from another file or configuration storage media, and use this
when you create this validator factory.
AttributeValidatorFactory. This factory creates validators that only apply rules defined in
Validation block attributes located in the target class, and rules defined through self-validation
methods.
For example, to obtain a validator for the Product class that validates using only attributes and
self-validation methods within the target instance, and validate an instance of this class, you resolve
an instance of the AttributeValidatorFactory from the container, as shown here.
AttributeValidatorFactory attrFactory =
EnterpriseLibraryContainer.Current.GetInstance<AttributeValidatorFactory>();
Validator<Product> pValidator = attrFactory.CreateValidator<Product>();
ValidationResults valResults = pValidator.Validate(myProduct);
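The configuration-only equivalent follows the same pattern. This is a sketch assuming the default
configuration source; to use a different source, you would instead construct the factory with your
own IConfigurationSource implementation, as described above.
// Resolve a factory that applies only configuration rule sets.
ConfigurationValidatorFactory configFactory =
EnterpriseLibraryContainer.Current.GetInstance<ConfigurationValidatorFactory>();
Validator<Product> cfgValidator = configFactory.CreateValidator<Product>("RuleSetName");
ValidationResults cfgResults = cfgValidator.Validate(myProduct);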
message, and the Tag property. Then you call the Validate method of the validator, specifying the
object or value you want to validate. The example, Creating and Using Validators Directly,
demonstrates the creation and use of some of the individual and composite validators provided with
the Validation block.
Validating Strings for Contained Characters
The example code first creates a ContainsCharactersValidator that specifies that the validated value
must contain the characters c, a, and t, and that it must contain all of these characters (you can, if
you wish, specify that it must only contain Any of the characters). The code also sets the Tag
property to a user-defined string that helps to identify the validator in the list of errors. The overload
of the Validate method used here returns a new ValidationResults instance containing a
ValidationResult instance for each validation error that occurred.
// Create a Contains Characters Validator and use it to validate a string.
Validator charsValidator = new ContainsCharactersValidator("cat",
ContainsCharacters.All,
" Value must contain {4} of the characters '{3}'.");
charsValidator.Tag = "Validating the String value 'disconnected'";
ValidationResults valResults = charsValidator.Validate("disconnected");
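// (The array definition is elided at the page break in this excerpt. A
// sketch consistent with the description below might be:)
Validator[] valArray = new Validator[] {
new NotNullValidator(true),
new StringLengthValidator(5, RangeBoundaryType.Inclusive,
5, RangeBoundaryType.Inclusive,
"Value must be between {3} and {5} characters.")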
};
Having created an array of validators, we can now use this to create a composite validator. There are
two composite validators, the AndCompositeValidator and the OrCompositeValidator. You can
combine these as well to create any nested hierarchy of validators you require, with each
combination returning a valid result if all (with the AndCompositeValidator) or any (with the
OrCompositeValidator) of the validators it contains are valid. The example creates an
OrCompositeValidator, which will return true (valid) if the validated string is either null or contains
exactly five characters. Then it validates a null value and an invalid string, passing into the Validate
method the existing ValidationResults instance.
Validator orValidator = new OrCompositeValidator(
"Value can be NULL or a string of 5 characters.",
valArray);
// Validate two values with the Or Composite Validator.
orValidator.Validate(null, valResults);
orValidator.Validate("MoreThan5Chars", valResults);
If required, you can create a composite validator containing a combination of validators, and specify
this composite validator in the second parameter. A similar technique can be used with the Field
Value validator and Method Return Value validator.
After performing all of the validation operations, the example displays the results by iterating
through the ValidationResults instance that contains the results for all of the preceding validation
operations. It uses the same ShowValidationResults routine we described earlier in this chapter. The
output (not reproduced here) shows how the message template tokens create the content of the
messages that are displayed, and the results of the nested validators we defined for the Or
Composite validator. If you want to experiment with individual validators, you can modify and
extend this example routine to use other validators and combinations of validators.
[OperationContract]
[FaultContract(typeof(ValidationFault))]
bool AddNewProduct(
[NotNullValidator(MessageTemplate = "Must specify a product ID.")]
[StringLengthValidator(6, RangeBoundaryType.Inclusive,
6, RangeBoundaryType.Inclusive,
MessageTemplate = "Product ID must be {3} characters.")]
[RegexValidator("[A-Z]{2}[0-9]{4}",
MessageTemplate = "Product ID must be 2 letters and 4 numbers.")]
string id,
...
[IgnoreNulls(MessageTemplate = "Description can be NULL or a string value.")]
[StringLengthValidator(5, RangeBoundaryType.Inclusive,
100, RangeBoundaryType.Inclusive,
MessageTemplate = "Description must be between {3} and {5} characters.")]
string description,
[EnumConversionValidator(typeof(ProductType),
MessageTemplate = "Must be a value from the '{3}' enumeration.")]
string prodType,
...
[ValidatorComposition(CompositionType.Or,
MessageTemplate = "Date must be between today and six months time.")]
[NotNullValidator(Negated = true,
MessageTemplate = "Value can be NULL or a date.")]
[RelativeDateTimeValidator(0, DateTimeUnit.Day, 6, DateTimeUnit.Month,
MessageTemplate = "Value can be NULL or a date.")]
DateTime? dateDue);
}
You can see that the service contract defines a method named AddNewProduct that takes as
parameters the value for each property of the Product class we've used throughout the examples.
Although the previous listing omits some attributes to limit duplication and make it easier to see the
structure of the contract, the rules applied in the example service we provide are the same as you
saw in earlier examples of validating a Product instance. The method implementation within the
WCF service is simple: it just uses the values provided to create a new Product and adds it to a
generic List.
Editing the Service Configuration
After you define the service and its validation rules, you must edit the service configuration to force
validation to occur. The first step is to specify the Validation block as a behavior extension. You will
need to provide the appropriate version information for the assembly, which you can obtain from
the configuration file generated by the configuration tool for the client application, or from the
source code of the example, depending on whether you are using the assemblies provided with
Enterprise Library or assemblies you have compiled yourself.
<extensions>
<behaviorExtensions>
<add name="validation"
type="Microsoft.Practices...WCF.ValidationElement,
Microsoft.Practices...WCF" />
</behaviorExtensions>
</extensions>
Next, you edit the <behaviors> section of the configuration to define the validation behavior you
want to apply. As well as turning on validation here, you can specify a rule set name (as shown) if
you want to perform validation using only a subset of the rules defined in the service. Validation will
then only include rules defined in validation attributes that contain the appropriate Ruleset
parameter (the configuration for the example application does not specify a rule set name here).
<behaviors>
<endpointBehaviors>
<behavior name="ValidationBehavior">
<validation enabled="true" ruleset="MyRuleset" />
</behavior>
</endpointBehaviors>
... other existing behaviors here ...
</behaviors>
Note that you cannot use a configuration rule set with a WCF service; all validation rules must be
in attributes.
Finally, you edit the <services> section of the configuration to link the ValidationBehavior defined
above to the service your WCF application exposes. You do this by adding the behaviorConfiguration
attribute to the service element for your service, as shown here.
<services>
<service behaviorConfiguration="ExampleService.ProductServiceBehavior"
name="ExampleService.ProductService">
<endpoint address="" behaviorConfiguration="ValidationBehavior"
binding="wsHttpBinding" contract="ExampleService.IProductService">
<identity>
<dns value="localhost" />
</identity>
</endpoint>
<endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
</service>
...
</services>
However, one important issue is the way that service exceptions are handled. The example code
specifically catches exceptions of type FaultException<ValidationFault>. This is the exception
generated by the service, and ValidationFault is the type of the fault contract we specified in the
service contract.
Validation errors detected in the WCF service are returned in the Details property of the exception
as a collection. You can simply iterate this collection to see the validation errors. However, if you
want to combine them into a ValidationResults instance for display, especially if this is part of a
multi-step process that may cause other validation errors, you must convert the collection of
validation errors returned in the exception.
The example application does this using a method named ConvertToValidationResults, as shown
here. Notice that the validation errors returned in the ValidationFault do not contain information
about the validator that generated the error, and so we must use a null value for this when creating
each ValidationResult instance.
// Convert the validation details in the exception to individual
// ValidationResult instances and add them to the collection.
ValidationResults adaptedResults = new ValidationResults();
foreach (ValidationDetail result in results)
{
adaptedResults.AddResult(new ValidationResult(result.Message, target,
result.Key, result.Tag, null));
}
return adaptedResults;
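For orientation, a hypothetical client-side call might look like the following sketch; the proxy
variable and the abbreviated parameter list are illustrative, not taken from the example code.
try
{
  // Call the service operation defined in the contract shown earlier.
  svcProxy.AddNewProduct(id, description, prodType, dateDue);
}
catch (FaultException<ValidationFault> ex)
{
  // ex.Detail is the ValidationFault; its Details collection holds the
  // individual validation errors returned by the service.
  foreach (ValidationDetail detail in ex.Detail.Details)
  {
    Console.WriteLine("{0}: {1}", detail.Tag, detail.Message);
  }
}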
When you execute this example, you will see a message indicating that the service is being started;
this may take a while the first time, and may even time out so that you need to try again. Then the
output shows the result of validating the valid Product instance (which succeeds) and the result of
validating the invalid instance (which produces the now familiar list of validation errors shown here).
The following 6 validation errors were detected:
+ Target object: Product, Member:
- Detected by: [none]
- Tag value: id
- Message: 'Product ID must be two capital letters and four numbers.'
...
+ Target object: Product, Member:
- Detected by: [none]
- Tag value: description
- Message: 'Description can be NULL or a string value.'
+ Target object: Product, Member:
- Detected by: [none]
- Tag value: prodType
- Message: 'Product type must be a value from the 'ProductType' enumeration.'
...
+ Target object: Product, Member:
- Detected by: [none]
- Tag value: dateDue
- Message: 'Date due must be between today and six months time.'
Again, we've omitted some of the duplication so that you can more easily see the result. Notice that
there is no value available for the name of the member being validated or the validator that was
used. This is a form of exception shielding that prevents external clients from gaining information
about the internal workings of the service. However, the Tag value returns the name of the
parameter that failed validation (the parameter names are exposed by the service), allowing you to
see which of the values you sent to the service actually failed validation.
Then you can define the validation controls in your page. The following shows an example that
validates a text box that accepts a value for the FirstName property of a Customer class, and
validates it using the rule set named RuleSetA.
<EntLibValidators:PropertyProxyValidator id="firstNameValidator"
runat="server" ControlToValidate="firstNameTextBox"
PropertyName="FirstName" RulesetName="RuleSetA"
SourceTypeName="ValidationQuickStart.BusinessEntities.Customer" />
One point to be aware of is that, unlike the ASP.NET validation controls, the Validation block
PropertyProxyValidator control does not perform client-side validation. However, it does integrate
with the server-based code and will display validation error messages in the page in the same way as
the ASP.NET validation controls.
For more information about ASP.NET integration, see the documentation installed with Enterprise
Library and available online at http://go.microsoft.com/fwlink/?LinkId=188874.
Windows Forms User Interface Validation
The Validation block includes the ValidationProvider component that extends Windows Forms
controls to provide validation using a rule set defined in your application through configuration,
attributes, and self-validation. You can handle the Validating event to perform validation, or invoke
validation by calling the PerformValidation method of the control. You can also specify an
ErrorProvider that will receive formatted validation error messages.
To use the ValidationProvider, you add the assembly named
Microsoft.Practices.EnterpriseLibrary.Validation.Integration.WinForms to your application, and
reference it in your project.
For more information about Windows Forms integration, see the documentation installed with
Enterprise Library and available online at http://go.microsoft.com/fwlink/?LinkId=188874.
WPF User Interface Validation
The Validation block includes the ValidatorRule component that you can use in the binding of a WPF
control to provide validation using a rule set defined in your application through configuration,
attributes, and self-validation. To use the ValidatorRule, you add the assembly named
Microsoft.Practices.EnterpriseLibrary.Validation.Integration.WPF to your application, and reference
it in your project.
As an example, you can add a validation rule directly to a control, as shown here.
<TextBox x:Name="TextBox1">
<TextBox.Text>
<Binding Path="ValidatedStringProperty" UpdateSourceTrigger="PropertyChanged">
<Binding.ValidationRules>
<vab:ValidatorRule SourceType="{x:Type test:ValidatedObject}"
SourcePropertyName="ValidatedStringProperty"/>
</Binding.ValidationRules>
</Binding>
</TextBox.Text>
</TextBox>
You can also specify a rule set using the RulesetName property, and use the
ValidationSpecificationSource property to refine the way that the block creates the validator for the
property.
For more information about WPF integration, see the documentation installed with Enterprise
Library and available online at http://go.microsoft.com/fwlink/?LinkId=188874.
Summary
In this chapter we have explored the Enterprise Library Validation block and shown you how easy it
is to decouple your validation code from your main application code. The Validation block allows you
to define validation rules and rule sets; and apply them to objects, method parameters, properties,
and fields of objects you use in your application. You can define these rules using configuration,
attributes, or even using custom code and self-validation within your classes.
Validation is a vital crosscutting concern, and should occur at the perimeter of your application, at
trust boundaries, and (in most cases) between layers and distributed components. Robust validation
can help to protect your applications and services from malicious users and dangerous input
(including SQL injection attacks); ensure that it processes only valid data and enforces business rules;
and improve responsiveness.
The ability to centralize your validation mechanism and the ability to define rules through
configuration also make it easy to deploy and manage applications. Administrators can update the
rules when required without requiring recompilation, additional testing, and redeployment of the
application. Alternatively, you can define rules, validation mechanisms, and parameters within your
code if this is a more appropriate solution for your own requirements.
A Secret Shared
One important point you must be aware of is that there are two basic types of encryption:
symmetric (or shared key) encryption, and asymmetric (or public key) encryption. The Cryptography
block supports only symmetric encryption. The patterns & practices guide "Data Confidentiality" at
http://msdn.microsoft.com/en-us/library/aa480570.aspx provides an overview of both types of
encryption and lists the factors you should consider when using encryption.
Is a secret still secret when you tell it to somebody else? When using symmetric encryption, you
don't have a choice. Unlike asymmetric encryption, which uses different public and private keys,
symmetric encryption uses a single key to both encrypt and decrypt the data. Therefore, you must
share the encryption key with the other party so that they (or it, in the case of code) can decrypt the
data.
In general, this means that the key should be long and complex (the name of your dog is not a great
example of an encryption key). Depending on the algorithm you choose, this key will usually be a
minimum of 128 bits; the configuration tools in Enterprise Library can generate random keys for
you, as you'll see in the section "Configuring Cryptographic Providers" later in this chapter.
Alternatively, you can configure the encryption providers to use your existing keys.
Making a Hash of It
Hashing is useful when you need to store a value or data in a way that hides the original content
with no option of reconstructing the original content. An obvious example is when storing passwords
in a database. Of course, the whole point of creating a hash is to prevent the initial value from being
readable; thus, the process is usually described as a one-way hashing function. Therefore, as you
can't get the original value back again, you can only use hashing where it is possible to compare
hashed values. This is why many systems allow users only to reset (but not retrieve) their passwords;
because the system itself has no way to retrieve the original password text.
In the case of stored passwords, the process is easy. You just hash the password the user provides
when they log in and compare it with the hash stored in your database or repository. Just be aware
that you cannot provide a forgotten password function that allows users to retrieve a password.
Sending them the hashed value would not be of any help at all.
Other uses for hashing include comparing two long string values or large objects. Hashing
effectively generates a unique key for such a value or object that is considerably smaller, or shorter,
than the value itself.
Microsoft.Practices.EnterpriseLibrary.Security.Cryptography.dll
Microsoft.Practices.EnterpriseLibrary.Security.Caching.dll
To make it easier to use the objects in the Cryptography block, you can add references to the
relevant namespaces to your project. Then you are ready to write some code. The following sections
demonstrate the tasks you can accomplish, and provide more details about how the block helps you
to implement a common and reusable strategy for cryptography.
However, before you start to use the objects in the block, you must resolve an instance of the
CryptographyManager class. This class exposes the API that you interact with to use the
cryptography providers (symmetric and hash providers) in your code. The simplest approach is to
use the GetInstance method of the Enterprise Library container, as shown here.
// Resolve the default CryptographyManager object from the container.
CryptographyManager defaultCrypto
= EnterpriseLibraryContainer.Current.GetInstance<CryptographyManager>();
defined in the configuration of the application, and the text string to encrypt. To decrypt the
resulting string, the code then calls the DecryptSymmetric method of the Cryptography Manager,
passing to it (as before) the name of the AES managed symmetric algorithm provider defined in the
configuration of the application, and the encrypted base-64 encoded string. We've removed some of
the lines of code that simply write values to the console screen to make it easier to see the code that
actually does the work.
// Define the text string instance to encrypt.
string sampleText = "This is some text to encrypt.";
// Use the AES Symmetric Algorithm Provider.
// The overload of the EncryptSymmetric method that takes a
// string returns the result as a Base-64 encoded string.
string encrypted = defaultCrypto.EncryptSymmetric("AesManaged", sampleText);
// Now decrypt the result string.
string decrypted = defaultCrypto.DecryptSymmetric("AesManaged", encrypted);
// Destroy any in-memory variables that hold sensitive information.
encrypted = null;
decrypted = null;
Notice that the last lines of the code destroy the in-memory values that hold the sensitive
information used in the code. This is good practice as it prevents any leakage of this information
should an error occur elsewhere in the application, and prevents any attacker from being able to
dump the memory contents and view the information. If you store data in a string, set it to null,
allowing the garbage collector to remove it from memory during its next run cycle. If you use an
array, call the static Array.Clear method (passing in the array you used) to remove the contents after
use.
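For example (a small sketch; sensitiveBytes is a hypothetical array holding secret data, and
Encoding comes from System.Text):
byte[] sensitiveBytes = Encoding.UTF8.GetBytes("some secret value");
// ... use the array, then overwrite its contents so the secret
// no longer remains in memory.
Array.Clear(sensitiveBytes, 0, sensitiveBytes.Length);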
You may also consider storing values in memory using the SecureString class, which is part of the
Microsoft .NET Framework. However, in the current release of Enterprise Library, the methods of
the Security block do not accept or return SecureString instances, and so you must translate them
into strings when interacting with the block methods. For more information about using the
SecureString class, see "SecureString Class" at
http://msdn.microsoft.com/en-us/library/system.security.securestring.aspx.
When you run this example, you'll see the output shown below. You can see the value of the original
string, the base-64 encoded encrypted data, and the result after decrypting this value.
Text to encrypt is 'This is some text to encrypt.'
Encrypted and Base-64 Encoded result is '+o3zulnEOeggpIqUeiHRD2ID4E85TSPxCjS/D6k
II4CUCjedFvlNOXjrqjna7ZWWbJp5yfyh/VrHw7oQPzUtUaxlXNdyiqSvDGcU814NNq4='
Decrypted string is 'This is some text to encrypt.'
If you run this example, you'll see the output shown below. You can see the value of the properties
of the Product class we created, the encrypted data (we base-64 encoded it for display), and the
result after decrypting this data.
Object to encrypt is 'CryptographyExample.Product'
- Product.ID = 42
The CreateHash method takes as parameters the name of a hash provider configured in the
Cryptography block for the application, and the item for which it will create the hash value.
There are two overloads of this method. One accepts a string and returns the hash as a
string. The second overload accepts the data to encrypt as a byte array, and returns a byte
array containing the hash value.
The CompareHash method takes as parameters the name of a hash provider configured in
the Cryptography block for the application, the un-hashed item to compare the hash with,
and the hash value to compare to the un-hashed item. There are two overloads of this
method. One accepts the un-hashed item and the hash as strings. The second overload
accepts the un-hashed item and the hash as byte arrays.
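As a quick sketch of the byte-array overloads (assuming the same provider name used in the
example that follows, and Encoding from System.Text):
// Hash a byte array, then verify a value against the resulting hash.
byte[] data = Encoding.UTF8.GetBytes("Some data to hash.");
byte[] hash = defaultCrypto.CreateHash("SHA512CryptoServiceProvider", data);
bool isMatch = defaultCrypto.CompareHash("SHA512CryptoServiceProvider", data, hash);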
Next, the code performs three comparisons of the hash values using the CompareHash method of
the Cryptography Manager. It compares the hash of the first string with the first string itself, to prove
that they are equivalent. Then it compares the hash of the first string with the second string, to
prove that they are not equivalent. Finally, it compares the hash of the second string with the third
string, which varies only in letter case, to prove that these are also not equivalent.
As in earlier examples, we've removed some of the lines of code that simply write values to the
console screen to make it easier to see the code that actually does the work.
// Define the text strings to hash.
string sample1Text = "This is some text to hash.";
string sample2Text = "This is some more text to hash.";
string sample3Text = "This is Some More text to hash.";
// Create the hash values using the SHA512 Hash Algorithm Provider.
// The overload of the CreateHash method that takes a
// string returns the result as a string.
string hashed1Text = defaultCrypto.CreateHash("SHA512CryptoServiceProvider",
sample1Text);
string hashed2Text = defaultCrypto.CreateHash("SHA512CryptoServiceProvider",
sample2Text);
string hashed3Text = defaultCrypto.CreateHash("SHA512CryptoServiceProvider",
sample3Text);
// Compare the strings with some of the hashed values.
Console.WriteLine("Comparing the string '{0}' with the hash of this string:",
sample1Text);
Console.WriteLine("- result is {0}",
defaultCrypto.CompareHash("SHA512CryptoServiceProvider",
sample1Text, hashed1Text));
Console.WriteLine("Comparing the string '{0}' with hash of the string '{1}'",
sample1Text, sample2Text);
Console.WriteLine("- result is {0}",
defaultCrypto.CompareHash("SHA512CryptoServiceProvider",
sample2Text, hashed1Text));
Console.WriteLine("Comparing the string '{0}' with hash of the string '{1}'",
sample2Text, sample3Text);
Console.WriteLine("- result is {0}",
defaultCrypto.CompareHash("SHA512CryptoServiceProvider",
sample3Text, hashed2Text));
If you run this example, you'll see the output shown below. You can see the hash values of the three
text strings, and the result of the three hash comparisons.
Text strings to hash and the resulting hash values are:
This is some text to hash.
v38snPJbuCtwfMUSNRjsgDqu4PB7ok7LQ2id4RJMZUGlhn+LTgX3FNEVuUbauokCpiCzzfZI2d9sNjlo
56NmuZ/8FY2sknxrD262TLSSYSQ=
This is some more text to hash.
braokQ/wraq9WVnKSqBROBUNG2lBwiICwX0lTGPSaooaJXL7/WcJvUCtBry8+0iRg+Rij5Xiz56jD4Zm
xcKrp7kGVDeWuA7jHeYiFZmGbOU=
This is Some More text to hash.
aw3anokiiBXPJfxZ5kf2SrlTEN3lokVlT+46t0V1B7der1wsNTD4dPxKQly8SDAjoCgCWwzSCh4k+OUf
O6/y6JIpFtWpQDqHO3JH+Rj25K0=
Comparing the string 'This is some text to hash.' with the hash of this string:
- result is True
Comparing the string 'This is some text to hash.' with hash of the string 'This
is some more text to hash.'
- result is False
Comparing the string 'This is some more text to hash.' with hash of the string
'This is Some More text to hash.'
- result is False
If you run this example, you'll see the output shown below. You can see the hash values of the two
instances of the Product class, and the result of the hash comparison.
First object to hash is 'CryptographyExample.Product'
- Product.ID = 42
- Product.Name = Exciting Thing
- Product.Description = Something to keep you on your toes.
Generated hash (when Base-64 encoded for display) is:
Gd2V77Zau/pgOcg1A2A5zk6RTd5zFFnHKXfhVx8LEi4=
Second object to hash is 'CryptographyExample.Product'
- Product.ID = 79
- Product.Name = Fun Thing
- Product.Description = Something to keep the grandchildren quiet.
Generated hash (when Base-64 encoded for display) is:
1Eyal+AHf3e2QyEB+sqsGDOdux1Iom4z0zGLYlHlC78=
Comparing second object with hash of the first object:
- result is False
ISymmetricCryptoProvider, that define hashing and encryption provider requirements. For a custom
hashing provider, you must implement the CreateHash and CompareHash methods based on the
hashing algorithm you choose. For a custom encryption provider, you must implement the Encrypt
and Decrypt methods based on the encryption algorithm you choose.
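As a rough sketch of the shape of a custom hashing provider (assuming the two members described
above; a real provider would also handle salting and configuration):
public class SimpleSha256HashProvider : IHashProvider
{
  public byte[] CreateHash(byte[] plaintext)
  {
    // Compute the hash using the algorithm this provider encapsulates.
    using (var sha = System.Security.Cryptography.SHA256.Create())
    {
      return sha.ComputeHash(plaintext);
    }
  }

  public bool CompareHash(byte[] plaintext, byte[] hashedText)
  {
    // Re-hash the plaintext and compare the two hashes byte by byte.
    byte[] computed = CreateHash(plaintext);
    if (computed.Length != hashedText.Length) return false;
    for (int i = 0; i < computed.Length; i++)
    {
      if (computed[i] != hashedText[i]) return false;
    }
    return true;
  }
}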
One other way that you may want to modify the block is to change the way that it creates and stores
keys. By default, it stores keys that you provide or generate for the providers in DPAPI-encrypted
disk files. You can modify the KeyManager class in the block to change this behavior, and modify the
Wizard that helps you to specify the key in the configuration tools.
For more information about extending and modifying the Cryptography block, see the online
documentation and the help files installed with Enterprise Library.
Summary
This chapter looked at the Cryptography Application Block. It began by discussing cryptographic
techniques and strategies for which the block is suitable, and helped you decide how you might use
the block in your applications. The two most common scenarios are symmetric
encryption/decryption of data, and creating hash values from data. Symmetric encryption is useful
whenever you need to protect data that you are storing or sending across a network. Hashing is
useful for tasks such as storing passwords so that you can confirm user identity without allowing the
passwords to be visible to anyone who may access the database or intercept the passwords as they
pass over a network.
Many types of cryptographic algorithms that you may use with the Cryptography block require
access to a key for both encryption and decryption. It is vitally important that you protect this key
both to prevent unauthorized access to the data and to allow you to encrypt it when required. The
Cryptography block protects key files using DPAPI encryption.
The bulk of the chapter then explored the main techniques for using the block. This includes
encrypting and decrypting data, creating a hash value, and comparing hash values (for example,
when verifying a submitted user password). As you have seen, the block makes these commonly
repeated tasks much simpler, while allowing the configuration to be easily managed post-deployment and at run time by administrators and operations staff.
"How to install and administer the Authorization Manager in Windows Server 2003" at
http://support.microsoft.com/kb/324470.
AzMan allows you to define an application, the roles for that application, and the operations (such as
submit order or approve expenses) that the application exposes. For each operation, you can define
users and groups that can execute that operation. You can include local and domain user accounts
and account groups stored in Active Directory. You can store your authorization rules in Active
Directory, in an XML file, or in a database.
or more security caches for your security tokens. Your code can store tokens in this cache, and
retrieve them when required. You can even persist the credentials across application restarts by
defining backing stores for credentials. The third area is where you configure the authorization rules
that define the users, groups, and operations related to your application.
Figure 1
The security settings section
Figure 1 shows the configuration for the example application we provide for this chapter. You can
see the areas where we defined the authorization providers and the security cache. Because we
specified the Caching Application Block as the security cache, the configuration tool added the
Caching block to the configuration automatically. We added an isolated storage backing store to the
Caching block to persist credentials, and specified a symmetric storage provider for this store to
protect the persisted credentials. This automatically added the Cryptography block to the
configuration, and we specified a DPAPI symmetric crypto provider to perform the encryption.
For more information about configuring the Caching block, see Chapter 5, "A Cache Advance for your
Applications." For more information about configuring the Cryptography block, see Chapter 7
"Relieving Cryptography Complexity."
Microsoft.Practices.EnterpriseLibrary.Security.dll
Microsoft.Practices.EnterpriseLibrary.Security.Cryptography.dll
Microsoft.Practices.EnterpriseLibrary.Security.Cache.CachingStore.dll
Microsoft.Practices.EnterpriseLibrary.Security.Caching.dll
Microsoft.Practices.EnterpriseLibrary.Security.Caching.Cryptography.dll
Microsoft.Practices.EnterpriseLibrary.Security.AzMan.dll
You need the caching assemblies only if you are using a cache to store credentials. You need the
AzMan assembly only if you are using the AzMan rule store.
Now, after adding references to the relevant namespaces to your project, you are ready to write
some code. The following sections demonstrate the tasks you can accomplish, and provide more
details of the way the block helps you to implement a common and reusable strategy for security.
if so, displays details of the user's identity using a separate routine named ShowUserIdentityDetails.
We'll look at that routine in a short while.
The code then caches this Windows identity in the security cache to obtain the token, and displays
details of this token. Then it generates a new generic principal for this identity, defining it as a
member of a role named FieldSalesStaff, and displays the details of this new principal using another
routine named ShowGenericPrincipalDetails. Again, we'll look at this routine in a short while. Next,
the code caches the generic principal, collects the token from the security cache, and displays details
of this token.
// Get current Windows Identity and check if authenticated.
WindowsIdentity identity = WindowsIdentity.GetCurrent();
if (identity.IsAuthenticated)
{
Console.WriteLine("Current user identity obtained from Windows:");
ShowUserIdentityDetails(identity);
// Cache the Windows Identity and save the token in a variable.
identityToken = secCache.SaveIdentity(identity);
Console.WriteLine("Current user identity has been cached.");
Console.WriteLine("The IIdentity security token is '{0}'.",
identityToken.Value);
// Generate a Generic Principal for this identity and save in cache.
IPrincipal principal = new GenericPrincipal(identity,
new string[] {"FieldSalesStaff"});
Console.WriteLine("Created a new Generic Principal for this user:");
ShowGenericPrincipalDetails(principal);
principalToken = secCache.SavePrincipal(principal);
Console.WriteLine("Current user principal has been cached.");
Console.WriteLine("The IPrincipal security token is '{0}'.",
principalToken.Value);
}
else
{
Console.WriteLine("Current user is not authenticated.");
}
The tokens are stored in program-wide variables and are therefore available to code in the other
examples for this chapter.
You can also use the SaveProfile method of the security cache to store a user's profile (such as the
user's ASP.NET profile), and obtain a token that you can use to access it again when required.
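For example (a sketch; userProfile represents any hypothetical profile object):
// Cache the profile to obtain a token, then use the token to retrieve it.
IToken profileToken = secCache.SaveProfile(userProfile);
object cachedProfile = secCache.GetProfile(profileToken);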
Displaying User Identity Details
The previous code uses a separate routine named ShowUserIdentityDetails that does just that. It
displays the values of the two properties common to all types that implement the IIdentity interface,
and then checks if the identity is actually an instance of the WindowsIdentity class. If it is, the code
displays the values of the additional properties that are specific to this type.
void ShowUserIdentityDetails(object identity)
{
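// (The method body is elided at the page break in this excerpt; this
// sketch is reconstructed from the output shown below and may differ
// in detail from the example application.)
IIdentity iid = (IIdentity)identity;
Console.WriteLine("- Current user {0} is authenticated: {1}.",
iid.Name, iid.IsAuthenticated);
Console.WriteLine("- Authentication type: {0}.", iid.AuthenticationType);
if (identity is WindowsIdentity)
{
WindowsIdentity winIdentity = (WindowsIdentity)identity;
Console.WriteLine("- Impersonation level: {0}.", winIdentity.ImpersonationLevel);
Console.WriteLine("- Is the Guest account: {0}.", winIdentity.IsGuest);
Console.WriteLine("- Is the System account: {0}.", winIdentity.IsSystem);
Console.WriteLine("- SID value: '{0}'.", winIdentity.User.Value);
Console.WriteLine("- Member of {0} account groups.", winIdentity.Groups.Count);
}
}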
When you run the example, you will see output like that below. Of course, the identity details will
differ for your logged-on account. Notice, however, that the output shows that the principal is a
member of only one of the two roles we tested for. You can also see the value of the tokens
generated by the security cache when we cached the identity and principal.
Current user identity obtained from Windows:
- Current user SOME-DOMAIN\username is authenticated.
- Authentication type: Kerberos.
- Impersonation level: None.
- Is the Guest account: False.
- Is the System account: False.
- SID value: 'S-1-5-21-xxxxxxx-117609710-xxxxxxxxx-1108'.
- Member of 12 account groups.
Current user identity has been cached.
The IIdentity security token is '02acc9a5-6dac-4b40-a82d-a16f3d9ddc37'.
Created a new Generic Principal for this user:
- Current user is SOME-DOMAIN\username.
- IsInRole 'SalesManagers': False.
- IsInRole 'FieldSalesStaff': True.
Current user principal has been cached.
The IPrincipal security token is 'ffcbc717-63ad-4a8b-82e2-26af54741ac1'.
You can also use the GetProfile method of the security cache to retrieve a user's profile (such as the
user's ASP.NET profile) by supplying a suitable token obtained from the security cache using the
SaveProfile method.
The example produces output like the following, though the actual values will, of course, differ for
your account identity.
The IIdentity security token is '02acc9a5-6dac-4b40-a82d-a16f3d9ddc37'.
User identity has been retrieved from the cache:
- Current user SOME-DOMAIN\username is authenticated.
- Authentication type: Kerberos.
- Impersonation level: None.
- Is the Guest account: False.
- Is the System account: False.
- SID value: 'S-1-5-21-xxxxxxx-117609710-xxxxxxxxx-1108'.
- Member of 12 account groups.
After you retrieve an identity, principal, or profile, you can compare the values with those of the
current user or use it to authenticate a user for other processes or systems.
When you run this example, you will see the values of the tokens before they are expired, and
messages indicating that they were removed from the cache.
The IIdentity security token is 'e303fd67-331a-45b0-94d4-087e462cacda'.
The identity for this token has been expired and removed from the cache.
The IPrincipal security token is 'd6563752-78ed-489a-86fa-efd76c97a976'.
The principal for this token has been expired and removed from the cache.
To check if a user is authorized, you call the Authorize method of an authorization provider, passing
to it the user principal and the name of the task or operation. The Authorize method returns either
true or false. The two providers included in the block are the authorization rule provider and the
AzMan authorization provider (for details of these providers, see "What Are Authorization Rule
Providers?" near the beginning of this chapter). The examples we present for this chapter include
one that uses the authorization rule provider and one that uses the AzMan authorization provider.
Using Security Block Configured Rules
If you only need to store authorization rules within the configuration of your application and have
them fully managed by the Security block, you can use the authorization rule provider. As you saw
earlier in this chapter, you configure a series of authorization rules for your application. Each rule
defines an expression that specifies which users can access a specific task or carry out a specific
operation.
The example Authorize a user for a process using a stored rule demonstrates this approach to
authorization. In the application configuration we defined two rules, named UpdateSalesData and
ReadSalesData, each containing an expression that specifies the roles whose members can perform
that task.
The example code starts by displaying the value of the current principal token stored in the
application-level variable (you must execute the first example to authenticate yourself and obtain a
token before you can run this example). Then it retrieves the principal from the security cache using
this token, and calls a separate routine named AuthorizeUserWithRules that performs the
authorization.
The AuthorizeUserWithRules routine takes as parameters the generic principal as a type that
implements the IPrincipal interface, and a reference to the authorization provider to use. In this
example, this is the Security block authorization rule provider resolved from the Enterprise Library
container and stored in the variable named ruleAuth when the example application starts. We
showed how you can obtain instances of the two types of authorization provider in the section
"Diving in With an Example," earlier in this chapter.
// Check if the user has run the option that caches the identity and principal.
if (null != principalToken)
{
// First try authorizing tasks using the cached Generic Principal.
Console.WriteLine("The IPrincipal security token is '{0}'.",
principalToken.Value);
// Retrieve the user principal from the security cache using the token.
IPrincipal principal = secCache.GetPrincipal(principalToken);
if (null != principal)
{
// Check if this user is authorized for tasks using the Rule Provider.
AuthorizeUserWithRules(principal, ruleAuth);
}
else
{
// Identity removed from cache due to time expiration, or explicitly in code.
Console.WriteLine("Principal not found in cache for the specified token.");
}
}
else
{
Console.WriteLine("You must obtain a token by caching the current identity "
+ "before you can use it to check authorization rules.");
}
The following code shows the AuthorizeUserWithRules routine we used in the previous example. It
simply calls the Authorize method of the authorization provider, once for the UpdateSalesData
task and once for the ReadSalesData task, and displays the results.
void AuthorizeUserWithRules(IPrincipal principal,
IAuthorizationProvider authProvider)
{
// Determine whether user is authorized for rule defined as "UpdateSalesData".
bool canUpdateSalesData = authProvider.Authorize(principal, "UpdateSalesData");
Console.WriteLine("User can execute 'UpdateSalesData' task: {0}",
canUpdateSalesData);
// Determine whether user is authorized for rule defined as "ReadSalesData".
bool canReadSalesData = authProvider.Authorize(principal, "ReadSalesData");
Console.WriteLine("User can execute 'ReadSalesData' task: {0}",
canReadSalesData);
}
When you run this example, you will see output similar to that below. The code in the first example
of this chapter, which authorizes the user and caches the identity and principal, defines the principal
it generates as a member of only the FieldSalesStaff role, and so the user is authorized only for the
ReadSalesData task.
The IPrincipal security token is '77a9c8af-9691-4ae4-abb5-0e964dc4610e'.
User can execute 'UpdateSalesData' task: False
User can execute 'ReadSalesData' task: True
Using AzMan Provider Rules
The example Authorize a user for a process using AzMan rules demonstrates authorization with the
AzMan authorization provider. The example includes a sample AzMan store that defines rules for the
UpdateSalesData and ReadSalesData tasks; you can open this store in the Windows Authorization
Manager console and edit the rules it contains to specify your own local or domain accounts to
experiment with AzMan authorization.
The example code is similar to what you saw when we used the Security block authorization rule
provider in the previous example. It obtains the current user principal from the security cache using
the token stored in the application-level variable, and calls the same AuthorizeUserWithRules
method as the previous example to check if this principal is authorized for the UpdateSalesData and
ReadSalesData tasks. This example then generates a WindowsPrincipal for the current user and
checks if this is authorized for the UpdateSalesData and ReadSalesData tasks.
The main difference in the code for this example is that it passes a reference to the AzMan
authorization provider, created when the program starts, to the AuthorizeUserWithRules routine, as
shown here.
// First try authorizing tasks using the cached Generic Principal.
IPrincipal genPrincipal = secCache.GetPrincipal(principalToken);
if (null != genPrincipal)
{
// Check if this user is authorized for tasks by AzMan.
AuthorizeUserWithRules(genPrincipal, azmanAuth);
}
...
// Now try checking for authorization for tasks using cached WindowsIdentity
IIdentity identity = secCache.GetIdentity(identityToken);
if (null != identity)
{
// Generate a WindowsPrincipal from the IIdentity.
IPrincipal winPrincipal = new WindowsPrincipal(identity as WindowsIdentity);
// Check if this user is authorized for tasks by AzMan.
AuthorizeUserWithRules(winPrincipal, azmanAuth);
// Note: this will only work if you are using a local machine account or your
// current domain account can access the directory store to obtain information.
}
When you run this example, after configuring the AzMan rules to suit your own machine and
account, you should be able to see a result similar to that shown here.
The IPrincipal security token is '77a9c8af-9691-4ae4-abb5-0e964dc4610e'.
User can execute 'UpdateSalesData' task: False
User can execute 'ReadSalesData' task: True
The IIdentity security token is '3b6eb4a7-b958-4cc2-b2b9-112cd58c566d'.
User can execute 'UpdateSalesData' task: False
User can execute 'ReadSalesData' task: True
The Security Application Block is also extensible. It provides a base class for authorization providers
that you can inherit from and extend to perform custom authorization. You simply need to
implement the Authorize method, and then integrate your new provider with Enterprise Library.
You can also implement custom cache managers and cache backing stores and integrate these with
the Caching Application Block to provide a custom caching mechanism for credentials, and
implement a custom cryptography provider for the Cryptography Application Block that you can
then use to encrypt cached credentials. For more information about creating custom providers,
cache managers, and backing stores, see the online documentation and the help files installed with
Enterprise Library and available online at http://go.microsoft.com/fwlink/?LinkId=188874.
Summary
This chapter described how you can use the Security Application Block to simplify common tasks
such as caching authenticated user credentials and checking if users are authorized to perform
specific tasks. While the code required to implement these tasks without using the Security block is
not overly onerous, the block does save you the effort of writing and testing the same code in
multiple locations. It also allows you to use a variety of different cache and authorization providers,
depending on your requirements, and change the provider through configuration. Administrators
and operators will find this feature useful when they come to deploy your applications in different
environments.
The chapter described the scenarios for using the Security block, and explained the concepts of
authorizing users and caching credentials. It then presented detailed examples of how you can use
the features of the block in a sample application. You will find more details on specific tasks, such as
configuration and deployment, in online documentation and the help files installed with Enterprise
Library.
Resolving objects through a container in this way has several benefits:
Reducing coupling between classes. Dependencies are clearly defined in each class. The
configuration information, and mappings between interfaces or base classes and the actual
concrete types, are stored in the container used by the dependency injection mechanism,
and can be updated as required, without requiring any changes to the run-time code.
Making your code more discoverable. You can easily tell from the types of the constructors,
properties, or methods of your classes what objects they use and what dependencies they
have. If you create instances using code inside the classes, it is more difficult to trace
dependencies. Resolving dependencies at the surface of a class by specifying the types or
interfaces it requires and taking advantage of dependency injection is the recommended
approach.
Making testing easier. If you resolve or obtain objects using code within your classes, you
must provide a suitably configured container for use when unit testing these classes. If you
take advantage of dependency injection, you can create simple mock test objects for your
classes to use.
You can use Unity with Enterprise Library, or on its own as a stand-alone DI mechanism. Unity
provides a comprehensive set of capabilities, and makes it easier to
implement common dependency inversion patterns and techniques that are useful in application
architecture, design, and development.
Unity provides a container that you use to store type mappings and registrations. You can create
multiple containers, and nest these containers in a hierarchical fashion, if required. It also supports
extensions that allow you to implement extra functionality for objects resolved through the
container. Unity can generate instances of any object that has a public constructor (in other words,
objects that you can create using the new operator).
As Unity creates each object, it inspects it for dependencies and automatically populates these. So,
for example, if you specify a parameter for the constructor or a dependent property of a custom
class to be of type MyBusinessComponent, Unity will create an instance of MyBusinessComponent
and populate the constructor or property. However, if MyBusinessComponent defines a
dependency on another class named MyDataComponent, Unity will create an instance of that class
and populate that dependency, and so on. If there is no mapping in the container for the type
specified in the parameter or property, Unity simply creates a new instance of the specified type by
calling the constructor of that type that has the greatest number of parameters, and returns it.
Imagine that MyDataComponent requires a LogWriter to create log messages. If it contains a
dependent parameter or property of type LogWriter, Unity will populate that as part of the
instantiation process as well. This process (sometimes referred to as auto-wiring) can apply right
across the defined dependencies in your application, as shown in Figure 1.
Figure 1
A possible object graph for a business component
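The following sketch illustrates the chain just described. The class bodies are hypothetical, and
resolving them through a UnityContainer named container assumes the container knows how to
create a LogWriter (for example, through Enterprise Library's container extension).
public class MyBusinessComponent
{
  private readonly MyDataComponent data;
  // Unity sees this parameter and resolves MyDataComponent automatically...
  public MyBusinessComponent(MyDataComponent data)
  {
    this.data = data;
  }
}
public class MyDataComponent
{
  private readonly LogWriter writer;
  // ...which in turn causes Unity to resolve the LogWriter dependency.
  public MyDataComponent(LogWriter writer)
  {
    this.writer = writer;
  }
}
// Resolving the root type builds and wires the entire graph.
var business = container.Resolve<MyBusinessComponent>();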
When working with the Unity container, you can do the following (the sketch after this list
illustrates several of these techniques):
Register mappings between interfaces or base classes and concrete object types. When
you resolve the interface or base type, Unity will return an instance of the concrete type.
This may be a new or an existing instance, depending on the lifetime you specify for the
registration.
Register instances of existing objects in the container. When you resolve the type, Unity
will return the existing instance. This is useful when working with objects that must be
instantiated as single instances (singletons), or must have a non-standard lifetime, such as
services used by your application.
Specify the lifetime of objects that will be resolved through the container. Unity can
resolve objects based on the singleton pattern, or hold only a weak reference to an object
whose lifetime is managed elsewhere. It can also be configured to return instances on a
per-thread basis, where a new instance is created for each thread while an existing instance
is returned for calls on the same thread.
Define multiple named registrations for a type. You can create more than one registration
or mapping for a type as long as these registrations and mappings have a unique name.
Registrations and mappings that do not have a name are known as default registrations and
default mappings, where the name is effectively an empty string.
Construct an entire object graph for an application that will be resolved at run time. Unity
can automatically resolve the types specified in constructor and method parameters and
properties at run time. The parameters of constructors or methods of objects that are
resolved through the container, or the values for properties of objects resolved through the
container, can be populated with an object resolved through another registration within the
container, or an instance of the specified type if no matching registration exists.
Specify values for constructor and method parameters and properties. These can be the
parameters of constructors or methods of objects that are resolved through the container,
or the values for properties of objects resolved through the container. The value can be
specified directly, or it can be an object resolved through another registration within the
container.
Define matching rules and behaviors as part of an interception policy. These can be used
to apply business rules or change the behavior of existing components. Calls to methods or
properties of these objects will then pass through a policy pipeline containing one or more
interception behaviors. This is a similar approach to that used in aspect-oriented
programming (AOP).
Add custom extensions to the container. These can extend or change the behavior of the
container when resolving objects. Some Unity features, such as interception, are powered
by a container extension that is included with the standard Unity installation.
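The following sketch shows several of these registration techniques; all of the type names are
hypothetical, and container is an instance of UnityContainer.
// Map an interface to a concrete type (the default, unnamed registration).
container.RegisterType<ILogger, FileLogger>();
// Register an existing object; Unity always returns this same instance.
container.RegisterInstance<IAppSettings>(existingSettings);
// A named registration with a container-controlled (singleton) lifetime.
container.RegisterType<ILogger, AuditLogger>("audit",
    new ContainerControlledLifetimeManager());
// A per-thread lifetime: one instance is created for each thread.
container.RegisterType<IUnitOfWork, UnitOfWork>(
    new PerThreadLifetimeManager());
// Resolve the default mapping, and then the named one.
ILogger standard = container.Resolve<ILogger>();
ILogger audit = container.Resolve<ILogger>("audit");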
You can define the registration information that Unity requires in three ways:
You can define all of your dependency injection and interception requirements using a
configuration file. At run time, you use a single line of code to read the configuration and
load it into a Unity container.
You can use the methods of the container that register types, type mappings, parameter
and property values, and interception requirements in your code. You can also use these
methods to modify any existing registrations in the container at run time.
You can apply attributes to define dependencies for constructor and method parameters
and properties of types that you will resolve through the container. This is a simple
approach, but does not provide the same level of control as using a configuration file or the
run-time container API.
You can also use a mixture of all of these techniques; for example, you can register a mapping in the
container between an interface and a concrete implementation, then use an attribute to define a
dependency for a property or parameter on an implementation of this interface.
Table 1 will help you to choose the best approach for your own requirements.
Table 1
Defining dependencies
Technique
Description
Considerations
Configuration-based
Dynamic registration
Attribute-based
In addition, you can change the behavior of the dependency resolution mechanism in several ways
(see the sketch after this list):
You can specify parameter overrides or dependency overrides that set the values of specific
parameters.
You can define optional dependencies, so that Unity will set the value of a parameter or
property to null if it cannot resolve the type of the dependency.
You can use deferred resolution, so that the resolution does not take place until the target
type is actually required or used in your code.
You can specify a lifetime manager that will control the lifetime of the resolved type.
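A sketch of these options, using the MyNewObject class from the following sections and a
hypothetical testDatabase instance:
// Override a single constructor parameter for this resolve call only.
var custom = container.Resolve<MyNewObject>(
    new ParameterOverride("departmentName", "Marketing"));
// Override every dependency of type Database in the resolved graph.
var withTestDb = container.Resolve<MyNewObject>(
    new DependencyOverride<Database>(testDatabase));
// Deferred resolution: nothing is created until the delegate is invoked.
Func<MyNewObject> deferred = container.Resolve<Func<MyNewObject>>();
MyNewObject instance = deferred();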
The following sections of this appendix describe some of the more common techniques for defining
dependencies in your classes through constructor, property, and method call injection. We do not
discuss interception in this appendix. For full details of all the capabilities and uses of Unity, see the
Unity section of the documentation installed with Enterprise Library and available online at
http://go.microsoft.com/fwlink/?LinkId=188875.
Constructor Injection
By default, Unity will attempt to resolve and populate the types of every parameter of a class
constructor when you resolve that type through the container. You do not need to configure or add
attributes to a class for this to occur. Unity will choose the most complex constructor (usually the
one with the largest number of parameters), resolve the type of each parameter through the
container, and then create a new instance of the target type using the resolved values.
The following are some simple examples that demonstrate how you can define constructor injection
for a type.
Automatic Constructor Injection
If you have a class that contains a non-default constructor, Unity will automatically populate any
dependencies defined in the parameters of the constructor. For example, the following type has a
dependency on a type named Database.
public class MyNewObject
{
public MyNewObject(Database defaultDB)
{
// code to use the resolved Database instance here
}
}
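Resolving the type is then a single call. This sketch assumes a UnityContainer named container that
can resolve the Database type (for example, because the Enterprise Library extension is loaded):
// Unity resolves Database and calls the constructor with the result.
MyNewObject instance = container.Resolve<MyNewObject>();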
If you need to change the behavior of the automatic constructor injection process, perhaps to
specify the lifetime of the resolved type or to set the value or lifetime of the types resolved for the
parameters, you can configure the container at design time using a configuration file or at run time
using the container API.
Design-Time Configuration
Configuring constructor injection in a configuration file is useful when you need to exert control over
the process. For example, consider the following class that contains a single constructor that takes
two parameters.
public class MyNewObject
{
public MyNewObject(Database defaultDB, string departmentName)
{
...
}
}
The second parameter is a string, and Unity cannot generate an instance of a string type unless you
have registered it in the container using a named instance registration. Therefore, you must override
the default behavior of the automatic injection process. You can do this in a configuration file, and at
the same time manage three aspects of the injection process: the resolved object lifetime, the value
of parameters, and the choice of constructor when the type contains more than one constructor.
For example, you can use the following register directive in a configuration file to specify that the
resolved instance of MyNewObject should be a singleton (with its lifetime managed by the
container), that Unity should resolve the type Database of the parameter named defaultDB and
inject the result, and that Unity should inject the string value "Customer Service" into the parameter
named departmentName.
<register type="MyNewObject">
<lifetime type="singleton" />
<constructor>
<param name="defaultDB" />
<param name="departmentName" value="Customer Service" />
</constructor>
</register>
When you specify constructor injection like this, you are also specifying which constructor Unity
should use. Even if the MyNewObject class contains a more complex constructor, Unity will use the
one that matches the list of parameters you specify in the register element.
To register your types using named registrations, you simply add the name attribute to the register
element, as shown here.
<register type="MyNewObject" name="Special Customer Object">
...
</register>
To register mappings between an interface or base class and a type that implements the interface or
inherits the base type, you add the mapTo attribute to the register element. You can, of course,
define default (unnamed) and named mappings in the same way as you do type registrations. The
following example shows registration of a named mapping.
<register type="IMyType" mapTo="MyImplementingType"
name="Special Customer Object">
...
</register>
Run-Time Configuration
You can configure injection for the default or a specific constructor at run time by calling the
RegisterType method of the Unity container. This approach also gives you a great deal of control
over the process. The following code registers the MyNewObject type with a singleton
(container-controlled) lifetime.
myContainer.RegisterType<MyNewObject>(new ContainerControlledLifetimeManager());
If you want to create a named registration, you add the name as the first parameter of the
RegisterType method, as shown here.
myContainer.RegisterType<MyNewObject>("Special Customer Object",
new ContainerControlledLifetimeManager());
If you want to create a mapping, you specify the mapped type as the second generic type parameter,
as shown here.
myContainer.RegisterType<IMyType, MyImplementingType>(
"Special Customer Object",
new ContainerControlledLifetimeManager());
If you need to specify the value of the constructor parameters, such as a String type (which Unity
cannot create unless you register a String instance with the container), or specify which constructor
Unity should choose, you include an instance of the InjectionConstructor type in your call to the
RegisterType method. For example, the following creates a registration named Special Customer
Object for the MyNewObject type as a singleton, specifies that Unity should resolve the type
Database of the parameter named defaultDB and inject the result, and that Unity should inject the
string value "Customer Service" into the parameter named departmentName.
myContainer.RegisterType<MyNewObject>(
"Special Customer Object",
new ContainerControlledLifetimeManager(),
new InjectionConstructor(typeof(Database), "Customer Service")
);
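To obtain the configured instance, you then resolve the type using the same registration name, as in
this usage sketch:
// Returns the same singleton instance for every call with this name.
MyNewObject special
    = myContainer.Resolve<MyNewObject>("Special Customer Object");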
If your class has multiple constructors, and you want to specify the one Unity will use, you apply the
InjectionConstructor attribute to that constructor, as shown in the code excerpt that follows. If you
do not specify the constructor to use, Unity chooses the most complex (usually the one with the
most parameters). This technique is useful if the most complex constructor has parameters that
Unity cannot resolve.
public class MyNewObject
{
public MyNewObject(Database defaultDB, string departmentName)
{
...
}
[InjectionConstructor]
public MyNewObject(Database defaultDB)
{
...
}
}
Property (Setter) Injection
Run-Time Configuration
You can configure injection for any public property of the target class at run time by calling the
RegisterType method of the Unity container. This gives you a great deal of control over the process.
The following code performs the same dependency injection process as the configuration file
example you have just seen. Notice the use of the ResolvedParameter type to specify the named
mapping that Unity should use to resolve the ILogger interface.
myContainer.RegisterType<MyOtherObject>(
new InjectionProperty("BusinessComponent"),
new InjectionProperty("DataSource", "CorpData42"),
new InjectionProperty("Logger",
208
You can use the ResolvedParameter type in constructor and method call injection as well as in
property injection, and there are other types of injection parameter classes available for even more
specialized tasks when configuring injection.
Configuration with Attributes
To specify injection for a property, you can alternatively apply the Dependency attribute to it to
indicate that the type defined and exposed by the property is a dependency of the class. The
following code demonstrates property injection for a class named MyNewObject that exposes as a
property a reference to an instance of the type Database.
public class MyNewObject
{
[Dependency]
public Database CustomerDB { get; set; }
}
When you apply the Dependency attribute without specifying a name, the container will return the
type specified as the default (an unnamed registration) or a new instance of that type. To specify a
named registration when using property injection with attributes, you include the name as a
parameter of the Dependency attribute, as shown below.
public class MyNewObject
{
[Dependency("LocalDB")]
public Database NamedDB { get; set; }
}
Method Call Injection
Design-Time Configuration
The techniques for specifying dependency injection for method parameters are very similar to those
you saw earlier for constructor parameters. The following excerpt from a configuration file defines
the dependencies for the two parameters of a method named Initialize for a type named
MyNewObject. Unity will resolve the type of the parameter named customerDB through the
container and inject the result into that parameter of the target type. It will also inject the string
value "Customer Services" into the parameter named departmentName.
<register type="MyNewObject">
<method name="Initialize">
<param name="customerDB" />
<param name="departmentName" value="Customer Services" />
</method>
</register>
You can also use the dependencyName and dependencyType attributes to specify how Unity should
resolve the type for a parameter in exactly the same way as you saw for property injection. If you
have more than one overload of a method in your class, Unity uses the set of parameters you define
in your configuration to determine the actual method to populate and execute.
Run-Time Configuration
As with constructor and property injection, you can configure injection for any public method of the
target class at run time by calling the RegisterType method of the Unity container. The following
code achieves the same result as the configuration extract you have just seen.
myContainer.RegisterType<MyNewObject>(
new InjectionMethod("Initialize", typeof(Database), "Customer Services")
);
In addition, you can specify the lifetime of the type, and use named dependencies, in exactly the
same way as you saw for constructor injection.
Configuration with Attributes
You can apply the InjectionMethod attribute to a method to indicate that any types defined in
parameters of the method are dependencies of the class. The following code demonstrates the most
common scenario, saving the dependent object instance in a class-level variable, for a class named
MyNewObject that exposes a method named Initialize that takes as parameters an instance of the
type Database and an instance of a concrete type that implements the ILogger interface.
public class MyNewObject
{
private Database theDB;
private ILogger theLogger;
[InjectionMethod]
public void Initialize(Database customerDB, ILogger loggingComponent)
{
// assign the dependent objects to class-level variables
theDB = customerDB;
theLogger = loggingComponent;
}
}
You can also add the Dependency attribute to a parameter to specify the name of the registration
Unity should use to resolve the parameter type, just as you saw earlier for constructor injection with
attributes. And, as with constructor injection, all of the parameters of the method must be
resolvable through the container. If any are value types that Unity cannot create, you must ensure
that you have a suitable registration in the container for that type, or use a dependency override to
set the value.
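A sketch that combines these options, reusing the hypothetical "LocalDB" registration name from
the property injection example:
public class MyNewObject
{
  private Database theDB;
  private ILogger theLogger;
  [InjectionMethod]
  public void Initialize([Dependency("LocalDB")] Database customerDB,
                         ILogger loggingComponent)
  {
    // Unity resolves Database through the "LocalDB" registration and
    // ILogger through its default registration, then calls this method.
    theDB = customerDB;
    theLogger = loggingComponent;
  }
}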
Resolving Populated Instances of Your Classes
To obtain a resolved instance of a class, you call the Resolve method of the container, specifying the
registered type as the generic type parameter. This returns the type registered as the default (no
name was specified when it was registered). If you
want to resolve a type that was registered with a name, you specify this name as a parameter of the
Resolve method. You might also consider using implicit typing instead of specifying the type, to
make your code less dependent on the results of the resolve process.
var theInstance = container.Resolve<MyNewObject>("Registration Name");
Alternatively, you may choose to define the returned type as the interface type when you are
resolving a mapped type. For example, if you registered a type mapping between the interface
IMyType and the concrete type MyNewObject, you should consider using the following code when
you resolve it.
IMyType theInstance = container.Resolve<IMyType>();
Writing code that specifies an interface instead of a particular concrete type means that you can
change the configuration to specify a different concrete type without needing to change your code.
Unity will always return a concrete type (unless it cannot resolve an interface or abstract type that
you specify, in which case an exception is thrown).
You can also resolve a collection of types that are registered using named mappings (not default
unnamed mappings) by calling the ResolveAll method. This may be useful if you want to check what
types are registered in your run-time code, or display a list of available types. However, Unity also
exposes methods that allow you to iterate over the container and obtain information about all of the
registrations.
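A sketch of both approaches, assuming ILogger has one or more named registrations in the
container:
// ResolveAll returns one instance per named registration; the default,
// unnamed registration is not included.
foreach (ILogger logger in container.ResolveAll<ILogger>())
{
  Console.WriteLine(logger.GetType().Name);
}
// The Registrations property exposes every registration in the container.
foreach (ContainerRegistration reg in container.Registrations)
{
  Console.WriteLine("{0} -> {1} ('{2}')",
      reg.RegisteredType.Name, reg.MappedToType.Name, reg.Name);
}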
We don't have room to provide a full guide to using Unity here. However, this discussion should
have given you a taste of what you can achieve using dependency injection. For more detailed
information about using Unity, see the documentation installed with Enterprise Library and available
online at http://go.microsoft.com/fwlink/?LinkId=188874.
These topics provide information about how you can use the more sophisticated dependency
injection approach for creating instances of Enterprise Library objects, as described in Chapter 1,
"Introduction." If you have decided not to use this approach, and you are using the Enterprise
Library service locator and its GetInstance method to instantiate Enterprise Library types, they are
not applicable to your scenario.
The following code reads a specified configuration file and loads the registrations in the
<container> section that has the name MyContainerName into a
new Unity container.
// Read a specified config file using the .NET configuration system.
ExeConfigurationFileMap map = new ExeConfigurationFileMap();
map.ExeConfigFilename = @"c:\configfiles\myunityconfig.config";
System.Configuration.Configuration config
= ConfigurationManager.OpenMappedExeConfiguration(map,
ConfigurationUserLevel.None);
// Get the unity configuration section.
UnityConfigurationSection section
= (UnityConfigurationSection)config.GetSection("unity");
// Create and populate a new UnityContainer with the configuration information.
IUnityContainer theContainer = new UnityContainer();
theContainer.LoadConfiguration(section, "MyContainerName");
You can define multiple containers within the <unity> section of a configuration file, provided that
each has a unique name, and load each one into a separate container at run time. If you do not assign a
name to a container in the configuration file, it becomes the default container, and you can load it
by omitting the name parameter in the LoadConfiguration method.
To load a container programmatically in this way, you must add the System.Configuration.dll
assembly and the Microsoft.Practices.Unity.Configuration.dll assembly to your project. You should
also import the following namespaces:
Microsoft.Practices.EnterpriseLibrary.Common.Configuration.Unity
Microsoft.Practices.Unity
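For reference, a minimal sketch of the configuration file that code expects; the section declaration
and namespace are those used by Unity 2.0, and the registration shown is illustrative.
<configuration>
  <configSections>
    <section name="unity" type="Microsoft.Practices.Unity.Configuration.UnityConfigurationSection, Microsoft.Practices.Unity.Configuration" />
  </configSections>
  <unity xmlns="http://schemas.microsoft.com/practices/2010/unity">
    <container name="MyContainerName">
      <register type="IMyType" mapTo="MyImplementingType" />
    </container>
  </unity>
</configuration>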
Unity can inject the dependencies of your classes in any of the following ways:
As one or more parameters of a constructor in the target class. Unity will create instances
of the appropriate types and populate the constructor parameters when the target object is
instantiated. This is the approach you will typically use. For example, you can have Unity
automatically create and pass into your constructor an instance of a LogWriter or an
ExceptionManager, store the reference in a class variable or field, and use it within that
class.
As one or more properties of the target class. Unity will create an instance of the type
defined by the property or in configuration and set that instance as the value of the
property when the class is resolved through the container.
As one or more parameters of a method in the target class. Unity will create instances of
the appropriate types and populate the method parameters when the target object is
instantiated, and then call that method. You can store the references passed in the
parameters in a class variable or field for use within that class. This approach is typically
used when you have an Initialize or similar method that should execute when the class is
instantiated.
By taking advantage of this capability to populate an entire object graph, you may decide to have the
container create and inject instances of the appropriate types for all of the dependencies defined in
your entire application when it starts up (or, at least, a significant proportion of it).
While this may seem to be a strange concept, it means that you do not need to hold onto a
reference to the container after you perform this initial population of dependencies. That doesn't
mean you cannot hold onto the container reference as well, but resolving all of the required types at
startup can improve run-time performance at the cost of slightly increased startup time. Of course,
this also requires additional memory and resources to hold all of the resolved instances, and you
must balance this against the expected improvement in run-time performance.
You can populate all of your dependencies by resolving the main form or startup class through the
container. The container will automatically create the appropriate instances of the objects required
by each class and inject them into the parameters and properties. However, it does rely on the
container being able to create and return instances of types that are not registered in the container.
The Unity container can do this. If you use an alternative container, you may need to preregister all
of the types in your application, including the main form or startup class.
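For example, in a Windows Forms application you might resolve the main form in the program
entry point; MainForm is a hypothetical name here.
static class Program
{
  [STAThread]
  static void Main()
  {
    // Load registrations from the default <unity> configuration section.
    IUnityContainer container = new UnityContainer().LoadConfiguration();
    // Resolving the main form builds and injects its entire object graph.
    Application.Run(container.Resolve<MainForm>());
  }
}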
Typically, this approach to populating an entire application object graph is best suited to applications
built using form-based or window-based technologies such as Windows Presentation Foundation
(WPF), Windows Forms, console applications, and Microsoft Silverlight (using the version of Unity
specifically designed for use in Silverlight applications).
For information about how you can resolve the main form, window, or startup class of your
application, together with example code, see the documentation installed with Enterprise Library or
available online at http://go.microsoft.com/fwlink/?LinkId=188874.
However, to use the container to resolve types throughout your application, you must hold a
reference to it. You can store the container in a global variable in a Windows Forms or WPF
application, in the Application dictionary of an ASP.NET application, or in a custom extension to the
InstanceContext of a Windows Communication Foundation (WCF) service.
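For example, in an ASP.NET application you might capture the container in Global.asax; the
dictionary key shown is illustrative.
public class Global : System.Web.HttpApplication
{
  protected void Application_Start(object sender, EventArgs e)
  {
    IUnityContainer container = new UnityContainer().LoadConfiguration();
    // Store the container in application state for use in later requests.
    Application["EntLibContainer"] = container;
  }
}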
Table 1 will help you to understand when and where you should hold a reference to the container in
forms-based and rich client applications built using technologies such as Windows Forms, WPF, and
Silverlight.
Table 1
Holding a reference to the container in forms-based and rich client applications (columns: Task,
When, Where; the tasks listed occur at application startup).
Table 2 will help you to understand when and where you should hold a reference to the container in
request-based applications built using technologies such as ASP.NET Web applications and Web
services.
Table 2
Holding a reference to the container in request-based applications (columns: Task, When, Where;
the tasks listed occur at application startup).
For more detailed information about how you can maintain a reference to the container in different
types of applications, in particular, request-based applications, and the code you can use to achieve
this, see the documentation installed with Enterprise Library or available online at
http://go.microsoft.com/fwlink/?LinkId=188874.
The task of the configurator is to translate the configuration file information into a series of
registrations within the container. Enterprise Library contains only the UnityContainerConfigurator,
though you can write your own to suit your chosen container, or obtain one from a third party.
An alternative approach is to create a custom implementation of the IServiceLocator interface that
may not use a configurator, but can read the application configuration and return the appropriate
fully populated Enterprise Library objects on demand.
See http://commonservicelocator.codeplex.com for more information about the IServiceLocator
interface.
To keep up with discussions regarding alternate configuration options for Enterprise Library, see the
forums on CodePlex at http://www.codeplex.com/entlib/Thread/List.aspx.
The call handlers provided with Enterprise Library allow you to do the following:
Add validation capabilities by using the validation handler. This call handler uses the
Validation block to validate the values passed in parameters to the target object. This is a
useful approach to circumvent the limitations within the Validation block, which cannot
validate parameters of method calls except in specific scenarios such as in Windows
Communication Foundation (WCF) applications.
Add logging capabilities to objects by using the logging handler. This call handler uses the
Logging block to generate log entries and write them to configured target sources.
Add exception handling capabilities by using the exception handling handler. This call
handler uses the Exception Handling block to implement a consistent strategy for handling,
replacing, wrapping, and logging exceptions.
Add authorization capabilities to objects by using the authorization handler. This call
handler uses the Security block to check if the caller has the required permission to execute
each call.
Add performance measurement capabilities by using the performance counter handler. This
call handler updates Windows performance counters with each call, allowing you to
measure performance and monitor target object activity.
Add custom behavior to objects by creating your own interception call handlers.
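A sketch of wiring a call handler pipeline through Unity interception; the ITenantStore interface and
TenantStore class are hypothetical.
// Enable the interception container extension.
container.AddNewExtension<Interception>();
// Intercept calls to ITenantStore and pass them through the configured
// policy pipeline of call handlers.
container.RegisterType<ITenantStore, TenantStore>(
    new Interceptor<InterfaceInterceptor>(),
    new InterceptionBehavior<PolicyInjectionBehavior>());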
For more information about using Unity to implement interception, see the documentation installed
with Enterprise Library or available online at http://go.microsoft.com/fwlink/?LinkId=188874.
For information on how to use the Policy Injection block facade, see the documentation for version
4.1 of Enterprise Library on MSDN at http://msdn.microsoft.com/en-us/library/dd139982.aspx.
This appendix provides an overview of the scenarios for using these features and demonstrates how
you can apply them in your own applications and environments. More information on the scenarios
presented here is provided in the documentation installed with Enterprise Library and available
online at http://go.microsoft.com/fwlink/?LinkId=188874.
External Configuration
External configuration encompasses the different ways that configuration information can reside in a
persistent store and be applied to a configuration source at run time. Possible sources of persistent
configuration information are files, a database, and other custom stores. Enterprise Library can load
configuration information from any of these stores automatically. To store configuration in a
database you can use the SQL configuration source that is available as a sample from the Enterprise
Library community site at http://entlib.codeplex.com. You can also specify one or more
configuration sources to satisfy more complex configuration scenarios, and create different
configurations for different run-time environments. See the section "Scenarios for Advanced
Configuration" later in this appendix for more information.
Programmatic Support
Programmatic support encompasses the different ways that configuration information can be
generated dynamically and applied to a configuration source at run time. Typically, in Enterprise
Library this programmatic configuration takes place through the fluent interface specially designed
to simplify dynamic configuration, or by using the methods exposed by the Microsoft .NET
Framework System.Configuration API.
Using the Fluent Interfaces
All of the application blocks except for the Validation Application Block and Policy Injection
Application Block expose a fluent interface. This allows you to configure the block at run time using
intuitive code assisted by Microsoft IntelliSense in Visual Studio to specify the providers and
properties for the block. The following is an example of configuring an exception policy for the
Exception Handling Application Block and loading this configuration into the Enterprise Library
container.
var builder = new ConfigurationSourceBuilder();
builder.ConfigureExceptionHandling()
.GivenPolicyWithName("MyPolicy")
.ForExceptionType<NullReferenceException>()
.LogToCategory("General")
.WithSeverity(System.Diagnostics.TraceEventType.Warning)
.UsingEventId(9000)
.WrapWith<InvalidOperationException>()
.UsingMessage("MyMessage")
.ThenNotifyRethrow();
var configSource = new DictionaryConfigurationSource();
builder.UpdateConfigurationWithReplace(configSource);
EnterpriseLibraryContainer.Current
= EnterpriseLibraryContainer.CreateDefaultContainer(configSource);
You can select Add Configuration Settings on the Blocks menu to display the section that contains
the default system configuration source. If you click the chevron arrow to the right of the
Configuration Sources title to open the section properties pane you can see that this is also, by
default, specified as the Selected Source: the configuration source to which the configuration
generated by the tool will be written. When an application that uses Enterprise Library reads the
configuration, it uses the settings specified for the selected source.
The following sections describe the common scenarios for more advanced configuration that you
can accomplish using the configuration tools. Some of these scenarios require you to add additional
configuration sources to the application configuration.
1. Add the configuration source that will act as the shared configuration source to the
Configuration Sources section. Typically, this is the built-in file-based configuration source.
2. Set the relevant properties of the shared configuration source. For example, if you are using
the built-in file-based configuration source, set the File Path property to the path and name
for the application's configuration file.
3. Set the Selected Source property in the properties pane for the Configuration Sources
section to System Configuration Source.
4. Click the plus-sign icon in the Redirected Sections column and click Add Redirected Section.
A redirected section defines one specific section of the local application's configuration that
you want to redirect to the shared configuration source so that it loads the configuration
information defined there. Any local configuration settings for this section are ignored.
5. In the new redirected section, select the configuration section you want to load from the
shared configuration store using the drop-down list in the Name property. The name of the
section changes to reflect your choice.
6. Set the Configuration Source property of the redirected section by selecting the shared
configuration source you defined in your configuration. This configuration source will
provide the settings for the configuration sections that are redirected.
7. Repeat steps 4, 5, and 6 if you want to redirect other configuration sections to the shared
configuration store. Configuration information for all sections for which you do not define a
redirected section will come from the local configuration source.
8. To edit the contents of the shared configuration store, you must open that configuration in
the configuration tools or in a text editor; you cannot edit the configuration of shared
sections when you have the local application's configuration open in the configuration tool.
If you open the shared configuration in the configuration tool, ensure that the Selected
Source property of that configuration is set to use the system configuration source.
You cannot share the contents of the Application Settings section. This section in the configuration
tool stores information in the standard <appSettings> section of the configuration file, which cannot
be redirected.
1. Add the configuration source that will act as the parent configuration source to the
Configuration Sources section. Typically, this is the built-in file-based configuration source,
though you can use the sample SQL configuration source that is available from the Enterprise
Library community site at http://entlib.codeplex.com to store your configuration in a database.
2. Set the relevant properties of the shared configuration source. For example, if you are using
the built-in file-based configuration source, set the File Path property to the path and name
for the application's configuration file.
3. Set the Parent Source property in the properties pane for the Configuration Sources section
to your shared configuration source. Leave the Selected Source property in the properties
pane set to System Configuration Source.
4. Configure your application in the usual way. You will not be able to see the settings
inherited from the shared configuration source you specified as the parent source.
However, these settings will be inherited by your local configuration unless you override
them by configuring them in the local configuration. Where a setting is specified in both the
parent source and the local configuration, the local configuration setting will apply.
5. To edit the contents of the shared parent configuration store, you must open that
configuration in the configuration tools or in a text editor; you cannot edit the configuration
of parent sections when you have the local application's configuration open in the
configuration tool. If you open the parent configuration in the configuration tool, ensure
that the Selected Source property of that configuration is set to use the system
configuration source.
The way that the configuration settings are merged, and the ordering of items in the resulting
configuration, follows a predefined set of rules. These are described in detail in the documentation
installed with Enterprise Library and available online at
http://go.microsoft.com/fwlink/?LinkId=188874.
3. Save the configuration. The configuration tool generates a normal (.config) file and a delta
(.dconfig) file for each environment. The delta file(s) can be managed by administrators, and
stored in a separate secure location, if required. This may be appropriate when, for
example, the production environment settings should not be visible to developers or
testers.
4. To create a run-time merged configuration file (typically, this is done by an administrator):
Select Open Delta File from the Environments menu and load the appropriate
override configuration (.dconfig) file.
Set the Environment Configuration File property in the properties pane for the
environment to the path and name for the merged configuration file for that
environment.
Right-click on the title of the environment and click Export Merged Environment
Configuration File.
If you only intend to deploy the encrypted configuration file to the server where you encrypted the
file, you can use the DataProtectionConfigurationProvider. However, if you want to deploy the
encrypted configuration file on a different server, or on multiple servers in a Web farm, you should
use the RsaProtectedConfigurationProvider. You will need to export the RSA private key that is
required to decrypt the data. You can then deploy the configuration file and the exported key to the
target servers, and re-import the keys. For more information, see "Importing and Exporting
Protected Configuration RSA Key Containers" at http://msdn.microsoft.com/en-us/library/yxw286t2(VS.80).aspx.
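As a sketch of that workflow using the ASP.NET IIS registration tool (aspnet_regiis.exe); the
application path is illustrative, and NetFrameworkConfigurationKey is the default RSA key container.
rem Encrypt the connectionStrings section of the application at C:\MyApp.
aspnet_regiis -pef "connectionStrings" "C:\MyApp" -prov "RsaProtectedConfigurationProvider"
rem Export the RSA key container, including the private key.
aspnet_regiis -px "NetFrameworkConfigurationKey" "C:\keys.xml" -pri
rem Import the key container on each target server, then grant access.
aspnet_regiis -pi "NetFrameworkConfigurationKey" "C:\keys.xml"
aspnet_regiis -pa "NetFrameworkConfigurationKey" "NT AUTHORITY\NETWORK SERVICE"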
Of course, the next obvious question is "How do I decrypt the configuration?" Thankfully, you don't
need to. You can open an encrypted file in the configuration tools as long as it was created on that
machine or you have imported the RSA key file. In addition, Enterprise Library blocks will be able to
decrypt and read the configuration automatically, providing that the same conditions apply.