C# Unit 3
14.1.1 Introduction
There are various aspects of a software application that need to be monitored and improved
once the application comes into use. In fact, the application can be tested from various
angles both during the development process and after its completion, so that the
client does not face issues later. A software application may not always behave as expected.
.NET Core provides tools and APIs that help to diagnose various issues quickly and
effectively.
Following are the benefits of the diagnostic tools,
1. The ability to monitor performance while debugging.
2. Correlation of performance data with debugging activity.
3. A richer user experience through IntelliTrace and the Output window.
4. Shorter diagnosis time to identify and fix an issue.
14.1.2 Types of Tools Used for Diagnostic Tasks
1. Debugger Events : These tools provide access to all Break, Output, and IntelliTrace
events that occur during debugging. The data is presented both as a timeline and as a tabular
view. Messages that are sent to the Debug tab of the Output window are captured by
IntelliTrace. The post-mortem tools can be used with Windows 7 and later. Windows
8 and later is required to run profiling tools with the debugger (Diagnostic Tools
window).
2. The Memory Usage tools : These tools help the developer to monitor the memory usage
of the application during debugging. A developer can analyze memory impact,
growth, and leak issues simply by looking at the graphs. Code break events make it
possible to identify the parts of the graph that are related to recent debugging activity.
Snapshots can be taken before and after debug actions to analyze memory usage and memory
growth. The developer can look at the number and size of objects on the heap. The .NET
Object Allocation tool helps to identify allocation patterns and anomalies in .NET
code.
3. The CPU Usage tools : These tools track the application's CPU usage while
debugging. The CPU Usage tool is a good place to start analyzing application
performance; it shows how much of the CPU resources the application is consuming. A
developer can get a per-function breakdown of CPU usage. To do this, go to Debug -
> Start Diagnostic Tools without Debugging, select CPU Usage, and click Start.
To use the tool most effectively, two breakpoints can be set in the code, one at the
beginning and one at the end of the function or the region of code that needs to be analyzed.
The profiling data can be examined when execution is paused at the second breakpoint. The CPU
Usage view shows a list of functions ordered by longest running, with the longest running
function at the top. This helps to identify the functions where performance bottlenecks
occur.
Profile release builds without the debugger
Profiling tools like CPU Usage and Memory Usage can be used with the debugger, or
the profiling tools can run post-mortem using the Performance Profiler, which is
intended to provide analysis for Release builds. In the Performance Profiler, the
diagnostic info can be collected while the app is running, and then the collected
information can be examined after the app is stopped.
Examine UI performance and accessibility events (UWP - Universal Windows
Platform)
In UWP applications, UI Analysis can be enabled in the Diagnostic Tools window. The
tool searches for common performance or accessibility issues and displays them in the
Events view while the debugging process is on. The event descriptions provide information
that can help to resolve the issues.
Analyze resource consumption (XAML - Extensible Application Markup Language)
In XAML applications, such as Windows desktop WPF apps and UWP apps, one can
analyze resource consumption using the Application Timeline tool. For example, the
aspects that can be analyzed are the time spent by the application preparing UI frames
(layout and render), servicing network and disk requests, and the behaviour in scenarios
like application startup, page load, and window resize. To use the tool, choose Application
Timeline in the Performance Profiler, and then choose Start. In the application, the
scenario with a suspected resource consumption issue should be exercised, and then
Stop collection can be chosen to generate the report.
Low frame rates in the Visual throughput graph may correspond to visual problems
that can be seen when the application is running. Similarly, high numbers in the
UI thread utilization graph may also correspond to UI responsiveness issues. In the
report, a time period can be selected with a suspected performance issue, and then the
detailed UI thread activities can be examined in the Timeline details view (lower pane).
In the Timeline details view, information such as the type of activity (or the UI element
involved) along with the duration of the activity can be found.
Analyze GPU usage (Direct3D)
In Direct3D applications, the activity on the GPU can be examined and performance
issues analyzed. To use the tool, choose GPU Usage in the Performance
Profiler, and then select Start. After profiling is over, choose Stop collection to
generate a report. The graphs can be used to determine whether performance bottlenecks are
CPU bound or GPU bound.
Analyze performance (legacy tools)
In Visual Studio 2019, the legacy Performance Explorer and related profiling tools such
as the Performance Wizard were folded into the Performance Profiler, which can be
opened using Debug > Performance Profiler. In the Performance Profiler, the
available diagnostics tools depend on the target chosen and the current, open startup
project. The CPU Usage tool provides the sampling capability previously supported in
the Performance Wizard. The Instrumentation tool provides the instrumented profiling
capability (for precise call counts and durations) that was in the Performance Wizard.
Additional memory tools also appear in the Performance Profiler.
14.1.3 System.Diagnostics Namespace
The System.Diagnostics namespace provides classes that allow interaction with system
processes, event logs, and performance counters.
1. The EventLog component provides functionality to write to event logs, read event
log entries, and create and delete event logs and event sources on the network. The
EntryWrittenEventHandler provides a way to interact with event logs
asynchronously. Supporting classes provide access to more detailed control,
including permission restrictions, the ability to specify event log types (which
controls the type of default data that is written with an event log entry), and iteration
through collections of event log entries. Other related classes that can be used
are the EventLogPermission, EventLogEntryType, and EventLogEntryCollection
classes.
2. The Process class provides functionality to monitor system processes across the
network, and to start and stop local system processes. In addition to retrieving lists
of running processes (by specifying either the computer, the process name, or the
process id) or viewing information about the process that currently has access to the
processor, one can get detailed knowledge of process threads and modules both
through the Process class itself and by interacting with the ProcessThread and
ProcessModule classes. The ProcessStartInfo class makes it possible to specify a variety of
elements with which to start a new process, such as input, output, and error streams,
working directories, and command line verbs and arguments. These give a fine
control over the behavior of the processes. Other related classes can be used to
specify window styles, process and thread priorities, and interact with collections of
threads and modules.
3. The PerformanceCounter class enables monitoring of system performance, while the
PerformanceCounterCategory class provides a way to create new custom counters
and categories. One can write to local custom counters and read from both local and
remote counters (system as well as custom). One can sample counters using the
PerformanceCounter class, and calculate results from successive performance
counter samples using the CounterSample class. The CounterCreationData class
makes it possible to create multiple counters in a category and specify their types. Other
classes associated with the performance counter component provide access to
collections of counters, counter permissions, and counter types.
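As a sketch of the Process class described in point 2 above, the current process can be inspected and a new process launched through ProcessStartInfo. The launched program (dotnet) is an assumption and must be on the PATH:

```csharp
using System;
using System.Diagnostics;

class ProcessDemo
{
    static void Main()
    {
        // Inspect the process we are currently running in.
        Process current = Process.GetCurrentProcess();
        Console.WriteLine("Id : {0}, Name : {1}", current.Id, current.ProcessName);
        Console.WriteLine("Threads : " + current.Threads.Count);

        // ProcessStartInfo configures input/output streams, the working
        // directory, and command line arguments for a new process.
        ProcessStartInfo psi = new ProcessStartInfo
        {
            FileName = "dotnet",          // assumed to be installed and on PATH
            Arguments = "--version",
            RedirectStandardOutput = true,
            UseShellExecute = false
        };
        using (Process p = Process.Start(psi))
        {
            Console.WriteLine(p.StandardOutput.ReadToEnd().Trim());
            p.WaitForExit();
        }
    }
}
```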
The System.Diagnostics namespace also provides classes that allow applications to be
debugged and the execution of code to be traced.
System.Diagnostics.Debug class
This class is the most widely used class that provides a set of methods and
properties to help debug code. This class cannot be inherited. Following are the
important methods of this class.
Assert : Overloaded. Checks for a condition and displays a message if the condition is
false.
Write/WriteLine : Writes debug information to the trace listeners in the
Listeners collection.
WriteIf/WriteLineIf : Writes debug information to the trace listeners in the
Listeners collection if a condition is true.
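A minimal sketch of these methods follows. Note that calls on the Debug class are compiled only in builds where the DEBUG symbol is defined, and the output normally appears in the debugger's Output window; the variable and messages here are illustrative:

```csharp
using System;
using System.Diagnostics;

class DebugDemo
{
    static void Main()
    {
        int divisor = 5;

        // Write/WriteLine : unconditional debug output.
        Debug.WriteLine("Starting computation");

        // WriteIf/WriteLineIf : output only when the condition is true.
        Debug.WriteLineIf(divisor > 0, "Divisor is positive");

        // Assert : breaks with a message if the condition is false.
        Debug.Assert(divisor != 0, "Divisor must not be zero");

        Console.WriteLine(100 / divisor);
    }
}
```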
System.Diagnostics.Trace class
This class provides a set of methods and properties that help to trace the execution of the
code. The properties and methods of the Trace class can be used to instrument release
builds. Instrumentation allows monitoring of the health of the application running in real-
life settings. The members of this class are almost identical to those of the Debug class.
System.Diagnostics.SymbolStore class
Class Description
TraceListener : Provides the base class for the listeners that monitor trace and debug output.
Enumerations
Enumeration Description
TraceLevel : Specifies what messages to output for the Debug, Trace and TraceSwitch classes.
Managed debuggers allow interaction with the program. Pausing, incrementally executing,
examining, and resuming give insight into the behavior of the code. A debugger is the first
choice for diagnosing functional problems that can be easily reproduced.
Logging and tracing
Logging and tracing are related techniques. They refer to instrumenting the code to create log
files. The files record the details of what a program does. These details can be used to
diagnose the most complex problems. When combined with time stamps, these techniques are
also valuable in performance investigations.
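The idea can be sketched with the Trace class, routing time-stamped entries to a listener; the listener target and the message text below are illustrative:

```csharp
using System;
using System.Diagnostics;

class TraceDemo
{
    static void Main()
    {
        // Route trace output to the console; a TextWriterTraceListener over a
        // file stream would create a persistent log file instead.
        Trace.Listeners.Add(new TextWriterTraceListener(Console.Out));
        Trace.AutoFlush = true;

        // Time stamps make the entries useful in performance investigations.
        Trace.WriteLine(DateTime.Now.ToString("o") + " Application started");
        Trace.TraceInformation("Processing record batch");
        Trace.WriteLine(DateTime.Now.ToString("o") + " Application finished");
    }
}
```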
dotnet-dump
The dotnet-dump tool is a way to collect and analyze Windows and Linux core dumps
without a native debugger.
dotnet-trace
.NET Core includes what is called the EventPipe, through which diagnostics data is
exposed. The dotnet-trace tool allows useful profiling data to be collected from an application,
which can help in scenarios where one needs to find the root cause of an application running slow.
1. Security model : When some action is performed on the user interface, such as clicking a
button, the application connects to the internet, downloads the modules into the Download
Assembly Cache, and begins executing. Behind the scenes, what proof/evidence exists
that one can trust the code the computer is downloading ?
Explanation
.NET enforces security policies around assemblies. It uses the evidence that an assembly
has, such as the origination of the file. The runtime places all code from the local intranet
into a specific group. It then uses the security policies to decide what permissions the code
should be granted at a granular level.
2. Code access security also known as evidence-based security : The CLR examines the
evidence associated with the code to determine which security policy group the code
belongs to. The CLR then checks what permission set is associated with that code group.
If the code group has the permissions demanded, the request is granted; if not, an
exception is thrown.
3. Code Groups : Code identity permissions are used to define a security policy code group
which groups the related entities. The evidence provided by an assembly is used as the
condition for granting and revoking permissions to it. It is done by putting the code in an
appropriate code group. Every code group stipulates a membership condition and has
specific conditions attached to it. Any assemblies that meet the condition become a
member of the group.
4. Evidence : In order for the CLR to determine which code group an assembly should be
placed into, the first step is to read the supplied evidence. There are two main sources,
internet and intranet, from which the information is collected. The internet group defines
code that is sourced from the internet and the intranet group defines code sourced from a
LAN.
Some of the major types of evidence are listed below.
Evidence Description
Zone : The region, such as intranet, local, or trusted, from which the code originated.
Strong Name : Unique verifiable name for the code.
Publisher : The assembly's digital signature, which uniquely identifies the developer.
5. Permission sets : A permission set is a named set of permissions that can be associated
with a code group. Permissions are the actions that are allowed for each code group to
perform. The system administrator usually manages the permissions at the enterprise,
machine and user levels. Administrators specify which code groups and permission sets
are defined for their system that CLR can use to determine which permissions are allowed
or denied to an assembly.
Some built-in permission sets are FullTrust, LocalIntranet, Internet, Execution, and Nothing.
A localizable application is one that is designed and written so that it can be subsequently
localized into one or more languages with relative ease.
14.3.1 Concept
Globalization allows the web application to be useful for different people in different parts
of the world who speak different languages. ASP.NET makes it easy to localize an application
through the use of resource files. Resource files are XML files with the .resx extension. A resource
can either be a global resource, in which case it is accessible throughout the site and placed in the
App_GlobalResources folder of the website, or a local resource, which is specific to a single page
and is placed in the App_LocalResources folder.
Localization is the process of customizing the globalized web application to a specific
locale and culture. Various resources such as images and text for the specific locale are
created. The resource file in localization is scoped to a particular page in an application.
Localization is the process of translating an application's resources into localized versions
for each culture that the application will support. With localization support it is possible to
automatically detect the user's culture and to respond to that detection with properly formatted
dates, currency and other numeric values, properly configured controls, etc. One should
proceed to the localization step only after completing the localizability step, to verify that the
globalized application is ready for localization.
For each localized version of the application, a new satellite assembly should be added that
contains the localized user interface block translated into the appropriate language for the
target culture. The code block for all cultures should remain the same. The combination of a
localized version of the user interface block with the code block produces a localized version
of the application.
CultureInfo Class : Provides information about a specific culture. The information
includes the names for the culture, the writing system, the calendar used, and formatting for
dates and sorting of strings.
14.3.2 Various Resources Required to be Localized/Translated
In a typical ASP.NET application, the following four resources or items may differ
depending on the user’s language and regional preferences and hence need to be localized,
1. Text resources – The text that resides in the aspx page. Both asp server tags and
traditional html tags may contain text on an aspx page. This includes the UI text that may
appear in C-Sharp(C#) and Visual Basic code behind files.
All text resources should be externalized to resx files in order to make translation and
maintenance easier. Also known as “resources file”, a resx file consists of XML entries
which specify objects and strings.
Visual Studio does provide an automated way to externalize the strings after the page has
been created but this function is limited. It only works on the server tags on aspx pages.
Visual Studio cannot externalize the text in CS/VB code or process regular html tags. The
best approach is to externalize the tags as the pages are being developed.
2. Database content - Most of the translatable text resides in a database for a typical
ASP.NET application particularly in content management systems (product information,
articles etc.).
The database schema and relations will have to be carefully designed to externalize
translatable text to a table(s) which maps to languages and parent tables. Although adding
additional columns to existing tables for each language may seem easier; maintenance of
this solution is extremely difficult and not recommended.
3. Images and graphics - Graphics and images may contain translatable text. If possible, try
avoiding images and graphics with embedded text. Using background images and
retrieving the text from a resource file will make the localization effort much easier. If
that is not possible, point the source of the image to the resx file which will easily change
it depending on the language. This applies to other external assets such as pdf and doc
files as well.
4. Regional options - These items are straightforward ones : date/time, currency, and
number/decimal formatting will vary depending on the user’s region and/or preferences.
As in any other programming language and platform, avoid manually formatting and hard
coding date/time and number formats. .NET offers a CultureInfo class which provides
access to culture-specific instances of objects such as : DateTimeFormatInfo and
NumberFormatInfo. Using these objects, culture-specific operations can be performed
easily such as formatting dates and numbers, casing and comparing strings.
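As a sketch of such culture-specific operations (the culture names and values below are illustrative), formatting the same number and date under two cultures via CultureInfo looks like this:

```csharp
using System;
using System.Globalization;

class CultureFormatDemo
{
    static void Main()
    {
        double amount = 1234.56;
        DateTime date = new DateTime(2023, 1, 15);

        CultureInfo us = new CultureInfo("en-US");
        CultureInfo de = new CultureInfo("de-DE");

        // NumberFormatInfo drives the number formatting.
        Console.WriteLine(amount.ToString("N2", us)); // 1,234.56
        Console.WriteLine(amount.ToString("N2", de)); // 1.234,56

        // DateTimeFormatInfo drives the date formatting.
        Console.WriteLine(date.ToString("d", us));    // 1/15/2023
        Console.WriteLine(date.ToString("d", de));    // 15.01.2023

        // Culture-aware string comparison.
        Console.WriteLine(string.Compare("apple", "Apple", us, CompareOptions.IgnoreCase)); // 0
    }
}
```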
14.3.3 Creating a Global Application and a Local Version of the same
Application
Globalization is the first step in the process. A globalized application supports localized
user interfaces and regional data for all users. Truly global applications should be culture-
neutral and language-neutral. A globalized application can correctly accept, process and
display a worldwide assortment of scripts, data formats and languages. While the globalized
application may possess such flexibility, ensure that the application's resources that require
translation are separated from the rest of the application's code. If the application is correctly
tested for localizability before proceeding to the localization step, then there is no need to
modify the application's source code during localization.
The steps for creating a local resource file in an ASP.NET application are as shown below,
1) To create a resource file, Open the page for which the resource file is to be created.
2) Add a drop down list control and switch to the design view of the web page
3) At the top of the ASP.NET web application in the Tools menu, click ‘Generate Local
Resources’ option.
4) Visual Studio will automatically create the App_LocalResources folder in the root
folder of the application
5) A neutral base resource file is created for the current page in the application. It
contains a key/value pair for each control property or page property that requires
localization.
6) In the Visual Studio web page, a meta attribute is added for the web server control for
controlling the implicit localization. The UICulture and Culture properties are added
to the <%@Page %> directive.
7) Add the values for each resource that is added in the application and save the
resource file.
Creating resource file for different cultures
The .Net Framework represents different cultures across the world by using the culture
code. The culture code consists of two parts, a two letter language code and an optional
two letter country/region code.
The general format of the culture code is languagecode-country/regioncode, for example
en-US (English - United States) or fr-FR (French - France).
Property Description
IsNeutralCulture : Returns a Boolean value indicating whether the culture represented by the
CultureInfo object is a neutral culture.
NumberFormat : Gets or sets the NumberFormatInfo object that defines the correct format for
displaying numbers, currency, and percentages.
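A short sketch of the neutral/specific distinction (the culture names shown are examples):

```csharp
using System;
using System.Globalization;

class CultureCodeDemo
{
    static void Main()
    {
        // "en" carries only the two letter language code : a neutral culture.
        CultureInfo neutral = new CultureInfo("en");

        // "en-GB" adds the two letter country/region code : a specific culture.
        CultureInfo specific = new CultureInfo("en-GB");

        Console.WriteLine(neutral.IsNeutralCulture);   // True
        Console.WriteLine(specific.IsNeutralCulture);  // False

        // Formatting details such as the currency symbol are fully
        // defined only for specific cultures.
        Console.WriteLine(specific.NumberFormat.CurrencySymbol); // £
    }
}
```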
Microsoft .NET makes it possible to access the Windows Registry programmatically to store and
retrieve data. The Windows Registry is a hierarchical database that has a collection of Keys,
Sub Keys, Predefined Keys, Hives, and Value Entries and can be used to store system-specific
or application-specific data. As is often said, the registry acts as a central repository of information
for the operating system and the applications on a computer. One can take advantage of the
Windows Registry to store configuration metadata of an application so that it can be
retrieved later when required.
The Windows Registry stores the following types of information in a hierarchical manner.
1. User profile information
2. Hardware information of the system
3. Property settings
4. Information of installed programs in the system
Extra care should be taken while manipulating the Windows Registry. It is advisable to
back up the registry before any changes are made to it so that the changes can be reverted if
required.
The Windows Registry is a hierarchical database for storing many kinds of system and
application settings. Prior to the Registry, .ini files in the form of text files were commonly
used for storing these settings. However, these files were unable to meet all the requirements
of a modern application. Especially in multi-user scenarios the .ini files were nearly useless.
On the other hand, the Windows Registry uses one logical repository that is able to store user-
specific settings.
14.4.1 Structure of the Registry
The Registry is based on the two basic elements, keys and values and the entire structure is
a tree with several root elements that slightly differ depending on the version of Windows
being currently used. The keys are container objects very similar to folders that can contain
other keys or values. The values can be a string, binary, or DWORD depending on the
scenario.
The most common root elements are as follows,
1. HKEY_CLASSES_ROOT : This root element holds the information about
registered (installed) applications and associated file extensions. For example,
Windows is able to open the .docx extension with Microsoft Word because of the
settings in this key. It is not advised to alter these keys manually and the Folder
Options in the Windows Explorer should be used instead.
Registry manipulations (store, retrieve and remove) can be done programmatically in
C# code. The Registry and RegistryKey classes from the Microsoft.Win32
namespace are used for registry manipulations.
Registry class
This class is like a gate keeper and provides access to the root elements. It provides
RegistryKey objects for the root keys and several static methods to access key/value pairs.
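A sketch of store, retrieve and remove using these two classes. The sub key name DemoApp is illustrative, and the Registry API works only on Windows (on other platforms it throws PlatformNotSupportedException):

```csharp
using System;
using Microsoft.Win32;

class RegistryDemo
{
    static void Main()
    {
        const string subKey = @"Software\DemoApp"; // illustrative key name

        // Store : create (or open) a sub key under HKEY_CURRENT_USER.
        using (RegistryKey key = Registry.CurrentUser.CreateSubKey(subKey))
        {
            key.SetValue("LastRun", DateTime.Now.ToString("o"));
        }

        // Retrieve : open the sub key and read the value back.
        using (RegistryKey key = Registry.CurrentUser.OpenSubKey(subKey))
        {
            Console.WriteLine(key.GetValue("LastRun"));
        }

        // Remove : delete the sub key when it is no longer needed.
        Registry.CurrentUser.DeleteSubKey(subKey);
    }
}
```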
RegistryKey class
Q.1 What are various diagnostic tasks in .NET environment ? (Refer section 14.1)
Q.2 How security is dealt in .NET environment ? (Refer section 14.2)
Q.3 Discuss Localization in .NET. (Refer section 14.3)
Q.4 What is Windows registry ? (Refer section 14.4)
Q.5 How Windows registry is manipulated ? (Refer section 14.4)
Q.6 Discuss in detail about security in .Net. (Refer section 14.2)
Q.7 Discuss the Common ASP.NET Security Flaws.
(Refer section 14.2)
Concept
Multitasking is the ability of an operating system to execute more than one program
simultaneously. The operating system achieves multitasking through a CPU scheduling
algorithm, which switches the CPU from one program to the next so quickly that it appears
as if all of the programs are executing at the same time.
Multithreading is the ability of an operating system to execute the different parts of a
process (a program in execution is called a process) simultaneously. The program has to
be designed so that it is separated into different sub-parts (threads) and the different threads
do not interfere with each other.
A thread is defined as the execution path of a program.
Explanation
C# supports parallel execution of jobs or program code through multithreading and a set of
classes that support threaded programming.
Each thread defines a unique flow of control.
When an application involves complicated and time-consuming operations, it is often
helpful to set different execution paths or threads, with each thread performing a particular
job.
Threads are lightweight processes.
Use of threads avoids wastage of CPU cycles and increases the efficiency of an application.
Concept
The life cycle of a thread starts when an object of the System.Threading.Thread class is
created and ends when the thread is terminated or completes execution.
Explanation
1. Thread Creation
1. Create a new ThreadStart delegate. The delegate points to a method that will be
executed by the new thread.
2. Pass this delegate as a parameter when creating a new Thread instance.
3. Now, call the Thread.Start method to run the thread method.
4. Syntax
using System.Threading;
ThreadStart childref = new ThreadStart(CallChildThread);
Thread childThread = new Thread(childref);
childThread.Start();
Example 2
//illustration of operation of thread creation.
using System;
using System.Threading;
namespace ThreadApps
{
class ThreadCreationOp
{
public static void CallChildThread()
{
Console.WriteLine("Child thread starts");
}
static void Main(string[] args)
{
ThreadStart childref = new ThreadStart(CallChildThread);
Thread childThread = new Thread(childref);
childThread.Start();
Console.ReadKey();
}
}
}
Example 3
// Two methods in the same class for creating two threads
Program 1
using System;
using System.Threading;
namespace ThreadApps
{
class TwoThreadsOp
{
static void Thread1() { for (int i = 1; i <= 5; i++) Console.WriteLine("Thread1 " + i); }
static void Thread2() { for (int i = 1; i <= 6; i++) Console.WriteLine("Thread2 " + i); }
static void Main(string[] args)
{
Thread thr1 = new Thread(Thread1);
Thread thr2 = new Thread(Thread2);
thr1.Start();
thr2.Start();
}
}
}
Output (one possible interleaving; thread scheduling is nondeterministic)
Thread1 1
Thread1 2
Thread2 1
Thread2 2
Thread1 3
Thread1 4
Thread2 3
Thread2 4
Thread1 5
Thread2 5
Thread2 6
Example 4
// One thread method to create two threads by creating two objects of the thread class.
using System;
using System.Threading;
Console.WriteLine(this.name);
for (int cnt = 1; cnt <= 5; cnt++) {
Console.WriteLine("Data {0}", cnt);
}
}
Output
Before thread start
Thread 1
Data 1
Data 2
Thread 2
Data 1
Data 2
Thread 1
Data 3
Data 4
Thread 2
Data 3
Data 4
Thread 1
Data 5
Thread 2
Data 5
Property Description
CurrentPrincipal : Gets or sets the thread's current principal (for role-based security).
CurrentUICulture : Gets or sets the current culture used by the Resource Manager to look up
culture-specific resources at run-time.
IsAlive : Gets a value indicating the execution status of the current thread.
Sr.No. Methods
1. EndThreadAffinity : Notifies a host that managed code has finished executing instructions
that depend on the identity of the current physical operating system thread.
2. FreeNamedDataSlot : Eliminates the association between a name and a slot, for all threads
in the process. For better performance, use fields that are marked with the
ThreadStaticAttribute attribute instead.
3. GetData : Retrieves the value from the specified slot on the current thread, within the
current thread's current domain. For better performance, use fields that are
marked with the ThreadStaticAttribute attribute instead.
namespace MultithreadingApplication
{
class MainThreadProgram
{
static void Main(string[] args)
{
Thread th = Thread.CurrentThread;
th.Name = "MainThread";
Console.WriteLine("This is {0}", th.Name);
Console.ReadKey();
}
}
}
Input
NIL
Output
This is MainThread
5. ThreadPool Class
1. Using the ThreadPool class, many different threads can be executed in parallel and the
system recycles them as soon as possible.
2. This is an ideal way to start many short-lived tasks, as indicated in the following figure.
Thread pooling saves the CLR the time of creating an entirely new thread for every short-
lived task and reclaiming its resources once it dies. Thread pooling effectively uses CPU
time slots.
Thread pooling enables several tasks to be started without having to set the properties for
each thread.
Thread pooling enables information to be passed as an object to the procedure arguments of
the task that is being executed.
Using thread pooling, the maximum number of threads that are to be processed can be fixed
to a certain number.
Example
//illustration of ThreadPool class
If one wants to pass any parameters to the method executed by the thread, they can be
passed as the second argument of QueueUserWorkItem, as shown below for the Run method
with an object parameter.
using System;
using System.Threading;
namespace ThreadApps
{
class ThreadPoolOp
{
static void Main(string[] args)
{
ThreadPool.QueueUserWorkItem(Run);
ThreadPool.QueueUserWorkItem(Run, "parameter value");
Console.ReadKey();
}
static void Run(object state)
{
if (state != null)
Console.WriteLine("Parameter : " + state);
Console.WriteLine("Child Thread Finished");
}
}
}
The Thread.Sleep method can be used to pause a thread for a fixed period of time.
Syntax
Thread.Sleep(millisecondsTimeout);
Example
//Illustration of Sleep method
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading;
namespace MultiThread
{
class SleepOp
{
static void Main(string[] args)
{
Thread th = new Thread(Show);
th.Start();
for (int cnt = 1; cnt <= 10; cnt ++)
{
Console.WriteLine("India");
Thread.Sleep(2000); //sleeps for 2000 milliseconds
}
}
private static void Show()
{
for (int cnt = 1; cnt <= 5; cnt++)
{
Console.WriteLine("Welcome To");
Thread.Sleep(2000);
}
}
}
}
Output
Welcome To
India
Welcome To
India
Welcome To
Welcome To
India
Welcome To
India
India
India
India
India
India
India
2. Destroying Threads
Example
//Illustration of thread Abort operation
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading;
namespace MultiThread
{
class AbortOp
{
static void Main(string[] args)
{
Thread th = new Thread(Show);
th.Start();
if (th.IsAlive)
{
th.Abort(); //Show method is immediately stopped as the thread aborts
}
//Note : Thread.Abort is not supported on .NET Core / .NET 5+ and throws
//PlatformNotSupportedException there.
for (int cnt = 1; cnt <= 10; cnt ++)
{
Console.WriteLine("India");
}
}
private static void Show()
{
for (int cnt = 1; cnt <= 5; cnt ++)
{
Console.WriteLine("Welcome To");
}
}
}
}
Output
India
India
India
India
India
India
India
India
India
India
3. Suspend a Thread
/* The PrioritiesDemo1 program does not define any priority; the method Show() executes
first and then the Main method gets a chance to execute. */
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading;
namespace MultiThread
{
class PrioritiesDemo1
{
static void Main(string[] args)
{
Thread th = new Thread(Show);
th.Start();
for (int cnt = 1; cnt <= 10; cnt ++)
{
Console.WriteLine("India");
}
}
private static void Show()
{
for (int cnt = 1; cnt <= 10; cnt++)
{
Console.WriteLine("Welcome To");
}
}
}
}
Example
//illustration of thread Using With Lowest Priority
/* PrioritiesDemo2 program defines the priority. New thread has low priority therefore
method Show() of this thread gets chance after high priority Main thread. Main method
executes first. */
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading;
namespace MultiThread
{
class PrioritiesDemo2
{
static void Main(string[] args)
{
Thread th = new Thread(Show);
th.Priority = ThreadPriority.Lowest;
th.Start();
for (int cnt = 1; cnt <= 10; cnt++)
{
Console.WriteLine("India");
}
}
private static void Show()
{
for (int cnt = 1; cnt <= 10; cnt++)
{
Console.WriteLine("Welcome To");
}
}
}
}
1. Mutually exclusive tasks of the single process, such as gathering user input and
background processing can be managed with the use of threads.
2. Threads can be used as a convenient way to structure a program that performs several
similar or identical tasks concurrently.
3. A major advantage of using threads is that multiple activities can be carried out
simultaneously.
4. Another advantage is that threads can be used to achieve faster computations by doing two
different computations in two threads instead of serially one after the other.
15.6.1 Synchronization
Concept
A thread is said to be blocked when its execution is paused for some reason, such as
when waiting for another thread to end via Join or EndInvoke.
Explanation
Exclusive locking is used to ensure that only one thread can enter particular sections of
code at a time. The .NET Framework provides two exclusive locking constructs, namely
lock and Mutex.
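A minimal sketch of the lock construct follows; the field name locker is the synchronizing object, and the thread count and iteration count are illustrative:

```csharp
using System;
using System.Threading;

class LockDemo
{
    static readonly object locker = new object(); // the synchronizing object
    static int balance = 0;

    static void Deposit()
    {
        for (int i = 0; i < 100000; i++)
        {
            lock (locker)   // only one thread at a time may enter this block
            {
                balance++;  // the shared state is updated safely
            }
        }
    }

    static void Main()
    {
        Thread t1 = new Thread(Deposit);
        Thread t2 = new Thread(Deposit);
        t1.Start(); t2.Start();
        t1.Join(); t2.Join();
        Console.WriteLine(balance); // always 200000 : no update is lost
    }
}
```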
Explanation
Only one thread can lock the synchronizing object (in this case, locker) at a time, and
any contending threads are blocked until the lock is released. If more than one thread
contends the lock, they are queued on a "ready queue" and granted the lock on a first-
come, first-served basis. Exclusive locks are sometimes said to enforce serialized access
to whatever's protected by the lock, because one thread's access cannot overlap with that
of another thread.
A thread blocked while awaiting a contended lock has a ThreadState of
WaitSleepJoin.
Locking should be done before accessing any field comprising writable shared state.
15.6.4 Mutex
A Mutex is like a C# lock, but it can work across multiple processes. In other words, Mutex
can be computer-wide as well as application-wide.
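A minimal sketch of a named Mutex, which is often used to enforce a single application instance computer-wide; the mutex name used here is an assumed example, not from the text.

```csharp
using System;
using System.Threading;

class MutexDemo
{
    public static bool Run()
    {
        // A named Mutex is visible machine-wide, so a second process
        // using the same name would fail to acquire it.
        using (var mutex = new Mutex(false, "MutexDemoSingleInstance"))
        {
            if (!mutex.WaitOne(TimeSpan.Zero)) // try to acquire without blocking
                return false;                  // another process owns it
            // ... exclusive, computer-wide work would go here ...
            mutex.ReleaseMutex();              // only the owning thread may release
            return true;
        }
    }

    static void Main()
    {
        Console.WriteLine(Run() ? "Acquired the mutex." : "Another instance is running.");
    }
}
```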
15.6.5 Semaphore
A Semaphore is similar to mutex, except that the Semaphore has no "owner"-it is thread-
agnostic. Any thread can call Release on a Semaphore, whereas with Mutex and lock, only
the thread that obtained the lock can release it. Semaphores can be useful in limiting
concurrency-preventing too many threads from executing a particular piece of code at once.
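A short sketch of a Semaphore limiting concurrency, assuming an initial count of 3 (the counts and thread numbers are illustrative); eight threads contend, but no more than three are ever inside the protected region at once.

```csharp
using System;
using System.Threading;

class SemaphoreDemo
{
    static readonly Semaphore gate = new Semaphore(3, 3); // at most 3 threads inside
    static readonly object locker = new object();
    static int inside = 0, peak = 0;

    static void Work()
    {
        gate.WaitOne();                       // blocks once 3 threads are in
        lock (locker) { inside++; if (inside > peak) peak = inside; }
        Thread.Sleep(50);                     // simulate the protected work
        lock (locker) { inside--; }
        gate.Release();                       // any thread may call Release
    }

    public static int Run()
    {
        Thread[] threads = new Thread[8];
        for (int i = 0; i < threads.Length; i++)
        {
            threads[i] = new Thread(Work);
            threads[i].Start();
        }
        foreach (Thread t in threads) t.Join();
        return peak;                          // never exceeds 3
    }

    static void Main()
    {
        Console.WriteLine("Peak concurrency: " + Run());
    }
}
```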
15.6.6 Nonblocking Synchronization
With locking, a thread may always have to block, suffering the overhead and latency of
being temporarily descheduled.
The .NET Framework's nonblocking synchronization constructs can perform simple
operations without ever blocking, pausing, or waiting. These involve using instructions that
are strictly atomic or instructing the compiler to use "volatile" read and write semantics.
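The Interlocked class is one such nonblocking construct; the sketch below (names are illustrative) performs atomic increments from four threads with no lock taken and no thread ever blocked.

```csharp
using System;
using System.Threading;

class NonblockingDemo
{
    static int counter = 0;

    public static int Run()
    {
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.Length; i++)
        {
            threads[i] = new Thread(() =>
            {
                for (int j = 0; j < 100000; j++)
                    Interlocked.Increment(ref counter); // strictly atomic; no lock
            });
            threads[i].Start();
        }
        foreach (Thread t in threads) t.Join();
        return counter;
    }

    static void Main()
    {
        Console.WriteLine(Run()); // 400000, with no thread blocked on a lock
    }
}
```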
Q.2 Explain Synchronous and Asynchronous Operations.
Ans. : Synchronization is a way of creating thread-safe code where only a single thread
will access the code at a given time. A synchronous call waits for the method to complete
and then continues the program flow. Synchronous programming adversely affects UI
operations, which typically happens when the user performs a time-consuming operation,
since only one thread is used.
In Asynchronous operation, the method call immediately returns allowing the program
to perform other operations while the method called completes its share of work in certain
circumstances.
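The asynchronous behaviour described above can be sketched with async/await; the method name and the Task.Delay below are illustrative stand-ins for a time-consuming operation, not from the text.

```csharp
using System;
using System.Threading.Tasks;

class AsyncDemo
{
    // The method returns to its caller at the first await; the caller is
    // free to do other work until the awaited task completes.
    public static async Task<int> LongOperationAsync()
    {
        await Task.Delay(200);   // stands in for a time-consuming operation
        return 42;
    }

    static async Task Main()
    {
        Task<int> pending = LongOperationAsync(); // the call returns immediately
        Console.WriteLine("Doing other work while the operation runs...");
        int result = await pending;               // rejoin when it completes
        Console.WriteLine("Result: " + result);
    }
}
```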
Q.3 What is Thread Pooling ?
Ans. : A Thread pool is a collection of threads that perform tasks without disturbing the
primary thread. Once a thread completes its task, it returns to the pool so it can be reused.
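A minimal sketch of queuing work to the thread pool; the CountdownEvent used to wait for completion is an implementation choice for this sketch, not from the text.

```csharp
using System;
using System.Threading;

class ThreadPoolDemo
{
    public static int Run()
    {
        int completed = 0;
        using (var done = new CountdownEvent(3))
        {
            for (int i = 1; i <= 3; i++)
            {
                int id = i;
                // Each task runs on a pool thread, not on the primary thread.
                ThreadPool.QueueUserWorkItem(_ =>
                {
                    Console.WriteLine("Task {0} on pool thread {1}",
                        id, Thread.CurrentThread.ManagedThreadId);
                    Interlocked.Increment(ref completed);
                    done.Signal();
                });
            }
            done.Wait(); // the primary thread resumes once all tasks signal
        }
        return completed;
    }

    static void Main()
    {
        Console.WriteLine(Run() + " pool tasks completed.");
    }
}
```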
Q.4 Illustrate Race Condition.
Ans. : A Race Condition occurs in a situation when two threads access the same resource
and try to change it at the same time. The thread which accesses the resource first cannot be
predicted. Let me take a small example where two threads X1 and X2 are trying to access
the same shared resource called T. And if both threads try to write the value to T, then the
last value written to T will be saved.
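The X1/X2 example above can be sketched as follows; because the increments on the shared resource T are unsynchronized, the final value is unpredictable and usually less than the expected 200000.

```csharp
using System;
using System.Threading;

class RaceDemo
{
    static int T = 0; // the shared resource

    public static int Run()
    {
        // X1 and X2 both perform unsynchronized read-modify-write on T.
        Thread x1 = new Thread(() => { for (int i = 0; i < 100000; i++) T++; });
        Thread x2 = new Thread(() => { for (int i = 0; i < 100000; i++) T++; });
        x1.Start(); x2.Start();
        x1.Join(); x2.Join();
        return T; // expected 200000, but lost updates make the result vary
    }

    static void Main()
    {
        Console.WriteLine("T = " + Run());
    }
}
```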
Q.5 Name the different states of a thread.
Ans. : Following are the different states of a thread :
Unstarted : The thread is created
Running : The execution of the thread starts
WaitSleepJoin : The thread is asked to sleep, asks other thread to wait and asks the
other thread to join
Suspended : The thread has been suspended
Aborted : The thread is dead as a result of Abort but has not yet changed its state to Stopped
Stopped : The thread has been stopped
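The state transitions above can be observed directly through the ThreadState property; the timing in this sketch is illustrative, and the intermediate state printed can vary on a busy machine.

```csharp
using System;
using System.Threading;

class StateDemo
{
    public static ThreadState Run()
    {
        Thread t = new Thread(() => Thread.Sleep(200));
        Console.WriteLine(t.ThreadState);  // Unstarted : created but not started
        t.Start();
        Thread.Sleep(50);
        Console.WriteLine(t.ThreadState);  // typically WaitSleepJoin while sleeping
        t.Join();
        return t.ThreadState;              // Stopped once the thread has ended
    }

    static void Main()
    {
        Console.WriteLine(Run());
    }
}
```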
Q.6 What are the properties of the thread class ?
Ans. : Following are the properties of the thread class :
IsAlive : When the thread is active, it contains the true value.
Name : It is used for setting the name for a thread and also for returning the name of
the thread.
Priority : Gets or sets a value that the operating system uses to schedule the thread.
IsBackground : It is the deciding factor for a thread whether it should be in the
background or the foreground.
ThreadState : This explains the state of the thread.
Q.7 Name a few methods used for handling the multi-threaded operations.
Ans. : Following are the few methods that are used for handling the multi-threaded
operations :
Start
Sleep
Abort
Suspend
Resume
Join
Q.8 How can we access a thread in C# ? AU : Dec.-19
Ans. :
To access the main thread, a Thread class object is required to refer to it. Such an
object can be obtained using the CurrentThread property of the Thread class, which
returns a reference to the thread in which it is used. So when the CurrentThread
property is used inside the main thread, one gets a reference to the main thread.
After that, one has control over the main thread just like any other thread. Once this
control is obtained, any further threads created in the application can all be accessed
using Thread class objects.
The System.Threading namespace provides the classes and interfaces that implement
multithreaded programming. One of the classes in this namespace is the Thread class.
This class is used for creating and controlling threads.
using System;
using System.Threading;
namespace threademos
{
    class ProgramThreadStatus
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Current information");
            Thread t = Thread.CurrentThread;   // reference to the main thread
            t.Name = "primarythread";
            Console.WriteLine("Name : " + t.Name);
            Console.WriteLine("Priority : " + t.Priority);
            Console.ReadKey();
        }
    }
}
Q.7 Write a C# program to illustrate the concept of Passing Parameter for thread.
AU : May-18, Marks 6
Ans. :
/*
* C# Program to illustrate the Concept of Passing Parameter for Thread
*/
using System;
using System.Threading;
public class ThreadDemo
{
public static void Main()
{
Thread newThread = new Thread(ThreadDemo.task1);
newThread.Start(20);
ThreadDemo p = new ThreadDemo ();
newThread = new Thread(p.task2);
newThread.Start("Instance");
Console.ReadLine();
}
public static void task1(object data)
{
Console.WriteLine("Static Thread Procedure : Data ='{0}'",data);
}
public void task2(object data)
{
Console.WriteLine("Instance Thread Procedure : Data ='{0}'", data);
}
}
Q.8 Consider 2 toll booths in a national highway. People passing by are to pay
30/-. The booths keep track of the number of people who visited the booth
and the total ticket amount collected. Model the toll booth with a class called
TollBooth in C# with the following members.
Data Members
Number of people who visited the booth
Total amount collected
Member Functions
Increment the number of people and the amount if someone passes by
Display the result
Exception : The number of people has to be a valid integer.
Use appropriate constructors and destructor. Use the concept of threads to
implement the 2 booths. (Refer section 15.3) AU : Dec.-19, Marks 13
Concept
1. ADO.NET is an object-oriented set of libraries that allows developer to interact with data
sources. Commonly, the data source is a database, but it could also be a text file, an Excel
spreadsheet, or an XML file.
Explanation
2. There are many different types of databases available through which data can be accessed.
For example, there is Microsoft SQL Server, Microsoft Access, Oracle, Borland Interbase,
and IBM DB2, just to name a few.
ADO.NET Architecture
3. Advantages of ADO.NET
1. ADO.NET relies on managed providers defined by the .NET Common Language
Runtime.
2. ADO.NET can have separate objects that represent connections to different data
sources.
3. In ADO.NET one can create multiple data provider namespaces to connect
specifically with a particular data source, making access faster and more efficient and
allowing each namespace to exploit the features of its targeted data provider.
4. ADO.NET gives the choice of either using client side or server side cursors.
Concept
5. By keeping connections open for only a minimum period of time, ADO.NET conserves
system resources and provides maximum security for databases and also has less impact
on system performance.
Concept
1. As different data sources expose different protocols, there is need of different methods to
communicate with the right data source using the right protocol.
Explanation
2. Some older data sources use the ODBC protocol, while many newer data sources use the
OleDb protocol; the .NET ADO.NET class libraries allow an application to communicate
with them directly.
3. ADO.NET provides a relatively common way to interact with data sources in the form of
libraries. These libraries are called Data Providers and are usually named for the protocol
or data source type they allow to interact with. Following Table 16.3.1 lists some well
known data providers, the API prefix they use, and the type of data source they allow to
interact with.
4. The Data provider communication
5. ADO.NET Data Providers are class libraries that allow a common way to interact with
specific data sources or protocols. The library APIs have prefixes that indicate which
provider they support.
Provider name | API prefix | Data source description
ODBC Data Provider | Odbc | Data sources with an ODBC interface. Normally older databases.
OleDb Data Provider | OleDb | Data sources that expose an OleDb interface, i.e. Access or Excel.
Concept
ADO.NET includes many objects to work around with data which are discussed below.
Explanation
Concept
DataSet provides a disconnected representation of result sets from the Data Source, and
it is completely independent from the Data Source. DataSet provides much greater
flexibility when dealing with related Result Sets.
Explanation
DataSet contains rows, columns, primary keys, constraints, and relations with other
DataTable objects. It consists of a collection of DataTable objects that one can relate to
each other with DataRelation objects. The DataAdapter Object provides a bridge
between the DataSet and the Data Source.
Data Source : Identifies the server. Could be the local machine, a machine domain name, or
an IP address.
Integrated Security : Set to SSPI to make the connection with the user's Windows login.
The following statement shows a connection string, using the User ID and Password
parameters for security purpose,
SqlConnection conn = new SqlConnection(
"Data Source=DatabaseServer;Initial Catalog=Northwind;User ID=ValidUserID;Password=ValidPassword");
Notice how the Data Source is set to DatabaseServer to indicate that one can identify a
database located on a different machine, over a LAN, or over the Internet. Additionally,
User ID and Password replace the Integrated Security parameter.
--C# ADO.NET Connection String
Connection String is a normal String representation which contains Database connection
information to establish the connection between Database and the Application. The
Connection String includes parameters such as the name of the driver, Server name and
Database name , as well as security information such as user name and password.
Usually Data Providers use a connection string containing a collection of parameters to
establish the connection with the database through applications. The .NET Framework
mainly provides three data providers : Microsoft SQL Server, OLEDB and ODBC.
Building Microsoft SQL Server Connection String
connectionString="Data Source=ServerName;Initial Catalog=DatabaseName;
User ID=UserName;Password=Password"
Building OLEDB Data Provider Connection String
connectionString="Provider=Microsoft.Jet.OLEDB.4.0;
Data Source=yourdatabasename.mdb;"
Building ODBC Connection String
connectionString="Driver={Microsoft Access Driver (*.mdb)};
DBQ=yourdatabasename.mdb;"
The purpose of creating a SqlConnection object is to work with the database. Other
ADO.NET objects, such as a SqlCommand and a SqlDataAdapter, take a connection
object as a parameter. The sequence of operations occurring in the lifetime of a
SqlConnection can be,
1. Instantiate the SqlConnection.
2. Open the connection.
3. Pass the connection to other ADO.NET objects.
4. Perform database operations with the other ADO.NET objects.
5. Close the connection.
Example
/// Illustration of working with SqlConnection objects
class SqlConnectionDemo
{
    static void Main()
    {
        // 1. Instantiate the connection object
        SqlConnection sqlconn = new SqlConnection(
            "Data Source=(local);Initial Catalog=Northwind;Integrated Security=SSPI");
        try
        {
            // 2. Open the connection
            sqlconn.Open();
            // 3. Pass the connection to other ADO.NET objects
            // 4. Use the connection
        }
        finally
        {
            // 5. Close the connection
            sqlconn.Close();
        }
    }
}
3. SqlCommand Object
A SqlCommand object allows to specify operations on database. For example, select,
insert, modify, and delete commands on rows of data in a database table. The
SqlCommand object can be used to support disconnected data management scenarios as
well using SqlDataAdapter.
The Command Object uses the connection object to execute SQL queries.
The queries can be in the Form of Inline text, Stored Procedures or direct Table access.
An important feature of Command object is that it can be used to execute queries and
Stored Procedures with Parameters.
If a select query is issued, the result set it returns is usually stored in either a DataSet or
a DataReader object.
Associated Methods of SqlCommand Class
ExecuteNonQuery : Executes the command specified and returns the number of rows
affected.
ExecuteScalar : Executes the command specified and returns the first column of the first
row of the result set. The remaining rows and columns are ignored.
Here, the SqlCommand constructor is used with no parameters. Instead, it explicitly sets the
CommandText and Connection properties of the SqlCommand object, sqlcmd.
The ExecuteNonQuery method call sends the command to the database.
Getting Single Values
A single value is often required from the database, typically an aggregate value such as a
count, sum or average. ExecuteScalar is used to perform this and retrieve the computed result.
// 1. Instantiate a new command
SqlCommand sqlcmd = new SqlCommand("select count(*) from BookCategories", sqlconn);
public SqlCommandDemo()
{
// Instantiate the connection
Console.WriteLine();
Console.WriteLine("Book Categories Before Insert");
// ExecuteReader method
sqlscd.ReadData();
sqlscd.ReadData();
Console.WriteLine();
Console.WriteLine("BookCategories After Update");
sqlscd.ReadData();
Console.WriteLine();
Console.WriteLine("BookCategories After Delete");
sqlscd.ReadData();
//ExecuteScalar method
int countOfRecords = sqlscd.GetNumberOfRecords();
Console.WriteLine();
Console.WriteLine("Total Records: {0}", countOfRecords);
}
try
{
    // Open the connection
    sqlconn.Open();
    // 1. Instantiate a command
    SqlCommand sqlcmd = new SqlCommand();
    // ... set CommandText and Connection, then execute ...
}
finally
{
    sqlconn.Close();
}
}
}
try
{
// Open the connection
sqlconn.Open();
DataReader Properties
Property Description
DataReader methods
Method Description
NextResult Advances the data reader to the next result during batch transactions.
Getxxx : There are dozens of Getxxx methods. These methods read a specific data
type value from a column. For example, GetChar will return a column
value as a character and GetString will return it as a string.
// print results
Console.Write("{0,-35}", contact);
Console.Write("{0,-30}", city);
Console.Write("{0,-35}", publication);
}
The return value of Read is type bool and returns true as long as there are more records to
read. After the last record in the data stream has been read, Read returns false.
Each column can be extracted from the row with a numeric indexer. Instead of a numeric
indexer, a string indexer can be used, where the string is the column name from the SQL query
(the table column name is used if an asterisk, *, is used). String indexers are much more
readable, making the code easier to maintain.
Regardless of the type of the indexer parameter, a SqlDataReader indexer will return type
object.
Closing Connection
Once the SqlDataReader has been used, close it and then close the SqlConnection.
try
{
    // data access code
}
finally
{
    // 3. close the reader
    if (sqlrdr != null)
    {
        sqlrdr.Close();
    }
    // 4. close the connection
    sqlconn.Close();
}
Example
using System;
using System.Data;
using System.Data.SqlClient;
namespace ADONetDemos
{
    class ReadersDemo
    {
        static void Main()
        {
            ReadersDemo rsd = new ReadersDemo();
            rsd.DataRead();
        }
        void DataRead()
        {
            SqlConnection sqlconn = new SqlConnection(
                "Data Source=(local);Initial Catalog=Northwind;Integrated Security=SSPI");
            SqlDataReader sqlrdr = null;
            try
            {
                // open the connection
                sqlconn.Open();
                // the query below is illustrative; the original listing was incomplete
                SqlCommand sqlcmd = new SqlCommand(
                    "select ContactName, City, CompanyName from Customers", sqlconn);
                sqlrdr = sqlcmd.ExecuteReader();
                while (sqlrdr.Read())
                {
                    string contact = (string)sqlrdr["ContactName"];
                    string city = (string)sqlrdr["City"];
                    string publication = (string)sqlrdr["CompanyName"];
                    // print results
                    Console.Write("{0,-35}", contact);
                    Console.Write("{0,-30}", city);
                    Console.Write("{0,-35}", publication);
                    Console.WriteLine();
                }
            }
            finally
            {
                // close the reader
                if (sqlrdr != null)
                {
                    sqlrdr.Close();
                }
                // close the connection
                sqlconn.Close();
            }
        }
    }
}
5. Using SqlDataAdapter
A DataSet is an in-memory data store that can hold numerous tables. DataSets only hold
data and do not interact with a data source. SqlDataAdapter manages connections with
the data source and gives the disconnected behavior. The SqlDataAdapter opens a
connection only when required and closes it as soon as it has performed its task.
SqlDataAdapter performs the following tasks when filling a DataSet with data,
1. Open connection
2. Retrieve data into DataSet
3. Close connection
and performs the following actions when updating data source when DataSet
changes,
1. Open connection
2. Write changes from DataSet to data source
3. Close connection
In between the Fill and Update operations, data source connections are closed. Data can be
read and written with the DataSet. As the application holds on to connections only when
necessary, it becomes more scalable.
Creating a DataSet Object
DataSet dsBooks = new DataSet();
Now, the DataSet is empty and need a SqlDataAdapter to load it.
Creating A SqlDataAdapter
1. The SqlDataAdapter holds the SQL commands and connection object for reading
and writing data. It is initialized with a SQL select statement and connection object :
SqlDataAdapter daBooks = new SqlDataAdapter("select BookID,Author from Books",
sqlconn);
2. The code above creates a new SqlDataAdapter, daBooks. The SQL select statement
specifies what data will be read into a DataSet. The connection object, sqlconn,
should have already been instantiated, but not opened. The SqlDataAdapter is
responsible for opening and closing the connection during Fill and Update method calls.
3. The SqlDataAdapter contains all of the commands necessary to interact with the data
source. There are two ways to add insert, update, and delete commands : via
SqlDataAdapter properties or with a SqlCommandBuilder.
-Using SqlCommandBuilder
SqlCommandBuilder cmdBldr = new SqlCommandBuilder(daBooks);
4. SqlCommandBuilder is instantiated with a single constructor parameter of the
SqlDataAdapter, daBooks, instance. This tells the SqlCommandBuilder what
SqlDataAdapter to add commands to. The SqlCommandBuilder will read the SQL
select statement (specified when the SqlDataAdapter was instantiated), infer the
insert, update, and delete commands, and assign the new commands to the Insert,
Update, and Delete properties of the SqlDataAdapter, respectively.
Note that, SqlCommandBuilder would work for single table and not for multiple table
joins.
Filling the DataSet
daBooks.Fill(dsBooks, "Books");
The Fill method, in the code above, takes two parameters : A DataSet and a table name.
The DataSet must be instantiated before trying to fill it with data. The second parameter is the
name of the table that will be created in the DataSet.
The Fill method has an overload that accepts one parameter for the DataSet only. In that
case, the table created has a default name of “table1” for the first table. The number will be
incremented (table2, table3, …, tableN) for each table added to the DataSet where the table
name was not specified in the Fill method.
Using the DataSet
A DataSet will bind with both ASP.NET and Windows forms DataGrids.
// Illustration of a DataSet assigned to a Windows Forms DataGrid
dgBooks.DataSource = dsBooks;
dgBooks.DataMember = "Books";
DataSet is assigned to the DataSource property of the DataGrid.
To specify exactly which table to use, set the DataGrid’s DataMember property to the
name of the table.
Updating Changes
After modifications are made to the data, if it is to be updated to actual database, the
Update method of the SqlDataAdapter is used to put modifications to the database.
daBooks.Update(dsBooks, "Books");
The Update method, above, is called on the SqlDataAdapter instance that originally filled
the dsBooks DataSet. The second parameter to the Update method specifies which table, from
the DataSet, to update. The table contains a list of records that have been modified and the
Insert, Update, and Delete properties of the SqlDataAdapter contain the SQL statements used
to make database modifications.
6. DataSet Object
DataSet is tabular representation of data into rows and columns. The dataset
represents a subset of the database. It does not have a continuous connection to the
database. To update the database a reconnection is required. The DataSet contains
DataTable objects and DataRelation objects. The DataRelation objects represent the
relationship between two tables.
The ADO.NET DataSet contains DataTableCollection and their DataRelationCollection
. It represents a collection of data retrieved from the Data Source. Dataset can be used in
combination with the DataAdapter class. The DataSet object offers a disconnected data
source architecture. The DataSet can work with the data it contains without knowing
where that data came from; that is, the DataSet can work in a mode disconnected
from its Data Source. This gives it an advantage over the DataReader, which works
only with connection-oriented Data Sources.
Some important properties of the DataSet class
Properties Description
CaseSensitive Indicates whether string comparisons within the data tables are case-sensitive.
EnforceConstraints Indicates whether constraint rules are followed when attempting any
update operation.
Events Gets the list of event handlers that are attached to this component.
ExtendedProperties Gets the collection of customized user information associated with the
DataSet.
Locale Gets or sets the locale information used to compare strings within the
table.
Prefix Gets or sets an XML prefix that aliases the namespace of the DataSet.
Methods Description
AcceptChanges Accepts all changes made since the DataSet was loaded or this
method was called.
GetChanges Returns a copy of the DataSet with all changes made since it
was loaded or the AcceptChanges method was called.
GetChanges(DataRowState) Gets a copy of DataSet with all changes made since it was
loaded or the AcceptChanges method was called, filtered by
DataRowState.
GetXMLSchema Returns the XSD schema for the XML representation of the
data.
Load(IDataReader, LoadOption, Fills a DataSet with values from a data source using the
DataTable[]) supplied IDataReader, using an array of DataTable instances
to supply the schema and namespace information.
Load(IDataReader, LoadOption, Fills a DataSet with values from a data source using the
String[]) supplied IDataReader, using an array of strings to supply the
names for the tables within the DataSet.
Merge() Merges the data with data from another DataSet. This method
has different overloaded forms.
ReadXML() Reads an XML schema and data into the DataSet. This
method has different overloaded forms.
ReadXMLSchema() Reads an XML schema into the DataSet. This method has
different overloaded forms.
RejectChanges Rolls back all changes made since the last call to
AcceptChanges.
WriteXML() Writes an XML schema and data from the DataSet. This
method has different overloaded forms.
The DataTable class represents the tables in the database. It has the following important
properties; most of these properties are read only properties except the PrimaryKey
property.
Some important properties of datatable class
Properties Description
PrimaryKey Gets or sets an array of columns as the primary key for the table.
Methods Description
GetChanges Returns a copy of the DataTable with all changes made since the
AcceptChanges method was called.
LoadDataRow Finds and updates a specific row, or creates a new one, if not found any.
RejectChanges Rolls back all changes made since the last call to AcceptChanges.
Methods Description
AcceptChanges Accepts all changes made since this method was called.
RejectChanges Rolls back all changes made since the last call to AcceptChanges.
The DataAdapter object acts as a mediator between the DataSet object and the database.
This helps the Dataset to contain data from multiple databases or other data source.
using System;
using System.Data;
using System.Data.SqlClient;
class DatasetDemo
{
static void Main()
{
string sqlconnString = "server=(local)\\SQLEXP;database=MyDatabase;Integrated
Security=SSPI";
string sqlqry = @"select * from Books";
SqlConnection sqlconn = new SqlConnection(sqlconnString);
try
{
sqlconn.Open();
SqlDataAdapter sqlda = new SqlDataAdapter(sqlqry, sqlconn);
DataSet ds = new DataSet();
sqlda.Fill(ds, "Books");
DataTable dt = ds.Tables["Books"]; // use the table name given in Fill above
foreach (DataRow row in dt.Rows)
{
foreach (DataColumn col in dt.Columns)
Console.WriteLine(row[col]);
Console.WriteLine(" ".PadLeft(20, '='));
}
}
catch(Exception e)
{
Console.WriteLine("Error: " + e);
}
finally
{
sqlconn.Close();
}
}
}
16.6.2 ADO .NET and Store Procedures - Updating Databases through Stored
Procedures
A stored procedure is nothing more than prepared SQL code that is saved so it can be reused
again and again. So if there is a query that is used multiple times, it is better saved as a
stored procedure, and then just the stored procedure is called as needed to execute the
SQL code. A stored procedure is thus a predefined and precompiled set of source code.
Difference between Stored Procedure and Functions
1. Function must return a value, but in Stored Procedure it is optional (procedure can
return zero or n values).
2. Functions can have only input parameters for it, whereas procedures can have
input/output parameters.
3. Functions can be called from a procedure, whereas procedures cannot be called from a
function, since a stored procedure may contain DML statements whereas a function can't
contain DML statements.
4. Procedure allows SELECT as well as DML (INSERT/UPDATE/DELETE) statement
in it whereas function allows only SELECT statement in it.
5. Stored procedure is precompiled execution plan whereas functions are not.
Using Stored Procedure for Database Updation in ADO .NET
Stored procedures can be called using SqlDataAdapter to update the table in database
from the DataTable . If a query is written in SelectCommand of SqlDataAdapter, it
automatically generates the required InsertCommand, UpdateCommand and
DeleteCommand in a simple scenario to update the database but if the name of Stored
Procedure is passed in SelectCommand, then it won't be able to generate these other
commands.
There is another way of doing it with a table-valued parameter, where one can pass the
DataTable directly to the stored procedure, but SQL SERVER 2005 and earlier versions
do not support it, and it requires creating a User Type and enabling CLR in SQL
SERVER.
For using stored procedure one can use SqlDataAdapter. SqlDataAdapter will require
only name of the stored procedure and the DataTable which needs to be updated.
Example 1 -
Step 1 - Create table in SQL named Customer with the specified columns.
CREATE TABLE Customer(
[ID] [int] IDENTITY(1,1) Primary Key,
[Name] [varchar](20) NULL,
[City] [varchar](20) NULL,
[DOB] [date] NULL,
[Married] [bit] NULL,
[Mobile] [int] NULL)
Step 2 - Create a Stored Procedure to Get the Data from Customer Table.
-- Create Get stored procedure
Create Procedure uspGetCustomer @ID int
as
IF @ID < 1
SELECT * FROM Customer
Else
SELECT * FROM Customer WHERE ID = @ID
Create another Stored Procedure to update the Customer Table.
Note - The name of each parameter should be the same as the column name of the table,
with @ as prefix.
There will be one extra parameter named @RowState of type int to indicate whether the
passed row needs to be deleted, updated or inserted.
For creating this stored procedure, one can extensively use the query generated by SQL
server (Right click on table in Object Explorer -> Script Table as -> use CREATE TO,
UPDATE TO, INSERT TO, DELETE TO).
-- Create Update stored procedure
Create Procedure uspUpdateCustomer @ID int,
@Name varchar (20),
@City varchar (20),
@DOB date,
@Married bit,
@Mobile int,
@RowState int
as
IF @RowState = 4 -- DataRowState.Added : insert the new row
    INSERT INTO Customer([Name], [City], [DOB], [Married], [Mobile])
    VALUES (@Name, @City, @DOB, @Married, @Mobile)
IF @RowState = 8 -- DataRowState.Deleted : remove the row
    DELETE FROM Customer WHERE ID = @ID
IF @RowState = 16 -- DataRowState.Modified : update the row
    UPDATE Customer SET [Name] = @Name, [City] = @City, [DOB] = @DOB,
        [Married] = @Married, [Mobile] = @Mobile
    WHERE ID = @ID
command.Parameters.AddWithValue("@ID", -1);
return GetDetails();
}
UpdateDetails(dtTable);
}
Private functions GetDetails and UpdateDetails. These functions won't need
replication for each table if multiple tables are updated.
1. The GetDetails function will use the Fill function of SqlDataAdapter to fill the DataTable.
The UpdateDetails function will first use GetChanges to get the updated, inserted or
deleted rows, so that only those rows which need to be changed in the database are
passed.
2. Add one extra column RowState to dtChanges to pass the RowState of the changed
row.
3. For loop will create SqlParameter using the name of columns in datatable and add it
in the SqlCommand.
4. Add common command to SqlDataAdapter for each INSERT, UPDATE, DELETE
and its constructor.
5. Get the inserted, updated and deleted rows; a for loop will fill the RowState column
for each added and modified row. As RowState cannot be set on a deleted row, there is
no need to set it; it will remain null for each deleted row.
6. Next, call Update function of SqlDataAdapter to update the database.
//Method to get details
private DataTable GetDetails()
{
command.CommandType = CommandType.StoredProcedure;
try
{
// Fill from database
adapter.Fill(dtTable);
}
catch (InvalidOperationException ioEx)
{
}
catch (Exception ex)
{
}
return dtTable;
command.CommandType = CommandType.StoredProcedure;
command.Parameters.Add(parameter);
}
if (dtModifiedRows != null)
{
for (int i = 0; i < dtModifiedRows.Rows.Count; i++)
{
dtModifiedRows.Rows[i]["RowState"] = 16;
}
}
try
{
//Update Database
if (dtAddedRows != null)
{
adapter.Update(dtAddedRows);
}
if (dtDeletedRows != null)
{
adapter.Update(dtDeletedRows);
}
if (dtModifiedRows != null)
{
adapter.Update(dtModifiedRows);
}
}
catch (Exception exception)
{
}
}
Step 4 - The complete Data Access Layer code using the above function.
ConnString is the name of connection string in configuration file.
public class DataAccessLayer
{
SqlConnection connection;
SqlCommand command;
SqlDataAdapter adapter;
public DataAccessLayer()
{
ConnectionStringSettingsCollection settings
= ConfigurationManager.ConnectionStrings;
Step 1 - Create table in SQL named Registration with the specified columns.
CREATE TABLE UserRegistration (
C_Id int IDENTITY(1, 1) NOT NULL,
C_Name varchar(100) NULL,
C_Age varchar(100) NULL,
C_Country varchar(100) NULL
);
Step 2 - Create a Stored Procedure to insert, delete or update the Data from
Registration Table.
Create procedure SpMyProcedure (
@Id int = null,
@Name varchar(100)= null,
@Age varchar(100)= null,
@Country varchar(100)= null,
@Action varchar(100)= null
) As
begin
if @Action = 'Insert'
    Insert into UserRegistration(C_Name, C_Age, C_Country)
    values (@Name, @Age, @Country)
if @Action = 'Update'
    Update UserRegistration
    set C_Name = @Name,
        C_Age = @Age,
        C_Country = @Country
    where C_Id = @Id
if @Action = 'Delete'
    Delete from UserRegistration
    where C_Id = @Id
end
cmd.Parameters.AddWithValue("@Country", txtCountry.Text);
cmd.Parameters.AddWithValue("@Id", txtId.Text);
cn.Open();
cmd.ExecuteNonQuery();
cn.Close();
}
This class resides in the namespace System.Data.SqlClient. It represents the exception that
is thrown when SQL Server returns a warning or error. This class cannot be inherited.
Using System.Data.SqlClient.SqlException
class MainClass
{
static void Main()
{
SqlConnection conn = new SqlConnection(@"data source = .\sqlexpress;integrated
security = true;database = northwind");
SqlCommand cmd = conn.CreateCommand();
cmd.CommandType = CommandType.StoredProcedure;
cmd.CommandText = "sp_DbException_1";
try
{
conn.Open();
cmd.ExecuteNonQuery();
}
catch (System.Data.SqlClient.SqlException ex)
{
Console.WriteLine("Source: " + ex.Source);
Console.WriteLine("Number: "+ ex.Number.ToString());
Console.WriteLine("Message: "+ ex.Message);
Console.WriteLine("Class: "+ ex.Class.ToString ());
Console.WriteLine("Procedure: "+ ex.Procedure.ToString());
Console.WriteLine("Line Number: "+ex.LineNumber.ToString());
Console.WriteLine("Server: "+ ex.Server.ToString());
}
catch (System.Exception ex)
{
Console.WriteLine("Source: " + ex.Source);
Console.WriteLine("Exception Message: " + ex.Message);
}
finally
{
if (conn.State == ConnectionState.Open)
{
Console.WriteLine("Finally block closing the connection");
conn.Close();
}
}
}
}
Purposeful use of Finally block in ADO .NET exceptions
finally
{
sqlConnection.Close();
}
Using the "using" statement in ADO .NET Exception handling
The using statement can be used to specify a boundary for the object outside of which, the
object is destroyed automatically. The runtime invokes the Dispose method of the objects that
are specified within this statement when the control comes out of this block. This is why this
is a preferred choice when using exceptions for managing resources in .NET. Refer to the
following code that uses the "using" statement :
string connectionString = "..."; // some connection string
using (SqlConnection sqlConnection = new SqlConnection(connectionString))
{
sqlConnection.Open();
//Some code
}
Note that when the end of the using block is encountered, the Dispose() method is
immediately called on the instance. When Dispose() is called on this connection
instance, it checks to see if the connection is open; if open, it closes it implicitly prior
to disposing of the instance. Refer to the documentation on Dispose and Finalize for
more information on when and how to use them appropriately.
The above code gets translated implicitly to :
string connectionString = "..."; // Some connection string
SqlConnection sqlConnection = new SqlConnection(connectionString);
try
{
sqlConnection.Open();
//Some code
}
finally
{
((IDisposable)sqlConnection).Dispose();
}
Remember to keep the try block as short as possible. Note that in the code example above,
the connection is just opened in the try block. Do not use any unnecessary code/logic in the try
block that is not supposed to throw any exception. Do not catch any exception that cannot be
handled and avoid rethrowing exceptions unnecessarily as it is very expensive.
Prevent unnecessary database hit
One of the most useful of all features of ADO.NET is that one can attach messages to each
row of data in a DataSet object. The SqlDataAdapter class attaches error messages to the rows
of a DataSet if a specific database action has not been successfully completed. Then one can
check whether there are any errors in a DataSet prior to sending the same for updating the
database using the HasErrors property of the DataSet instance. This, if used judiciously, can
prevent an unnecessary database hit. Please refer to the code snippet that follows,
DataSet newDataSet = previousDataSet.GetChanges(DataRowState.Modified);
if (newDataSet.HasErrors)
{
// If there are errors take appropriate action
}
else
{
// Necessary code to update the database
}
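The row-error check does not need a live database. The sketch below (table and column names are illustrative, and the error is attached by hand rather than by a real SqlDataAdapter failure) shows how a row error surfaces through the DataSet's HasErrors property :

```csharp
using System;
using System.Data;

public class RowErrorDemo
{
    public static bool HasPendingErrors()
    {
        // Build a small in-memory table; names are illustrative.
        DataTable table = new DataTable("Author");
        table.Columns.Add("Name", typeof(string));
        table.Rows.Add("Ravi");

        // Simulate a failed update by attaching an error to the row,
        // roughly as SqlDataAdapter does when ContinueUpdateOnError is set.
        table.Rows[0].RowError = "Foreign key violation";

        DataSet ds = new DataSet();
        ds.Tables.Add(table);

        // HasErrors is true when any row in any table carries an error.
        return ds.HasErrors;
    }

    public static void Main()
    {
        Console.WriteLine(HasPendingErrors()); // prints True
    }
}
```

Checking HasErrors before calling Update() is what saves the unnecessary round trip to the database.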
A transaction is a single unit of work which means either ALL or NONE. If a transaction is
successful, all of the data operations are committed (saved) and become a durable part of the
database. If a transaction encounters errors/exceptions and must be canceled or rolled back,
then all of the data modifications/operations need to be removed.
A database transaction takes a database from one consistent state to another. At the end of
the transaction the system must be back in the prior state if the transaction fails, or its state
should reflect the successful completion if the transaction goes through.
16.8.1 Properties of Transaction
Atomicity
A transaction consists of many steps. When all the steps in a transaction get completed, it
will get reflected in DB or if any step fails, all the transactions are rolled back.
Consistency
The database will move from one consistent state to another, if the transaction succeeds
and remains in the original state, or if the transaction fails.
Isolation
Concurrent transactions are isolated from one another. The intermediate state of one
transaction is not visible to other transactions until it is committed.
Durability
Once a transaction is committed, its changes become permanent and survive system failures.
16.8.2 Transaction Commands
The transaction commands apply only to SQL DML (Data Manipulation Language)
statements such as INSERT, UPDATE and DELETE, that is, statements that change the data.
DDL (Data Definition Language) and DCL (Data Control Language) statements, which create
structure and manage SQL security, are not controlled by them.
The transaction commands are as below,
COMMIT
This command is used to save the changes invoked by the transaction.
ROLLBACK
This command is used to undo the changes made by transaction.
SAVEPOINT
With the help of this command one can roll the transaction back to a certain point without
rolling back the entire transaction.
SET TRANSACTION
This command is used to specify characteristics for the transaction. For example, one can
specify a transaction to be read only or read write. It can also be used to set the name of the
transaction.
16.8.3 Transaction with ADO.NET
SqlConnection objConn = new SqlConnection(strConnString);
objConn.Open();
SqlTransaction objTrans = objConn.BeginTransaction();
SqlCommand objCmd1 = new SqlCommand("insert into tblAuthor values(12, 'Ravi')",
objConn, objTrans);
SqlCommand objCmd2 = new SqlCommand("insert into tblTaskMember(MemberID,
TaskID) values(12, 18)", objConn, objTrans);
try
{
objCmd1.ExecuteNonQuery();
objCmd2.ExecuteNonQuery(); // Throws exception due to foreign key constraint
objTrans.Commit();
}
}
catch (Exception)
{
objTrans.Rollback();
}
finally
{
objConn.Close();
}
}
As seen in the above program, first a connection to the SQL database is opened, then an
object of the SqlTransaction class is obtained by calling the connection's BeginTransaction
method, and the commands are enlisted in this transaction. Within the try block two SQL
commands are executed. If no error occurs the transaction is committed; otherwise the catch
block rolls the transaction back. Finally, the database connection is closed.
16.8.4 TransactionScope Class
try
{
using (TransactionScope scope = new TransactionScope())
{
// Transactional work goes here
scope.Complete();
}
}
catch (ThreadAbortException ex)
{
// Handle exception
}
When TransactionScope is used, it puts the code block in transactional mode. The class
cannot be inherited and resides in the System.Transactions namespace.
TransactionScope has 3 main properties : IsolationLevel, Timeout and TransactionScopeOption.
• Isolation Level
It defines the locking mechanism used when reading data in another transaction. Available
options are ReadUncommitted, ReadCommitted, RepeatableRead, Serializable. Default is Serializable.
• Timeout
How much time the transaction object will wait to be completed. The SqlCommand
Timeout is different from the Transaction Timeout : the SqlCommand Timeout defines how
much time the SqlCommand object will wait for a database operation to be completed. The
maximum transaction timeout is 10 minutes. Default is 1 minute.
• TransactionScopeOption
Required It is the default value for TransactionScope. If an ambient transaction
already exists, the scope joins that transaction, otherwise it creates a new one.
RequiresNew When this option is selected a new transaction is always created. This
transaction is independent of its outer transaction.
Suppress When this option is selected, no transaction is created, even if an
ambient transaction already exists.
To set the default values in configuration (timeout values use TimeSpan format;
maxTimeout can only be set in machine.config)
<system.transactions>
<defaultSettings timeout="00:00:30"/>
<machineSettings maxTimeout="00:20:00"/>
</system.transactions>
Example to illustrate use of TransactionScope class
// A TransactionScope object is created and SQL queries are defined to add records to the
// tblAuthor and tblTaskMember tables. If there are no errors, Complete() is called to commit
// the data. If an exception is raised, the transaction rolls back to the previous state.
using (var txscope =new TransactionScope(TransactionScopeOption.RequiresNew))
{
try
{
using (SqlConnection objConn = new SqlConnection(strConnString))
{
objConn.Open();
SqlCommand objCmd1 = new SqlCommand("insert into tblAuthor values(12, 'Ravi')",
objConn);
SqlCommand objCmd2 = new SqlCommand("insert into tblTaskMember(MemberID,
TaskID) values(12, 18)", objConn);
objCmd1.ExecuteNonQuery();
objCmd2.ExecuteNonQuery(); // Throws exception due to foreign key constraint
//The Transaction will be completed
txscope.Complete();
}
}
catch (Exception ex)
{
// Log error; the scope is disposed without Complete() being called,
// so the transaction rolls back.
}
}
Ans. : It is a collection of data tables that contain the data. It is used to fetch data without
interacting with a data source, which is why it is also known as a disconnected data access
method. It is an in-memory data store that can hold more than one table at the same time.
One can use DataRelation object to relate these tables. The DataSet can also be used to
read and write data as XML document.
ADO.NET provides a DataSet class that can be used to create DataSet object. It contains
constructors and methods to perform data related operations.
Q.8 Define ADO.Net model. AU : May-18
Ans. : ADO.NET (ActiveX Data Objects for .NET) is a database access technology
developed by Microsoft as part of its .NET Framework that can access any kind of data
source. It is a set of object-oriented classes that provides a rich set of data components to
create high-performance, reliable and scalable database applications for client-server environments.
Ans. : Stored procedures are so popular and have become so widely used and therefore
expected of Relational Database Management Systems (RDBMS) that even MySQL finally
caved to developer peer pressure and added the ability to utilize stored procedures to their
very popular open source database. The list below details why stored procedures have
gained such a stalwart following among application developers (and even Database
Administrators for that matter) :
Quick response - Since stored procedures are compiled and stored, whenever a
procedure is called the response is quick.
Grouping related code and avoiding duplication - One can group all the required
SQL statements in a procedure and execute them at once. Using procedures one can
avoid repetition of code; moreover, one can use additional SQL functionality such as
calling stored functions. Once a stored procedure is compiled it can be used in any
number of applications. If any changes are needed, one can just change the procedure
without touching the application code.
Maintainability - Because scripts are in one location, updates and tracking of
dependencies based on schema changes becomes easier
Testing - Can be tested independent of the application
Isolation of business rules - Having Stored Procedures in one location means that
there’s no confusion of having business rules spread over potentially disparate code
files in the application
Speed / Optimization - Stored procedures are cached on the server. Execution plans
for the process are easily reviewable without having to run the application
Utilization of set-based processing - The power of SQL is its ability to quickly and
efficiently perform set-based processing on large amounts of data; the coding
equivalent is usually iterative looping, which is generally much slower
Security - Limit direct access to tables via defined roles in the database. Provide an
“interface” to the underlying data structure so that all implementation and even the
data itself is shielded. Securing just the data and the code that accesses it is easier than
applying that security within the application code itself
Q.10 List the advantages of ADO.NET model. AU : Dec.-18
Ans. : The DataAdapter class is needed because it works as a bridge between a DataSet
and a data source to retrieve data. DataAdapter is a class that represents a set of SQL
commands and a database connection. It can be used to fill the DataSet and update the data
source.
The DataAdapter enables to connect to a dataset and specify SQL strings for retrieving
data from or writing data to a DataSet. A dataset represents in–memory cached data. An in
memory object frees developers from the confines of the specifics of database and allows
to deal with the data in memory. The DataAdapter serves as an intermediary between the
database and the DataSet.
The DataAdapter can perform Select, Insert, Update and Delete SQL operations on the
data source. The Insert, Update and Delete operations are carried out in conjunction with
the Select command performed by the DataAdapter. The SelectCommand property of the
DataAdapter is a Command object that retrieves data from the data source.
The InsertCommand, UpdateCommand, and DeleteCommand properties of the
DataAdapter are Command objects that manage updates to the data in the data source
according to modifications made to the data in the DataSet.
Q.1 What are the different methods by which we can populate a Dataset ?
(Refer section 16.5)
Q.2 What are DataProviders ? What are components of data provider ?
(Refer section 16.3)
Q.3 Describe ADO.NET object model in detail. (Refer section 16.4)
Q.4 What are the key events of SqlConnection Class ? (Refer section 16.6)
Q.5 What is meant by 'Transaction' in a database and what are the 'Properties of
Transaction' ? (Refer section 16.8)
Q.6
Concept
1. XML is short for eXtensible Markup Language. It is a very widely used format for
exchanging data, mainly because it is easily readable by both humans and machines. Its
syntax resembles HTML's, but with much stricter rules.
Explanation
2. XML was made for the World Wide Web, but the set of tags is not fixed. They can be
extended, hence the name. One can define one's own tags; the only requirement is to maintain
a structural relationship between them. XML is a simplified subset of SGML (Standard
Generalized Markup Language), an international standard for defining document descriptions.
3. XML was developed back in 1996 by an XML Working Group (known originally as
the SGML Editorial Review Board) formed under the guidance of the World Wide Web
Consortium (W3C) and chaired by Jon Bosak of Sun Microsystems.
4. HTML was designed to display data and specify how that data should look where as XML
was designed to describe and structure data. In this way, an XML file itself doesn’t
actually do anything. It doesn’t say how to display the data or what to do with data, just as
a text file doesn’t.
5. XML is made up of tags, attributes and values and looks something like this,
<users>
<user name="Ravi" age="38" />
<user name="Aviw" age="40" />
</users>
Almost every programming language has built-in functions or classes to deal with it. C# is
definitely one of them, with an entire namespace, the System.Xml namespace, to deal with
pretty much any aspect of XML.
6. The .Net technology widely supports XML file format. The .Net Framework provides the
Classes for read, write, and other operations in XML formatted files . These classes are
stored in the namespaces like System.Xml, System.Xml.Schema,
System.Xml.Serialization, System.Xml.XPath, System.Xml.Xsl etc. The Dataset in
ADO.NET uses XML as its internal storage format.
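Since the DataSet uses XML as its internal storage format, a DataSet can be written out and read back as XML directly with WriteXml and ReadXml. A minimal sketch (the table and column names are illustrative) :

```csharp
using System;
using System.Data;
using System.IO;

public class DataSetXmlDemo
{
    public static string ToXml()
    {
        // Build a tiny in-memory DataSet; names are illustrative.
        DataSet ds = new DataSet("Library");
        DataTable books = ds.Tables.Add("book");
        books.Columns.Add("title", typeof(string));
        books.Rows.Add("C# Programming");

        // WriteXml serializes the whole DataSet as an XML document.
        StringWriter sw = new StringWriter();
        ds.WriteXml(sw);
        return sw.ToString();
    }

    public static void Main()
    {
        string xml = ToXml();
        Console.WriteLine(xml);

        // ReadXml rebuilds an equivalent DataSet from the same XML.
        DataSet copy = new DataSet();
        copy.ReadXml(new StringReader(xml));
        Console.WriteLine(copy.Tables["book"].Rows[0]["title"]); // prints C# Programming
    }
}
```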
7. Example
myBook.xml
<journal>
<book>
<title>C# Programming</title>
<author>Ravi Wani</author>
<chapters>
</chapters>
</book>
</journal>
In the above example the subject of the data is defined in the XML file.
It can be seen clearly that there is a journal containing books, each of which contains some
chapters along with their names.
XML can be less efficient than some other file formats. Still, in many cases, the loss in
efficiency that results from the increased size can be made up by the speed of processing a
well-defined XML file, as parsers (programs that read XML) can predict the structure.
8. Compared to a plain text data format/structure file, the XML file shows clearly what each
piece of information represents and where it belongs in the data hierarchy. This
"data-describing data" is known as metadata, and is a great strength of XML in that one can
create own specifications and structure the data to be interpreted by any other system.
9. XML tags identify the data and are used to store and organize the data, rather than
specifying how to display it like HTML tags, which are used to display the data. XML is
not going to replace HTML in the near future, but it introduces new possibilities by
adopting many successful features of HTML.
10. There are few important characteristics of XML that make it useful in a variety of systems
and solutions:
XML is extensible : XML allows to create own self-descriptive tags, or language, that
suits the application.
XML carries the data, does not present it : XML allows to store the data irrespective of
how it will be presented.
XML is a public standard : XML was developed by an organization called the World
Wide Web Consortium (W3C) and is available as an open standard.
1. XML Declaration
<?xml version="1.0"?>
<note>
<to>RaviW</to>
<from>AviW</from>
<heading>Greetings</heading>
<body>Wish you happy 2017</body>
</note>
The first line in the document is the XML declaration. It should always be included; it
defines the XML version of the document, here the 1.0 specification of XML.
The XML document can optionally have an XML declaration. It is written as below :
<?xml version="1.0" encoding="UTF-8"?>
Where version is the XML version and encoding specifies the character encoding used in
the document.
Rules for XML declaration
The XML declaration is case sensitive and must begin with "<?xml", where "xml" is
written in lower-case.
If the document contains an XML declaration, it strictly needs to be the first statement of
the XML document.
The HTTP protocol can override the value of encoding that is put in the XML declaration.
<?xml version="1.0"?>
The next line defines the first element of the document (the root element)
<note>
The next lines define 4 child elements of the root (to, from, heading, and body):
<to>RaviW</to>
<from>AviW</from>
<heading>Greetings</heading>
<body>Wish you happy 2017</body>
The last line defines the end of the root element
</note>
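The note document walked through above can also be loaded and queried from C# with the XmlDocument class from System.Xml; a minimal sketch :

```csharp
using System;
using System.Xml;

public class NoteDemo
{
    public static string ReadBody()
    {
        // The note document from the text, as an in-memory string.
        string xml = "<?xml version=\"1.0\"?>" +
            "<note><to>RaviW</to><from>AviW</from>" +
            "<heading>Greetings</heading>" +
            "<body>Wish you happy 2017</body></note>";

        XmlDocument doc = new XmlDocument();
        doc.LoadXml(xml);

        // DocumentElement is the root <note>; child elements can be
        // indexed by name.
        XmlElement root = doc.DocumentElement;
        return root["body"].InnerText;
    }

    public static void Main()
    {
        Console.WriteLine(ReadBody()); // prints Wish you happy 2017
    }
}
```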
2. XML Tags and elements
All XML documents must contain a single tag pair to define the root element, and all other
elements must be nested within the root element. All elements can have sub (child) elements.
Sub elements must be in pairs and correctly nested within their parent element.
<root>
<child>
<subchild>
</subchild>
</child>
</root>
For example, following is not a correct XML document, because both the x and y elements
occur at the top level without a root element :
<x>...</x>
<y>...</y>
The following example shows a correctly formed XML document :
<root>
<x>...</x>
<y>...</y>
</root>
- All XML elements must have a closing tag
XML tags are case sensitive. The tag <Note> is different from the tag <note>.
Opening and closing tags must therefore be written with the same case.
<Note>Wrong</note>
<Note>Correct</Note>
3. Attributes
An attribute specifies a single property for the element, using a name/value pair.
An XML-element can have one or more attributes.
Syntax Rules for XML Attributes
Attribute names in XML (unlike HTML) are case sensitive. That is, HREF and href are
considered two different XML attributes.
The same attribute cannot be specified twice on one element. The following example
shows incorrect syntax because the attribute b is specified twice :
<a b="x" c="y" b="z">....</a>
Attribute names are defined without quotation marks, whereas attribute values must
always appear in quotation marks. Following example demonstrates incorrect xml syntax :
<a b=x>....</a>
In the above syntax, the attribute value is not defined in quotation marks.
Attribute values must always be quoted
In XML the attribute value must always be quoted.
Following specification is incorrect as the date attribute is not quoted.
<?xml version="1.0"?>
<note date=12/11/99>
</note>
Following is the correct specification.
<?xml version="1.0"?>
<note date="12/11/99">
</note>
6. XML Namespaces
An XML namespace allows element names to be qualified. XML namespaces use a URI,
which is associated with a prefix for the namespace. A namespace is defined using an xmlns
declaration, followed by the prefix, which is set equal to a URI that uniquely identifies the
namespace :
xmlns:movie="http://www.enjoypoint.com/movies"
By adding this namespace definition as an attribute to a tag, one can use the prefix
movie in that tag, and any tags it contains, to fully qualify elements:
<movie:film-types xmlns:movie="http://www.enjoypoint.com/movies">
<movie:film-type>Drama</movie:film-type>
<movie:film-type>Romance</movie:film-type>
</movie:film-types>
Parsers can now recognise both meanings of "film type" and handle them accordingly.
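In C#, querying such a namespaced document requires an XmlNamespaceManager that maps a prefix to the same URI. A minimal sketch using the film-types example (the local prefix "m" is arbitrary; only the URI has to match the document) :

```csharp
using System;
using System.Xml;

public class NamespaceDemo
{
    public static int CountFilmTypes()
    {
        string xml =
            "<movie:film-types xmlns:movie=\"http://www.enjoypoint.com/movies\">" +
            "<movie:film-type>Drama</movie:film-type>" +
            "<movie:film-type>Romance</movie:film-type>" +
            "</movie:film-types>";

        XmlDocument doc = new XmlDocument();
        doc.LoadXml(xml);

        // The prefix used in the XPath query is local to the manager;
        // only the namespace URI has to match the document.
        XmlNamespaceManager ns = new XmlNamespaceManager(doc.NameTable);
        ns.AddNamespace("m", "http://www.enjoypoint.com/movies");

        return doc.SelectNodes("//m:film-type", ns).Count;
    }

    public static void Main()
    {
        Console.WriteLine(CountFilmTypes()); // prints 2
    }
}
```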
1. XmlTextReader : The XmlTextReader class is just one of the methods of reading XML
files. It approaches an XML file in a similar way to a DataReader, in that the document is
read element by element, making decisions as reading proceeds. It's by far the easiest
class to use to parse an XML file quickly.
2. XmlTextWriter : Similarly, the XmlTextWriter class provides a means of writing XML
files line by line.
3. XmlValidatingReader : XmlValidatingReader is used to validate an XML file against a
schema file.
Reading an XML File in .NET
XmlTextReader is based upon the XmlReader class and is specially designed to read byte
streams, making it suitable for XML files located on disk, on a network, or in a stream.
First create a new instance of the class.
The constructor takes the location of the XML file that it will read.
XmlTextReader reader = new XmlTextReader("books.xml"); // file
// or : new XmlTextReader("http://example.com/books.xml"); // url
while (reader.Read())
{
// parse file
}
The loop continues until the end of the file is reached, or until the loop is explicitly broken.
Each node is inspected to ascertain its type, and its information is gathered. The NodeType
property exposes the type of the node currently being read.
An XmlReader will see the following element as 3 different nodes :
<book>Programming</book>
The <book> part of the element is recognised as an XmlNodeType.Element node.
The text part is recognised as an XmlNodeType.Text node, and the closing tag </book> is
seen as an XmlNodeType.EndElement node.
The code below shows how to output the XML tag through the reader object that is created
earlier :
while (reader.Read())
{
switch (reader.NodeType)
{
case XmlNodeType.Element:
Console.Write("<"+reader.Name+">");
break;
case XmlNodeType.Text:
Console.Write(reader.Value);
break;
case XmlNodeType.EndElement:
Console.Write("</"+reader.Name+">");
break;
}
}
To also output attributes, the Element case of the switch can be extended as below :
case XmlNodeType.Element:
Console.Write("<"+reader.Name);
while (reader.MoveToNextAttribute())
{
Console.Write(" "+reader.Name+"=\""+reader.Value+"\"");
}
Console.Write(">");
break;
Now, if it is fed this XML :
<book first="1" second="2">Programming</book>
The output is,
<book first="1" second="2">Programming</book>
Earlier, the attributes were ignored, and output was,
<book>Programming</book>
Note that, if an element does not contain any attributes, the loop is never started. This
means that there is no need to first check to see whether there are attributes. The number of
attributes on a node can be found using the AttributeCount property.
Writing XML in .NET
Writing XML is done by the XmlTextWriter class. Again, taking a forward-only approach,
one can build XML files from different node types, which are output in order.
To begin, create an instance of the class :
XmlTextWriter writer = new XmlTextWriter("FirstWrite.xml", null);
The second parameter, which, here, is set to null, allows us to specify the encoding format
to use in the XML file. Setting it to null produces a standard UTF-8 encoded XML file
without an encoding attribute on the document element.
For writing elements and attributes, the methods exposed in the XmlTextWriter are used.
Consider example of book,
myBook.xml
<journal>
<book>
<title>C# Programming</title>
<author>Ravi Wani</author>
<chapters>
</chapters>
</book>
</journal>
writer.WriteEndElement();
There is no need to say which element is to be closed, because the writer will automatically
write the closing tag for the last-opened element, which, in this case, is chapters.
Now close the remaining elements in the same fashion,
writer.WriteEndElement();
writer.WriteEndElement();
Finally, "flush" the writer, or, in other words, output the requested information to XML
file, then close the writer to free file and resources,
writer.Flush();
writer.Close();
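Putting the pieces together, the whole myBook.xml document can be produced with the start-element, end-element, Flush and Close calls discussed above (WriteElementString is a shorthand for a start element, its text, and its end element). The sketch below writes to a StringWriter instead of FirstWrite.xml so that it is self-contained :

```csharp
using System;
using System.IO;
using System.Xml;

public class WriteDemo
{
    public static string BuildBookXml()
    {
        StringWriter sw = new StringWriter();
        XmlTextWriter writer = new XmlTextWriter(sw);

        writer.WriteStartElement("journal");              // <journal>
        writer.WriteStartElement("book");                 // <book>
        writer.WriteElementString("title", "C# Programming");
        writer.WriteElementString("author", "Ravi Wani");
        writer.WriteStartElement("chapters");             // <chapters>
        writer.WriteEndElement();                         // </chapters>
        writer.WriteEndElement();                         // </book>
        writer.WriteEndElement();                         // </journal>

        // Flush the buffered output, then free the underlying resources.
        writer.Flush();
        writer.Close();
        return sw.ToString();
    }

    public static void Main()
    {
        Console.WriteLine(BuildBookXml());
    }
}
```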
using System;
using System.Xml.Serialization;
namespace XMLCreationTest
{
public class XMLTest1
{
public String value1;
public String value2;
}
class Program
{
static void Main(string[] args)
{
XMLTest1 xtest = new XMLTest1() { value1 = "Value 1", value2 = "Value 2" };
XmlSerializer xs = new XmlSerializer(xtest.GetType());
xs.Serialize(Console.Out, xtest);
Console.ReadKey();
}
}
}
The actual serialization is done by an instance of the XmlSerializer class, from the
System.Xml.Serialization namespace. The serializer's constructor requires a reference to the
type of object it should work with, which can be obtained by calling the GetType() method of
an instantiated object, or by applying the typeof() operator to the class name.
The Serialize() method takes an object of the defined type, translates that object into XML,
and then writes the information to a defined stream (in this case, the TextWriter object of the
console's output stream).
For the reverse direction, the Deserialize() method of XmlSerializer is used to deserialize
an XML string into an instance of the class, and the example below then prints the fields to
the console. To obtain a suitable stream that can be passed into the XmlSerializer, a
StringReader (from the System.IO namespace) is declared.
using System;
using System.IO;
using System.Xml.Serialization;
namespace XMLCreationTest
{
public class XMLTest2
{
public String value1;
public String value2;
}
class Program
{
static void Main(string[] args)
{
String xData = "<?xml version=\"1.0\" encoding=\"ibm850\"?>" +
"<XMLTest2 xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" " +
"xmlns:xsd=\"http://www.w3.org/2001/XMLSchema\">" +
"<value1>Value 1</value1><value2>Value 2</value2></XMLTest2>";
XmlSerializer xs = new XmlSerializer(typeof(XMLTest2));
XMLTest2 test = (XMLTest2)xs.Deserialize(new StringReader(xData));
Console.WriteLine("V1: " + test.value1);
Console.WriteLine("V2: " + test.value2);
}
}
}
9. Overriding the Serialization of a Class
As mentioned earlier, all public properties and fields of a class are automatically
serializable, and can usually be converted to XML without using any directives or attributes.
Private properties and fields are not serialized by default. To include these, and for more
precise control over how an object is serialized to XML, one can override the entire
serialization process.
10. Controlling the Serialization using Attributes
When serializing data for exchange with other applications, or when working to a
predefined XML schema, it is useful to be able to change the element and attribute names
used during the process.
By default, elements in the XML output are named after the properties or fields that they
are based on.
The root node can be renamed using the XmlRoot attribute, and the names of child nodes
can be changed by using the XmlElement attribute and setting its ElementName.
Multiple properties can be specified for an attribute by separating them with commas
within the parentheses. This usually takes the form [attributename(property1=value1,
property2=value2…)]
[XmlRoot("XTest")]
public class Test
{
[XmlElement(ElementName="V1")]
public String value1;
[XmlElement("V2")]
public String value2;
}
Note that when specifying the element name, the property name can be omitted.
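Serializing an instance of the attributed class shows the renamed nodes in the output. The sketch below reuses the Test class from above (the field values "a" and "b" are illustrative) :

```csharp
using System;
using System.IO;
using System.Xml.Serialization;

[XmlRoot("XTest")]
public class Test
{
    [XmlElement(ElementName = "V1")]
    public String value1;
    [XmlElement("V2")]
    public String value2;
}

public class AttrDemo
{
    public static string Serialize()
    {
        Test t = new Test { value1 = "a", value2 = "b" };
        XmlSerializer xs = new XmlSerializer(typeof(Test));

        // Serialize to a string instead of the console so the
        // result can be inspected.
        StringWriter sw = new StringWriter();
        xs.Serialize(sw, t);
        return sw.ToString();
    }

    public static void Main()
    {
        // The root appears as <XTest> and the fields as <V1>/<V2>.
        Console.WriteLine(Serialize());
    }
}
```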
1. While an XML file might conform to the XML specification, it might not be a valid form
of a particular dialect. An XML schema helps to verify that certain elements are present,
while making sure that the values presented are of the correct type.
2. There are a few different schema specifications : XSD, DTD, and XDR. Though DTD
(Document Type Definition) is the most common schema in use today, XSD (XML
Schema Definition) is a newer standard that is gaining acceptance, as it provides the finest-grained
control for XML validation.
3. XML Schema is commonly known as XML Schema Definition (XSD) which is used to
describe and validate the structure and the content of XML data.
4. XML schema defines the elements, attributes and data types.
5. Schema element supports Namespaces. It is similar to a database schema that describes
the data in a database.
6. The basic idea behind XML Schemas is that they describe the legitimate format that an
XML document can take.
7. Syntax
- Declare a schema in XML document as,
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
Example
- Global Types - With a global type, one can define a single type in the document, which
can be used by all other references.
For example, suppose the author and publication are to be generalized for different
locations of the publication. In such case, define a general type as below :
<xs:complexType name="LocationType">
<xs:sequence>
<xs:element name="author" type="xs:string" />
<xs:element name="publication" type="xs:string" />
</xs:sequence>
</xs:complexType>
- Attributes
Attributes in XSD provide extra information within an element. Attributes have name and
type property as shown below :
<xs:attribute name="x" type="y"/>
Generating XML schema
The XML Schema Definition tool (Xsd.exe) allows to generate an XML schema that
describes a class or to generate the class defined by an XML schema.
8. To generate schema-specific classes
1. Go to the command prompt.
2. Pass the XML Schema as an argument to the XML Schema Definition tool, which
creates a set of classes that are precisely matched to the XML Schema, for example:
3. xsd mySchema.xsd
The tool can only process schemas that reference the World Wide Web Consortium XML
specification of March 16, 2001. In other words, the XML Schema namespace must be
"http://www.w3.org/2001/XMLSchema" as shown in the following example.
<?xml version="1.0" encoding="utf-8"?>
<xs:schema attributeFormDefault="qualified" elementFormDefault="qualified"
targetNamespace="" xmlns:xs="http://www.w3.org/2001/XMLSchema">
4. Modify the classes with methods, properties, or fields, as necessary. For more
information about modifying a class with attributes, see Controlling XML Serialization
Using Attributes and Attributes That Control Encoded SOAP Serialization.
"XSD" is currently the de facto standard for describing XML documents. There are two
versions in use, 1.0 and 1.1, which are on the whole the same. An XSD schema is itself an
XML document; there is even an XSD schema to describe the XSD standard.
It is often useful to examine the schema of the XML stream that is generated when
instances of a class (or classes) are serialized. For example, one might publish the schema for
others to use, or might compare it to a schema with which one is trying to achieve conformity.
9. To generate an XML Schema document from a set of classes
XmlValidatingReader validatingReader;
4. Now, the schema collection is added to the reader's Schemas property so that the
validating reader can use it, and the validation type is set.
validatingReader.Schemas.Add(xsdCollection);
validatingReader.ValidationType = ValidationType.Schema;
5. If an error is found by the validator, an event is fired, which should be caught by creating
an event handler with code to respond to the errors,
validatingReader.ValidationEventHandler +=
new ValidationEventHandler(validationCallBack);
1. If there is one type of XML and it needs to be transformed into another type, XSLT can
be used for that purpose. XML Stylesheet Transformation (XSLT) is defined as a language
for converting XML documents to other document formats. XSLT processors parse the
input XML document as well as the XSLT stylesheet, and then process the instructions
found in the XSLT stylesheet using the elements from the input XML document. During the
processing of the XSLT instructions, a structured XML output is created.
1. Since XML is used for data representation, it cannot be used for displaying formatted
data. To display formatted data, a stylesheet is used.
2. There are two types of stylesheets, Extensible Stylesheet Language (XSL) and
Cascading Style Sheets (CSS). XSL is a superset of CSS. It supports various functions
which are not supported by CSS. For example, CSS cannot be used to sort elements or do
conditional formatting. Also XSL is written in a different syntax from CSS. XSL follows
the syntax of XML.
3. Following are some of the elements of XSL :
Stylesheet : An XSL file has <stylesheet> as the root element. It is written as follows:
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
template : The <template> element is used to specify the formatting rules to be applied
when a particular node is matched. It is written as follows:
<xsl:template match="/">
for-each : The <for-each> element is used to loop through a particular XML element. It is
written as follows:
<xsl:for-each select="employees/emp">
value-of : The <value-of> element is used to display the value of a selected node. If the
node is an attribute, it is prefixed with @.
<xsl:value-of select="@id"/>
3. Static and dynamic transformation
An XML file can be transformed statically or dynamically. For statically linking an XML
file to a stylesheet, the following Processing Instruction can be used :
<?xml-stylesheet type="text/xsl" href="filename.xsl"?>
An XML file can be dynamically linked to a stylesheet by using an instance of the
XslCompiledTransform class.
Example
-The .xml file to be transformed.
Books.xml
<?xml version="1.0" encoding="utf-8" ?>
<Books>
<book>
<name>C#</name>
<cost>Rs 500</cost>
</book>
<book>
<name>ASP</name>
<cost>Rs 550</cost>
</book>
</Books>
-Now, frame an XSLT document for the above XML.
Book.xsl
<?xml version="1.0" encoding="iso-8859-1" ?>
<xsl:stylesheet version="1.0"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:template match="/">
<table border="1">
<tr>
<th>Name</th>
<th>Price</th>
</tr>
<xsl:for-each select="Books/book">
<tr>
<td><xsl:value-of select="name"/></td>
<td><xsl:value-of select="cost"/></td>
</tr>
</xsl:for-each>
</table>
</xsl:template>
</xsl:stylesheet>
The XmlUrlResolver and XslTransform classes are used to perform the transformation.
The XmlUrlResolver class is used to resolve external XML resources such as entities, document
type definitions (DTDs), or schemas. It is also used to process the include and import
elements found in Extensible Stylesheet Language (XSL) stylesheets or XML Schema
Definition language (XSD) schemas.
The transformation code, using the Book.xsl and Books.xml files from the example above,
XmlUrlResolver resolver = new XmlUrlResolver();
resolver.Credentials = System.Net.CredentialCache.DefaultCredentials;
XslTransform xsltrans = new XslTransform();
xsltrans.Load("Book.xsl", resolver);
xsltrans.Transform("Books.xml", "Books.html", resolver);
The Load method is used to load an XSLT document. This method can accept an
XmlReader, a document URL, or a variety of other objects. Then the Transform method is
called. This method is overloaded and can therefore accept a variety of parameters.
The output -
Name Price
C# Rs 500
ASP Rs 550
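The XslTransform class shown above is marked obsolete in later versions of the framework. A minimal sketch of the same transformation with its replacement, XslCompiledTransform, is given below; the helper class and method names (XsltDemo, TransformFile) are illustrative assumptions, and the file paths follow the Books.xml/Book.xsl example above.

```csharp
using System.Net;
using System.Xml;
using System.Xml.Xsl;

static class XsltDemo
{
    // Transforms an XML file with the given stylesheet and writes the result to outputPath.
    public static void TransformFile(string stylesheetPath, string xmlPath, string outputPath)
    {
        // XslCompiledTransform replaces the obsolete XslTransform class.
        var xslt = new XslCompiledTransform();

        // XmlUrlResolver resolves external resources (xsl:include, xsl:import, DTDs).
        var resolver = new XmlUrlResolver();
        resolver.Credentials = CredentialCache.DefaultCredentials;

        // Load the stylesheet, then transform the input document to the output file.
        xslt.Load(stylesheetPath, XsltSettings.Default, resolver);
        xslt.Transform(xmlPath, outputPath);
    }
}
```

Calling XsltDemo.TransformFile("Book.xsl", "Books.xml", "Books.html") produces the same HTML table as the XslTransform code above.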
<LastName>Wani</LastName>
</Name>
<Name>
<FirstName>Prakash</FirstName>
<LastName>Deshmukh</LastName>
</Name>
</Names>
To get all <Name> nodes, use the XPath expression /Names/Name.
The first slash means that the <Names> node must be a root node.
The SelectNodes method returns an XmlNodeList collection which will contain the <Name>
nodes.
To get the value of the sub node <FirstName>, one can simply index the XmlNode with the
node name : xmlNode["FirstName"].InnerText.
Example
XmlDocument xml = new XmlDocument();
xml.LoadXml(myXmlString); // suppose that myXmlString contains "<Names>...</Names>"
XmlNodeList nameNodes = xml.SelectNodes("/Names/Name");
To get only male names (that is, to select all nodes with a specific XML attribute), use the
XPath expression /Names/Name[@type='M'].
The code -
XmlDocument xml = new XmlDocument();
xml.LoadXml(str); // suppose that str string contains "<Names>...</Names>"
foreach (XmlNode node in xml.SelectNodes("/Names/Name[@type='M']"))
Console.WriteLine(node["FirstName"].InnerText);
Output
Ravi
Shyam
Avi
Lucky
Sandy
There are two approaches to work with XML and ADO.NET : one, use ADO.NET to
access XML documents, and two, use XML and ADO.NET together to access data. A
relational database can also be accessed using ADO.NET and the XML classes of .NET.
Reading XML using DataSet
The DataSet class can be used to access data. This class implements methods and
properties to work with XML documents.
- The ReadXmlSchema Method
The ReadXmlSchema method reads an XML schema into a DataSet object. The method has
four overloaded forms, which accept a TextReader, a string, a Stream, or an XmlReader.
Example
//illustrates how to use a file as direct input and call the ReadXmlSchema method to read
//the file
Example
//illustrates reading the file with an XmlReader, using XmlTextReader as the input of
//ReadXmlSchema
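The two example stubs above can be sketched as follows. This is a hedged illustration of the overloads just described, not the book's original listing; the class name DataSetXmlDemo and the schema file name are assumptions.

```csharp
using System.Data;
using System.Xml;

static class DataSetXmlDemo
{
    // Overload 1: a file path (string) as direct input to ReadXmlSchema.
    public static DataSet LoadSchema(string schemaPath)
    {
        var ds = new DataSet();
        ds.ReadXmlSchema(schemaPath);
        return ds;
    }

    // Overload 2: an XmlReader (here an XmlTextReader) as the input of ReadXmlSchema.
    public static DataSet LoadSchemaViaReader(string schemaPath)
    {
        var ds = new DataSet();
        using (var reader = new XmlTextReader(schemaPath))
        {
            ds.ReadXmlSchema(reader);
        }
        return ds;
    }
}
```

Each call reads only the schema (table and column definitions), not the data rows; ReadXml is used for the data itself.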
An XML parser is a software library or package that provides interfaces for client
applications to work with an XML document. The XML parser is designed to read the
XML and create a way for programs to use XML. An XML parser validates the document and
checks that the document is well formed. XML parsers are of two main types :
DOM parsers and SAX parsers.
17.9.1 The DOM Approach
A DOM document is an object which contains all the information of an XML document.
It is composed like a tree structure. The DOM Parser implements a DOM API. This API is
very simple to use.
17.9.1.1 Features of DOM Parser
A DOM Parser creates an internal structure in memory which is a DOM document object
and the client applications get information of the original XML document by invoking
methods on this document object.
DOM Parser has a tree based structure.
Advantages
1) It supports both read and write operations and the API is very simple to use.
2) It is preferred when random access to widely separated parts of a document is required.
Disadvantages
1) It is memory inefficient (it consumes more memory because the whole XML document
needs to be loaded into memory).
2) It is comparatively slower than other parsers.
Manipulating the contents of an XML document includes traversing the list of nodes
in the document, setting and querying attribute values, and manipulating the tree
itself by creating and inserting new nodes. This section discusses XML documents
manipulation using the Document Object Model (DOM) modeled by the
XmlDocument class in the System.Xml namespace. The DOM is recursive, meaning
that each node has the same properties and methods as every other node in the
document.
17.9.1.2 Important Basic XML Related Concepts
3. XSLT - is designed for use as part of XSL, transforming an XML document into
another XML document, or another type of document that is recognized by a
browser, like HTML or XHTML. XSLT uses XPath.
4. XPath - is a set of syntax rules for defining parts of an XML document.
17.9.1.3 XMLNode Class
XMLNode class is used to manipulate nodes in DOM. The XML node is the most basic
unit of abstraction within a DOM-modeled document.
Commonly Used XmlNode Properties
Attributes - The attributes of the node (XmlAttributeCollection).
ChildNodes - The list of child nodes of the current node (XmlNodeList).
FirstChild - Returns the first child of the XML node (first being first in document
order).
HasChildNodes - A Boolean that indicates whether the node has child nodes.
InnerText - Gets or sets the text inside the node.
InnerXml - Gets or sets the XML within the node.
LastChild - Returns the last child (document order relative) of the node.
Name - The name of the node.
NodeType - Indicates the type of the node. This can be several things, including (but
not limited to): Document, DocumentFragment, Element, EndElement, Entity,
Notation, Text, Whitespace, or XmlDeclaration.
OuterXml - The XML representing the current node and all its child nodes.
OwnerDocument - The document to which the current node belongs.
ParentNode - The parent node of the current node, if any.
PreviousSibling - Gets the node immediately preceding the current node in document
order.
Value - Gets or sets the value of the current node.
Commonly Used XmlNode Methods
AppendChild - Adds a child node to the end of the current list of child nodes.
Clone - Creates a duplicate of the node.
CreateNavigator - Creates an XPathNavigator for this node.
InsertAfter - Inserts the given node immediately after the current node.
InsertBefore - Inserts the given node immediately before the current node.
PrependChild - Adds the given child node at the beginning of the child node list.
The XmlDocument class, which deals with the entire document, is itself also an XmlNode :
the XmlDocument class inherits from XmlNode. This fits with the DOM pattern in that the
document is a node that can have child nodes. The methods in the following list show
some of the additional methods available on XmlDocument that are not part of a
standard node class.
XmlDocument Class Methods
CreateAttribute - Creates an XmlAttribute with the given name.
CreateCDataSection - Creates a CData section with the given data.
CreateComment - Creates an XmlComment.
CreateDocumentFragment - Creates a document fragment.
CreateElement - Creates an XmlElement, an XmlNode with element-specific
functionality.
CreateNode - Creates an XmlNode.
CreateTextNode - Creates an XmlText node.
CreateWhitespace - Creates whitespace for insertion into the document.
ImportNode - Imports a node from another document.
Load - Loads the XML from the given file.
LoadXml - Loads the XML from the given XML string.
Save - Saves the XML document.
Validate - Validates the XML document against a schema.
Program 1
//Add node/xml value to existing XML
string tempXml = @"<States>
<State ID='1' Name='Punjab' />
<State ID='2' Name='Rajasthan' />
<State ID='3' Name='Gujrat' />
<State ID='4' Name='Maharashtra' />
<State ID='5' Name='Tamilnadu' />
</States>";
Program 2
// to illustrate different ways to add a node, using the XmlDocument and XDocument
// classes.
Using XmlDocument
// Option1: Using InsertAfter()
// Adding Node to XML
XmlDocument docstate1 = new XmlDocument();
docstate1.LoadXml(tempXml);
XmlNode root1 = docstate1.DocumentElement;
//Create a new element with an ID attribute.
XmlElement elem = docstate1.CreateElement("State");
XmlAttribute attr = docstate1.CreateAttribute("ID");
attr.Value = "6";
elem.Attributes.Append(attr);
//Create a second attribute.
XmlAttribute attr2 = docstate1.CreateAttribute("Name");
attr2.Value = "Karnataka";
elem.Attributes.Append(attr2);
//Add the node to the document.
root1.InsertAfter(elem, root1.LastChild);
docstate1.Save(Console.Out);
Console.WriteLine();
Program 3
//Edit/Update XML data - to change/update XML node value
Using XDocument
// Option1: Using SetAttributeValue()
XDocument xmlDoc = XDocument.Parse(tempXml);
// Update Element value
var items = from item in xmlDoc.Descendants("State")
where item.Attribute("ID").Value == "3"
select item;
foreach (XElement itemElement in items)
{
itemElement.SetAttributeValue("Name", "NewGujrat");
}
xmlDoc.Save(Console.Out);
Console.WriteLine();
Program 5
Select node value from XML
When XML data is used, data is fetched based on the node value. Suppose it is required to
get the state name whose ID is 2. For this, the XmlDocument class or
XDocument (System.Xml.Linq namespace) can be used.
XmlDocument xmldoc = new XmlDocument();
xmldoc.LoadXml(tempXml);
int nodeId = 2;
XmlNode nodeObj = xmldoc.SelectSingleNode("/States/State[@ID=" + nodeId + "]");
//string id = nodeObj["State"].InnerText; // For inner text
string pName = nodeObj.Attributes["Name"].Value;
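For comparison, a minimal LINQ to XML version of the same lookup is sketched below. The FindStateName helper name is an illustrative assumption, not part of the original program; it queries the same <States>/<State> structure by the ID attribute.

```csharp
using System.Linq;
using System.Xml.Linq;

static class StateLookup
{
    // Returns the Name attribute of the State element whose ID matches,
    // or null when no such element exists.
    public static string FindStateName(string xml, int id)
    {
        XDocument doc = XDocument.Parse(xml);
        XElement state = doc.Descendants("State")
                            .FirstOrDefault(e => (int)e.Attribute("ID") == id);
        return state == null ? null : (string)state.Attribute("Name");
    }
}
```

FindStateName(tempXml, 2) returns the same value as the SelectSingleNode code above.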
The XmlDocument and XDocument classes are widely used to manipulate XML data. However,
there are some differences between them, as listed below,
XDocument (System.Xml.Linq) is from the LINQ to XML API, while
XmlDocument (System.Xml.XmlDocument) is the standard DOM-style API for XML.
If .NET version 3.0 or lower is used, then XmlDocument, the classic DOM
API, is to be used. While using .NET version 3.5 onwards, XDocument should be
used.
Performance wise, XDocument is faster than XmlDocument because XDocument was
newly developed for better usability with LINQ. It is much simpler to
create documents and process them with XDocument.
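To illustrate the usability point, a small sketch of building the States document with XDocument's functional construction is shown below; the XDocDemo class and BuildStates method are assumed names, and the data mirrors the tempXml string used earlier.

```csharp
using System.Xml.Linq;

static class XDocDemo
{
    // Functional construction: the whole tree is built in one nested expression,
    // instead of the CreateElement/CreateAttribute/AppendChild calls XmlDocument needs.
    public static XDocument BuildStates()
    {
        return new XDocument(
            new XElement("States",
                new XElement("State", new XAttribute("ID", 1), new XAttribute("Name", "Punjab")),
                new XElement("State", new XAttribute("ID", 2), new XAttribute("Name", "Rajasthan"))));
    }
}
```

Compare this with Program 2 above, where the equivalent XmlDocument code needs a separate statement for each element and attribute.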
SAX (Simple API for XML) is a serial access parser API for XML. SAX provides a
mechanism for reading data from an XML document. It is a popular alternative to the DOM.
SAX parser for XML processing is a parser which implements SAX (SAX parser) functions as
a stream parser, with an event-driven API. The user defines a number of callback methods
that will be called when events occur during parsing.
The SAX events include,
a) XML text nodes.
b) XML element nodes.
c) XML processing instructions.
d) XML comments.
A SAX parser implements the SAX API. This API is event based and less intuitive.
17.9.2.1 Features of SAX Parser
SAX (Simple API for XML) is actually a simple extension of the text file reader.
Writing is simple : the elements and attributes are written in the same order as they are
present in the file (the tree structure is ignored in this approach). .NET provides the
XmlWriter class to carry out writing into XML, even though it is a text file. It works around
elements or, more accurately, nodes. This class is in the System.Xml namespace.
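A minimal sketch of this forward-only, element-by-element writing is given below. The UserXmlWriter class name is an illustrative assumption; the element layout follows the users.xml file used in the reading example later in this section.

```csharp
using System.Xml;

static class UserXmlWriter
{
    // Writes a users.xml file node by node, in document order,
    // using the forward-only XmlWriter.
    public static void WriteUsers(string path)
    {
        var settings = new XmlWriterSettings { Indent = true };
        using (XmlWriter xw = XmlWriter.Create(path, settings))
        {
            xw.WriteStartDocument();
            xw.WriteStartElement("users");

            xw.WriteStartElement("user");
            xw.WriteAttributeString("age", "42");
            xw.WriteElementString("name", "Ravi");
            xw.WriteElementString("registered", "3/2/2020");
            xw.WriteEndElement(); // </user>

            xw.WriteEndElement(); // </users>
            xw.WriteEndDocument();
        }
    }
}
```

Note that the writer never holds a tree in memory; each Write call emits output immediately, which is the SAX-style contrast to the DOM approach.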
17.9.2.3 XmlTextWriter Class
Example
//the class file for writing into an XML file
using System.Xml;
class User
{
public string Name { get; private set; }
public int Age { get; private set; }
public DateTime Registered { get; private set; }
}
Reading is performed just like writing. The XML, as a text file, is read line by line, from top to
bottom. SAX gives what are known as nodes (XmlNode) which it obtains while reading. A node
can be an element, an attribute, or a value. The nodes are received in a loop in the same order
that they are written in the file. The .NET framework provides the XmlReader class to
read XML files. This class is in the System.Xml namespace.
Example
//Reading XML file
//Below is XML file to be parsed, users.xml
<?xml version="1.0" encoding="utf-8"?>
<users>
<user age="42">
<name>Ravi </name>
<registered>3/2/2020</registered>
</user>
<user age="41">
<name>Raj </name>
<registered>2/1/2020</registered>
</user>
<user age="42">
<name>Ram </name>
<registered>1/2/2020</registered>
</user>
</users>
The User.cs class and the parsing loop are as below,
class User
{
public string Name { get; private set; }
public int Age { get; private set; }
public DateTime Registered { get; private set; }
public User(string name, int age, DateTime registered)
{ Name = name; Age = age; Registered = registered; }
}
// the parsing loop
List<User> users = new List<User>();
using (XmlReader xr = XmlReader.Create("users.xml"))
{
string name = null, element = null;
int age = 0;
DateTime registered = DateTime.MinValue;
while (xr.Read())
{
// reads an opening element and its age attribute
if (xr.NodeType == XmlNodeType.Element)
{
element = xr.Name;
if (xr.Name == "user") age = int.Parse(xr.GetAttribute("age"));
}
// reads the text value of the current element
else if (xr.NodeType == XmlNodeType.Text)
{
switch (element)
{
case "name":
name = xr.Value;
break;
case "registered":
registered = DateTime.Parse(xr.Value);
break;
}
}
// reads the closing element
else if ((xr.NodeType == XmlNodeType.EndElement) && (xr.Name == "user"))
users.Add(new User(name, age, registered));
}
}
3. DOM : Stores the entire document in memory before processing; occupies more
memory. The entire XML tree must be in memory before the parser begins to parse.
SAX : Does not store the XML in memory. It is not compulsory for the entire XML tree
to be loaded before the parser begins to parse; it can work with the maximum depth of
the given XML tree.
4. DOM : Traversing can be done in any direction.
SAX : Traversing can be done top to bottom only.
5. DOM : There are formal specifications for the DOM.
SAX : There are no formal specifications for SAX.
6. DOM : Streamed reading from the disk is impossible.
SAX : Streamed reading from the disk is possible.
7. DOM : Fewer problems during XML validation.
SAX : More problems during XML validation.
P2P stands for Peer-To-Peer. A P2P network is created when two or more computers are
connected and share resources without going through a separate server computer. All
computers in a P2P network have equal privileges. In a P2P network, a computer is
both a server and a client. There is no need for central coordination. A computer in a P2P
network is usually called a node.
In a P2P network, clients are referred to as peers. The word "client" makes little sense in
a P2P network because there is not necessarily a 'server' to be a client of.
Groups of peers that are connected to each other are known by the interchangeable
terms meshes, clouds, or graphs. A given group can be said to be well-connected if,
1. There is a connection path between every pair of peers, so that every peer can
connect to any other peer as required.
2. There are a relatively small number of connections to traverse between any pair of
peers.
3. Removing a peer will not prevent other peers from connecting to each other.
It should be noted that this does not mean every peer must be able to connect to every
other peer. In fact, if a network is analyzed mathematically, it will be found that
peers need to connect only to a relatively small number of other peers in order for these
conditions to be met.
Main challenge in P2P network is discovering the potential clients in the network and
locating chunks of the file that other clients might have. Another challenge is finding
optimal communication between clients that may be separated by entire continents.
Every client participating in a P2P network application must be able to perform the
following operations to overcome these problems,
1. It must be able to discover other clients.
2. It must be able to connect to other clients.
3. It must be able to communicate with other clients.
The discovery problem has two obvious solutions. One is to keep the list of the clients
on the server so that all the clients can obtain this list and contact other clients (known
as peers). Another way is to maintain a PNRP (Peer Name Resolution Protocol)
infrastructure so as to discover the clients in the network. Most file sharing systems use
the "list on a server" solution, by using servers known as trackers. Also, in file sharing
systems any client may act as a server, declaring that it has a file available and
registering it with a tracker. In fact, a pure P2P network needs no server at all; peers
alone are sufficient for the network to work.
The connection problem is a more critical one, and concerns the overall structure of the
networks used by a P2P application. If there is one group of clients, all of which can
communicate with one another, the topology of the connections between these clients
can become extremely complex. The performance can be improved by having more than
one group of clients, each of which consists of connections between clients in that
group, but not to clients in other groups, a kind of hierarchical network. Locale-based
groups can be built to get an additional performance boost, because clients can
communicate with each other with fewer hops between networked computers.
Communication is perhaps a problem of lesser importance, because communication
protocols such as TCPlIP are well established and can be reused here. There is,
however, scope for improvement in both high-level technologies (for example, WCF
services can be used and therefore all the functionality that WCF offers) and low-level
protocols (such as multicast protocols to send data to multiple endpoints
simultaneously).
Discovery, connection, and communication are central to any P2P implementation.
Apart from the above concerns, a P2P network should also be aware of flooding.
Flooding is the way in which a single piece of data may be propagated through a
network to all peers, or in which other nodes in a network are queried to locate a specific
piece of data. In unstructured P2P networks this is a fairly random process of contacting nearest
neighbor peers, which in turn contact their nearest neighbors, and so on until every peer
in the network is contacted. It is also possible to create structured P2P networks such
that there are well-defined pathways for queries and data flow among peers.
It is resilient. If one computer is down in the network, other computers can continue to
work and communicate. There is no single point of failure.
It is efficient. Because any computer on the network is both a client and a server, a
computer can get data from the closest peer computer.
A network of peers is easily scaled and more reliable than a single server. A single
server is subject to a single point of failure or can be a bottleneck in times of high
network utilization.
P2P technology can be used to prevent a Web server from crashing or flooding due to
servicing client requests. Instead of sending the requested files directly from the server to all the
clients, files can be sent to just a few clients. The remaining clients then download the
files from the clients that already have received the files, a few more clients download
from those second-level clients, and so on. In fact, this process is made even faster by
splitting the file into chunks and dividing these chunks between clients, some of whom
download it directly from the server, and some of whom download chunks from other
clients. This is how file-sharing technologies work.
PNRP uses multiple clouds, in which a cloud is a grouping of computers that are able to
find each other. PNRP provides two clouds,
The global cloud corresponds to the global IPv6 address scope and global addresses and
represents all the computers on the entire IPv6 Internet. There is only a single global
cloud.
The link-local cloud corresponds to the link-local IPv6 address scope and link-local
addresses. A link-local cloud is for a specific link, which is typically the same as the
locally attached subnet. There can be multiple link-local clouds.
A third cloud, the site-specific cloud, corresponds to the site IPv6 address scope and site-
local addresses. This cloud has been deprecated, although it is still supported in PNRP.
18.6.3 PNRP Names and IDs
As the first letter in PNRP suggests, every node must register a Peer Name on the
network. Peer names are fixed names for resources such as computers, users, groups, or
services. This is similar to today's DNS except, instead of just IP addresses, the
resources can be more granular (specific).
A peer name is an endpoint for communication, which can be a computer, a user, a
group, a service, or anything else that is required to resolve to an IPv6 address. Peer
names can be registered as unsecured or secured. Unsecured names are just text strings
that are subject to spoofing, as anyone can register a duplicate unsecured name.
Unsecured names are best used in private or otherwise protected networks. Secured
names are protected with a certificate and a digital signature. Only the original publisher
will be able to prove ownership of a secured name.
18 - 6 C# and .NET Programming
Peer to Peer Networking, PNRP, Building P2P Applications
A Peer Name is a case-sensitive text string that has the format "Authority.Classifier".
The value of Authority depends on whether the name is secured or unsecured. The value
is always 0 for an unsecured Authority. The value of Classifier is a text string name
given for the resource and cannot contain spaces. The following list shows some
examples of peer names,
o 0.techtest
o 0.tech.peername
o 8412c005a63ec1964b7d8f2cabebd4916ae7f33d.demotest
PNRP uses peer names to identify resources in a peer network. The key here is "Peer
Network". This is not the whole IPv6 network that the computer is connected to, it is
limited to just the resources available within a Cloud. Registering any resource not
managed by the Peer-to-Peer networking APIs will either result in an error, or the name
will not be resolved later.
PNRP IDs are 256 bits long and are composed of the following,
The high-order 128 bits, known as the peer-to-peer (P2P) ID, are a hash of a peer name
assigned to the endpoint.
The peer name of an endpoint has the following format: Authority.Classifier. For
secured names, Authority is the Secure Hash Algorithm 1 (SHA1) hash of the public
key of the peer name in hexadecimal characters. For unsecured names, the Authority is
the single character "0". Classifier is a string that identifies the application and can be
any Unicode string up to 150 characters long.
The low-order 128 bits are used for the Service Location, which is a generated number
that identifies different instances of the same P2P ID in the same cloud. The 256-bit
combination of P2P ID and Service Location allows multiple PNRP IDs to be registered
from a single computer.
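The 40-hexadecimal-character authority in the secured peer name example above is a SHA1 digest. A simplified, illustrative sketch of forming an "Authority.Classifier" style string is given below; note this is only an illustration of the format - real PNRP hashes the peer's public key, while here an arbitrary string stands in for it, and the PeerNameDemo class is an assumed name.

```csharp
using System.Security.Cryptography;
using System.Text;

static class PeerNameDemo
{
    // Illustrative only: builds an "Authority.Classifier" style string where the
    // authority is a 40-hex-character SHA1 digest of the given stand-in input.
    public static string MakeSecuredStyleName(string publicKeyStandIn, string classifier)
    {
        using (SHA1 sha = SHA1.Create())
        {
            byte[] hash = sha.ComputeHash(Encoding.UTF8.GetBytes(publicKeyStandIn));
            var sb = new StringBuilder();
            foreach (byte b in hash)
                sb.Append(b.ToString("x2")); // 20 bytes -> 40 hex characters
            return sb.ToString() + "." + classifier;
        }
    }
}
```

The result has the same shape as the "8412c005...demotest" example in the list above: a fixed-length hexadecimal authority, a dot, and a classifier with no spaces.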
For each cloud, each peer node manages a cache of PNRP IDs that includes both its own
registered PNRP IDs and the entries cached over time. The entire set of PNRP IDs
located on all the peer nodes in a cloud comprises a distributed hash table. It is possible
to have entries for a given PNRP ID located on multiple peers. Each entry in the PNRP
cache contains the PNRP ID, a Certified Peer Address (CPA), and the IPv6 address of
the publishing node. The CPA is a self-signed certificate that provides authentication
protection for the PNRP ID and contains application endpoint information such as
addresses, protocol numbers, and port numbers.
Therefore, the name resolution process for PNRP consists of resolving a PNRP ID to a
CPA. After the CPA is obtained, communication with desired endpoints can begin.
There are two different methods of performing PNRP name resolution based on the
version of PNRP. PNRP version 1 was included in Windows XP Service Pack 2 (SP2),
Windows XP Professional x64 Edition, and Windows XP with Service Pack 1 (SP1) and
the Advanced Networking Pack for Windows XP. It used a recursive system for name
resolution, and is not discussed here. PNRP version 2 was redesigned for
Windows Vista and Windows XP Service Pack 3 to reduce network bandwidth by using
an iterative approach to name resolution. The two versions of PNRP are not compatible.
18.6.4 PNRP Name Resolution - Searching for a Peer Name
PNRP name resolution uses the following two phases, as described below :
Endpoint determination : In this phase, a peer that is attempting to resolve the PNRP
ID of a service on a peer computer first determines the IPv6 address of the peer that
published the PNRP ID of the PNRP service running on that peer.
PNRP ID resolution : After locating and confirming the availability of the peer with
the PNRP ID corresponding to the PNRP service of the desired endpoint, the requesting
peer sends a PNRP Request message to that peer for the PNRP ID of the desired service.
The endpoint sends a reply confirming the PNRP ID of the requested service, a
comment, and up to 4 kilobytes of additional information that the requesting peer can
use for future communication. For example, if the desired endpoint is a gaming server,
the additional data can contain information about the game, the level of play, and the
current number of players.
During endpoint determination, PNRP uses an iterative process for locating the node
that published the PNRP ID, in which the node performing the resolution is responsible
for contacting nodes that are successively closer to the target PNRP ID.
To perform name resolution in PNRP, the peer examines the entries in its own cache for
an entry that matches the target PNRP ID. If found, the requesting peer sends a PNRP
Request message to the peer to be connected, and waits for a response. If an entry for
the PNRP ID is not found, the peer sends a PNRP Request message to the peer that
corresponds to the entry that has a PNRP ID that most closely matches the target PNRP
ID. The node that receives the PNRP Request message examines its own cache and
does the following :
If the PNRP ID is found, the requested peer replies directly to the requesting peer.
If the PNRP ID is not found and a PNRP ID in the cache is closer to the target PNRP
ID, the requested peer sends a response to the requesting peer containing the IPv6
address of the peer that corresponds to the entry that has a PNRP ID that most closely
matches the target PNRP ID. From the IP address in the response, the requesting node
sends another query to the IPv6 address referred to by the first node.
If the PNRP ID is not found and there is no PNRP ID in its cache that is closer to the
target PNRP ID, the requested peer sends the requesting peer a response that indicates
this condition. The requesting peer then chooses the next-closest PNRP ID.
The requesting peer continues this process with successive iterations, eventually
locating the node that registered the PNRP ID.
In PNRP, to resolve a Peer Name, the Peer Name, the search criteria, an optional cloud
name (Global by default), and an optional IP address hint are to be provided. Typically, a
lookup is used to determine if a Peer Name already exists or to contact it directly. The
following search criteria options are supported,
Default - The same behaviour as NonCurrentProcessPeerName.
AnyPeerName - The matching peer name can be registered locally or remotely.
NearestNonCurrentProcessName - The matching peer name can be registered locally or
remotely, but the resolve request excludes any peer name registered by the process
making the resolve request and looks for the service closest to the local IP address.
NearestPeerName - The matching peer name can be registered locally or remotely, but
the resolve request looks for the service closest to the local IP address.
NearestRemotePeerName - The resolve request excludes any peer name registered
locally on this computer and looks for the service closest to the local IP address.
NonCurrentProcessPeerName - The matching peer name can be registered locally or
remotely, but the resolve request excludes any peer name registered by the process
making the resolve request.
RemotePeerName - The resolve request excludes any peer name registered locally on
this computer.
Again, a series of Windows Socket calls are used to synchronously begin a lookup.
Call WSALookupServiceBegin to begin the enumeration and return a handle.
Call WSALookupServiceNext to resolve the peer name.
Call WSALookupServiceEnd to complete the enumeration.
DNS Name Corresponding to a Peer Name
Since PNRP is a serverless DNS, it makes sense to be able to lookup the DNS name
associated with a Peer Name. It is also useful to determine the Peer Name when
provided with a DNS name. In Windows Vista, two additional Peer-to-Peer APIs are
provided to do just this.
An Endpoint Determination Scenario in a peer-to-peer network for PNRP
In a P2P network, Peer Machine 3 has entries for its own PNRP ID (1500) and the
PNRP ID of 2500 and 3000. The set of peer nodes is shown in Fig. 18.6.1. An arrow
from one node to another means that the node from which the arrow originates has an
entry in its cache for the node to which the arrow is pointing.
When Peer Machine 3 wants to determine the endpoint for the PNRP ID of 6000, the
following process occurs,
1. Because 3000 is numerically closer to 6000, Peer Machine 3 sends a PNRP Request
message to the node that registered the PNRP ID of 3000 (Peer Machine 4).
2. Peer Machine 4 does not have an entry for the PNRP ID of 6000 and does not have
any entries that are closer to 6000. Peer Machine 4 sends a response back to Peer
Machine 3 indicating that it could not find an entry closer to 6000.
3. Because 2500 is the next numerically closer PNRP ID to 6000, Peer Machine 3 sends
a PNRP Request message to the node that registered the PNRP ID of 2500 (Peer
Machine 1).
4. Because Peer Machine 1 has an entry in its cache for the PNRP ID of 6000, it sends
the IPv6 address of Peer Machine 2 to Peer Machine 3.
5. Peer Machine 3 sends a PNRP Request to Peer Machine 2.
6. Peer Machine 2 sends a positive name resolution response back to Peer Machine 3.
After endpoint determination, Peer Machine 3 sends a PNRP name resolution request
for the service with which Peer Machine 3 wants to establish communication to Peer
Machine 2. Peer Machine 2 sends back a response that contains optional data about the
service for the requesting application.
To register on the network an unsecured Peer Name, a valid unsecured Peer Name and
an IP address is to be provided. Optionally, the Cloud name (Global by default) can be
mentioned and an additional comment or description associated with the resource can be
supplied. This information is stored in the WSAQUERYSET data structure and passed
to the WSASetService with the Register option.
To unregister on the network an unsecured Peer Name, a valid unsecured Peer Name
and optionally the Cloud name (Global by default) is required to be provided. This
information is stored in the WSAQUERYSET data structure and passed to the
WSASetService with the Delete option.
PNRP Name Publication
To publish a new PNRP ID, a peer performs the following :
o Sends PNRP publication messages to its cache neighbors (the peers that have
registered PNRP IDs in the lowest level of the cache) to seed their caches.
o Chooses random nodes in the cloud that are not its neighbors and sends them
PNRP name resolution requests for its own P2P ID. The resulting endpoint
determination process seeds the caches of random nodes in the cloud with the
PNRP ID of the publishing peer.
PNRP version 2 nodes do not publish PNRP IDs if they are only resolving other P2P
IDs. The HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows
NT\CurrentVersion\PeerNet\PNRP\IPV6-Global\SearchOnly=1 registry value
(REG_DWORD type) allows you to specify that peers only use PNRP for name
resolution, never for name publication. This registry value can also be configured
through Group Policy.
18.6.6 Scaling Peer Name Resolution with a Multi-Level Cache
To keep the sizes of the PNRP caches small, peer nodes use a multi-level cache, in
which each level contains a maximum number of entries. Each level in the cache
represents a one-tenth smaller portion of the PNRP ID number space (2^256). The
lowest level in the cache contains a locally registered PNRP ID and other PNRP IDs
that are numerically close to it. As a level of the cache is filled with a maximum of 20
entries, a new lower level is created.
The maximum number of levels in the cache is on the order of log10 (Total number of
PNRP IDs in the cloud). For example, for a global cloud with 100 million PNRP IDs,
there are no more than 8 (=log10(100,000,000)) levels in the cache and a similar
number of hops to resolve a PNRP ID during name resolution. This mechanism allows
for a distributed hash table for which an arbitrary PNRP ID can be resolved by
forwarding PNRP Request messages to the next-closest peer until the peer with the
corresponding CPA is found. The result of this multi-level caching scheme is that
each peer does not have to store a large amount of cache entries. Even for a large
number of PNRP IDs, the local storage and network traffic to resolve an arbitrary
PNRP ID is not substantial.
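The estimate above can be sketched in code (the class and method names are illustrative, not part of PNRP):

```csharp
using System;

class PnrpCacheMath
{
    // Estimated maximum number of cache levels (and resolution hops):
    // each level covers one tenth of the previous level's ID space,
    // so the depth grows as log10 of the number of IDs in the cloud.
    public static int MaxCacheLevels(double totalPnrpIds)
    {
        return (int)Math.Ceiling(Math.Log10(totalPnrpIds));
    }

    static void Main()
    {
        // A global cloud with 100 million PNRP IDs needs at most
        // log10(100,000,000) = 8 levels.
        Console.WriteLine(MaxCacheLevels(100_000_000));  // prints 8
    }
}
```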
To ensure that resolution can complete, each time a node adds an entry to the lowest
level of its cache, it floods a copy of the entry to all the nodes within the last level of
the cache. The cache entries are refreshed over time. Cache entries that are stale are
removed from the cache. The result is that the distributed hash table of PNRP IDs is
based on active endpoints, unlike DNS in which address records and the DNS
protocol provide no guarantee that the node associated with the address is actively on
the network.
PNRP Cache Initialization
To initialize the PNRP cache when a peer node starts up, the node can use the following
methods,
1. Persistent cache entries - Previous cache entries that were present when the node
was shut down are loaded from hard disk storage.
2. PNRP seed nodes - PNRP allows administrators to specify the addresses or DNS
names of PNRP seed nodes that contain CPAs for current participants in the cloud.
3. Simple Service Discovery Protocol - PNRP nodes are required to register
themselves using the Universal Plug-and-Play (UPnP) Simple Service Discovery
Protocol (SSDP). A node joining a cloud can use an SSDP Msearch message to
locate nearby SSDP nodes.
1. Chat Server
namespace ChatServer
{
    public partial class ChatServer : Form
    {
        private CustomPeerResolverService cprs;
        private ServiceHost host;

        public ChatServer()
        {
            InitializeComponent();
            btnStop.Enabled = false;
        }

        // Start button handler (the body is partly elided in the original
        // listing; opening of the resolver service and host is reconstructed)
        private void btnStart_Click(object sender, EventArgs e)
        {
            cprs = new CustomPeerResolverService();
            host = new ServiceHost(cprs);
            cprs.Open();
            host.Open();
            btnStart.Enabled = false;
            btnStop.Enabled = true;
        }
    }
}
2. The Server's Config file
The config file plays the most important role of specifying the required details as shown
below,
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <system.serviceModel>
    <services>
      <service name="System.ServiceModel.PeerResolvers.CustomPeerResolverService">
        <host>
          <baseAddresses>
            <add baseAddress="net.tcp://10.34.34.241/ChatServer"/>
          </baseAddresses>
        </host>
        <endpoint address="net.tcp://10.34.34.241/ChatServer"
                  binding="netTcpBinding"
                  bindingConfiguration="TcpConfig"
                  contract="System.ServiceModel.PeerResolvers.IPeerResolverContract"/>
      </service>
    </services>
    <bindings>
      <netTcpBinding>
        <binding name="TcpConfig">
          <security mode="None"/>
        </binding>
      </netTcpBinding>
    </bindings>
  </system.serviceModel>
</configuration>
The config file is very simple. It uses a predefined .NET service,
System.ServiceModel.PeerResolvers.CustomPeerResolverService.
The base address for hosting the resolver service is provided as shown. The important
thing about the endpoint is that it uses the System.ServiceModel.PeerResolvers.
IPeerResolverContract contract, which is already available in the framework. Here, a TCP
endpoint is used for communication, so the endpoint is given a TCP binding whose
security mode is configured as None. Now the server is ready for chat clients. Just start
the server.
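As a hedged sketch, the same resolver service could also be hosted programmatically instead of through the config file. The address is taken from the config above; the exact setup is an assumption based on common WCF peer-channel samples:

```csharp
using System;
using System.ServiceModel;
using System.ServiceModel.PeerResolvers;

class ResolverHostSketch
{
    static void Main()
    {
        // TCP binding with security disabled, mirroring the TcpConfig
        // binding in the config file above
        var binding = new NetTcpBinding();
        binding.Security.Mode = SecurityMode.None;

        // Host the predefined resolver service at the same base address
        var cprs = new CustomPeerResolverService();
        var host = new ServiceHost(cprs,
            new Uri("net.tcp://10.34.34.241/ChatServer"));
        host.AddServiceEndpoint(typeof(IPeerResolverContract), binding, "");

        cprs.Open();   // open the resolver before accepting clients
        host.Open();

        Console.WriteLine("Resolver service running. Press Enter to stop.");
        Console.ReadLine();

        host.Close();
        cprs.Close();
    }
}
```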
3. Chat Client
Compared to the chat server, the client is a little more complicated, since it does
everything on its own. The client and server need something in common through which the
server can communicate with the clients, and that is established using interfaces.
The client code is as below,
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Text;
using System.Windows.Forms;
using System.ServiceModel;
using System.ServiceModel.Channels;
namespace ChatClient
{
// This is the service contract at client side which uses
// the same contract as call back contract.
// Using CallbackContract, server sends message to clients
[ServiceContract(CallbackContract = typeof(IChatService))]
public interface IChatService
{
// All operation contracts are one way so that client
// can fire the message and forget
// When server responds, client catches it acts accordingly
[OperationContract(IsOneWay = true)]
void Join(string memberName);
[OperationContract(IsOneWay = true)]
void Leave(string memberName);
[OperationContract(IsOneWay = true)]
void SendMessage(string memberName, string message);
}
// Channel type for the duplex proxy : combines the service contract
// with IClientChannel so the channel can be opened and closed
// (this declaration is reconstructed; the original listing omits it)
public interface IChatChannel : IChatService, IClientChannel
{
}

// The form implements the callback contract so the server can call
// back into the client (class declaration reconstructed; parts of the
// original listing are elided)
public partial class ChatClient : Form, IChatService
{
    private string userName;
    private DuplexChannelFactory<IChatChannel> factory;
    private IChatChannel channel;

    // Event raised when a new member joins (declaration reconstructed)
    public event Action<string> NewJoin;

    public ChatClient()
    {
        InitializeComponent();
        this.AcceptButton = btnLogin;
    }

    // Login button handler (signature reconstructed)
    private void btnLogin_Click(object sender, EventArgs e)
    {
        try
        {
            channel = null;
            this.userName = txtUserName.Text.Trim();
            // Create an InstanceContext to handle the callback interface;
            // pass the object that implements the callback contract
            InstanceContext context = new InstanceContext(this);
            // Create a participant with the given endpoint.
            // The communication is managed through the chat MESH and
            // each client creates a duplex endpoint with the mesh.
            // A mesh is nothing but a named collection of nodes.
            factory = new DuplexChannelFactory<IChatChannel>(
                context, "ChatEndPoint");
            channel = factory.CreateChannel();
            channel.Open();
            channel.Join(this.userName);
            grpMessageWindow.Enabled = true;
            grpUserList.Enabled = true;
            grpUserCredentials.Enabled = false;
            this.AcceptButton = btnSend;
            rtbMessages.AppendText(
                "****WELCOME to Chat Application*****\r\n");
            txtSendMessage.Select();
            txtSendMessage.Focus();
        }
        catch (Exception ex)
        {
            MessageBox.Show(ex.ToString());
        }
    }

    #region IChatService members
    // Callback invoked through the mesh when a member joins;
    // only the event-raising fragment survives in the original listing
    public void Join(string memberName)
    {
        try
        {
            if (NewJoin != null)
            {
                NewJoin(memberName);
            }
        }
        catch (Exception ex)
        {
            MessageBox.Show(ex.ToString());
        }
    }

    // Leave and SendMessage implementations omitted for brevity
    #endregion
}
}
4. The Client's Config file
The important details are mentioned in the config file as shown below,
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <system.serviceModel>
    <client>
      <endpoint name="ChatEndPoint"
                address="net.p2p://chatMesh/ChatServer"
                binding="netPeerTcpBinding"
                bindingConfiguration="PeerTcpConfig"
                contract="ChatClient.IChatService"/>
    </client>
    <bindings>
      <netPeerTcpBinding>
        <binding name="PeerTcpConfig" port="0">
          <security mode="None"/>
          <resolver mode="Custom">
            <custom address="net.tcp://10.34.34.241/ChatServer"
                    binding="netTcpBinding"
                    bindingConfiguration="TcpConfig"/>
          </resolver>
        </binding>
      </netPeerTcpBinding>
      <netTcpBinding>
        <binding name="TcpConfig">
          <security mode="None"/>
        </binding>
      </netTcpBinding>
    </bindings>
  </system.serviceModel>
</configuration>
In the config file, the endpoint named "ChatEndPoint" has been mentioned, which points
to a chat mesh and uses netPeerTcpBinding. When the binding is configured, the
custom resolver mode is used and the custom TCP address where the actual server is
running is provided.
Giving the port number as zero automatically selects a free port for
communication. The security mode is set to None. Using this configuration, the clients
can be started and are able to send messages across the intranet.
The selection of ports and the protocol used by the peers should be taken into account
when designing the application because firewalls are often designed to allow
communication only over a particular port. While designing a peer-to-peer application
for a corporate environment, it might be useful to discuss with the network administrator
which ports are open or recommended for application use. If the application is
designed for users who will most likely not be going through a firewall to find other
peers, just about any port that is not already reserved for another protocol will work. In
general, port numbers are divided into three ranges,
1. Well-known ports (from 0 through 1023)
2. Registered ports (from 1024 through 49151)
3. Dynamic and/or private ports (from 49152 through 65535)
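The three ranges above can be summarized in a small helper (the class and method names are illustrative):

```csharp
using System;

class PortRanges
{
    // Classify a TCP/UDP port number into the three IANA ranges
    // listed above.
    public static string Classify(int port)
    {
        if (port < 0 || port > 65535)
            throw new ArgumentOutOfRangeException(nameof(port));
        if (port <= 1023) return "well-known";
        if (port <= 49151) return "registered";
        return "dynamic/private";
    }

    static void Main()
    {
        Console.WriteLine(Classify(80));     // well-known (HTTP)
        Console.WriteLine(Classify(8080));   // registered
        Console.WriteLine(Classify(50000));  // dynamic/private
    }
}
```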
Without DNS, users would have to know the IP address of every web site they wanted to
access.
Q.4 What is a node in P2P network ?
Ans. :
Two or more computers can be connected directly by an optical fiber or any other cable.
A node is a point where a connection is established. It is a network component that is
used to send, receive and forward electronic information.
A device connected to a network is also termed a node. For example, if a network
connects 2 computers, 2 printers, and a server, then there are five nodes on the
network.
Q.5 How are networks classified based on their connections ?
Ans. : Networks are classified into two categories based on their connection types. They are
mentioned below :
Peer-to-peer networks (P2P) : When two or more computers are connected
together to share resources without the use of a central server, the arrangement is
termed a peer-to-peer network. Computers in this type of network act as both server
and client. Such networks are generally used in small companies as they are not expensive.
Server-based networks : In this type of network, a central server is used to store
the data, applications, etc. of the clients. The server computer provides security
and network administration for the network.
Q.1 Discuss P2P network with neat diagram. (Refer section 18.1)
Q.2 Explain PNRP with name resolution process. (Refer section 18.6)
Q.3 What are PNRP clouds ? (Refer section 18.6)
Q.4 Discuss P2P terminology. (Refer section 18.2)
Q.5 What are uses and benefits of P2P ? (Refer section 18.5)
Concept :
WPF Architecture
1. Earlier user interface frameworks offered by Microsoft, such as MFC and Windows
Forms, were simply wrappers around the User32 and GDI32 DLLs. WPF makes only
minimal use of User32.
2. WPF is more than just a wrapper and it is a part of the .NET framework.
3. It contains a mixture of managed and unmanaged code.
4. Components of WPF architecture
I. The presentation framework and the presentation core have been written in managed
code.
II. Milcore is a part of unmanaged code which allows tight integration with DirectX
(responsible for display and rendering the objects).
III. Common language runtime makes the development process more productive by offering
many features such as memory management, error handling.
1. Resolution independence
In a WPF application, there are two parts : the appearance of the UI and its behavior.
Appearance means the application's user interface, and it is specified in XAML. Behavior
means how the application works, and it is handled by a .NET language like C# or VB.NET.
So it is very easy to customize the look of controls and their functionality. XAML holds
the same position in WPF applications that CSS holds in web applications.
4. Control templates
What if the user wants to change the shape of a button ? This may sound strange to
.NET developers, but it can now be done in WPF by defining a control template. For
example, a Button can be declared on a WPF window and its shape can be changed to
an ellipse. Templates are an integral part of user interface design in WPF.
WPF has the following three types of templates, Control Template, Items Panel
Template, and Data Template.
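As an illustrative sketch (not from the original text), a control template that renders a Button as the ellipse mentioned above might look like this:

```xml
<!-- Replace the button's default visual tree with an ellipse;
     ContentPresenter keeps the button's Content visible -->
<Button Content="OK" Width="100" Height="60">
  <Button.Template>
    <ControlTemplate TargetType="Button">
      <Grid>
        <Ellipse Fill="LightBlue" Stroke="SteelBlue"/>
        <ContentPresenter HorizontalAlignment="Center"
                          VerticalAlignment="Center"/>
      </Grid>
    </ControlTemplate>
  </Button.Template>
</Button>
```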
5. Control transforms : WPF contains a handful of 2D transforms which enables to
change the size, position, rotation angle and also allows skewing. Control transforms
can be performed in two ways Layout Transform and Render Transform.
Layout Transform - Transform is applied before the control is laid out on the form
Render Transform - Transform is applied after the control is laid on the form
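A hedged XAML sketch of the two approaches, using a rotation as the transform:

```xml
<!-- LayoutTransform : applied before layout, so the panel reserves
     enough space for the rotated button -->
<Button Content="Layout">
  <Button.LayoutTransform>
    <RotateTransform Angle="45"/>
  </Button.LayoutTransform>
</Button>

<!-- RenderTransform : applied after layout, so the rotated button
     may overlap its neighbours -->
<Button Content="Render">
  <Button.RenderTransform>
    <RotateTransform Angle="45"/>
  </Button.RenderTransform>
</Button>
```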
Types of Triggers :
Basically, there are 3 types of triggers namely,
1. Property Trigger
2. Data Trigger
3. Event Trigger
9. WPF styles : Style is a way to group similar properties into a single Style object and
apply it to multiple objects. The Style element in XAML represents a style. A Style is
usually added to the resources of a FrameworkElement. The x:Key is the unique key
identifier of the style.
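A short illustrative sketch of a style in a window's resources (the key "TitleStyle" is an assumed name):

```xml
<Window.Resources>
  <!-- One Style object grouping two property setters -->
  <Style x:Key="TitleStyle" TargetType="TextBlock">
    <Setter Property="FontSize" Value="20"/>
    <Setter Property="FontWeight" Value="Bold"/>
  </Style>
</Window.Resources>

<!-- The same style applied to multiple objects -->
<StackPanel>
  <TextBlock Style="{StaticResource TitleStyle}" Text="Chapter 1"/>
  <TextBlock Style="{StaticResource TitleStyle}" Text="Chapter 2"/>
</StackPanel>
```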
10. Different types of layout : Layout is used to arrange the controls on the user
interface. It provides a way to develop a neat and user-friendly application even when
the UI has a lot of controls.
1. Grid - A Grid, as the name suggests, is like a table. It has rows and columns, and
controls can be placed inside the grid cells. It is similar to the GridView in web
applications.
2. Dock panel - Using docking, a control can be placed at the top, right, left or
bottom. The DockPanel provides this type of functionality in WPF. A DockPanel is
used to dock child elements at the left, right, top, and bottom positions relative to
each other. The position of a child element is determined by its Dock property and
the relative order of the child elements. The default value of the Dock property is
Left. The Dock property is of type Dock, an enumeration with Left, Right, Top, and
Bottom values.
3. Stack panel - Inside a Stack Panel, controls are arranged either horizontally or
vertically, stacked one after the other.
4. Canvas - It provides a layout where controls can be placed anywhere as required.
Developer has complete control on the layout.
5. Wrap panel - A Wrap panel lays the child controls from left to right and as the name
suggests it goes to the new line once it fills up the container width.
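As an illustrative sketch of the dock-based layout described above (the child controls chosen here are assumptions):

```xml
<!-- Children are docked in order; by default the last child fills
     the remaining space (LastChildFill="True") -->
<DockPanel>
  <Menu DockPanel.Dock="Top"/>
  <StatusBar DockPanel.Dock="Bottom"/>
  <ListBox DockPanel.Dock="Left" Width="150"/>
  <TextBox/>  <!-- last child : fills the remaining area -->
</DockPanel>
```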
11. 2D and 3D graphics and animation : WPF comes with a rich set of graphics
components, all in one place. In WPF, controls can be used in a 2D or 3D layout, and
animation, media files and graphics can all be used. It supports playing video or audio
files with the help of the media player classes. Custom themes can also be used with
WPF.
12. Media in WPF : WPF has two classes to work with audio, video and video with audio,
namely MediaElement and MediaPlayer. MediaElement is a XAML UIElement and is
supported by both XAML and WPF code-behind, but MediaPlayer is available in WPF
code-behind only.
13. Charting in WPF : WPF supports common charts including line, bar, curve and
others.
14. WPF DatePicker control : A DatePicker control is used to create a visual date picker
that lets the user pick a date and fires an event on the selection of the date.
15. WPF ListBox : The XAML ListBox element represents a ListBox control.
<ListBox></ListBox>
The Width and Height properties represent the width and the height of a ListBox. The
Name property represents the name of the control, which is a unique identifier of a
control. The Margin property tells the location of a ListBox on the parent control. The
HorizontalAlignment and VerticalAlignment properties are used to set horizontal and
vertical alignments.
16. WPF ComboBox : A ComboBox control is an items control that works as a ListBox
control but only one item from the collection is visible at a time and clicking on the
ComboBox makes the collection visible and allows users to pick an item from the
collection. Unlike a ListBox control, a ComboBox does not have multiple item
selection. A ComboBox control is a combination of three controls - A Button, a
Popup, and a TextBox. The Button control is used to show or hide the available items,
the Popup control displays the items and lets the user select one item from the list,
and the TextBox control then displays the selected item.
17. WPF MessageBox : The MessageBox class in WPF represents a modal message box
dialog, which is defined in the System.Windows namespace. Show, a static method of
the MessageBox class, is used to display a message box.
The Show method returns a MessageBoxResult enumeration that has the values
None, OK, Cancel, Yes, and No. MessageBoxResult is used to determine what button
was clicked on a MessageBox and take an appropriate action.
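A brief C# sketch of acting on the returned MessageBoxResult (the message text, caption and method name are illustrative):

```csharp
using System.Windows;

class MessageBoxSketch
{
    // Illustrative sketch : ask a Yes/No/Cancel question and branch
    // on the MessageBoxResult returned by Show.
    public static void ConfirmClose()
    {
        MessageBoxResult result = MessageBox.Show(
            "Save changes before closing?",  // message text
            "Editor",                        // caption
            MessageBoxButton.YesNoCancel);

        if (result == MessageBoxResult.Yes)
        {
            // save, then close
        }
        else if (result == MessageBoxResult.No)
        {
            // close without saving
        }
        // MessageBoxResult.Cancel : stay open, do nothing
    }
}
```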
18. WPF DataGrid : DataGrid element represents WPF DataGrid control in XAML.
<DataGrid />
When a DataGrid control is dragged and dropped from the Toolbox to the designer, the
designer positions the control and adds certain startup XAML code.
The Width and Height properties represent the width and the height of a DataGrid.
The Name property represents the name of the control, which is a unique identifier of
a control. The Margin property sets the margin of placement of DataGrid on the
window.
19. WPF ProgressBar : The ProgressBar tag in XAML represents a WPF ProgressBar
control.
<ProgressBar></ProgressBar>
The Width and Height properties represent the width and the height of a ProgressBar.
The Name property represents the name of the control, which is a unique identifier of
a control. The Margin property tells the location of a ProgressBar on the parent
control. The HorizontalAlignment and VerticalAlignment properties are used to set
horizontal and vertical alignments.
20. WPF TreeView : A TreeView represents data in a hierarchical view in a parent child
relationship where a parent node can be expanded or collapsed. The left side bar of
Windows Explorer is an example of a TreeView.
Apart from an enhanced UI and data binding services, WPF also provides the facilities
below :
Provides the capability to set up generic host communication between two parties.
Allows setting up service properties.
Allows making the settings needed to have a service in C# as well as a Java client.
Allows creating a standard HTTP SOAP web service.
While XAML is used to build user interfaces in WPF, C# is used as the code-behind
language. While windows and their controls are created in XAML at design time, they
can also be created at runtime using C#. C# is also used to write all event handling
and business logic; all actions, events and rendering are handled in the C# code-behind.
XAML is used to describe objects, properties and the relations between them. It
enables you to create any type of object, i.e. graphical as well as non-graphical.
Q.4 What are the different types of layout controls in WPF ?
Ans. : Following are the different types of layout controls :
Grid
DockPanel
WrapPanel
Canvas
UniformGrid
StackPanel
Q.5 What is the difference between Dynamic Resource and Static Resource ?
Ans. :
Sr. No. 1 : A Static Resource evaluates the resource only once, when it is first
referenced. A Dynamic Resource evaluates the resource every time it is required.
Sr. No. 2 : A Static Resource is lightweight. A Dynamic Resource is heavy because it is
frequently re-evaluated.
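A short XAML sketch of the difference (the resource key AccentBrush is an illustrative name):

```xml
<Window.Resources>
  <SolidColorBrush x:Key="AccentBrush" Color="Red"/>
</Window.Resources>

<StackPanel>
  <!-- StaticResource : resolved once, when the XAML is loaded -->
  <TextBlock Foreground="{StaticResource AccentBrush}" Text="Static"/>
  <!-- DynamicResource : re-resolved whenever AccentBrush is replaced
       at runtime, e.g. from code-behind:
       this.Resources["AccentBrush"] = new SolidColorBrush(Colors.Blue); -->
  <TextBlock Foreground="{DynamicResource AccentBrush}" Text="Dynamic"/>
</StackPanel>
```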