What Is SOA
Web services promote an environment for systems that is loosely coupled and interoperable.
Many of the concepts for Web services come from a conceptual architecture called service-
oriented architecture (SOA). SOA configures entities (services, registries, contracts, and proxies)
to maximize loose coupling and reuse. This chapter describes these entities and their
configuration in an abstract way. Although you will probably use Web services to implement
your service-oriented architecture, this chapter explains SOA without much mention of a
particular implementation technology. This is done so that in subsequent chapters, you can see
the areas in which Web services achieve some aspects of a true SOA and other areas in which
Web services fall short.
Before we analyze the details of SOA, it is important to first explore the concept of software
architecture, which consists of the software’s coarse-grained structures. Software architecture
describes the system’s components and the way they interact at a high level.
These components are not necessarily entity beans or distributed objects. They are abstract
modules of software deployed as a unit onto a server with other components. The interactions
between components are called connectors. The configuration of components and connectors
describes the way a system is structured and behaves, as shown in Figure 1. Rather than creating
a formal definition for software architecture in this chapter, we will adopt this classic definition:
“The software architecture of a program or computing system is the structure or structures of the
system, which comprise software components, the externally visible properties of those
components, and the relationships among them.”
Service-oriented architecture is a special kind of software architecture that has several unique
characteristics. It is important for service designers and developers to understand the concepts of
SOA, so that they can make the most effective use of Web services in their environment.
SOA is a relatively new term, but the term “service” as it relates to a software service has been
around since at least the early 1990s, when it was used in Tuxedo to describe “services” and
“service processes”. Sun defined SOA more rigorously in the late 1990s to describe Jini, a
lightweight environment for dynamically discovering and using services on a network. The
technology is used mostly in reference to allowing “network plug and play” for devices. It allows
devices such as printers to dynamically connect to and download drivers from the network and
register their services as being available.
Figure 2 shows that other technologies can be used to implement service-oriented architecture.
Web services are simply one set of technologies that can be used to implement it successfully.
Figure 2: Web services are one set of technologies for implementing service-oriented
architecture.
The most important aspect of service-oriented architecture is that it separates the service’s
implementation from its interface. In other words, it separates the “what” from the “how.”
Service consumers view a service simply as an endpoint that supports a particular request format
or contract. Service consumers are not concerned with how the service goes about executing
their requests; they expect only that it will.
Consumers also expect that their interaction with the service will follow a contract, an agreed-
upon interaction between two parties. The way the service executes tasks given to it by service
consumers is irrelevant. The service might fulfill the request by executing a servlet, a mainframe
application, or a Visual Basic application. The only requirement is that the service send the
response back to the consumer in the agreed-upon format.
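To make the separation of contract and implementation concrete, here is a minimal, hypothetical C# sketch (not tied to any particular service technology): the consumer codes against the interface, the "what," while any number of implementations, the "how," can satisfy it.

using System;

// Hypothetical contract: the "what" the consumer depends on.
public interface IQuoteService
{
    decimal GetQuote(string symbol);
}

// One possible "how": a wrapper around a mainframe transaction.
public class MainframeQuoteService : IQuoteService
{
    public decimal GetQuote(string symbol)
    {
        // The legacy call would happen here; the consumer never sees this detail.
        return 42.00m;
    }
}

// The consumer is bound only to the contract, never to a particular implementation.
public class QuoteConsumer
{
    private readonly IQuoteService _service;

    public QuoteConsumer(IQuoteService service)
    {
        _service = service;
    }

    public decimal PriceOrder(string symbol, int quantity)
    {
        return _service.GetQuote(symbol) * quantity;
    }
}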
SOA has a single-minded goal when it comes to modeling—to tie business processes together
with underlying applications by using Web services in a distributed computing plan.
The purpose of the SOA Service Analysis and Design is to provide a step-by-step process for
conducting the analysis of service candidates that one wishes to implement in an SOA. For
concepts and theories behind why you need to conduct proper service-oriented modeling, see the
Service Modeling best practice guide.
Introduction
SOA is not a product. It is not tangible or something you put your hands on. From a high-level
perspective, SOA architecture is extremely flexible and can change or extend when demands
warrant. This document, however, is not intended to study architecture. Instead, its purpose is to walk you through the steps of analyzing services and then to define how to design and implement those services in an SOA that provides your organization with agility.
As you further examine service-oriented analysis and design perspectives, you should be aware
that SOA solutions are composed of reusable services based on standards-based, well-defined
interfaces.
Service-oriented modeling is a component of the analysis process, in which several processes are
executed to help you identify service candidates. The service candidates are then assembled into
abstract compositions that implement one or more business processes.
Analysis and design also help you to understand the type of thinking necessary to evaluate business processes and logic and to determine whether they are SOA-worthy service candidates.
This section describes how to utilize a combination approach using aspects of top-down and
bottom-up strategies. Both strategies offer unique sets of service identification parameters.
Specifically, the top-down approach centers on conceptual services analysis techniques, where
service candidates are analyzed to evaluate and prioritize their potential to become a service
before proceeding to formal design processes. Top-down can identify service reuse (a primary
SOA feature) and conceptual agreement of a service candidate, but the service can be difficult to
implement due to a lack of design principles from a technology perspective.
Meanwhile, the bottom-up strategy is able to tie services to technology, which translates into
tight coupling and limits service reuse.
By combining elements from both strategies using an iterative process, you achieve
identification of SOA service candidates, which takes you to the Analysis Phase (Section 2) and
Design Phase (Section 3).
For more information and detail on services, see the Service Modeling best practice guide.
Figure 1 represents the principal steps in an SOA service lifecycle, with emphasis placed on Steps 1-3, which are covered in this document.
Analysis Process
Key aspects of the Service-Oriented Analysis process are sub-steps of Step 2 in the SOA Service
Lifecycle in Figure 1. The sub-steps are:
The custom design standards you create are important because they set the tone for
understanding and realizing the benefits of SOA.
Design Goals
These are considerations you should keep in mind when deciding how to construct physical service interface definitions:
Solidly coded services are flexible and agile; they are easy to reconfigure and reuse through
loose coupling, encapsulation, and information hiding.
Well-designed services are meaningful and multi-faceted beyond enterprise applications; these
are meant to be standalone and not reliant upon other services.
Service abstractions are definitive, exact, complete, and consistent.
Service names are non-cryptic, logical, and easy to understand within the enterprise IT community.
You have a clear solution for providing support for service-oriented principles.
Note: This list of goals is not exhaustive. These are meant to remind you about some goals.
Design Process
Each design process entity has a one-to-one relationship with the service candidates identified
from the service-oriented analysis phase. Figure 3 shows a sample design process.
The IBM Customer Information Control System (CICS) is a transaction processing (TP) monitor
that was originally developed for IBM mainframes. It controls the interaction between
applications and users and lets programmers develop screen displays without detailed knowledge
of the terminals being used. CICS provides industrial strength, online transaction management,
and connectivity for mission-critical applications.
CICS transactions fall into two categories: visual and non-visual. End users interact with visual
transactions through terminal emulators and green screens. Non-visual transactions provide no
end-user interaction and they are invoked by other system programs. Non-visual transactions are
also called COMMAREA transactions, since the invocation parameters and output data use a
mainframe storage area called the communication area.
With the above in mind, it is realistic to think that one can build an SOA system on legacy CICS-
based applications and not rip the existing systems apart and start from scratch. The
advancement of z/OS capabilities and the richness of its new feature set help make achieving a true SOA on System z a reality. The rest of the article identifies some of the
key features of z/OS that can be leveraged while designing a robust infrastructure around which
to build an SOA system.
Service-Oriented Architecture (SOA) has been one of the latest IT architectural trends. It has
demonstrated the benefits that can be harnessed by aligning IT initiatives with the business goals
of an enterprise. Many small, medium, and large enterprises, across all the vertical market
sectors, have implemented and deployed SOA successfully and enjoyed its benefits, which have
been ably backed up by impressive return on investment (ROI) numbers.
1. A service
2. A composition of services
3. A service orchestration
These compose additional layers of abstraction, each of which introduces run time overhead that
might be construed as a necessary evil. The hardware infrastructure on which an SOA is
deployed plays a critical role in bolstering the performance and throughput and other SLA
capabilities of the run time, thereby offsetting (and sometimes more than offsetting) the overhead (from increased abstraction) associated with an SOA-based system.
For the last four decades, IT departments and enterprises have enjoyed the unique workload
management capabilities of the IBM System z™ and z/OS® architectures. z/OS has proven to deliver unsurpassed availability and reliability. Scalability with z/OS mainframes has met and often surpassed future transaction throughput requirements. It is practically impossible to ask IT shops accustomed to these kinds of performance benefits to forfeit the legacy systems that have been instrumental in running their businesses with a very high level of satisfaction.
A great number of the legacy applications still in use are COBOL/CICS applications. Using the
application programming interface (API) provided by CICS, a programmer can write programs
that communicate with online users and read from or write to customer and other records (orders,
inventory figures, customer data, and so forth) in a database (usually referred to as “data sets”)
using CICS facilities rather than IBM’s access methods directly. Like other transaction
managers, CICS can ensure that transactions are completed and, if not, undo partly completed
transactions so that the integrity of data records is maintained.
The CICS Transaction Server Version 3.1 exposes CICS-based applications as Web services and integrates those applications into an SOA-based system. The CICS Transaction Server also allows CICS applications to act as both a service provider and a service requestor, thereby simplifying the integration of CICS applications into a modern business-to-business (B2B) and service-based environment.
Figure 1: CICS and Web Services
The CICS Transaction Server can modernize CICS applications using either of two architectural patterns: loose coupling or tight coupling. Each of these integration architecture forms is realized through specific features and capabilities of the CICS Transaction Server, and SOA can be realized through both. Which one to use is an architectural decision, with the SLA requirements for the system being one of the main deciding factors. The rest of this section explores the key features of the CICS Transaction Server that enable CICS applications to be made participants in an SOA. Let's explore three architecture scenarios: one for the loosely coupled pattern and two for the tightly coupled pattern.
Recent CICS Transaction Server enhancements include support for Web Services and Enterprise
Java Beans (EJBs). IBM began shipping the latest release, CICS Transaction Server Version 4.1,
which contains support for Event Processing, Atom Feeds, and RESTful Interfaces, in June
2009.
The three standards combine to give a web service the ability to function, describe itself, and be found within a network. While theoretically a web service could function fully using SOAP alone, Figure 1 shows how a web service needs WSDL and UDDI to be effective.
1. SOAP
SOAP is the lingua franca of web services, the XML structure on which all web services
messages are built. When we say that web services are based on XML, we actually mean that
web services are based on SOAP messages, which are written in XML. What makes SOAP
special and distinct from plain-vanilla XML is that every SOAP message follows a pattern that
has been specified by the W3C standards.
Figure 1: When it comes to describing its functionality and location, a web service is in and of
itself essentially “dumb.” To describe itself to potential requestors, the web service relies on its
WSDL document, which provides a detailed explanation of the web service’s functionality and
how to access it. For location, the web service relies on its listing in a UDDI registry to enable
potential requestors to find it.
SOAP is sometimes referred to as a “data wrapper” or “data envelope.” Here’s what those apt
descriptions mean: Every SOAP message begins with a tag that reads <SOAP-ENV:envelope>.
The envelope tag signals the message recipient that it is about to receive a SOAP message. What
follows is a header, which contains the critical information about where the message is going and
from whom it came. And then there is the body of the SOAP message, which lays out the actual
data or operating instructions required by the consuming computer. Figure 2 shows a SOAP
message, complete with “envelope” and body, traveling across a network from a web services
consumer computer to a provider computer, in this case, a mainframe.
Figure 2: A SOAP message is formatted as an “envelope” of XML code that defines its
beginning and end. The “header” describes where the message came from, where it’s going, and
how it is going to get there. The “body” of the SOAP message contains the relevant data or
procedural instructions of the SOAP request or response.
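As a concrete illustration, here is a hedged C# sketch that builds a bare-bones SOAP 1.1 envelope and posts it over HTTP. The endpoint URL, operation name, and SOAPAction value are placeholders rather than a real service.

using System;
using System.Net;

class SoapEnvelopeSketch
{
    static void Main()
    {
        // Minimal SOAP 1.1 envelope: an Envelope element containing an optional Header and a Body.
        string envelope =
            "<SOAP-ENV:Envelope xmlns:SOAP-ENV=\"http://schemas.xmlsoap.org/soap/envelope/\">" +
              "<SOAP-ENV:Header/>" +
              "<SOAP-ENV:Body>" +
                "<GetQuote xmlns=\"urn:example\"><symbol>IBM</symbol></GetQuote>" + // placeholder operation
              "</SOAP-ENV:Body>" +
            "</SOAP-ENV:Envelope>";

        using (var client = new WebClient())
        {
            client.Headers["Content-Type"] = "text/xml; charset=utf-8";
            client.Headers["SOAPAction"] = "\"urn:example/GetQuote\""; // placeholder action

            // The URL below is illustrative only; a real web service lives at its own URL.
            string response = client.UploadString("http://example.com/quoteservice", envelope);
            Console.WriteLine(response);
        }
    }
}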
2. WSDL
The Web Services Description Language (WSDL) is an XML document, designed according to
standards specified by the W3C, that describes exactly how a specific web service works.
However, the WSDL document (often referred to in speech as the “wizdil”) is much more than a
mere instruction manual on how to use the web service that it describes. Web services
development software can process the WSDL document and automatically generate the SOAP messages needed to invoke that specific service.
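In the .NET world, for example, a tool such as wsdl.exe (or Visual Studio's "Add Web Reference") reads a WSDL and emits a proxy class, so invoking the service becomes an ordinary method call. The sketch below assumes a hypothetical generated proxy named StockQuoteService with a GetQuote operation; the names and URL are illustrative.

using System;

class WsdlProxySketch
{
    static void Main()
    {
        // Hypothetical proxy class generated from the service's WSDL, for example:
        //   wsdl.exe http://example.com/quoteservice?wsdl
        var proxy = new StockQuoteService();            // generated class (assumed name)
        proxy.Url = "http://example.com/quoteservice";  // endpoint taken from the WSDL; can be overridden
        decimal price = proxy.GetQuote("IBM");          // the proxy builds and parses the SOAP messages
        Console.WriteLine(price);
    }
}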
Because of the capabilities of WSDL, web services are known as “self-describing” software
elements. This is a very powerful concept. Not only can web services inter-operate universally
through SOAP, they can also be described universally using WSDL. A software developer in
Borneo can create a piece of software to invoke a web service in Trinidad just by reading and
processing the WSDL document. He does not need to speak to anyone, read any particular
manuals, or buy any special software—theoretically he need only conform to the standards. I say
theoretically because clearly the developer in Borneo still would need to be properly authorized,
and the transaction should be properly monitored to be safe, but the mechanism itself of
achieving a request and a response between these parties has now been drastically simplified.
3. UDDI
Though several types of web service registries are available for use, we identify the Universal Description, Discovery, and Integration (UDDI) directory as the general standard used as a registry of web services that are available for use in a particular network. Think of the UDDI as
a sort of “yellow pages” of web services. If you wanted to find a web service in your enterprise,
you would look in the UDDI. The UDDI would tell you where to find that service, and it would
link you to the WSDL document so you could examine the web service and make sure it was the
one you wanted.
A UDDI registry is a central concept as one shifts to a model that assumes a distributed, loosely
coupled set of web services. The services your process may want to consume could be anywhere
at any given moment, and in fact the same function may be performed by a different service
depending on changing criteria, such as availability or price. In this environment, operating
without a directory of sorts to find the services would mandate “hard-coding” the location of the
service into the consuming application, undermining a key reason for having adopted web
services in the first place. Imagine depending on a person for a key task and then finding that
individual has moved without leaving a forwarding address.
Web services have Uniform Resource Locator (URL) Internet addresses. To the requesting
computer, a web service is simply a URL. As we noted in the previous chapter, a URL is the
“address” of a program or a website like Amazon or Yahoo. Because web services utilize
Internet protocols, they can be invoked by sending the requesting SOAP message to the web
service “address.” (Note that most web service URLs are not as simple as
http://www.amazon.com; they might be more like http://qams1:8080/8f3af62e=11d7-a378.) This
may not seem like such a big deal, but in fact it is at the core of how the entire system functions.
The magic of web services is that they are located at addresses to which any computer can
connect. The web service's URL is the basis for its universality and network transparency.
Universality comes from the standard way to describe it, and the transparency comes from the
ability to use a “logical name” in your consuming application that the UDDI can “resolve” for
you into the appropriate URL. For instance, if I want to use a credit card authorization service in
my application, rather than “hard-code” a location of the service, I can invoke a logical name
(say, “CreditAuth”) and allow the UDDI to resolve the name into a URL. That way, if the
location of a service changes (and things always change), my program can stay the same—
CreditAuth is still CreditAuth anywhere the service might be. Masking these kinds of changes
from the consuming application is key to achieving the agility that web services technology
promises.
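A hedged C# sketch of that idea follows. The IServiceRegistry interface below is a hypothetical stand-in for a real UDDI inquiry API, but it shows how a consumer can depend only on the logical name "CreditAuth" and resolve the physical URL at run time.

using System;
using System.Collections.Generic;

// Hypothetical stand-in for a UDDI inquiry client.
interface IServiceRegistry
{
    string Resolve(string logicalName); // returns the current endpoint URL for a logical service name
}

class InMemoryRegistry : IServiceRegistry
{
    // Illustrative entry; a real registry would be queried over the network.
    private readonly Dictionary<string, string> _entries = new Dictionary<string, string>
    {
        { "CreditAuth", "http://qams1:8080/8f3af62e=11d7-a378" }
    };

    public string Resolve(string logicalName)
    {
        return _entries[logicalName];
    }
}

class RegistryConsumer
{
    static void Main()
    {
        IServiceRegistry registry = new InMemoryRegistry();

        // The application knows only the logical name; the URL behind it can change freely.
        string endpoint = registry.Resolve("CreditAuth");
        Console.WriteLine("Invoking credit authorization at " + endpoint);
    }
}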
Learn how to use and test SugarCRM web services WSDL-API via SOAPSonar Enterprise
Edition, a .NET-based stand-alone testing tool.
Introduction
SugarCRM provides an extensive web services SOAP API for integration with external
applications. This article introduces how to use .NET-based SOAPSonar Enterprise Edition for
invoking SugarCRM SOAP-API. The following simple steps are required to get started:
With these simple steps, you will be ready to integrate SugarCRM with any web services-aware
application. The web services API provided by SugarCRM is extensive, easy-to-use, and
flexible.
Test Setup
To explore SugarCRM’s web services functionality, as shown in Figure 1, a simple test
environment is set up with SOAPSonar installed on a client machine. SOAPSonar consumes the WSDL-based API definition published by the SugarCRM On-Demand hosted service. This WSDL file provides all the necessary constructs for SOAPSonar to send SOAP requests to SugarCRM over HTTP.
Figure 1: SugarCRM Web Services Invocation using SOAPSonar.
Installation Steps
The installation and setup steps are as follows:
1. Download and Install SOAPSonar Enterprise Edition: SOAPSonar is a web services testing
client that consumes a WSDL and generates functional, performance, interoperability and
vulnerability tests for a target web service.
With SOAPSonar, testing SugarCRM web services is easy and code free. Download
SOAPSonar Enterprise Edition.
2. Register for a SugarCRM Professional On-Demand Trial Account: You can also use the Live
Demo version of the product with the following attributes:
a. Live Demo URI: http://demo.sugarcrm.com
b. User: will
c. Password: will
3. Load SugarCRM WSDL: Start SOAPSonar and load the WSDL from SugarCRM. Figure 2
shows SOAPSonar with the WSDL loaded from location:
▪ http://demo.sugarcrm.com/sugarcrm/soap.php?wsdl
SOAPSonar parses the WSDL and, in the left navigation panel, exposes the extensive list of operations provided by SugarCRM. We can quickly test the SugarCRM web services interface by sending credentials using the "login" operation. Since SugarCRM expects the MD5 value of the password, as shown in Figure 2, the MD5(HashString) Context Function is used with the password "will" as the input HashString. The Context Function menu is accessible by simply clicking at the right side of the password field.
Figure 2: SOAPSonar invoking SugarCRM login operation
On submitting the request (push the circled blue arrow), SugarCRM returns a SOAP Response,
as shown in the lower right panel of Figure 2. The SOAP Response contains the session id that
will be used for invoking subsequent operations.
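The MD5(HashString) context function simply computes the MD5 digest of the password. If you needed to reproduce that value outside SOAPSonar, a small C# sketch like the following would do it (shown here for the demo password "will").

using System;
using System.Security.Cryptography;
using System.Text;

class Md5Demo
{
    static void Main()
    {
        string password = "will"; // demo credential from the article

        using (MD5 md5 = MD5.Create())
        {
            byte[] hash = md5.ComputeHash(Encoding.UTF8.GetBytes(password));

            // SugarCRM expects the MD5 value as a hex string (PHP md5()-style).
            var hex = new StringBuilder();
            foreach (byte b in hash)
                hex.Append(b.ToString("x2"));

            Console.WriteLine(hex.ToString()); // pass this value in the login request
        }
    }
}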
4. Set Response Variable: In this step, the session id obtained in the last step is passed in as a
“Response Variable” for a selected operation. To set the Response Variable:
a. Select the Response Variable
b. Select the variable of interest. In our case, we are interested in the “id” element.
c. Drag this variable to the bottom panel. You will be prompted to set the variable name. Use
the default name “id”.
5. Assign Response Variable: In this step, the Response Variable “id” captured in the previous
step is assigned as the input to the selected operation. We selected the “get_module_fields”
operation as shown in Figure 4.
a. Click on the right side of the "session" field to open the Parameter Menu, and navigate to Response Variable, where you select "id" as the input to the session field.
b. Push the Green Check button to save the values.
c. Push the Blue Arrow button to run the test.
As seen in the lower Response Panel, the list of fields associated with the “Leads” Module is
returned.
Conclusions
Web Services are the foundation of modern distributed systems. The widespread use of Web Services across network devices, applications, and corporate infrastructure mandates that CRM systems also expose their interfaces via simple WSDL/SOAP-based web services for rapid integration. Using SOAPSonar
enables IT professionals to rapidly test integration paths with SugarCRM without extensive
development. With features such as operations chaining, developers and testers can quickly
construct code-free test case sequences across WSDL operations exposed by SugarCRM. Using
a .NET-based testing tool further validates that the PHP-based web services exposed by
SugarCRM are programming language independent and can be consumed by non-PHP clients.
References
1. SugarCRM SOAP API Documentation
2. Crosscheck Networks SOAPSonar Product Help.
Testing SOA could be viewed as a complex computing problem. With any complex problem,
the key is to break it down into smaller, more manageable components and build quality into
these deliverables. The foundations to successful SOA testing are as follows:
Equal weighting of testing effort throughout the project life cycle. Many organizations still fail to recognize the real benefits of static and formal review techniques during the early stages of the project. Most or all of the testing effort comes too late at the end of the project life cycle. More testing effort will be required at a service (program) level.
The SOA test team is a blend of business domain and technology experts.
Design the project test approach alongside the project business and technical requirements. Budget for the Test team to be involved from the start of the project.
Implement Quality Controls throughout the project life cycle.
Security Testing is not an end-of-project activity! Design and plan Security testing from the start of the project.
Test tools are a must!
How do you test SOA architecture? You don’t. Instead, you learn how to break down the
architecture to its component parts, working from the most primitive to the most
sophisticated, testing each component, then the integration of the holistic architecture. In other
words, you have to divide the architecture into domains, such as services, security, and
governance and test each domain separately using the recommended approach and tools.
SOA is loosely coupled with complex interdependencies and a SOA testing approach must
follow the same pattern.
Figure 1 represents a model of SOA components and how they’re interrelated. The Test
team designing the Project Test approach and plans must have a macro understanding of how all
of the components work independently and collectively.
Governance Testing
Service-component-level testing
Service-level testing
Integration-level testing
Process/Orchestration-level testing
System-level testing
Security Testing
Governance Testing
SOA Governance is a key factor in the success of any SOA Implementation. It is also the most
‘loosely’ used term, as it covers the entire lifecycle of SOA Implementation – from design to run
time to ongoing maintenance. SOA Governance refers to the Standards and Policies that govern
the design, build and implementation of a SOA solution and the Policies that must be enforced
during runtime.
Organizations must have well-defined Design, Development, Testing and Security Standards that will guide and direct SOA implementations. Quality controls and reviews must be implemented throughout the entire project life cycle and its processes to ensure compliance. The appropriate peers must conduct these reviews, and deviations from recommended standards must be agreed by the organization's Governance team.
Test cases will be constructed and executed in all of the project test phases to determine if
SOA Policies are being enforced. SOA policies can be enforced at runtime, by using
technologies and/or monitoring tools.
SOA Governance testing will not be a separate test phase. Testing that SOA Governance is
enforced will take place throughout the project life cycle, through formal peer reviews and
different test scenarios that will be executed during the separate test phases.
Service-component-level Testing
Service-component-level testing, or unit testing, is normally performed by the developers to verify not only that the code compiles successfully, but also that the basic functionality of the components and functions within a service works as specified.
The primary goal of Component testing is to take the smallest piece of testable software
in the application, isolate it from the remainder of the code, and determine whether it behaves
exactly as you expect. Each Component is tested separately before integrating it into a service
or services.
The following quality and test activities are recommended in this phase/level of testing:
Formal peer reviews of the code to ensure it complies with organization standards and to identify any potential performance and security defects or weaknesses
Quality entry and exit criteria are not only defined for this level of testing, but are achieved before moving to the next level of testing
Service-level Testing
Service testing will be the most important test level/phase within your SOA Test approach.
Today, many organizations build a program or Web service, perform limited unit testing and
accelerate its delivery to the integration test phase, to allow the test team to evaluate its quality.
Service reuse will demand that each service be delivered from this level/phase of testing with a comprehensive statement of quality, and even a guarantee!
The following quality and test activities are recommended in this phase/level of testing:
Formal peer reviews of the code to ensure it complies with organization standards and to identify any potential interoperability, performance and security defects or weaknesses
Functional, performance and security regression suites to be executed against the service. This will require the help of automated test tools and the development of sophisticated harnesses and stubs
Quality entry and exit criteria are not only defined for this level of testing, but are achieved before delivering the service to the next level of testing
Service Level testing must ensure that the service is not only meeting the requirements of the
current project, but more importantly, is still meeting the business and operational requirements
of the other processes that are using that service.
Integration-level Testing
The integration test phase will focus on service interfaces. This test phase aims to determine whether interface behaviour and information sharing between the services are working as specified. The test team will ensure that all the services delivered to this test phase comply with the defined interface definition, in terms of standards, format and data validation. Integration test scenarios should also exercise the layers of communication and the network protocols. This test phase may include testing services external to your organization.
Process/Orchestration-level Testing
Process/Orchestration testing ensures services are operating collectively as specified. This phase
of testing would cover business logic, sequencing, exception handling and process
decomposition (including service and process reuse).
System-level Testing
System Level testing will form the majority, if not all, of the User Acceptance Test phase. This
test phase will test that the SOA technical solution has delivered the defined business
requirements and has met the defined business acceptance criteria. To ensure that this phase/level
of testing is targeting only the key business scenarios of the solution, the business stakeholders
and testers must fully understand the quality and test coverage that has been achieved in
previous test phases.
Security Testing
As SOA evolves and grows within your organization, the profile and necessity of Security
testing will increase. Today, many organizations perform an inadequate amount of penetration
testing at the very end of a project. SOA, combined with Government and Regulatory compliance, will require Security testing activities to be incorporated into the entire project life cycle.
This article introduces Web Services, how they are created in .NET, and how they can be
customized to meet specific needs.
The file suffix ".asmx" means "Web Service" in the .NET world. When the .NET framework is installed, this file type is automatically associated in IIS with an ISAPI filter called aspnet_isapi.dll. We know that a Web Service is defined by a URL to a WSDL file. In the .NET world, this is a URL that points to an .asmx file with a ?wsdl argument. The ISAPI filter supports the SOAP exchange that follows. The filter invokes the .NET compiler if the file hasn't already been compiled. During compilation, the compiler looks at the embedded attributes and the WebService ASP.NET directive in the file and generates "metadata" that describes how the class should behave when invoked as a Web Service. The metadata is put into the assembly (the .NET term for a DLL). Directives give the compiler specific instructions for the code that is being compiled. The @WebService directive just tells the compiler to create entry points for the ISAPI filter to call. Other common directives are @Application for application-specific attributes, @Import to import a namespace into an application, and @Assembly to link an assembly to the application at parse time.
// Assumes an Order class with a public OrderID field is defined elsewhere.
// [SoapRpcMethod] (from System.Web.Services.Protocols) switches this Web method to
// RPC/encoded style; see "Literal and Encoded Styles" below.
[WebMethod]
[SoapRpcMethod]
public Order MyEncodedMethod()
{
    Order myOrder = new Order();
    myOrder.OrderID = "E123";
    return myOrder;
}
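For orientation, a minimal .asmx-style Web Service might look like the following sketch. The namespace, class name, and the .asmx file shown in the comment are illustrative, not taken from the article.

using System.Web.Services;

// Referenced from an .asmx file such as:
//   <%@ WebService Language="C#" CodeBehind="OrderService.asmx.cs" Class="Example.OrderService" %>
namespace Example
{
    [WebService(Namespace = "http://example.com/orders")]
    public class OrderService : WebService
    {
        // Each method marked [WebMethod] becomes an operation in the generated WSDL.
        [WebMethod]
        public string Ping()
        {
            return "OK";
        }
    }
}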
Binary serialization, which preserves the complete state of an object between different
invocations of an application. For example, you can share an object between different
applications by serializing it to the clipboard.
XML serialization, which serializes public properties and fields. This is useful when
you want to provide or consume data without restricting the application that uses the data.
Because XML is an open standard, XML serialization is an attractive choice for sharing data
across the web. It is not necessary to write any special code to perform XML serialization and deserialization—it is built right into the .NET framework. You can change the default serialization behavior by using attributes.
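For instance, the hedged C# sketch below serializes a simple object with XmlSerializer and uses attributes to change the default element and attribute names; the Order type and its members here are illustrative.

using System;
using System.IO;
using System.Xml.Serialization;

// Illustrative type: attributes override the default XML shape produced by XmlSerializer.
public class Order
{
    [XmlAttribute("id")]
    public string OrderID;

    [XmlElement("customer-name")]
    public string CustomerName;
}

class XmlSerializationDemo
{
    static void Main()
    {
        var order = new Order { OrderID = "E123", CustomerName = "Acme" };

        var serializer = new XmlSerializer(typeof(Order));
        using (var writer = new StringWriter())
        {
            serializer.Serialize(writer, order);
            // Emits an <Order> element with an id attribute and a customer-name child element.
            Console.WriteLine(writer.ToString());
        }
    }
}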
Literal and Encoded Styles
The XML generated by an XML Web Service can be formatted in either one of two ways: literal
or encoded. This affects how the WSDL describes the arguments and returned result for the Web
Service method call. When literal is used, this is considered “document style”—each argument
is an XML document described by an XML schema (XSD). When encoded is used, this is considered "RPC style"—each argument is described as a data type. For our purposes, document style works well because we often want to pass a complex set of data that would be burdensome to describe as many arguments. It is much easier to define it as an XML schema structure and pass the whole thing as a single SOAP argument. Within the System.Xml.Serialization namespace there is a set of attributes beginning with Soap and a set beginning with Xml. When customizing the behavior of a Web Service, use the Soap* attributes for RPC-style calls and the Xml* attributes for document-style calls.
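Complementing those serialization attributes, the method-level formatting attributes in System.Web.Services.Protocols choose the style per Web method. As a hedged illustration, the two methods below differ only in that attribute: [SoapDocumentMethod] corresponds to the document/literal default, while [SoapRpcMethod] switches the operation to RPC/encoded style. The service and method names are illustrative.

using System.Web.Services;
using System.Web.Services.Protocols;

// Illustrative service showing the two formatting styles side by side.
public class StyleDemoService : WebService
{
    // Document/literal (the ASP.NET default): the message is described by an XML schema.
    [WebMethod]
    [SoapDocumentMethod]
    public string EchoDocument(string message)
    {
        return message;
    }

    // RPC/encoded: parameters are described as typed arguments.
    [WebMethod]
    [SoapRpcMethod]
    public string EchoRpc(string message)
    {
        return message;
    }
}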
Conclusion
In this article, we have introduced Web Services in .NET. We have demonstrated how simple
they are to create and test, and how they can be customized using attributes. Finally, we have
looked at how object-oriented principles in coding can be unified with the Web Services model.
4. SOA Migration Step 4: Incorporation of WebSphere Business Integration Server into the SOA
under the use of ESB
Initial unit testing will be done for each service. A test plan will be developed and further testing
based on the selective test cases will be performed prior to delivery. Each service is going to be
tested by our QA team and end users. At least one cycle of user acceptance testing is required for
a successful delivery of the project. A well-coordinated and successful UAT is a win-win
situation for both IT and business.
In this step we will address the key activities that we will perform for the analysis and the design
required to build a Service-Oriented Architecture (SOA). We strongly stress the importance of
addressing the techniques required for the identification, specification and realization of services,
their flows and composition, as well as the enterprise-scale components needed to realize and
ensure the quality of services required of a SOA. Service-oriented modeling requires additional
activities and artifacts that are not found in traditional object-oriented analysis and design
(OOAD).
This concept is based on an architectural style that defines an interaction model between three primary parties: the service provider, who publishes a service description and provides the implementation for the service; the service consumer, who can either use the uniform resource identifier (URI) for the service description directly or find the service description in a service registry and then bind to and invoke the service; and the service broker, who provides and maintains the service registry. A meta-model showing these relationships is depicted in Figure 1 below.
Guiding Principles
By understanding the principles of the SOA style of architecture and design, along with the
benefits of those principles to the business and IT communities, we can determine the
applicability of SOA when designing a solution. These principles drive certain characteristics
that are essential to the design of a service. A service is a discoverable software resource with an externalized service description. This service description is available for searching, binding, and invocation by a service consumer. The service provider realizes the implementation behind the service description and also delivers on the quality-of-service requirements to the service consumer.
Services should ideally be governed by declarative policies and thus support a dynamically re-
configurable architectural style.
Business agility is gained from IT systems that are flexible, primarily through the separation of interface, implementation, and binding (protocols) offered by an SOA. This allows the choice of which service provider to opt for at a given point in time to be deferred, based on new business requirements, both functional and non-functional (for example, performance, security, scalability, and so forth).
We can reuse the services across internal business units or across the value chains among
business partners in a fractal realization pattern. Fractal realization refers to the ability of an
architectural style to apply its patterns and the roles associated with the participants in its
interaction model in a composite manner. We can apply it to one tier in an architecture and to multiple tiers across the enterprise architecture. Among projects, it can be applied between business units and business partners within a value chain in a uniform and conceptually scalable manner.
Adoption and Maturity Models. Where is US Trust on the relative scale of maturity in the adoption of SOA and Web Services? Every level of adoption has its own unique needs.
Assessments. Do we have some pilots? Has US Trust dabbled in Web services? How good is the resulting architecture (ESB)? Should we keep going in the same direction? Will this scale to an enterprise SOA? Have we considered everything we need to consider?
Strategy & Planning Activities. How do we plan to migrate to a SOA? What are the steps, tools, methods, technologies, standards, and training we will need to take into account? What is the roadmap and vision, and how will we get there? What's the plan?
Governance. Should an existing API or capability become a service? If not, which ones are eligible? Every service should be created with the intent to bring value to the business in some way. How do you manage this process without getting in the way?
Best Practices. What are some tried and tested ways of implementing security, ensuring performance, complying with standards for interoperability, and designing for change?
SOA Template
An abstract view of SOA depicts it as a partially layered architecture of composite services that
align with business processes.
The relationship between services and components is that enterprise-scale components (large-
grained enterprise or business line components) realize the services and are responsible for
providing their functionality and maintaining their quality of service. Business process flows can
be supported by choreography of these exposed services into composite applications.
Integration architecture supports the routing, mediation, and translation of these services,
components, and flows using an Enterprise Service Bus (ESB). The deployed services must be
monitored and managed for quality of service and adherence to non-functional requirements.
For each of these layers, we will make design and architectural decisions. Therefore, to help
document SOA, we will create a document consisting of sections that correspond to each of the
layers.
Figure 3: SOA Migration Step 1 - The layers of a SOA
See the next step of this SOA Migration: SOA Migration Step 2: Enterprise Service Bus (ESB) Evaluation
This step follows SOA Migration Step 1: SOA Assessment. The backbone of SOA is
the Enterprise Service Bus (ESB). ESB provides the infrastructure to register services, to route
events/requests to the appropriate service provider, and to transform incoming XML messages.
The biggest benefit of an ESB is that it makes integration efforts declarative rather than developmental. US Trust will be able to register new services with the ESB, swap out obsolete services, monitor business activities, and, most importantly, create a unified integration pattern for both existing applications and external trading partners/clients.
The main purpose of the initial test is to perform an end-to-end test of the ESB infrastructure across all layers (not a real business case). It will simply validate the ESB infrastructure and provide a comfort level to US Trust. For instance, this test will include the generation of some file from Advantage; the ESB will receive an FTP event based on that file, route the event to some service provider, transform the message into "copybook" format, and invoke a legacy component either as a scheduled task or as a batch program.
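As a small, hedged illustration of the "transform the message into copybook format" step, the sketch below applies an XSLT mapping to an incoming XML message using .NET's XslCompiledTransform. This only stands in for the mediation a real ESB product would perform, and all file names are placeholders.

using System.Xml.Xsl;

class EsbTransformSketch
{
    static void Main()
    {
        // Placeholder file names: an inbound XML event and the mapping stylesheet.
        var transform = new XslCompiledTransform();
        transform.Load("advantage-to-copybook.xslt");   // mediation mapping (illustrative)
        transform.Transform("incoming-event.xml",       // message received by the ESB
                            "outbound-copybook.xml");   // transformed message handed to the legacy component
    }
}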
Communication
Service Interaction
Check if a WSDL interface is defined for each BPEL4WS process and for each activity within a BPEL4WS process.
Check if Service providers can be substituted without changing the process structure.
Check if there is any support provided for UDDI registries.
Integration
Check if there is support for J2EE Connector Architecture resource adapters to connect to
enterprise information systems such as CICS Transaction Server and IMS.
Check if there is support for multiple transports, as defined by binding settings in WSDL definitions.
Check if service aggregation can be achieved using parallel process paths and data mapping.
Service level
Check if Business process instances are persisted and can survive a server restart or failure.
Check if Business processes can leverage J2EE transaction support.
Check if Compensation can be used to compensate transactions that have already committed, or
to compensate activities that cannot be rolled back due to their non-transactional nature.
Check if JMS support, implemented with WebSphere MQ, allows assured message delivery.
Security
Message Processing
Modeling
Infrastructure Intelligence
Check if Business Rules Beans support allows business rules to be dynamically changed at
runtime without modifying or redeploying a process.
See the next SOA Migration Step: SOA Migration Step 3: Requirements, Analysis, Design and Implementation
Good requirements and analysis practices help reduce project risk and keep the project running smoothly until the final product is successfully delivered. Defining the right tools that will help the team understand the business problem, capture and manage evolving requirements, model user interactions, define the database architecture, and incorporate stakeholder feedback throughout the project life cycle is a key factor for successful implementation.
Business Analyst: Tasked with understanding and representing stakeholder needs, leading and
coordinating the collection and verification of customer and business needs, documenting and
organizing the requirements for a system, and communicating requirements to an entire team.
Other titles or roles that might perform these tasks are systems analysts, project managers, program managers, or product managers. Skills: Rational RequisitePro and WebSphere Business
Integration Modeler
Application Architect: Responsible for creating and maintaining the overall structure and layout
of a software system’s components and their interfaces within and outside the system. Skills:
Rational RequisitePro, Rational Software Modeler and Rational Software Architect.
Systems Architect: Responsible for analyzing the role of the system in the broader enterprise,
defining the requirements the system needs to meet, in terms of services and nonfunctional
requirements, and defining the architecture of the system to meet the requirements. Skills:
Rational RequisitePro, Rational Software Modeler.
I recommend using Rational products, which provide tools for architecture, design modeling, construction, model-driven development, architecting, rapid application development (RAD), component testing, and runtime analysis activities. These tools help developers maximize their productivity when building business applications, software products, and systems.
The following products, targeted specifically at design and construction activities, will be analyzed and used:
Rational Software Architect: Helps developers create applications for the Java platform or in C++, leveraging model-driven development with UML and unifying all aspects of software application architecture.
Rational Software Modeler: UML-based visual modeling and design tool for architects,
systems analysts, and designers who need to ensure that their specifications, architecture,
and designs are clearly defined and communicated to their stakeholders.
Process Management
I recommend the use of the Rational Unified Process, or RUP Methodology. RUP is a software
development process platform based on proven best practices that are configurable to meet
projects’ needs. The comprehensive IT methods and planning and estimation tools of Rational
SUMMIT Ascendant complement the proven RUP guidance for developing quality software.
Rational Suite is a comprehensive solution that includes the Team Unifying Platform
capabilities, plus visual modeling, code-generation, and runtime analysis capabilities. This
solution can help to better plan, manage, and measure IT and development projects across the
enterprise.
Configuration Management
Regulatory compliance, standards enforcement, and IT governance requirements heighten the need for a robust software configuration management process. As a result, a comprehensive, integrated software configuration management solution that streamlines and automates change across the application life cycle is a must for the success of this project.
Quality Management
Building quality into an application involves an iterative process and a set of tools to help team
members automate error-prone aspects of their work, freeing them to focus on creativity and
value. I recommend the use of tools that address the needs of building the required business application.
The following products are examples of recommended automated system testing tools:
Rational Functional Tester: An advanced, automated functional and regression testing tool for
testers and GUI developers who need superior control for testing Java, VS.NET and Web-based
applications.
Rational Performance Tester: A performance test creation, execution and analysis tool for
teams validating the scalability and reliability of complex e-business applications before
deployment.
Rational Robot: A testing tool for centralized QA teams who want to automate the functional
and performance testing of applications based on a variety of client/server GUI technologies.
Rational Team Unifying Platform: Integrates all the testing activities for one application with
centralized test management, defect tracking, and version control.
See the previous step: SOA Migration Step 2: Enterprise Service Bus (ESB) Evaluation
See next step: SOA Migration Step 4: Incorporation of WebSphere Business Integration Server
into the SOA under the use of ESB
In the diagram shown below, the following products are recommended for implementing an ESB
as part of a service-oriented architecture:
WebSphere MQ
Web Services Gateway
WebSphere Business Integration Event Broker
WebSphere Business Integration Message Broker
WebSphere Business Integration implements both Process Services and ESB functionality. For
example, in WebSphere Business Integration Foundation there is the inclusion of WebSphere
MQ, WebSphere Business Integration Event Broker, and the Web Services Gateway.
It is worth bearing in mind that for certain solutions the ESB capabilities embedded in
WebSphere Business Integration Server Foundation will prove sufficient for required purposes.
These capabilities will be coupled with process integration functionality in an SOA.
There is an important architectural principle that must be applied when using a common technology for multiple architectural components, in this case the ESB and Process Services components. There must be a clean interface between the ESB mediations and the process flows defined, for example a WSDL definition. Therefore, both process flows and mediation flows for the ESB will exist. Each will be independent of the other, accessed only through defined interfaces. If a clean interface is not architected and implemented between the components, then the benefits of using an ESB in an SOA will not be achieved, maintenance costs will soar, and flexibility will be lost.
Adopting this principle in WebSphere Business Integration Foundation requires careful business
process design. Business processes built for WebSphere Business Integration Foundation will
use the Business Process Execution Language for Web Services (BPEL4WS) open standard.
BPEL4WS can be used to model business process flow logic.
The design strategy for a SOA does not start from the “bottom-up” as is often the case with a
Web services-based approach. SOA is more strategic and business-aligned. Web services are a
tactical implementation of SOA. A number of important activities and decisions exist that
influence not just integration architecture but enterprise and application architectures as well.
They include the activities from the two key views of the consumer and provider described in
Figure 1 below.
Figure 1 below shows the activities that are typically conducted by each of the roles of provider
and consumer. Note that the provider’s activities are a superset of the consumer’s activities (for
example, the provider would also be concerned with service identification, categorization, and so
forth).
In many cases, the differentiation of the roles comes from the fact that consumers specify the services they want, often search for them, and, once they are convinced of the match between the specification of the service they are looking for and that provided by a service provider, bind to and invoke the service as needed. The provider then needs to publish the services it is willing to support, both in terms of functionality and, most importantly, in terms of the QoS that consumers will require.
Figure 1: Activities of service-oriented modeling
The activities described above can be depicted to flow within the service-oriented modeling and
architecture method, as shown in Figure 2 below.
The process of service-oriented modeling and architecture consists of three general steps:
Identification,
Specification and
Realization of Services, Components and Flows (typically, choreography of services).
Service Identification
This process consists of a combination of top-down, bottom-up, and middle-out techniques of
domain decomposition, existing asset analysis, and goal-service modeling. In the top-down view,
a blueprint of business use cases provides the specification for business services. This top-down
process is often referred to as domain decomposition, which consists of the decomposition of the
business domain into its functional areas and subsystems, including its flow or process
decomposition into processes, sub-processes, and high-level business use cases. These use cases
often are very good candidates for business services exposed at the edge of the enterprise, or for
those used within the boundaries of the enterprise across lines of business.
In the bottom-up portion of the process or existing system analysis, existing systems are
analyzed and selected as viable candidates for providing lower cost solutions to the
implementation of underlying service functionality that supports the business process. In this
process, you analyze and leverage APIs, transactions, and modules from legacy and packaged
applications. In some cases, componentization of the legacy systems is needed to re-modularize
the existing assets for supporting service functionality.
The middle-out view consists of goal-service modeling to validate and unearth other services not
captured by either top-down or bottom-up service identification approaches. It ties services to
goals and sub-goals, key performance indicators, and metrics.
Subsystem Analysis
This activity takes the subsystems found above during domain decomposition and specifies the
interdependencies and flow between the subsystems. It also puts the use cases identified during
domain decomposition as exposed services on the subsystem interface. The analysis of the
subsystem consists of creating object models to represent the internal workings and designs of
the containing subsystems that will expose the services and realize them. The design construct of
“subsystem” will then be realized as an implementation construct of a large-grained component
realizing the services in the following activity.
Component Specification
In the next major activity, the details of the components that implement the services are specified:
Data
Rules
Services
o Configurable profile
o Variations
Messaging and event specifications and management definitions also occur at this step.
Service Allocation
Service allocation consists of assigning services to the subsystems that have been identified so
far. These subsystems have enterprise components that realize their published functionality.
Often you make the simplifying assumption that the subsystem has a one-to-one correspondence
with the enterprise components. Structuring components occurs when you use patterns to
construct enterprise components with a combination of:
Mediators
Façade
Rule objects
Configurable profiles
Factories
Service allocation also consists of assigning the services and the components that realize them to
the layers in the SOA. Allocation of components and services to layers in the SOA is a key task
that requires the documentation and resolution of key architectural decisions that relate not only
to the application architecture but also to the technical operational architecture designed and used
to support the SOA realization at runtime.
Service Realization
This step recognizes that the software that realizes a given service must be selected or custom
built. Other options that are available include integration, transformation, subscription and
outsourcing of parts of the functionality using Web services. In this step we make the decision as
to which legacy system module will be used to realize a given service and which services will be
built from the “ground-up”. Other realization decisions for services include: security,
management, and monitoring of services. Top-down domain decomposition (process modeling and decomposition, variation-oriented analysis, policy and business rules analysis, and domain-specific behavior modeling) is conducted in parallel with a bottom-up analysis of existing legacy assets that are candidates for componentization (modularization) and service exposure.