Notes 1

Chapter 1
Evolution of Computing Models

Introduction to Mainframe architecture:


The mainframes we use today date back to April 7, 1964, with the announcement of the IBM
System/360. The operating system of the System/360 line came to be called MVS™ (Multiple Virtual Storage). Later, IBM
packaged MVS and many of its key subsystems together and called the result OS/390®, which is the
immediate predecessor to z/OS.
Until the 80s, most mainframes used punched cards for input and teleprinters for output; these were
later replaced by CRT (cathode ray tube) terminals. Typical (post-1980) mainframe architecture is depicted
in Figure 1.1. A terminal-based user interface would display screens controlled by the mainframe server
using the Virtual Telecommunications Access Method (VTAM) for entering and viewing information.
VTAM (Virtual Telecommunications Access Method) is an IBM application program interface (API)
for communicating with telecommunication devices and their users. VTAM was the first IBM program to
allow programmers to deal with devices as "logical units" without having to understand the details of line
protocol and device operation. Prior to VTAM, programmers used IBM's Basic Telecommunications Access
Method (BTAM) to communicate with devices that used the binary synchronous (BSC) and start-stop
line protocols.

In IBM terminology, VTAM is access method software allowing application programs to read and
write data to and from external devices. It is called 'virtual' because it was introduced at the time when IBM
was introducing virtual storage by upgrading the operating systems of the System/360 series to virtual
storage versions. VTAM was supposed to be the successor to the older telecommunications access methods,
such as Basic Telecommunications Access Method (BTAM) and Telecommunications Access Method
(TCAM), which were maintained for compatibility reasons.
Originally, VTAM was provided free of charge like most systems software of that time. However,
VTAM 2 was the last version to be freely available. ACF/VTAM (Advanced Communication
Function/Virtual Telecommunications Access Method) was introduced in 1976 and was provided for a
license fee. The major new feature of ACF/VTAM was the Multisystem Networking Facility, which
introduced "implementation of intersystem communication among multiple S/370s."
VTAM has been renamed to be the SNA Services feature of Communications Server for OS/390.
This software package also provides TCP/IP functions. VTAM supports several network protocols,
including SDLC, Token Ring, start-stop, Bisync, local (channel attached) 3270 devices,[5] and later TCP/IP.
VTAM became part of IBM's strategic Systems Network Architecture (SNA) which in turn became part of
the more comprehensive Systems Application Architecture (SAA). Terminals communicated with the
mainframe using the systems network architecture (SNA) protocol, instead of the ubiquitous TCP/IP
protocol of today. While these mainframe computers had limited CPU power by modern standards, their I/O
bandwidth was (and is, to date) extremely generous relative to their CPU power. Consequently, mainframe
applications were built using batch architecture to minimize utilization of the CPU during data entry or
retrieval. Thus, data would be written to disk as soon as it was captured and then processed by scheduled
background programs, in sharp contrast to the complex business logic that gets executed during online
transactions on the web today. In fact, for many years, moving from a batch model to an online one was
considered a major revolution in IT architecture, and large systems migration efforts were undertaken to
achieve this; it is easy to see why: In a batch system, if one deposited money in a bank account it would
usually not show up in the balance until the next day after the end of day batch jobs had run! Further, if there
was incorrect data entry, a number of corrective measures would have to be triggered, rather than the
immediate data validations.
Typically, legacy programs written in COBOL, PL/1, and assembler language use VTAM to
communicate with interactive devices and their users. Programs that use VTAM macro instructions are
generally exchanging text strings (for example, online forms and the user's form input) and the most
common interactive device used with VTAM programs was the 3270 Information Display System.
MVS (Multiple Virtual Storage) is an operating system from IBM that continues to run on many of
IBM's mainframe and large server computers. MVS has been said to be the operating system that keeps the
world going and the same could be said of its successor systems, OS/390 and z/OS. The payroll, accounts
receivable, transaction processing, database management, and other programs critical to the world’s largest
businesses are usually run on an MVS or successor system. Although MVS has often been seen as a monolithic,
centrally controlled information system, IBM has in recent years repositioned it (and successor systems) as a
"large server" in a network-oriented distributed environment, using a 3-tier application model. The follow-on
version of MVS, OS/390, no longer included the "MVS" in its name. Since MVS represents a
certain epoch and culture in the history of computing and since many older MVS systems still operate, the term "MVS"
will probably continue to be used for some time. Since OS/390 also comes with UNIX user and programming
interfaces built in, it can be used as both an MVS system and a UNIX system at the same time.
A more recent evolution of MVS is z/OS, an operating system for IBM's zSeries mainframes.
The Virtual Storage in MVS refers to the use of virtual memory in the operating system. Virtual
storage or memory allows a program to have access to the maximum amount of memory in a system even though this
memory is actually being shared among more than one application program. The operating system translates the
program's virtual address into the real physical memory address where the data is actually located. The Multiple
in MVS indicates that a separate virtual memory is maintained for each of multiple task partitions.
Job Control Language (JCL) is a scripting language used on IBM mainframe operating systems to
instruct the system on how to run a batch job or start a subsystem. There are actually two IBM JCLs: one for
the operating system lineage that begins with DOS/360 and whose latest member is z/VSE; and the other for
the lineage from OS/360 to z/OS. They share some basic syntax rules and a few basic concepts, but are
otherwise very different. In the early mainframe architectures (through the mid/late 80s), application data
was stored either in structured files, or in database systems based on the hierarchical or networked data
model. Typical examples include the hierarchical IMS database from IBM, or the IDMS network database,
managed now by Computer Associates. The relational (RDBMS) model was published and prototyped in the
70s and debuted commercially in the early 80s with IBM's SQL/DS on the VM/CMS operating system.
However, relational databases came into mainstream use only after the mid-80s with the advent of IBM's
DB2 on the mainframe and Oracle's implementation for the emerging UNIX platform.
IMS (Information Management System) is a database and transaction management system that
was first introduced by IBM in 1968. Since then, IMS has gone through many changes in adapting to new
programming tools and environments. IMS is one of two major legacy database and transaction management
subsystems from IBM that run on mainframe MVS (now z/OS) systems. The other is CICS. It is claimed
that, historically, application programs that use either (or both) IMS or CICS services have handled and
continue to handle most of the world's banking, insurance, and order entry transactions. IMS consists of two
major components, the IMS Database Management System (IMS DB) and the IMS Transaction
Management System (IMS TM). In IMS DB, the data is organized into a hierarchy. The data in each level is
dependent on the data in the next higher level. The data is arranged so that its integrity is ensured, and the
storage and retrieval process is optimized. IMS TM controls I/O (input/output) processing, provides
formatting, logging, and recovery of messages, maintains communications security, and oversees the
scheduling and execution of programs. TM uses a messaging mechanism for queuing requests. IMS's original
programming interface was DL/1 (Data Language/1). Today, IMS applications and databases can be
connected to CICS applications and DB2 databases. Java programs can access IMS databases and services.
Customer Information Control System (CICS) is a transaction server that runs primarily on
IBM mainframe systems under z/OS and z/VSE. CICS is middleware designed to support rapid, high-volume
online transaction processing. A CICS transaction is a unit of processing initiated by a single request that
may affect one or more objects. This processing is usually interactive (screen-oriented), but background
may affect one or more objects. This processing is usually interactive (screen-oriented), but background
transactions are possible. CICS provides services that extend or replace the functions of the operating system
and are more efficient than the generalized services in the operating system and simpler for programmers to
use, particularly with respect to communication with diverse terminal devices. Applications developed for
CICS may be written in a variety of programming languages and use CICS-supplied language extensions to
interact with resources such as files, database connections, terminals, or to invoke functions such as web
services. CICS manages the entire transaction such that if for any reason a part of the transaction fails all
recoverable changes can be backed out. CICS is also widely used by many smaller organizations. CICS is
used in bank-teller applications, ATM systems, industrial production control systems, insurance
applications, and many other types of interactive applications.
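
The back-out behaviour described above can be illustrated with a short sketch. This is a hedged, purely illustrative Python example of the idea of a recoverable unit of work (it is not how CICS itself is implemented): each step of a transaction records an undo action, and if any later step fails, all completed steps are backed out.

```python
# Toy sketch of transaction back-out: apply a unit of work step by step and
# undo every completed step if any later step fails. Illustrative only.
def run_transaction(steps):
    """Each step is a (do, undo) pair of callables."""
    done = []
    try:
        for do, undo in steps:
            do()
            done.append(undo)
    except Exception as err:
        for undo in reversed(done):          # back out recoverable changes
            undo()
        print("transaction failed and was backed out:", err)
        return False
    print("transaction committed")
    return True

# Example unit of work: debit one account, then a simulated failure on credit.
balances = {"A": 100, "B": 50}

def debit_a():
    balances["A"] -= 30

def undo_debit_a():
    balances["A"] += 30

def credit_b():
    raise RuntimeError("simulated I/O error")

run_transaction([(debit_a, undo_debit_a), (credit_b, lambda: None)])
print(balances)   # {'A': 100, 'B': 50}: the partial debit was rolled back
```
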
The storage subsystem in mainframes, called the virtual storage access method (VSAM), provided built-in
support for a variety of file access and indexing mechanisms as well as sharing of data between concurrent
users using record-level locking mechanisms. Early file-structure-based data storage, including networked
and hierarchical databases, rarely included support for concurrency control beyond simple locking. The need
for transaction control, i.e., maintaining consistency of a logical unit of work made up of multiple updates,
led to the development of 'transaction-processing monitors' (TP monitors), such as CICS (Customer
Information Control System). CICS leveraged facilities of the VSAM layer and implemented commit and roll-
back protocols to support atomic transactions in a multi-user environment. CICS is still in use in conjunction
with DB2 relational databases on IBM z-series mainframes. At the same time, the need for speed continued
to see the exploitation of so-called 'direct access' methods, where transaction control is left to application
logic.
The term Virtual Storage Access Method (VSAM) applies to both a data set type and the access
method used to manage various user data types. Using VSAM, an enterprise can organize records in a file in
physical sequence (the sequential order that they were entered), logical sequence using a key (for example,
the employee ID number), or by the relative record number on direct access storage devices (DASD).
There are three types of VSAM data sets:
1. Entry Sequenced Data Set (ESDS)
2. Key Sequenced Data Set (KSDS)
3. Relative Record Data Set (RRDS)
VSAM records can be of fixed or variable length. VSAM data sets are briefly described as follows:

Key Sequenced Data Set (KSDS)


This is the most common type of VSAM data set. Each record has one or more key fields and a record can be
retrieved (or inserted) by key value. This provides random access to data. Records are of variable length.
IMS uses KSDSs.
Entry Sequenced Data Set (ESDS)
This form of VSAM keeps records in sequential order. Records can be accessed sequentially. It is used by
IMS, DB2, and z/OS UNIX.
Relative Record Data Set (RRDS)
This VSAM format allows retrieval of records by number; record 1, record 2, and so forth. This provides
random access and assumes the application program has a way to derive the desired record numbers.
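
As a rough illustration of the three organizations, the sketch below models them with ordinary in-memory Python structures (an append-only list for ESDS, a keyed dictionary for KSDS, numbered slots for RRDS). This is only an analogy for how records are addressed; real VSAM data sets live on DASD and are managed by the access method itself, and the class and field names here are invented.

```python
# Illustrative in-memory analogues of the three VSAM data set organizations.
class ESDS:
    """Entry Sequenced Data Set: records kept in arrival order."""
    def __init__(self):
        self._records = []

    def write(self, record):
        self._records.append(record)          # always appended at the end

    def read_sequential(self):
        return iter(self._records)            # sequential retrieval

class KSDS:
    """Key Sequenced Data Set: records retrieved or inserted by key value."""
    def __init__(self):
        self._index = {}                      # stands in for the VSAM index

    def insert(self, key, record):
        self._index[key] = record

    def read_by_key(self, key):
        return self._index[key]               # random access by key

class RRDS:
    """Relative Record Data Set: fixed slots addressed by record number."""
    def __init__(self, slots):
        self._slots = [None] * slots

    def write(self, rrn, record):
        self._slots[rrn - 1] = record         # relative record numbers start at 1

    def read(self, rrn):
        return self._slots[rrn - 1]

# Example: retrieve an employee record by key (KSDS-style access).
employees = KSDS()
employees.insert("E1001", {"name": "A. Rao", "dept": "Payroll"})
print(employees.read_by_key("E1001"))
```
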

CLIENT-SERVER ARCHITECTURE:

The microprocessor revolution of the 80s brought PCs to business desktops as well as homes. At the same
time minicomputers such as the VAX family and RISC-based systems running the UNIX operating system
and supporting the C programming language became available. It was now conceivable to move some data
processing tasks away from expensive mainframes to exploit the seemingly powerful and inexpensive
desktop CPUs. As an added benefit corporate data became available on the same desktop computers that
were beginning to be used for word processing and spreadsheet applications using emerging PC-based
office-productivity tools. In contrast, terminals were difficult to use and typically found only in 'data
processing rooms'. Moreover, relational databases, such as Oracle, became available on minicomputers,
overtaking the relatively lukewarm adoption of DB2 in the mainframe world.

Finally, networking using TCP/IP rapidly became a standard, meaning that networks of PCs and
minicomputers could share data. Corporate data processing rapidly moved to exploit these new technologies.
Figure 1.2 shows the architecture of client-server systems. First, the forms architecture for minicomputer-
based data processing became popular. At first this architecture involved the use of terminals to access
server-side logic in C, mirroring the mainframe architecture; later PC-based forms applications provided
graphical GUIs as opposed to the terminal-based character-oriented CUIs. The GUI forms model was the
first client-server architecture. The forms architecture evolved into the more general client-server
architecture, wherein significant processing logic executes in a client application, such as a desktop PC:
Therefore, the client-server architecture is also referred to as 'fat-client' architecture, as shown in Figure
1.2. The client application (or 'fat client') directly makes calls (using SQL) to the relational database using
networking protocols such as SQL/Net, running over a local area (or even wide area) network using TCP/IP.
Business logic largely resides within the client application code, though some business logic can also be
implemented within the database for faster performance, using 'stored procedures'. The client-server
architecture became hugely popular: Mainframe applications which had been evolving for more than a
decade were rapidly becoming difficult to maintain, and client-server provided a refreshing and seemingly
cheaper alternative to recreating these applications for the new world of desktop computers and smaller
Unix-based servers. Further, by leveraging the computing power on desktop computers to perform
validations and other logic, online systems became possible, a big step forward for a world used to batch
processing. Lastly, graphical user interfaces allowed the development of extremely rich user interfaces,
which added to the feeling of being redeemed from the mainframe world.
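
A minimal sketch of the fat-client pattern follows: user-interface logic, validation and business rules all run in the desktop application, which issues SQL directly against the database. The in-process sqlite3 module stands in here for a networked RDBMS driver of the era (for example, one speaking SQL*Net); the table and column names are invented for the example.

```python
# Minimal fat-client sketch: business logic and validation run on the client,
# which talks to the database directly with SQL. sqlite3 stands in for a
# remote RDBMS connection; a real client would connect over the network.
import sqlite3

def open_connection():
    conn = sqlite3.connect(":memory:")        # stand-in for a remote database
    conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)")
    conn.execute("INSERT INTO accounts VALUES (1, 100.0)")
    return conn

def deposit(conn, account_id, amount):
    # Client-side validation: logic that batch systems deferred to end-of-day
    # jobs now runs immediately on the desktop.
    if amount <= 0:
        raise ValueError("deposit must be positive")
    conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?",
                 (amount, account_id))
    conn.commit()                              # each screen issues several such calls

conn = open_connection()
deposit(conn, 1, 25.0)
print(conn.execute("SELECT balance FROM accounts WHERE id = 1").fetchone())
```
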
In the early to mid-90s, the client-server revolution spawned and drove the success of a host of
application software products, such as SAP R/3, the client-server version of SAP's ERP software for core
manufacturing process automation, which was later extended to other areas of enterprise operations.
Similarly, supply chain management (SCM) products, such as from i2, and customer relationship management
(CRM) products, such as from Siebel, also became popular. With these products, it was conceivable, in principle, to
replace large parts of the functionality deployed on mainframes by client-server systems, at a fraction of the
cost.
However, the client-server architecture soon began to exhibit its limitations as its usage grew beyond
small workgroup applications to the core systems of large organizations: Since processing logic on the
client directly accessed the database layer, client-server applications usually made many requests to the
server while processing a single screen. Each such request was relatively bulky as compared to the terminal-
based model where only the input and final result of a computation were transmitted. In fact, CICS and IMS
even today support changed-data-only modes of terminal images, where only those bytes changed by a user
are transmitted over the network. Such frugal network architectures enabled globally distributed terminals to
connect to a central mainframe even though network bandwidths were far lower than they are today. Thus,
while the client-server model worked fine over a local area network, it created problems when client-server
systems began to be deployed on wide area networks connecting globally distributed offices. As a result,
many organizations were forced to create regional data centers, each replicating the same enterprise
application, albeit with local data. This structure itself led to inefficiencies in managing global
software upgrades, not to mention the additional complications posed by having to upgrade the client
applications on each desktop machine as well.
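
The 'changed-data only' idea mentioned above can be sketched as a simple screen-buffer diff: only the byte positions the user altered are sent back to the host. The screen contents below are invented, and real 3270 data streams are considerably more structured than this.

```python
# Toy illustration of "changed-data only" transmission: diff the old and new
# screen buffers and send only the positions that differ.
def changed_fields(old_screen: bytes, new_screen: bytes):
    """Return (offset, new_byte) pairs for every position that differs."""
    return [(i, b) for i, (a, b) in enumerate(zip(old_screen, new_screen)) if a != b]

old = b"NAME:          AMOUNT: 0000"
new = b"NAME: RAO      AMOUNT: 0250"
delta = changed_fields(old, new)
print(delta)                                   # only a handful of bytes change
print(f"{len(delta)} bytes sent instead of {len(new)}")
```
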
Finally, it also became clear over time that application maintenance was far costlier when user
interface and business logic code was intermixed, as almost always became the case in the fat client-side
applications. Lastly, and in the long run most importantly, the client-server model did not scale;
organizations such as banks and stock exchanges where very high volume processing was the norm could
not be supported by the client-server model.
Thus, the mainframe remained the only means to achieve high-throughput, high-performance
business processing.

CLUSTER COMPUTING:

There are many applications that require high-performance computing. Some Examples:

Numerous scientific and engineering applications:

 Modeling, simulation and analysis of complex systems like climate, galaxies, molecular structures,
nuclear explosions, etc.
 Business and Internet applications such as e-commerce (e.g. Amazon) and Web servers (e.g. Yahoo,
Google), file servers, databases, etc.
 Dedicated parallel computers are very expensive.
 Also, supercomputers are not easily extendible.
 Cost-effective approaches are: Cluster Computing, Grid Computing and Cloud Computing.

Cluster Computing is useful if an application has one or more of the following characteristics:

 Large runtimes
 Real-time constraints
 Large memory usage
 High I/O usage
 Fault tolerance

Introduction
The first inspiration for cluster computing was developed in the 1960s by IBM as an alternative way of linking
large mainframes to provide a more cost-effective form of commercial parallelism [1]. At that time, IBM's
Houston Automatic Spooling Priority (HASP) system and its successor, the Job Entry Subsystem (JES), allowed the
distribution of work to a user-constructed mainframe cluster. IBM still supports clustering of mainframes
through their Parallel Sysplex system, which allows the hardware, operating system, middleware, and
system management software to provide dramatic performance and cost improvements while permitting
large mainframe users to continue to run their existing applications.
However, cluster computing did not gain momentum until the convergence of three important trends
in the 1980s: high-performance microprocessors, high-speed networks, and standard tools for high
performance distributed computing. A possible fourth trend is the increasing need of computing power for
computational science and commercial applications coupled with the high cost and low accessibility of
traditional supercomputers. These four building blocks are also known as killer-microprocessors, killer-
networks, killer-tools, and killer-applications, respectively. The recent advances in these technologies and
their availability as cheap and commodity components are making clusters or networks of computers such as
Personal Computers (PCs), workstations, and Symmetric Multiprocessors (SMPs) an appealing solution
for cost-effective parallel computing. Clusters, built using commodity-off-the-shelf (COTS) hardware
components as well as free, or commonly used, software, are playing a major role in redefining the concept
of supercomputing. And consequently, they have emerged as mainstream parallel and distributed platforms
for high-performance, high-throughput and high-availability computing.

What is Cluster Computing?

 High performance computing (HPC) is the main motivator for Cluster Computing.


 It is an alternative to symmetric multiprocessing to provide high performance and availability.
 This has made clustering one of the hottest new areas in computer system design.

A cluster is defined as:

 Co-ordinated use of interconnected autonomous computers in a machine room.


 Having a single system image spanning all its nodes.
 Working as an integrated collection of resources (a unified computing resource).
 A collection of stand-alone workstations or PCs that are interconnected by a high-speed network.

Clusters: Technological Push

 Today's PCs also have remarkably high computing power.


 PCs running Linux provide the best performance for the price at the moment, providing good CPU
speed with cheap memory and disk space.
 In the last few years, networking capabilities have also improved phenomenally:
 It is now possible to connect clusters of workstations with latencies and bandwidths comparable to
tightly coupled machines.
 Use COTS – commodity-off-the-shelf components.

Clusters started to take off in the 90s:

 Clusters of IBM, Sun, and DEC workstations connected by 10 Mb Ethernet LANs, HP clusters, etc.

Components of a Cluster

A Cluster is made of several interconnected computers:

 Gives a single system image (SSI).


 Low-cost alternative to expensive supercomputers.

SSI makes a cluster appear like a single machine to the user.

A cluster consists of:

 Stand-alone machines with storage


 A fast interconnection network
 Low-latency communication protocols
 Software to give a single system image: cluster middleware
 Programming tools

Why use clusters rather than single large machines?

 Price/Performance: The reason for the growth in use of clusters is that they have significantly
reduced the cost of processing power.

 Availability: Single points of failure can be eliminated; if any one system component goes down, the
system as a whole stays highly available.

 Scalability: HPC clusters can grow in overall capacity because processors and nodes can be added
as demand increases.

Cluster Categorization

High-availability

 High-availability clusters (also known as Failover Clusters) are implemented primarily for the purpose
of improving the availability of services that the cluster provides.

They operate by having redundant nodes, which are then used to provide service when system
components fail.

The most common size for an HA cluster is two nodes, which is the minimum requirement to
provide redundancy.

HA cluster implementations attempt to use redundancy of cluster components to eliminate single points
of failure.

There are commercial implementations of High-Availability clusters for many operating systems. The
Linux-HA project is one commonly used free software HA package for the Linux operating system.
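
A toy sketch of the failover idea follows, assuming a simple two-node, heartbeat-based design; the node names and timeout are invented, and this is not how any particular HA package such as Linux-HA is implemented.

```python
# Toy two-node failover sketch: the standby takes over when the active node
# misses heartbeats for longer than the timeout. Illustrative only.
import time

class Node:
    def __init__(self, name):
        self.name = name
        self.last_heartbeat = time.time()

    def heartbeat(self):
        self.last_heartbeat = time.time()

def monitor(active, standby, timeout=3.0):
    """Return the node that should be serving requests right now."""
    if time.time() - active.last_heartbeat > timeout:
        print(f"{active.name} silent for >{timeout}s: failing over to {standby.name}")
        return standby                        # redundant node takes over the service
    return active

primary, backup = Node("node-a"), Node("node-b")
primary.heartbeat()
serving = monitor(primary, backup)
print("serving node:", serving.name)
```
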

Load-balancing

Load-balancing is when multiple computers are linked together to share computational workload or
function as a single virtual computer.

Logically, from the user side, they are multiple machines, but they function as a single virtual machine.

Requests initiated from the user are managed by, and distributed among, all the
standalone computers that form the cluster.

This results in balanced computational work among different machines, improving the performance of
the cluster system.
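
A minimal sketch of the load-balancing idea, assuming a simple round-robin policy over invented host names:

```python
# Round-robin dispatch: requests arrive at one logical address and are spread
# across the stand-alone machines behind it. Round-robin is only one policy.
import itertools

backends = ["worker-1", "worker-2", "worker-3"]
next_backend = itertools.cycle(backends)       # rotate through the cluster nodes

def dispatch(request):
    node = next(next_backend)
    print(f"request {request!r} -> {node}")
    return node

for r in range(6):                             # six requests spread over three nodes
    dispatch(f"GET /page/{r}")
```
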

High- Performance

 Started in 1994.

 Donald Becker of NASA assembled this cluster.

 Also called a Beowulf cluster.

 Applications include data mining, simulations, parallel processing, weather modeling, etc.

Cluster Classification

 Open Cluster – All nodes can be seen from outside, and hence they need more IPs and cause more
security concerns. But they are more flexible and are used for internet/web/information server tasks.

 Closed Cluster – They hide most of the cluster behind the gateway node. Consequently, they need
fewer IP addresses and provide better security. They are good for computing tasks.

Benefits

A clustered system offers many valuable benefits to a modern high performance computing infrastructure
including:

High processing capacity — by combining the power of multiple servers, clustered systems can tackle
large and complex workloads. For example, one can reduce the time for key engineering simulation jobs
from days to hours, thereby shortening the time-to-market for a new product.

Resource consolidation — A single cluster can accommodate multiple workloads and can vary the
processing power assigned to each workload as required; this makes clusters ideal for resource consolidation
and optimizes resource utilization.

Optimal use of resources — Individual systems typically handle a single workload and must be sized to
accommodate expected peak demands for that workload; this means they typically run well below capacity
but can still "run out" if demand exceeds capacity—even if other systems are idle. Because clustered
systems share enormous processing power across multiple workloads, they can handle a demand peak—
even an unexpected one—by temporarily increasing the share of processing for that workload, thereby
taking advantage of unused capacity.

Geographic server consolidation — Some organizations may share processing power around the world, for
example by diverting daytime US transaction processing to systems in Japan that are relatively idle overnight.

24 x 7 availability with failover protection — because processing is spread across multiple machines,
clustered systems are highly fault-tolerant: if one system fails, the others keep working.

Disaster recovery — Clusters can span multiple geographic sites so even if an entire site falls victim to a
power failure or other disaster, the remote machines keep working.

Horizontal and vertical scalability without downtime — as the business demands grow, additional
processing power can be added to the cluster without interrupting operations.

Centralized system management — Many available tools enable deployment, maintenance and monitoring
of large, distributed clusters from a single point of control.

Disadvantages:

Administration Complexity:

 Administering an N-node cluster is close to administering N big machines.


 Administering an N-node multiprocessor is close to administering 1 big machine only.
 Higher cost of ownership.
 Nodes are connected using an I/O bus (lower bandwidth and higher latency).
 An N-machine cluster has N independent memories and N copies of the OS.
 Large computers have small volumes:
 Development cost must be amortized over few systems.
 Results in higher cost.
 Administration complexity can be mitigated:
 Construct from SMPs.
 Keep storage outside of clusters:
 Use a Storage Area Network (SAN).

Cluster Applications

 Google Search Engine.
 Earthquake Simulation.
 Weather Forecasting.

GRID COMPUTING:
Grid is an infrastructure that involves the integrated and collaborative use of computers, networks,
databases and scientific instruments owned and managed by multiple organizations. Grid applications often
involve large amounts of data and/or computing resources that require secure resource sharing across
organizational boundaries.
Grid computing is a form of distributed computing whereby a "super and virtual computer" is
composed of a cluster of networked, loosely coupled computers, acting in concert to perform very large
tasks. Grid computing (Foster and Kesselman, 1999) is a growing technology that facilitates the execution
of large-scale, resource-intensive applications on geographically distributed computing resources. It facilitates
flexible, secure, coordinated large-scale resource sharing among dynamic collections of individuals,
institutions, and resources, and enables communities ("virtual organizations") to share geographically distributed
resources as they pursue common goals.
A Grid is a shared collection of reliable (tightly coupled cluster) and unreliable (loosely coupled machines)
resources, together with interactively communicating researchers of different virtual organizations (doctors,
biologists, physicists). The Grid system controls and coordinates the integrity of the Grid by balancing the usage
of reliable and unreliable resources among its participants, providing better quality of service.
Grid computing is a method of harnessing the power of many computers in a network to solve
problems requiring a large number of processing cycles and involving huge amounts of data. Most
organizations today deploy firewalls around their computer networks to protect their sensitive proprietary
data. But the central idea of grid computing, to enable resource sharing, makes mechanisms such as firewalls
difficult to use.

Grid Topologies

Intra Grid/ Cluster Grids


– Local grid within an organization
– Trust based on personal contracts
Extra Grid/ Enterprise Grids
– Resources of a consortium of organizations connected through a (Virtual) Private Network
– Trust based on business-to-business contracts
Inter Grid/ Global Grids
– Global sharing of resources through the internet
– Trust based on certification

Types Of Grids

Computational Grid: "A computational grid is a hardware and software infrastructure that provides
dependable, consistent, pervasive, and inexpensive access to high-end computational capabilities." It provides
users with compute power for solving jobs, and offers mechanisms that can intelligently and
transparently select computing resources capable of running users' jobs, while allowing users to
independently manage the computing resources.

Example: Science Grid (US Department of Energy)



Data Grid:-A data grid is a grid computing system that deals with data — the controlled sharing and
management of large amounts of distributed data. Data Grid is the storage component of a grid
environment. Scientific and engineering applications require access to large amounts of data, and often this
data is widely distributed. A data grid provides seamless access to the local or remote data required to
complete compute intensive calculations.

Example :

Biomedical Informatics Research Network (BIRN),


Southern California Earthquake Center (SCEC).

Two Key Grid Computing Groups


The Globus Alliance
 Composed of people from:
Argonne National Labs, University of Chicago, University of Southern California Information Sciences
Institute, University of Edinburgh and others.
 OGSA/I standards initially proposed by the Globus Group

The Global Grid Forum


 Heavy involvement of academic groups and industry
 (e.g. IBM Grid Computing, HP, United Devices, Oracle, UK e-Science Programme, US
DOE, US NSF, Indiana University, and many others)
 Process
 Meets three times annually; solicits involvement from industry, research groups, and
academics
As there are a large number of projects around the world working on developing Grids for different purposes
at different scales, several definitions of Grid abound. The Globus Project defines a Grid as "an
infrastructure that enables the integrated, collaborative use of high-end computers, networks,
databases, and scientific instruments owned and managed by multiple organizations." Another utility-
notion-based Grid definition, put forward by the Gridbus Project, is: "a Grid is a type of parallel and
distributed system that enables the sharing, selection, and aggregation of geographically distributed
autonomous resources dynamically at runtime depending on their availability, capability,
performance, cost, and users' quality-of-service requirements."
The development of the Grid infrastructure, both hardware and software, has become the focus of a
large community of researchers and developers in both academia and industry. The major problems being
addressed by Grid developments are the social problems involved in collaborative research.

A typical view of Grid environment:

A high-level view of activities involved within a seamless and scalable Grid environment is shown in
Figure 2. Grid resources are registered within one or more Grid information services. The end users submit
their application requirements to the Grid resource broker, which then discovers suitable resources by
querying the Information services, schedules the application jobs for execution on these resources and then
monitors their processing until they are completed. A more complex scenario would involve more
requirements and therefore, Grid environments involve services such as security, information, directory,
resource allocation, application development, execution management, resource aggregation, and scheduling.
Figure 3 shows the hardware and software stack within a typical Grid architecture. It consists of four
layers: fabric, core middleware, user-level middleware, and applications and portals layers.

Grid Fabric layer consists of distributed resources such as computers, networks, storage devices and
scientific instruments. The computational resources represent multiple architectures such as clusters,
supercomputers, servers and ordinary PCs which run a variety of operating systems (such as UNIX variants
or Windows). Scientific instruments such as telescope and sensor networks provide real-time data that can
be transmitted directly to computational sites or are stored in a database.

Core Grid middleware offers services such as remote process management, co-allocation of resources,
storage access, information registration and discovery, security, and aspects of Quality of Service (QoS)
such as resource reservation and trading. These services abstract the complexity and heterogeneity of the
fabric level by providing a consistent method for accessing distributed resources.

User-level Grid middleware utilizes the interfaces provided by the low-level middleware to provide higher
level abstractions and services. These include application development environments, programming tools
and resource brokers for managing resources and scheduling application tasks for execution on global
resources.

Grid applications and portals are typically developed using Grid-enabled programming environments and
interfaces and brokering and scheduling services provided by user-level middleware. An example
application, such as parameter simulation or a grand-challenge problem, would require computational
power, access to remote datasets, and may need to interact with scientific instruments. Grid portals offer
Web-enabled application services, where users can submit and collect results for their jobs on remote
resources through the Web.

Operational Flow from the User's Perspective


1. The users compose their application as a distributed application (e.g., parameter sweep) using visual
application development tools.
2. The users specify their analysis and quality-of-service requirements and submit them to the Grid
resource broker.
3. The Grid resource broker performs resource discovery and determines resource characteristics using the Grid
information service.
4. The broker identifies resource service prices by querying the Grid market directory.
5. The broker identifies the list of data sources or replicas and selects the optimal ones.
6. The broker also identifies the list of computational resources that provide the required application
services.
7. The broker ensures that the user has the necessary credit or authorized share to utilize resources.
8. The broker scheduler maps and deploys data analysis jobs on resources that meet user quality-of-service
requirements.
9. The broker agent on a resource executes the job and returns results.

10. The broker collates the results and passes them to the user.


11. The metering system charges the user by passing the resource usage information to the accounting
system.
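
The broker steps above can be condensed into a small illustrative sketch: discover resources, filter by the user's requirements, pick the cheapest candidate, and dispatch the job. All resource names, prices and the QoS field are invented for the example.

```python
# Illustrative resource-broker loop: discovery, QoS filtering, price-based
# selection, and scheduling of a job onto the chosen resource.
resources = [
    {"name": "cluster-a", "cpus": 64,  "price": 0.10},
    {"name": "cluster-b", "cpus": 256, "price": 0.25},
    {"name": "pc-pool",   "cpus": 16,  "price": 0.02},
]

def broker(job):
    # Discovery plus price lookup (steps 3-4), reduced to a list comprehension.
    candidates = [r for r in resources if r["cpus"] >= job["min_cpus"]]
    if not candidates:
        raise RuntimeError("no resource satisfies the QoS requirement")
    # Scheduling (step 8): pick the resource that meets requirements at least cost.
    chosen = min(candidates, key=lambda r: r["price"])
    print(f"job {job['id']} scheduled on {chosen['name']}")
    return chosen

broker({"id": "sweep-42", "min_cpus": 32})
```
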

Benefits
 Exploit underutilized resources
 Resource load balancing
 Virtualize resources across an enterprise
 Data Grids, Compute Grids
 Enable collaboration for virtual organizations
Business benefits
 Improve efficiency by improving computational capabilities
 Bring together not only IT resources but also people.
 Create flexible, resilient operational infrastructures
 Address rapid fluctuations in customer demands.
Technology benefits
 Federate data and distribute it globally.
 Support large multi-disciplinary collaboration across organizations and businesses.
 Enable recovery and failover
 Ability to run large-scale applications comprising thousands of computers, for a wide range of
applications.
 Reduces signal latency – the delay that builds up as data are transmitted over the Internet.

Disadvantages of Grid Computing


 Resource sharing is further complicated when the grid is introduced as a solution for utility computing,
where commercial applications and resources become available as shareable and on-demand
resources.
 The concept of commercial, on-demand, shareable resources adds new, more difficult challenges to the already
complicated grid problem list, including service level features, accounting, usage metering, flexible
pricing, federated security, scalability, and open-ended integration.
 Some applications may need to be tweaked to take full advantage of the new model.

Parallel Computing and Distributed Computing


The term parallel computing refers to a model in which the computation is divided among several processors
sharing the same memory. The architecture of a parallel computing system is often characterized by the
homogeneity of components: each processor is of the same type and it has the same capability as the others.
The shared memory has a single address space, which is accessible to all the processors. Parallel programs
are then broken down into several units of execution that can be allocated to different processors and can
communicate with each other by means of the shared memory. Originally, parallel systems included
only those architectures that featured multiple processors sharing the same physical memory and that were
considered a single computer. Over time, these restrictions have been relaxed, and parallel systems now
include all architectures that are based on the concept of shared memory, whether this is physically present
or created with the support of libraries, specific hardware, and a highly efficient networking infrastructure.
For example, a cluster of which the nodes are connected through an InfiniBand network and configured with
a distributed shared memory system can be considered a parallel system. The term distributed computing
encompasses any architecture or system that allows the computation to be broken down into units and
executed concurrently on different computing elements, whether these are processors on different nodes,
processors on the same computer, or cores within the same processor. Therefore, distributed computing
includes a wider range of systems and applications than parallel computing and is often considered a more
general term. Even though it is not a rule, the term distributed often implies that the locations of the
computing elements are not the same and such elements might be heterogeneous in terms of hardware and
software features.

Elements of parallel computing

It is now clear that silicon-based processor chips are reaching their physical limits. Processing speed is
constrained by the speed of light, and the density of transistors packaged in a processor is constrained by
thermodynamic limitations. A viable solution to overcome this limitation is to connect multiple processors
working in coordination with each other to solve "Grand Challenge" problems. The first steps in this direction led
to the development of parallel computing, which encompasses techniques, architectures, and systems for
performing multiple activities in parallel. As we already discussed, the term parallel computing has blurred
its edges with the term distributed computing and is often used in place of the latter term. In this section, we
refer to its proper characterization, which involves the introduction of parallelism within a single computer
by coordinating the activity of multiple processors together.

What is parallel processing?

Processing of multiple tasks simultaneously on multiple processors is called parallel processing. The parallel
program consists of multiple active processes (tasks) simultaneously solving a given problem. A given task
is divided into multiple subtasks using a divide-and-conquer technique, and each subtask is processed on a
different central processing unit (CPU). Programming on a multiprocessor system using the divide-and-
conquer technique is called parallel programming. Many applications today require more computing power
than a traditional sequential computer can offer. Parallel processing provides a cost-effective solution to this
problem by increasing the number of CPUs in a computer and by adding an efficient communication
system between them. The workload can then be shared between different processors. This setup results in
higher computing power and performance than a single-processor system offers. The development of parallel
processing is being influenced by many factors. The prominent among them include the following:

• Computational requirements are ever increasing in the areas of both scientific and business computing.
The technical computing problems, which require high-speed computational power, are related to life
sciences, aerospace, geographical information systems, mechanical design and analysis, and the like.

• Sequential architectures are reaching physical limitations as they are constrained by the speed of light and
thermodynamics laws. The speed at which sequential CPUs can operate is reaching saturation point (no
more vertical growth), and hence an alternative way to get high computational speed is to connect multiple
CPUs (opportunity for horizontal growth).

• Hardware improvements in pipelining, superscalar execution, and the like are non-scalable and require sophisticated
compiler technology. Developing such compiler technology is a difficult task.
• Vector processing works well for certain kinds of problems. It is suitable mostly for scientific problems
(involving lots of matrix operations) and graphical processing. It is not useful for other areas, such as
databases.
• The technology of parallel processing is mature and can be exploited commercially; there is already
significant R&D work on development tools and environments.
• Significant development in networking technology is paving the way for heterogeneous computing.

Hardware architectures for parallel processing

The core elements of parallel processing are CPUs. Based on the number of instruction and data streams that
can be processed simultaneously, computing systems are classified into the following four categories:

• Single-instruction, single-data (SISD) systems

• Single-instruction, multiple-data (SIMD) systems

• Multiple-instruction, single-data (MISD) systems

• Multiple-instruction, multiple-data (MIMD) systems



Single-instruction, single-data (SISD) systems

An SISD computing system is a uniprocessor machine capable of executing a single instruction, which
operates on a single data stream (see Figure 2.2). In SISD, machine instructions are processed sequentially;
hence computers adopting this model are popularly called sequential computers. Most conventional
computers are built using the SISD model. All the instructions and data to be processed have to be stored in
primary memory. The speed of the processing element in the SISD model is limited by the rate at which the
computer can transfer information internally. Dominant representative SISD systems are the IBM PC, Macintosh,
and workstations.

Single-instruction, multiple-data (SIMD) systems

An SIMD computing system is a multiprocessor machine capable of executing the same instruction on all
the CPUs but operating on different data streams (see Figure 2.3). Machines based on an SIMD model are
well suited to scientific computing since they involve lots of vector and matrix operations. For instance,
statements such as Ci = Ai * Bi can be passed to all the processing elements (PEs); organized data elements of
vectors A and B can be divided into multiple sets (N-sets for N PE systems); and each PE can process one
dataset. Dominant representative SIMD systems are Cray's vector processing machine and Thinking
Machines' Connection Machine (CM).
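
The Ci = Ai * Bi example can be illustrated with NumPy (assumed installed), whose vectorized operations apply one operation across whole arrays in the spirit of SIMD processing:

```python
# SIMD-style data parallelism: one multiply is applied across whole data
# streams. NumPy's vectorized arithmetic stands in for the per-PE hardware.
import numpy as np

A = np.array([1.0, 2.0, 3.0, 4.0])
B = np.array([10.0, 20.0, 30.0, 40.0])

C = A * B          # the "single instruction" acting on every element pair
print(C)           # [ 10.  40.  90. 160.]
```
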

Multiple-instruction, single-data (MISD) systems

An MISD computing system is a multiprocessor machine capable of executing different instructions on


different PEs but all of them operating on the same dataset (see Figure 2.4). For instance, different
statements can perform different operations on the same data set. Machines built using the MISD model are not useful in
most of the applications; a few machines are built, but none of them are available commercially. They
became more of an intellectual exercise than a practical configuration.

Multiple-instruction, multiple-data (MIMD) systems

An MIMD computing system is a multiprocessor machine capable of executing multiple instructions on


multiple datasets (see Figure 2.5). Each PE in the MIMD model has separate instruction and data streams;
hence machines built using this model are well suited to any kind of application. Unlike SIMD and MISD
machines, PEs in MIMD machines work asynchronously. MIMD machines are broadly categorized into
shared-memory MIMD and distributed-memory MIMD based on the way PEs are coupled to the main
memory.

Shared memory MIMD machines

In the shared memory MIMD model, all the PEs are connected
to a single global memory and they all have access to it (see Figure 2.6). Systems based on this model are
also called tightly coupled multiprocessor systems. The communication between PEs in this model takes
place through the shared memory; modification of the data stored in the global memory by one PE is visible
to all other PEs. Dominant representative shared memory MIMD systems are Silicon Graphics machines and
Sun/IBM's SMP (Symmetric Multi-Processing).

Distributed memory MIMD machines

In the distributed memory MIMD model, all PEs have a local memory. Systems based on this model are also
called loosely coupled multiprocessor systems. The communication between PEs in this model takes place
through the interconnection network (the inter-process communication channel, or IPC). The network
connecting PEs can be configured to tree, mesh, cube, and so on. Each PE operates asynchronously, and if
communication/synchronization among tasks is necessary, they can do so by exchanging messages between
them. The shared-memory MIMD architecture is easier to program but is less tolerant to failures and harder
to extend with respect to the distributed memory MIMD model. Failures in a shared-memory MIMD affect
the entire system, whereas this is not the case of the distributed model, in which each of the PEs can be
easily isolated. Moreover, shared memory MIMD architectures are less likely to scale because the addition
of more PEs leads to memory contention. This is a situation that does not happen in the case of distributed
memory, in which each PE has its own memory. As a result, distributed memory MIMD architectures are
the most popular today.
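
A small sketch of the loosely coupled, message-passing style described above, using Python's multiprocessing module: each process has its own private memory, and data moves only as explicit messages through queues.

```python
# Distributed-memory-style cooperation: no shared variables, only messages.
from multiprocessing import Process, Queue

def worker(inbox, outbox):
    chunk = inbox.get()                    # receive a message with the work
    outbox.put(sum(chunk))                 # send the partial result back

if __name__ == "__main__":
    inbox, outbox = Queue(), Queue()
    p = Process(target=worker, args=(inbox, outbox))
    p.start()
    inbox.put([1, 2, 3, 4])                # data travels as a message, not shared memory
    print("partial sum from worker:", outbox.get())
    p.join()
```
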

Approaches to parallel programming

A sequential program is one that runs on a single processor and has a single line of control. To make many
processors collectively work on a single program, the program must be divided into smaller independent
chunks so that each processor can work on separate chunks of the problem. The program decomposed in
this way is a parallel program. A wide variety of parallel programming approaches are available. The most
prominent among them are the following:

• Data parallelism

• Process parallelism

• Farmer-and-worker model

These three models are all suitable for task-level parallelism. In the case of data parallelism, the divide-and-
conquer technique is used to split data into multiple sets, and each data set is processed on different PEs
using the same instruction. This approach is highly suitable to processing on machines based on the SIMD
model. In the case of process parallelism, a given operation has multiple (but distinct) activities that can be
processed on multiple processors. In the case of the farmer-and-worker model, a job distribution approach is
used: one processor is configured as master and all other remaining PEs are designated as slaves; the master
assigns jobs to slave PEs and, on completion, they inform the master, which in turn collects results. These
approaches can be utilized in different levels of parallelism.
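
A brief sketch of the farmer-and-worker model using a process pool: the master (farmer) splits the data, farms the pieces out to worker processes, and gathers the partial results. The splitting scheme and the worker function are invented for the example.

```python
# Farmer-and-worker (master/slave) sketch: split, farm out, gather.
from multiprocessing import Pool

def work(piece):
    return sum(x * x for x in piece)       # each worker handles one subtask

if __name__ == "__main__":
    data = list(range(1_000))
    pieces = [data[i::4] for i in range(4)]        # divide-and-conquer split
    with Pool(processes=4) as farmer:
        partials = farmer.map(work, pieces)        # farm out, then gather
    print("total:", sum(partials))
```
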

Elements of distributed computing

In the previous section, we discussed techniques and architectures that allow introduction of parallelism within
in a single machine or system and how parallelism operates at different levels of the computing stack. In this
section, we extend these concepts and explore how multiple activities can be performed by leveraging
systems composed of multiple heterogeneous machines and systems. We discuss what is generally referred
to as distributed computing and more precisely introduce the most common guidelines and patterns for
implementing distributed computing systems from the perspective of the software designer.

General concepts and definitions

Distributed computing studies the models, architectures, and algorithms used for building and managing
distributed systems. As a general definition of the term distributed system, we use the one proposed by
Tanenbaum et al. [1]:

A distributed system is a collection of independent computers that appears to its users as a single coherent
system. This definition is general enough to include various types of distributed computing systems that are
especially focused on unified usage and aggregation of distributed resources. A distributed system is one in
which components located at networked computers communicate and coordinate their actions only by
passing messages. As specified in this definition, the components of a distributed system communicate with
some sort of message passing. This is a term that encompasses several communication models.

Components of a distributed system

A distributed system is the result of the interaction of several components that
traverse the entire computing stack from hardware to software. It emerges from the collaboration of several
elements that—by working together—give users the illusion of a single coherent system. Figure 2.10
provides an overview of the different layers that are involved in providing the services of a distributed
system.

At the very bottom layer, computer and network hardware constitute the physical infrastructure; these
components are directly managed by the operating system, which provides the basic services for inter
process communication (IPC), process scheduling and management, and resource management in terms of
file system and local devices. Taken together these two layers become the platform on top of which
specialized software is deployed to turn a set of networked computers into a distributed system. The use of
well-known standards at the operating system level and even more at the hardware and network levels
allows easy harnessing of heterogeneous components and their organization into a coherent and uniform
system. For example, network connectivity between different devices is controlled by standards, which
allow them to interact seamlessly. At the operating system level, IPC services are implemented on top of
standardized communication protocols such as Transmission Control Protocol/Internet Protocol (TCP/IP), User
Datagram Protocol (UDP), or others. The middleware layer leverages such services to build a uniform
environment for the development and deployment of distributed applications. By relying on the services
offered by the operating system, the middleware develops its own protocols, data formats, and programming
language or frameworks for the development of distributed applications. All of them constitute a uniform
interface to distributed application developers that is completely independent from the underlying operating
system and hides all the heterogeneities of the bottom layers. The top of the distributed system stack is
represented by the applications and services designed and developed to use the middleware. These can serve
several purposes and often expose their
features in the form of graphical user interfaces (GUIs) accessible locally or through the Internet via a Web
browser. For example, in the case of a cloud computing system, the use of Web technologies is strongly
preferred, not only to interface distributed applications with the end user but also to provide platform
services aimed at building distributed systems. A very good example is constituted by Infrastructure-as-a-
Service (IaaS) providers such as Amazon Web Services (AWS), which provide facilities for creating virtual
machines, organizing them together into a cluster, and deploying applications and systems on top. Figure
2.11 shows an example of how the general reference architecture of a distributed system is contextualized in
the case of a cloud computing system.

Evolution of sharing on the Internet



The Emergence of Cloud Computing

The emergence of any technology and its wider acceptance is the result of a series of developments that
precede it. The invention of a new technology may not be accepted immediately by the larger society - either
business or otherwise. Before the advent and acceptance of any new technology there is a period of lead up
where various concepts and parts of the idea exist in different forms and expressions, followed by a sudden
boom when someone eventually discovers the right combination, and the concept becomes a concrete reality,
widely and inevitably accepted to an unprecedented degree and depth.
1. Mainframe computing

Mainframes provided computing to business users for the first time. A mainframe computer is housed in
a computer center, and a unit of work to be carried out on it is called a job. Jobs are prepared and submitted to the
computer, and the outputs produced by processing are collected. Jobs are grouped into batches, and processing is
carried out daily, weekly or monthly depending on the nature of the job. For instance, payroll applications run
weekly, whereas accounts payable and receivable jobs are submitted on a monthly basis.
Advantages
1. Mainframe computing brought computing to the business domain for the first time. It helped businesses to carry
out mundane and routine jobs such as payroll, accounts, and inventory, thus sparing employees from
tedious jobs.
Disadvantages
1. Mainframe computing represented a centralized model of computing. It was available in one location, and
anyone who needed it had to go to the computer center to avail of it.

2. Personal Computing

Personal computing or desktop computing heralded a new direction in computing by providing computers to
each employee on their desktop or workspace. It decentralized computing and empowered every employee with
the required computing at their disposal. It consisted of personal computers small enough to fit conveniently in an
individual workspace. Every category of employee started using computers in their domains - accounts,
inventory, payroll and more.
Advantages
● Less expensive, easy to upgrade and fewer accessories needed
Disadvantages
● Lack of portability, power use, monitors and peripherals
3. Network Computing

One of the drawbacks of desktop computing is that information sharing with other users is a tedious process.
You need to copy and carry it on a secondary storage device to share it. In a workplace environment, where
people have to produce and share information, this becomes a challenge. Network computing offered a
solution to overcome this. Since networked computers can share information, it is possible to use them in the
workplace so that workers can seamlessly exchange information they need or want to share with others.
Networked computers on a Local Area Network (LAN) achieved this. In the networked computing model, a
relatively powerful computer server is loaded with all the software needed, and each user is provided with a
connected terminal to access and work.
4. Internet Computing
While network computing such as a LAN connects users within an office or institution, Internet computing
is used to connect organizations located in different geographical locations.

5. Grid Computing
There are occasions wherein the computing power available within an enterprise is not sufficient to carry out
the computing task on hand. It may also be possible that data required for the processing is generated at
various geographical locations. In such cases Grid Computing is used. Grid computing requires the use of
software that can divide and farm out pieces of a program as one large system image to several thousand
computers.

6. Cloud Computing
While grid computing helped gather computing power from other institutions, it could be used only by
participating organizations or privileged people, and it lacked a commercial model. Cloud computing uses
many features of earlier systems such as the grid, but extends computing to a larger population by way of
pay-per-use.

Utility computing
Utility computing can be defined as the provision of computational and storage resources as a metered
service, similar to those provided by a traditional public utility company. This, of course, is not a new
idea. This form of computing is growing in popularity, however, as companies have begun to extend the
model to a cloud computing paradigm providing virtual servers that IT departments and users can access
on demand. Early enterprise adopters used utility computing mainly for non-mission-critical needs, but
that is quickly changing as trust and reliability issues are resolved.
Some people think cloud computing is the next big thing in the world of IT. Others believe it
is just another variation of the utility computing model that has been repackaged in this decade as
something new and cool. However, it is not just the buzzword "cloud computing" that is causing confusion
among the masses. With so few cloud computing vendors actually practicing this form of technology, and
almost every analyst from every research organization defining the term differently, the meaning of the term
has become very nebulous. Even among those who think they understand it, definitions vary, and most of
those definitions are hazy at best. To clear the haze and make some sense of the new concept, this book will
attempt to help you understand just what cloud computing really means, how disruptive to your business it may
become in the future, and what its advantages and disadvantages are.
As we said previously, the term "the cloud" is often used as a metaphor for the Internet and has
become a familiar cliché. However, when "the cloud" is combined with "computing," it causes a lot of
confusion. Market research analysts and technology vendors alike tend to define cloud computing very
narrowly, as a new type of utility computing that basically uses virtual servers made available to third parties
via the Internet. Others define the term very broadly, as an all-encompassing application of the virtual
computing platform; they contend that anything beyond the firewall perimeter is in the cloud. A more tempered
view of cloud computing considers it the delivery of computational resources from a location other than the
one from which you are computing.

Definitions Of Cloud Computing:


NIST: Cloud computing is a model for enabling ubiquitous (anywhere and any time), convenient, on-demand
network access to a shared pool of configurable computing resources (e.g., networks, servers, storage,
applications, and services) that can be rapidly provisioned and released with minimal management effort or
service provider interaction.

Wikipedia: Cloud computing is Web-based processing, whereby shared resources, software, and
information are provided to computers and other devices (such as smart phones) on demand over
the Internet.

Forrester: Cloud computing: A pool of abstracted, highly scalable, and managed infrastructure capable of
hosting end-customer applications and billed by consumption.

Cloud computing is "using the internet to access someone else's software running on someone else's hardware
in someone else's data center." -- Lewis Cunningham

Introduction to Cloud Computing:


Cloud computing takes the technology, services, and applications that are similar to those on the Internet and
turns them into a self-service utility. The use of the word "cloud" makes reference to two essential concepts:
• Abstraction: Cloud computing abstracts the details of system implementation from users and developers.
Applications run on physical systems that aren't specified, data is stored in locations that are unknown,
administration of systems is outsourced to others, and access by users is ubiquitous.
• Virtualization: Cloud computing virtualizes systems by pooling and sharing resources. Systems and
storage can be provisioned as needed from a centralized infrastructure, costs are assessed on a metered basis,
multi-tenancy is enabled, and resources are scalable with agility.

Many people mistakenly believe that cloud computing is nothing more than the Internet given a
different name. Many drawings of Internet-based systems and services depict the Internet as a cloud, and
people refer to applications running on the Internet as running "in the cloud," so the confusion is
understandable. The Internet has many of the characteristics of what is now being called cloud computing.
When you store your photos online instead of on your home computer, or use webmail or a social
networking site, you are using a cloud computing service. If you are an organization, and you want to use,
for example, an online invoicing service instead of updating the in-house one you have been using for many
years, that online invoicing service is a cloud computing service.
Cloud computing is the delivery of computing services over the Internet. Cloud services allow
individuals and businesses to use software and hardware that are managed by third parties at remote
locations. Examples of cloud services include online file storage, social networking sites, webmail, and
online business applications. The cloud computing model allows access to information and computer
resources from anywhere that a network connection is available. Cloud computing provides a shared pool of
resources, including data storage space, networks, computer processing power, and specialized corporate
and user applications.

Cloud Architecture
• Cloud Service Models
• Cloud Deployment Models
• Essential Characteristics of Cloud Computing

Cloud Service Models

• Cloud Software as a Service (SaaS)
• Cloud Platform as a Service (PaaS)
• Cloud Infrastructure as a Service (IaaS)
Infrastructure as a Service (IaaS):--
This is the base layer of the cloud stack. It serves as a foundation for the other two layers, which execute on
top of it. The keyword behind this layer is virtualization.
The capability provided to the consumer is to provision processing, storage, networks, and other
fundamental computing resources. The consumer is able to deploy and run arbitrary software, which can
include operating systems and applications. The consumer does not manage or control the underlying cloud
infrastructure, but has control over operating systems, storage, and deployed applications, and possibly limited
control of select networking components (e.g., host firewalls).
IaaS provides virtual machines, virtual storage, virtual infrastructure, and other hardware assets as
resources that clients can provision. The IaaS service provider manages the entire infrastructure,
while the client is responsible for all other aspects of the deployment, which can include the operating
system, applications, and user interactions with the system. A brief provisioning sketch follows the provider
list below.
Examples of IaaS service providers include:
• Amazon Elastic Compute Cloud (EC2)
• Eucalyptus
• GoGrid
• FlexiScale
• Linode
• Rackspace Cloud
• Terremark
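To make the IaaS idea concrete, the following is a minimal, illustrative sketch (not from the text) of how a
consumer might provision a virtual machine programmatically against an IaaS provider, here Amazon EC2 via
the boto3 library. The AMI ID, region, and key pair name are hypothetical placeholders.

# Sketch: provisioning a virtual machine on an IaaS provider (Amazon EC2)
# with the boto3 library. The AMI ID and key pair below are placeholders.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")

# Ask the IaaS layer for one small virtual machine; the provider supplies the
# hardware, hypervisor, and networking, while we control the OS image on top.
instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder machine image
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",             # placeholder SSH key pair
)

print("Launched instance:", instances[0].id)

The same pattern applies to the other providers listed above; only the client library and resource names change.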
Platform as a Service (PaaS):--
PaaS is the middle layer of the cloud stack, and it is consumed mainly by developers.
The consumer does not manage or control the underlying cloud infrastructure, including network, servers,
operating systems, or storage, but has control over the deployed applications and possibly the application
hosting environment configurations.
PaaS provides virtual machines, operating systems, applications, services, development frameworks,
transactions, and control structures. The client can deploy its applications on the cloud infrastructure or
use applications that were programmed using languages and tools supported by the PaaS service
provider; a minimal sketch of such an application follows the examples below.
Examples of PaaS services are:
• Force.com
• Google App Engine
• Windows Azure Platform
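As an illustration of what a PaaS consumer actually supplies, the following is a minimal sketch (not from the
text) of a small web application of the kind one would push to a platform such as Google App Engine, Heroku,
or Azure. It assumes the Flask framework is available; the platform-specific deployment descriptor is omitted
because it differs per provider.

# Sketch: only the application code is written by the PaaS consumer; servers,
# operating system, and runtime are managed by the platform. Assumes Flask.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    # Business logic lives here; scaling and patching are the provider's job.
    return "Hello from a PaaS-hosted application"

if __name__ == "__main__":
    # Local test run; on the platform, the provider's front end invokes `app`.
    app.run(port=8080)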
Software as a Service (SaaS):--
• The capability provided to the consumer is to use the provider's applications running on a cloud
infrastructure.
• The applications are accessible from various client devices through a thin client interface such as a web
browser (e.g., web-based email).
• The consumer does not manage or control the underlying cloud infrastructure, including network,
servers, operating systems, storage, or even individual application capabilities, with the possible
exception of limited user-specific application configuration settings.
• Software as a Service (SaaS) is a cloud computing model which hosts various software applications
and makes them available to customers over the Internet or another network.
Good examples of SaaS cloud service providers are:
• Google Apps
• Oracle On Demand
• SalesForce.com
• SQL Azure

Types of cloud:

1. Public / External cloud
2. Hybrid / Integrated cloud
3. Private / Internal cloud
4. Community / Vertical clouds
1. Public / External cloud:
The cloud infrastructure is made available to the general public or a large industry group and is owned by an
organization selling cloud services. A public cloud (also called an external cloud) is one based on the standard
cloud computing model, in which a service provider makes resources, such as applications and storage,
available to the general public over the Internet. Public cloud services may be free or offered on a pay-per-usage
model.
• A public cloud is hosted, operated, and managed by a third-party vendor from one or more datacenters.
• In a public cloud, security management and day-to-day operations are relegated to the third-party
vendor, who is responsible for the public cloud service offering.
• Hence, the customer of the public cloud service offering has a low degree of control and oversight of the
physical and logical security aspects of the cloud.
The main benefits of using a public cloud service are:
• Easy and inexpensive set-up, because hardware, application, and bandwidth costs are covered by
the provider.
• Scalability to meet needs.
• No wasted resources, because you pay for what you use.
Examples of public clouds include:
• Amazon Elastic Compute Cloud (EC2)
• IBM's Blue Cloud
• Google App Engine
• Windows Azure Services Platform

2. Hybrid / Integrated cloud:--
A hybrid cloud is a composition of at least one private cloud and at least one public cloud. A hybrid cloud is
typically offered in one of two ways: a vendor has a private cloud and forms a partnership with a public cloud
provider, or a public cloud provider forms a partnership with a vendor that provides private cloud platforms. A
hybrid cloud is a cloud computing environment in which an organization provides and manages some resources
in-house and has others provided externally.
• For example, an organization might use a public cloud service, such as Amazon Simple Storage
Service (Amazon S3), for archived data but continue to maintain in-house storage for operational
customer data.

3. Private / Internal cloud:--
The cloud infrastructure is operated solely for a single organization. It may be managed by the organization or a
third party, and may exist on-premises or off-premises. Private cloud (also called internal cloud or corporate
cloud) is a marketing term for a proprietary computing architecture that provides hosted services to a limited
number of people behind a firewall.

• Marketing material that uses the words "private cloud" is designed to appeal to an organization that needs
or wants more control over its data than it can get by using a third-party hosted service such as
Amazon's Elastic Compute Cloud (EC2) or Simple Storage Service (S3).
A variety of private cloud patterns have emerged:
Dedicated: Private clouds hosted within a customer-owned data center or at a colocation facility, and
operated by internal IT departments.
Community: Private clouds located at the premises of a third party; owned, managed, and operated by a
vendor who is bound by custom SLAs and contractual clauses with security and compliance requirements.
Managed: Private cloud infrastructure owned by a customer and managed by a vendor.

4. Community / Vertical clouds
• Community clouds are a deployment pattern suggested by NIST, where semi-private clouds are formed
to meet the needs of a set of related stakeholders or constituents that have common requirements or
interests.
• The cloud infrastructure is shared by several organizations and supports a specific community that has
shared concerns (e.g., mission, security requirements, policy, or compliance considerations). It may be
managed by the organizations or a third party and may exist on-premises or off-premises.
• A community cloud may be private for its stakeholders, or may be a hybrid that integrates the respective
private clouds of the members, yet enables them to share and collaborate across their clouds by exposing
data or resources into the community cloud.

ESSENTIAL CHARACTERISTICS:--

• On-demand self-service: A consumer can unilaterally provision computing capabilities, such as server
time and network storage, as needed automatically, without requiring human interaction with the service
provider.

• Broad network access: Access to resources in the cloud is available over the network using standard
methods, in a manner that provides platform-independent access to clients of all types. This includes a
mixture of heterogeneous operating systems and thick and thin clients such as laptops, mobile phones,
and PDAs.

• Resource pooling: The provider's computing resources are pooled to serve multiple consumers using a
multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned
according to consumer demand.

• Rapid elasticity: Capabilities can be rapidly and elastically provisioned, in some cases automatically, to
quickly scale out, and rapidly released to quickly scale in. To the consumer, the capabilities available for
provisioning often appear to be unlimited and can be purchased in any quantity at any time.

• Measured service: Cloud systems automatically control and optimize resource usage by leveraging a
metering capability at some level of abstraction appropriate to the type of service. Resource usage can be
monitored, controlled, and reported, providing transparency for both the provider and the consumer of
the service.
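To illustrate the "measured service" characteristic, the following is a small, self-contained sketch (not from the
text) of how metered readings might be turned into a pay-per-use charge. The unit prices are invented purely
for illustration.

# Sketch of measured service: metered usage turned into a pay-per-use charge.
# The unit rates below are invented for illustration only.
RATES = {
    "vm_hours": 0.05,          # currency units per VM-hour
    "storage_gb_month": 0.02,  # per GB stored for the month
    "data_transfer_gb": 0.09,  # per GB transferred out
}

def monthly_charge(usage):
    """Multiply each metered quantity by its unit rate and total the bill."""
    return sum(RATES[item] * quantity for item, quantity in usage.items())

metered = {"vm_hours": 720, "storage_gb_month": 50, "data_transfer_gb": 120}
print(f"Charge for the period: {monthly_charge(metered):.2f}")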
BENEFITS OF CLOUD COMPUTING:--
• Business Benefits of Cloud Computing
• Technical Benefits of Cloud Computing
Business Benefits
• Almost zero upfront infrastructure investment
• Just-in-time infrastructure
• More efficient resource utilization
• Usage-based costing
• Reduced time to market
Technical Benefits
• Automation – "scriptable infrastructure"
• Auto-scaling (a simple scaling rule is sketched after this list)
• Proactive scaling
• More efficient development lifecycle
• Improved testability
• Disaster recovery and business continuity
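As a concrete illustration of the auto-scaling benefit mentioned above, the following sketch (not from the text)
shows the kind of decision rule an auto-scaler applies: add capacity when average utilisation is high, release it
when it is low. The thresholds are invented for illustration.

# Sketch of an auto-scaling decision rule; thresholds are illustrative only.
def desired_server_count(current, avg_cpu_percent,
                         scale_out_at=70.0, scale_in_at=30.0, minimum=1):
    if avg_cpu_percent > scale_out_at:
        return current + 1            # add a server to absorb the load
    if avg_cpu_percent < scale_in_at and current > minimum:
        return current - 1            # release a server to cut cost
    return current                    # load is within the comfortable band

print(desired_server_count(current=3, avg_cpu_percent=82.5))  # -> 4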

Disadvantages of cloud computing


While the benefits of cloud computing are myriad, the disadvantages are just as numerous. As a general
rule, the advantages of cloud computing present a more compelling case for small organizations than for larger
ones. Larger organizations can support IT staff and development efforts that put in place custom software
solutions that are crafted with their particular needs in mind.
When you use an application or service in the cloud, you are using something that isn't necessarily as
customizable as you might want. Additionally, although many cloud computing applications are very capable,
applications deployed on-premises still have many more features than their cloud counterparts. All cloud
computing applications suffer from the inherent latency that is intrinsic in their WAN connectivity. While cloud
computing applications excel at large-scale processing tasks, if your application needs large amounts of data
transfer, cloud computing may not be the best model for you.
Additionally, cloud computing is a stateless system, as is the Internet in general. In order for
communication to survive on a distributed system, it is necessarily unidirectional in nature. All the requests you
use in HTTP: PUTs, GETs, and so on are requests to a service provider. The service provider then sends a
response. Although it may seem that you are carrying on a conversation between client and provider, there is an
architectural disconnect between the two. That lack of state allows messages to travel over different routes and
for data to arrive out of sequence, and many other characteristics allow the communication to succeed even
when the medium is faulty. Therefore, to impose transactional coherency upon the system, additional overhead
in the form of service brokers, transaction managers, and other middleware must be added to the system. This
can introduce a very large performance hit into some applications.
If you had to pick a single area of concern in cloud computing, that area would undoubtedly be privacy
and security. When your data travels over and rests on systems that are no longer under your control, you have
increased risk due to the interception and malfeasance of others. You can't count on a cloud provider
maintaining your privacy in the face of government actions. In the United States, an example is the National
Security Agency's program that ran millions of phone calls from AT&T and Verizon through a data analyzer to
extract the phone calls that matched its security criteria. VoIP is one of the services that is heavily deployed on
cloud computing systems.

Service Oriented Architecture (SOA):


Service Oriented Architecture (SOA) describes a standard method for requesting services from
distributed components and managing the results. Because the clients requesting services, the components
providing the services, the protocols used to deliver messages, and the responses can vary widely, SOA
provides the translation and management layer in an architecture that removes the barrier to a client obtaining
desired services. With SOA, clients and components can be written in different languages and can use multiple
messaging protocols and networking protocols to communicate with one another. SOA provides the standards
that transport the messages and makes possible the infrastructure needed to support them. SOA provides access
to reusable Web services over a TCP/IP network, which makes it an important topic for cloud computing going
forward.
You don't need SOA if you are creating a monolithic cloud application that performs a specific function
such as backup, e-mail, Web page access, or instant messaging. Many of the large and familiar cloud computing
applications are monolithic and were built with proprietary technologies albeit often on top of open source
software and hardware.

Introducing Service Oriented Architecture


Service Oriented Architecture (SOA) is a specification and a methodology for providing platform- and
language-independent services for use in distributed applications. A service is a repeatable task within a
business process, and a business task is a composition of services. SOA describes a message-passing taxonomy
for a component-based architecture that provides services to clients upon demand. Clients access a component
that complies with SOA by passing a message containing metadata to be acted upon in a standard format. The
component acts on that message and returns a response that the client then uses for its own purpose. A common
example of a message is an XML file transported over a network protocol such as SOAP. Usually service
providers and service consumers do not pass messages directly to each other. Implementations of SOA employ
middleware software to play the role of transaction manager (or broker) and translator. That middleware can
discover and list available services, as well as potential service consumers, often in the form of a registry.
Because SOA describes a distributed architecture, security and trust services are built directly into many of
these products to protect communication.
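To make the broker/registry idea above concrete, the following is a toy Python sketch (not real middleware and
not from the text): components register a named service with a broker, and consumers send messages through
the broker rather than calling components directly.

# Toy sketch of the broker/registry role played by SOA middleware.
class ServiceBroker:
    def __init__(self):
        self._registry = {}

    def register(self, name, handler):
        """A component advertises a service under a well-known name."""
        self._registry[name] = handler

    def request(self, name, message):
        """A consumer's message is routed to whichever component registered."""
        handler = self._registry[name]
        return handler(message)

broker = ServiceBroker()
broker.register("invoice.total", lambda msg: sum(msg["line_items"]))

response = broker.request("invoice.total", {"line_items": [120.0, 45.5]})
print(response)  # 165.5

Real SOA implementations do the same job with a UDDI-style registry and SOAP or REST messages rather
than in-process function calls.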
Middleware products also can be where the logic of business processes resides; they can be general-
purpose applications, industry-specific, private, or public services. Middleware services manage lookup
requests. The Universal Description Discovery and Integration (UDDI) protocol is the one most commonly used
to broadcast and discover available Web services, often passing data in the form of Electronic Business using
eXtensible Markup Language (ebXML) documents. Service consumers find a Web service in a broker registry
and bind their service requests to that specific service; if the broker supports several Web services, it can bind to
any of the ones that are useful.
This architecture does not contain executable links that require access to a specific API. The message
presents data to the service, and the service responds. It is up to the client to determine if the service returned an
appropriate result. An SOA is then seen as a method for creating an integrated process as a set of linked
services. The component exposes itself as an endpoint (a term of art in SOA) to the client.
The most commonly used message-passing format is an Extensible Markup Language (XML) document
using Simple Object Access Protocol (SOAP), but many more are possible, including Web Services Description
Language (WSDL), Web Services Security (WSS), and Business Process Execution Language for Web Services
(WS-BPEL). WSDL is commonly used to describe the service interface, how to bind information, and
the nature of the component's service or endpoint. The Service Component Definition Language (SCDL) is used
to define the service component that performs the service, providing the component service information that is
not part of the Web service and that therefore wouldn't be part of WSDL.
Figure a shows a protocol stack for SOA architecture and how those different protocols execute the
functions required in the Service Oriented Architecture. In the figure, the box labeled Other Services could
include Common Object Request Broker Architecture (CORBA), Representational State Transfer (REST),
Remote Procedure Calls (RPC), Distributed Common Object Model (DCOM), Jini, Data Distribution Service
(DDS), Windows Communication Foundation (WCF), and other technologies and protocols. It is this flexibility
and neutrality that makes SOA so singularly useful in designing complex applications.
SOA provides the framework needed to allow clients of any type to engage in a request-response
mechanism with a service. The specification of the manner in which messages are passed in SOA, or in which
events are handled, is referred to as their contract. The term is meant to imply that the client engages the
service in a task that must be managed in a specified manner. In real systems, contracts may specifically be
stated with a Quality of Service parameter in a real paper contract. Typically, SOA requires the use of an
orchestrator or broker service to ensure that messages are correctly transacted. SOA makes no other demands on
either the client (consumer) or the components (provider) of the service; it is concerned only with the interface
or action boundary between the two. This is the earliest definition of SOA architecture.
FIGURE.a
A protocol stack for SOA showing the relationship of each protocol to its function
Components are often written to comply with the Service Component Architecture (SCA), a language and
technology-agnostic design specification that has wide, but not universal, industry support. SCA can use the
services of components that are written in the Business Process Execution Language (BPEL), Java, C#/.NET,
XML, or COBOL, and can apply to C++ and FORTRAN, as well as to the dynamic languages Python, Ruby,
PHP, and others. This allows components to be written in the easiest form that supports the business process
that the component is meant to service. By wrapping data from legacy clients written in languages such as
COBOL, SOA has greatly extended the life of many legacy applications.
Components are coded with their service logic and their dependencies, QoS is established, and the
service is instantiated. In the SCA model, data and messages are exchanged in a Service Data Object (SDO).
This system of messaging using objects and services is sometimes referred to as a Data Access Service (DAS).
Figure.b shows how components of different types can communicate using different protocols as part of SOA.
When you combine Web services to create business processes, the integration must be managed. Two
main methods are used to combine Web services: orchestration and choreography. In orchestration, a
middleware service centrally coordinates all the different Web service operations, and all services send
messages and receive messages from the orchestrator. The logic of the compound business process is found at
the orchestrator alone. Figure c shows how orchestration is managed.

Figure.b
SOA allows for different component and client construction, as well as access to each using different protocols.

By contrast, a compound business process that uses choreography has no central coordination function. In
choreography, each Web service that is part of a business process is aware of when to process a message and
with what client or component it needs to interact. Choreography is a collaborative effort where the logic of
the business process is pushed out to the members, who are responsible for determining which operations to
execute and when to execute them, the structure of the messages to be passed and their timing, and other factors.
Figure d illustrates the nature of choreography.
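To make the contrast concrete, the following is a minimal, illustrative Python sketch (not from the text) of the
orchestration style: the whole process logic lives in one central orchestrator, which invokes each service in turn.
The service functions are stand-ins for real web service calls; in choreography, each of them would instead
decide for itself when to act.

# Sketch of orchestration: sequence, branching, and error handling live in
# one central coordinator; the services below are stand-ins for web services.
def check_stock(order):
    return {**order, "in_stock": True}

def charge_customer(order):
    return {**order, "charged": order["in_stock"]}

def ship_order(order):
    return {**order, "shipped": order["charged"]}

def orchestrator(order):
    """Central coordinator: all messages flow to and from this function."""
    order = check_stock(order)
    if not order["in_stock"]:
        return {**order, "status": "rejected"}
    order = charge_customer(order)
    order = ship_order(order)
    return {**order, "status": "completed"}

print(orchestrator({"id": 42, "amount": 99.0}))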
What isn't clear from Figure b, but is shown in Figure c (orchestration) and Figure d (choreography), is
that business processes are conducted using a sequence, in parallel, or simply by being invoked (called to). An
execution language like WS-BPEL provides commands for defining logic using conditional statements, loops,
variables, fault handlers, and other constructs. Because a business process is a collection of activity graphs,
complex processes are often shown as part of Unified Modeling Language (UML) diagrams. UML is the
modeling language of the Object Management Group that provides a method for creating visual models for
software in the form of 14 types of diagrams. Some of the diagram types are structure, behavior, class,
component, object, interaction, state, and sequence.
FIGURE.C
An orchestrated business process uses a central controlling service or element, referred to as the orchestrator,
conductor, or less frequently, the coordinator.

The Enterprise Service Bus


In Figure d, three hypothetical applications are shown interfaced with an
authentication module through what has come to be called an Enterprise Service Bus (ESB). An ESB is not a
physical bus in the sense of a network; rather, it is an architectural pattern comprised of a set of network
services that manage transactions in a Service Oriented Architecture. You may prefer to think of an ESB as a set
of services that separate clients from components on a transactional basis and that the use of the word bus in the
name indicates a high degree of connectivity or fabric quality to the system; that is, the system is loosely
coupled. Messages flow from client to component through the ESB, which manages these transactions, even
though the location of the services comprising the ESB may vary widely.
An ESB is useful but not strictly essential to a Service Oriented Architecture, because typical business
processes can span a vast number of messages and events, and distributed processing is an inherently unreliable
method of transport. An ESB therefore plays the role of a transaction broker in SOA, ensuring that messages get
to where they were supposed to go and are acted upon properly. The service bus performs the function of
mediation: message translation, registration, routing, logging, auditing, and managing transactional integrity.
Transactional integrity is similar to ACID in a database system—atomicity, consistency, isolation, and
durability, the essence of which is that transactions succeed or they fail and are rolled back.
FIGURE.d
An SOA application of a shared logon or Authentication module

An ESB may be part of a network OS or may be implemented using a set of middleware products. An ESB creates
a virtual environment layered on top of an enterprise messaging system where services are advertised and
accessed. Think of an ESB as a message transaction system. IBM's WebSphere ESB, for example, supports
standards such as WS-Policy and Kerberos security, and it runs on the WebSphere Application Server. It is
interoperable with Open SCA. WebSphere ESB contains both a Service Federation Management tool and an
integrated Registry and Repository function.
These typical features are found in ESBs, among others:

• Process management services manage message transactions.
• Monitoring services aid in managing events.
• Data repositories or registries store business logic and aid in governance of business processes.
• Data services pass messages between clients and services.
• Data abstraction services translate messages from one format to another, as required.
• Governance is a service that monitors compliance of your operations with governmental regulation, which
can vary from state to state and from country to country.
Defining SOA Communications
Message passing in SOA requires the use of two different protocol types: the data interchange format and the
network protocol that carries the message. A client (or customer) connected to an ESB communicates over a
network protocol such as HTTP, Representational State Transfer (REST), or Java Message Service (JMS) to a
component (or service). Messages are most often in the form of the eXtensible Markup Language (XML) or in a
variant such as the Simple Object Access Protocol (SOAP). SOAP is a messaging format used in Web services
that use XML as the message format while relying on Application layer protocols such as HTTP and Remote
Procedure Calls (RPC) for message negotiation and transmission.
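As an illustration of the two protocol types just described, the following sketch (not from the text) hand-builds a
small SOAP envelope and carries it over HTTP using the Python requests library. The endpoint, namespace, and
operation are hypothetical; real projects usually rely on a SOAP toolkit, but the wire format is essentially this.

# Sketch of SOAP-over-HTTP messaging: the data interchange format is XML
# (SOAP) and the network protocol is HTTP. Endpoint and operation are invented.
import requests

SOAP_ENVELOPE = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetQuote xmlns="http://example.com/stock">
      <Symbol>IBM</Symbol>
    </GetQuote>
  </soap:Body>
</soap:Envelope>"""

response = requests.post(
    "http://example.com/stockservice",           # hypothetical endpoint
    data=SOAP_ENVELOPE.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8",
             "SOAPAction": "http://example.com/stock/GetQuote"},
)
print(response.status_code, response.text[:200])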
The software used to write clients and components can be written in Java, .NET, Web Service Business
Process Execution Language (WS-BPEL), or another form of executable code; the services that they message
can be written in the same or another language. What is required is the ability to transport and translate a
message into a form that both parties can understand. An ESB may require a variety of combinations in order to
support communications between a service consumer and a service provider. For example, in WebSphere ESB,
you might see the following combinations:
• XML/JMS (Java Message Service)
• SOAP/JMS
• SOAP/HTTP
• Text/JMS
• Bytes/JMS
The Web Service Description Language (WSDL) is one of the most commonly used XML protocols for
messaging in Web services, and it finds use in Service Oriented Architectures. Version 1.1 of WSDL is a W3C
standard, but the current version, WSDL 2.0 (formerly version 1.2), has yet to be ratified by the W3C. The
significant difference between 1.1 and 2.0 is that version 2.0 has more support for RESTful (e.g., Web 2.0)
applications, but much less support in the current set of software development tools. The most common transport
for WSDL is SOAP, and the WSDL file usually contains both XML data and an XML schema.
REST offers some very different capabilities than SOAP. With REST, each URL is an object that you
can query and manipulate. You use HTTP commands such as GET, POST, PUT, and DELETE to work with
REST objects. SOAP uses a different approach to working with Web data, exposing Web objects through an
API and transferring data using XML. The REST approach offers lightweight access using standard HTTP
commands, is easier to implement than SOAP, and comes with less overhead. SOAP is often more precise and
provides a more error-free consumption model, and it often comes with more sophisticated development tools.
All major Web services use REST, but many Web services, especially newer ones, combine REST with SOAP
to derive the benefits that both offer.
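The following sketch (not from the text) shows the REST style described above using the Python requests
library: each URL is treated as a resource and the standard HTTP verbs operate on it. The base URL and the
payload fields are hypothetical.

# Sketch of REST: standard HTTP verbs applied to a (hypothetical) resource URL.
import requests

BASE = "https://api.example.com/invoices"          # hypothetical REST resource

created = requests.post(BASE, json={"customer": "ACME", "amount": 250.0})
invoice_url = f"{BASE}/{created.json()['id']}"     # assumes the API returns an id

requests.get(invoice_url)                          # read the resource
requests.put(invoice_url, json={"amount": 300.0})  # replace/update it
requests.delete(invoice_url)                       # remove it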
Contained within WSDL are essential objects to support message transfer, including these:
• The service object, a container where the service resides.
• The port or endpoint, which is the unique address of the service.
• The binding, which is the description of the interface (e.g., RPC) and the transport (e.g., SOAP).
• The portType, or interface, which defines the capabilities of the Web service and what operations are to be
performed, as well as the messages that must be sent to support the operation.
• The operation that is to be performed on the message.
• The message content, which is the data and metadata that the service operation is performed on.
Each message may consist of one or more parts, and each part must include typing information.
• The types used to describe the data, usually as part of the XML schema that accompanies the WSDL.
A typical WSDL document sets up the namespace, defines the interface, specifies the binding, names the
service, provides the documentation for the service, and then supplies a schema that may be used to validate the
document. In the message descriptions, the message types are declared. XML schemas can be separate files, but
an inline schema that is part of the WSDL document is the normal case. For a much more complete step-by-step
description, refer to the W3C tutorial.
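As a small, illustrative sketch (not from the text), the elements listed above can be pulled out of a WSDL 1.1
file with nothing more than the Python standard library; "service.wsdl" is a placeholder for a real description
file.

# Sketch: listing the main WSDL 1.1 elements (service, port, portType
# operations) from a local WSDL file using the standard library.
import xml.etree.ElementTree as ET

WSDL_NS = {"wsdl": "http://schemas.xmlsoap.org/wsdl/"}   # WSDL 1.1 namespace
tree = ET.parse("service.wsdl")                          # placeholder file name

for service in tree.findall("wsdl:service", WSDL_NS):
    print("Service:", service.get("name"))
    for port in service.findall("wsdl:port", WSDL_NS):
        print("  Port:", port.get("name"), "binding:", port.get("binding"))

for port_type in tree.findall("wsdl:portType", WSDL_NS):
    print("Interface:", port_type.get("name"))
    for operation in port_type.findall("wsdl:operation", WSDL_NS):
        print("  Operation:", operation.get("name"))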
A WSDL file contains essential message data for a transaction, but it doesn't capture the full scope of a
Service Oriented Architecture design. Additional requirements need to be specified. The functional
requirements for message passing between client and service in SOA are embodied in the concept of a service
contract. A service contract codifies the relationship between the data to be processed, the metadata that
accompanies that data, the intended service, and the manner in which the service will act upon that message.
Messages therefore must have some of the following pieces of information contained inside them:
• Header: The header contains the name of the service, service version, owner of the service, and perhaps a
responsibility assignment. This is often defined in terms of a RACI matrix where the various roles and
responsibilities for processes are spelled out in terms of a set of tasks or deliverables. The acronym designates
the Responsible party or service, the Accountable decision maker, the Consulted party, and person(s) or
service(s) that must be Informed on the use of the service.
• Service Type: Examples of service types include data, business, integration, presentation, and process types.
• Functional Specification: This category includes the functional requirements, what service operations or
actions and methods must be performed, and the manner in which a service is invoked or initiated. Invocation
usually includes the URL and the nature of the service interface.
• Transaction attributes: A message may define a transaction that may need to be managed or tracked, or be
part of or include another transaction, operated at a specific Quality of Service and under a specific Service
Level Agreement (SLA). Security parameters also are part of a transaction's attributes, as are the role the
message plays in a process and the terms or semantics used to describe the interaction of the message with a
service's interface. Depending upon the degree of formalization, a service contract may require messages to
carry a variable amount of information in order to be successfully transacted. You can see, therefore, how SOA
expands the definition of a Web service transaction from WSDL.

Business Process Execution Language


If a message represents an atomic transaction in a Service Oriented Architecture, the next level of abstraction up
is the grouping and managing of sets of transactions to form useful work and to execute a business process. An
example of an execution language is the Business Process Execution Language (BPEL), known alternatively as
the Web Services Business Process Execution Language (WS-BPEL), a language standard for Web service
interactions. The standard is maintained by the Organization for the Advancement of Structured Information
Standards (OASIS) through its Web Services Business Process Execution Language Technical Committee.
BPEL is a meta-language comprised of two functions: executable commands for Web services and clients, and
internal or abstract code for executing the internal business logic that processes require. A meta-language is any
language whose statements refer to statements in another language referred to as the object language. BPEL is
often used to compose, orchestrate, and coordinate business processes with Web services in the SOA model,
and it has commands to manage asynchronous communications. BPEL uses XML with specific support for
messaging protocols such as SOAP, WSDL, UDDI, WS-Reliable Messaging, WS-Addressing, WS-
Coordination, and WS-Transactions. BPEL also builds on IBM's Web Services Flow Language (WSFL) and
Microsoft's XLANG for data transport; the former is a system of directed graphs, while the latter is a block-
structured language that adds verbs and nouns specific to business processes. The two were combined to form
BPEL4WS, which has since been merged into BPEL. A version of BPEL to support human interaction is called
BPEL4People, and it falls under the WS-Human Task specifications of OASIS.
BPEL was designed to interact with WSDL and define business processes using an XML language.
BPEL does not have a graphical component. A business process has an internal or executable view and an
external or abstract view in BPEL. One process may interact with other processes, but the goal is to minimize
the number of specific extensions added to BPEL to support any particular business process. Data functions in
BPEL support process data and control flow, manage process instances, provide for logic and branching
structures, and allow for process orchestration. Because transactions are long-lived and asynchronous, BPEL
includes techniques for error handling and scopes transactions. As much as possible, BPEL uses Web services
for standards and to assemble and decompose processes.
Cloud Computing Reference Architecture by IBM

Roles: The IBM Cloud Computing Reference Architecture defines three main roles:
Cloud Service Consumer,
Cloud Service Provider, and
Cloud Service Creator.
• Each role can be fulfilled by a single person or can be fulfilled by a group of people
or an organization.
• The roles defined here intend to capture the common set of roles typically encountered
in any cloud computing environment.
• Therefore it is important to note that, depending on a particular cloud computing scenario or
specific cloud implementation, there may be project-specific sub-roles defined.

Cloud Service Consumer


• A cloud service consumer is an organization, a human being, or an IT system that consumes (i.e.,
requests, uses, and manages; e.g., changes quotas for users, changes the CPU capacity assigned to a VM,
or increases the maximum number of seats for a web conferencing cloud service) service instances delivered
by a particular cloud service.
• The service consumer may be billed for all (or a subset of) its interactions with the cloud service
and the provisioned service instance(s).
Cloud Service Provider
• The Cloud Service Provider has the responsibility of providing cloud services to Cloud
Service Consumers.
• A cloud service provider is defined by the ownership of a common cloud management platform
(CCMP).
• This ownership can be realized either by actually running a CCMP itself or by consuming one as a
service.
Cloud Service Creator
• The Cloud Service Creator is responsible for creating a cloud service, which can be run by
a Cloud Service Provider and thereby exposed to Cloud Service Consumers.
• Typically, Cloud Service Creators build their cloud services by leveraging functionality that
is exposed by a Cloud Service Provider.
• Management functionality that is commonly needed by Cloud Service Creators is defined
by the CCMP architecture.
• A Cloud Service Creator designs, implements, and maintains runtime and management artifacts
specific to a cloud service.

1. What is cloud computing? What are the various driving forces for making use of cloud
computing?
Cloud Computing is a technology that uses the internet and central remote servers to maintain data and
applications. Cloud computing allows consumers and businesses to use applications without installation and
access their personal files at any computer with internet access. Use of computing resources (hardware and
software) that are delivered as a service over a network (typically the Internet). The name comes from the use of
a cloud-shaped symbol as an abstraction for the complex infrastructure it contains in system diagrams. Cloud
computing entrusts remote services with a user's data, software and computation.
Reasons to Make the Switch to Cloud Computing
• Saves time: Businesses that utilize software programs for their management needs are disadvantaged
because of the time needed to get new programs operating at functional levels. By turning to cloud
computing, you avoid these hassles. You simply need access to a computer with Internet to view the
information you need.
• Fewer glitches: Applications serviced through cloud computing require fewer versions. Upgrades are
needed less frequently and are typically managed by data centers. Often, businesses experience problems
because different software packages are not designed to work together, and departments cannot
share data because they use different applications. Cloud computing enables users to integrate various
types of applications, including management systems, word processors, and e-mail. The fewer the glitches,
the more productivity can be expected from employees.
• Going green: On average, individual personal computers are only used at approximately 10 to 20 percent
of their capacity. Similarly, computers are left idle for hours at a time, soaking up energy. Pooling
resources into a cloud consolidates energy use. Essentially, you save on costs by paying for what you use
and extending the life of your PC.
• Fancy technology: Cloud computing offers customers more access to power. This power is not ordinarily
accessible through a standard PC. Applications now use virtual power. Users can even build virtual
assistants, which automate tasks such as ordering, managing dates, and offering reminders for upcoming
meetings.
• Mobilization: From just about anywhere in the world, the services that you need are available. Sales are
conducted over the phone and leads are tracked using a cell phone. Cloud computing opens users up to
a whole new world of wireless devices, all of which can be used to access applications. Companies
are taking sales productivity to a whole new level, while at the same time providing their sales
representatives with high-quality, professional devices that motivate them to do their jobs well.
• Consumer trends: The most successful business practices are the ones that reflect consumer trends.
Currently, over 69 percent of Americans with internet access use some form of cloud computing. Whether it
is Web e-mail, data storage, or software, this number continues to grow. Consumers are looking to
conduct business with a modern approach.
• Social media: Social networking is the wave of the future among entrepreneurs. Companies are using
social networking sites such as Twitter, Facebook, and LinkedIn to heighten their productivity levels.
Blogs are used to communicate with customers about improvements that need to be made within
companies. LinkedIn is a popular website used by business professionals for collaboration purposes.
• Customize: All too often, companies purchase the latest software in hopes that it will improve their sales.
Sometimes, programs do not quite meet the needs of a company. Some businesses require a personalized
touch that ordinary software cannot provide. Cloud computing gives the user the opportunity to build
custom applications on a user-friendly interface. In a competitive world, your business needs to stand out
from the rest.
No need for hardware hiccups
IT staff cuts: When all the services you need are maintained by experts outside your business, there is no need to hire new
ones.
Low Barriers to Entry
A major benefit to cloud computing is the speed at which you can have an office up and running. Mordecai notes that he
could have a server functional for a new client within a few hours, although doing the research work to assess a particular
planner's needs and get them fully operating could take a week or two.
Improving Security
Obviously, the security of cloud computing is a major issue for anyone considering a switch.
"The data is secure because it is being accessed through encryption set up by people smarter than us," says Dave
Williams, CFP®, of Wealth Strategies Group in Cordova, Tenn. "Clients like accessing their data through a
cloud environment because they know it's secure, they know they can get access to it, and they know we are able
to bring together a lot of their records."
Increased Mobility
One of the major benefits in cloud computing for Lybarger is the instant mobility.
"I used to work with a large broker-dealer and when I was traveling, I sometimes would have difficulty getting
my computer connected to the Internet with all of the proprietary software on my laptop," he says. "There were
times when I was traveling when I wanted to be able to take care of a client's business on the spot, but I wasn't
able to. Now, I can do it in an instant."
Limitless Scalability
If you're looking to grow, the scalability of cloud computing could be a big selling point. With applications
software, you can buy only the licenses you need right now, and add more as needed. The same goes for storage
space, according to Lybarger.
Strong Compliance
Planners who are already in the cloud believe that their compliance program is stronger than it was before. For
Thornton, who is registered with the state of Georgia (not large enough to require registration with the SEC), his
business continuity plan includes an appendix that lists all the Web sites and his user names and passwords so
that, in his words, "If I get run over by a truck tomorrow, whoever comes in to take over can access my business
continuity plan and pretty much pick up where I left off."

2. What are the various barriers found in the implementation of cloud computing solutions?


There are several factors that you need to take into consideration before designing your own cloud-based
systems architecture, particularly if you're considering a multi-cloud/region architecture.
Cost - Before you architect your site/application and start launching servers, you should clearly understand the
SLA and pricing models associated with your cloud infrastructure(s). There are different costs associated with
both private and public clouds. For example, in AWS, data transferred between servers inside of the same
datacenter (Availability Zone) is free, whereas communication between servers in different datacenters within
the same cloud (EC2 Region) is cheaper than communication between servers in different clouds or on-premise
datacenters.
Complexity - Before you construct highly customized hybrid cloud solution architecture, make sure you
properly understand the actual requirements of your application, SLA, etc. Simplified architectures will always
be easier to design and manage. A more complex solution should only be used if a simpler version will not
suffice. For example, a system architecture that is distributed across multiple clouds (regions) introduces
complexity at the architecture level and may require changes at the application level to be more latency-tolerant
and/or be able to communicate with a database that's migrated to a different cloud for failover purposes.
Speed - The cloud gives you more flexibility to control the speed or latency of your site/application. For
example, you could launch different instance types based on your application's needs. For example, do you need
an instance type that has high memory or high CPU? From a geographic point of view which cloud will provide
the lowest latency for your users? Is it necessary or cost effective to use a content distribution network (CDN)
or caching service? For user-intensive applications, the extra latency that results from cross-cloud/region
communication may not be acceptable.
Cloud Portability - Although it might be easier to use one of the cloud provider's tools or services, such as a
load balancing or database service, it's important to realize that if and when you need to move that particular tier
of your architecture to another cloud provider, you will need to modify your architecture accordingly. Since
ServerTemplates are cloud-agnostic, you can use them to build portable cloud architectures.
Security - For MultiCloud system architectures, it's important to realize that cross-cloud/region communication
is performed over the public Internet and may introduce security concerns that will need to be addressed using
some type of data encryption or VPN technology.
Broadly speaking, there are four key complicating factors in cloud computing that compound the difficulty of
performing effective due diligence on cloud provider offerings:

Volatility: New cloud vendors appear almost on a daily basis. Some will be credible, well resourced, and
professional. Others, not so much. Some are adding cloud to their conventional IT services to stay in the race,
and others are new entrants that are, as they say, cloud natives, in which case they do not suffer the pains and
challenges of reengineering legacy business models and support processes for the cloud. How can a CFO
perform due diligence on a provider's viability if it's new to the market and backed by impatient startup capital
that's expecting quick and positive returns? Are you concerned about the potentially complex nest of providers
that sit behind your provider's cloud offering? That is, the cloud providers that store its data, handle its
transactions, or manage its network? In the event that your provider ceases to exist, can they offer you
protection in the form of data escrow?
The cloud ecosystem is far more complex than the on-premise world, even if it doesn't appear that way at first
blush. When you enter the cloud, have an exit strategy, and be sure you can execute it.
Legal precedent: To date, there are few legal precedents available to help shape the cloud decision-making
landscape and assist you in your decision-making processes. For example, last February, in an attempt to define
copyright in the cloud and what defines "fair use," Google in effect sided with ReDigi (a provider that allows
users to store, buy, sell, and stream pre-owned digital music) against Capitol Records in a record industry
lawsuit.
Enterprises should develop an effective listening strategy for the latest court proceedings and decisions in the
legal jurisdictions in which you and your major customers operate. This can be as easy as creating your own
Google alert for keywords that are relevant to your company, industry, or regulatory environment.
Alternatively, ask your legal advisers what notification services they can offer you, or just subscribe to online
services such as findlaw.com.
Learning from these proceedings and early court decisions will help you avoid pitfalls that your competitors
may encounter.
Legislative and regulatory maturity: Lawyers, auditors, legislators, and regulators are still coming to grips
with cloud in its various forms. Navigating the complexities associated with the legislation and regulations that
can affect you and your provider's cloud ecosystem can be daunting, especially if you're operating across
multiple legal and international jurisdictions. For example, the US National Institute of Standards and
Technology has a well-defined Cloud Reference Architecture in which the role of Cloud Auditor is defined:
"Audits are performed to verify conformance to standards." Unfortunately, there are presently no universally
adopted standards for cloud computing, although a number of bodies (mostly sponsored by selected
vendors, such as the Cloud Security Alliance) are attempting to define them in the areas of security,
interoperability, governance, and so on.
The contract: In the public cloud model, the contract between your organization and your cloud provider takes
center stage. Your contract should be balanced, and should reflect appropriate penalties and protections in the
event of non-performance by your provider. This may be easier said than done. You may not have sufficient
financial leverage to negotiate variations to the cloud provider's standardized contract. If the contract terms are
mostly favorable to the provider, yet the commercial benefits appear compelling to your organization, it may be
worth pricing the risk into your business case and then reassessing your position.

3. Explain the cloud architectures with the help of block schematics. What are the various applications that
are provided by:
i. Software as a Service
ii. Platform as a Service
iii. Infrastructure as a Service
Note the definition of a common cloud management platform that delivers the business support systems (BSS)
and operational support systems (OSS) needed to deliver the different types of cloud services. The sophistication
of these BSS and OSS capabilities depends on the level of characteristics needed to deliver the cloud services.
For example, to support flexible pricing models, a public cloud service provider would need all of the BSS
capabilities along with the OSS metering capability. On the other hand, an enterprise that has chargeback
mechanisms in place will need the BSS billing capability along with the OSS metering capability.

Business requirements drive the cloud service offerings, and a range of offerings is needed to
support the different requirements, including customers using cloud computing to supplement traditional IT.
Note that the cloud architecture will need consistent capability to monitor and control heterogeneous components
across traditional IT and cloud. Furthermore, with the different loosely coupled workloads emerging, the cloud
architecture will need to provide support for workload focused offerings, including analytics, application
development/test and collaboration/e-mail services. Technical requirements drive the underlying IT management
patterns, including a focus on handling the top adoption factors influencing cloud services – i.e. trust, security,
availability, and SLA management. Figure 3 summarizes the main capabilities in the operational support
systems. The architecture must focus on handling the major concerns of enterprises by facilitating
internal/external cloud interoperability. This requires the architecture, for example, to handle licensing and
security issues to span traditional IT, private, and public clouds. Additionally, the
architecture must support a self-service paradigm to manage clouds using a portal, which requires a robust and
easy-to-use service management solution. A portal is key to accessing the catalog of services and to managing
security services. Of course, all of these services must be provided on top of a virtualized infrastructure of the
underlying IT resources that are needed to provide cloud services.
Examples of IaaS include: Amazon CloudFormation (and underlying services such as Amazon EC2),
Rackspace Cloud, Terremark, and Google Compute Engine.
Examples of PaaS include: Amazon Elastic Beanstalk, Cloud Foundry, Heroku, Force.com, Engine Yard,
Mendix, Google App Engine, Microsoft Azure, and OrangeScape.
Examples of SaaS include: Google Apps, innkeypos, Quickbooks Online, Limelight Video Platform,
Salesforce.com, and Microsoft Office 365.

2. Explain how applications can be deployed over the cloud.


Step 1: Access the RightScale Zend PHP Solution Pack's pre-built architecture, including PHP components and
cloud-ready server deployment
Step 2: Develop PHP application
Step 3: Find cloud hosting provider
Step 4: Onboard the target application into the RightScale Zend PHP Solution Pack
Step 5: Test for and resolve any bugs, and optimize for performance, capacity and dynamic workloads
Step 6: Launch application in the cloud
Step 7: Proactively monitor performance through the RightScale Cloud Management Platform to quickly
identify and resolve issues using Zend Server Code Tracing
Step 8: Automatically re-allocate more servers or decommission servers according to real-time demand
Step 9: Customize architecture if needed
Step 10: Regain time and bandwidth to focus on other mission-critical projects
Now organizations whose business-critical applications are built on PHP can deploy and manage their
applications faster while improving resource utilization. RightScale and Zend automate 10 key steps to cloud
application development and deployment, backed by service and support.
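The steps above are tied to specific products, but the underlying flow of launching an application server, monitoring it, and scaling with demand (steps 6 to 8) can be sketched in a provider-neutral way. The CloudProvider class below is a hypothetical stand-in for a real provider SDK; none of its method names come from the RightScale or Zend APIs.

import time

class CloudProvider:
    """Hypothetical, minimal provider client used only for illustration."""

    def launch_server(self, image, size):
        print("Launching", size, "server from image", image)
        return {"id": "srv-%d" % int(time.time() * 1000), "image": image}

    def cpu_utilization(self, server):
        # A real deployment would query the provider's monitoring API here.
        return 0.42

    def terminate(self, server):
        print("Terminating", server["id"])

def deploy_and_monitor(provider, image="php-app-image", size="small",
                       high=0.80, low=0.20, checks=3):
    """Launch an application server, then poll utilization and scale."""
    servers = [provider.launch_server(image, size)]
    for _ in range(checks):
        load = max(provider.cpu_utilization(s) for s in servers)
        if load > high:                        # scale out under heavy load
            servers.append(provider.launch_server(image, size))
        elif load < low and len(servers) > 1:  # scale in when idle
            provider.terminate(servers.pop())
        time.sleep(1)
    return servers

if __name__ == "__main__":
    deploy_and_monitor(CloudProvider())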
Deploying Cloud Applications
Deploying into the cloud usually means deploying into an unfamiliar environment, an environment you don't
fully control because it is owned and operated by a third-party hosting service provider. Moreover, you may be
physically sharing servers with many other applications and owners you don't know. To prevent conflicts,
hosting service providers tend to operate very unforgiving application environments for the protection of all
parties. So what makes a generated application easy to deploy into a stringent cloud environment?
The application uses industry-standard frameworks. Iron Speed Designer generates standard .NET Web
applications based on industry-standard databases, such as Microsoft SQL Server, Oracle and MySQL.
There are no proprietary libraries or run-time modules present in the generated applications, making them very
portable and easy to deploy.
Web Application Deployment
In this section we present an example of how the combination of virtualization and self-service provisioning
facilitates application deployment (Sun Microsystems, 2009). In this example we consider a two-tier Web
application deployment using the cloud.
The following steps comprise the deployment of the application (a minimal code sketch follows the list):
• The developer selects a load balancer, Web server, and database server appliances from a library of
preconfigured virtual machine images.
• The developer configures each component to make a custom image. The load balancer is configured, the Web
server is populated with its static content by uploading it to the storage cloud, and the database server
appliances are populated with dynamic content for the site.
• The developer then layers custom code into the new architecture, in this way making the components meet
specific application requirements.
• The developer chooses a pattern that takes the images for each layer and deploys them, handling networking,
security, and scalability issues. The secure, high-availability Web application is up and running. When the
application needs to be updated, the virtual machine images can be updated, copied across the development
chain, and the entire infrastructure can be redeployed. In this example, a standard set of components can be used
to quickly deploy an application. With this model, enterprise business needs can be met quickly, without the
need for time-consuming, manual purchase and installation.
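The pattern-based deployment just described can be summarised as data plus a small amount of orchestration code. The sketch below is hypothetical: the image names and the provision helper simply stand in for whatever image library and deployment API the chosen cloud exposes.

# A two-tier Web deployment described as data: each layer maps to a
# preconfigured virtual appliance image plus its custom configuration.
PATTERN = {
    "load_balancer": {"image": "lb-appliance-v1", "config": {"port": 443}},
    "web_server": {"image": "web-appliance-v1",
                   "config": {"static_content": "storage://mysite/static/"}},
    "database": {"image": "db-appliance-v1",
                 "config": {"seed_data": "storage://mysite/seed.sql"}},
}

def provision(image, config):
    """Stand-in for the cloud's 'boot this image with this config' call."""
    print("Booting", image, "with", config)
    return {"image": image, "config": config, "state": "running"}

def deploy_stack(pattern):
    """Deploy every layer of the pattern; redeployment is just rerunning this."""
    return {layer: provision(spec["image"], spec["config"])
            for layer, spec in pattern.items()}

if __name__ == "__main__":
    stack = deploy_stack(PATTERN)
    print("Deployed layers:", ", ".join(stack))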

8. Explain the concept of utility computing and elastic computing in the cloud.

• Utility computing refers to the ability to meter the offered services and charge customers for their exact
usage. It is interesting to note that the term originates from public utility services such as electricity.
• Utility computing is very often connected to cloud computing as it is one of the options for its
accounting. As explained in Cloud computing infrastructure, utility computing is a good choice for
less resource-demanding applications where peak usage is expected to be sporadic and rare.
• Still, utility computing does not require cloud computing and it can be done in any server
environment. However, metering very small amounts of usage is economically inefficient, which is
why utility computing is most often applied in cloud hosting, where large pools of resources
are being managed.
Utility computing:
Utility computing is a service provisioning model in which a service provider makes computing resources and
infrastructure management available to the customer as needed, and charges them for specific usage rather than
a flat rate. Like other types of on-demand computing (such as grid computing), the utility model seeks to
maximize the efficient use of resources and/or minimize associated costs. The word utility is used to make an
analogy to other services, such as electrical power, that seek to meet fluctuating customer needs, and charge for
the resources based on usage rather than on a flat-rate basis. This approach, sometimes known as pay-per-use or
metered services, is becoming increasingly common in enterprise computing and is sometimes used for the
consumer market as well, for Internet service, Web site access, file sharing, and other applications. Another
version of utility computing is carried out within an enterprise. In a shared pool utility model, an enterprise
centralizes its computing resources to serve a larger number of users without unnecessary redundancy.
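Pay-per-use billing ultimately reduces to metering consumption and multiplying it by a unit rate. The short calculation below shows the idea; the meters and prices are invented purely for illustration.

# Hypothetical unit prices: per CPU-hour, per GB-month of storage,
# and per million requests served.
RATES = {"cpu_hours": 0.05, "storage_gb_months": 0.10, "requests_millions": 0.40}

def monthly_charge(usage, rates=RATES):
    """Utility-style billing: charge = sum of (metered usage x unit rate)."""
    return sum(usage.get(meter, 0) * rate for meter, rate in rates.items())

if __name__ == "__main__":
    metered = {"cpu_hours": 720, "storage_gb_months": 50, "requests_millions": 12}
    # 720*0.05 + 50*0.10 + 12*0.40 = 36.00 + 5.00 + 4.80 = 45.80
    print("Charge for the month: $%.2f" % monthly_charge(metered))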
Advantages of Utility Computing:-
1. The client doesn't have to buy all the hardware, software and licenses needed to do business. Instead, the
client relies on another party to provide these services. The burden of maintaining and administering the system
falls to the utility computing company, allowing the client to concentrate on other tasks.
2. Utility computing gives companies the option to subscribe to a single service and use the same suite of
software throughout the entire client organization.
3. Another advantage is compatibility. In a large company with many departments, problems can arise with
computing software. Each department might depend on different software suites. The files used by employees
in one part of a company might be incompatible with the software used by employees in another part. Utility
computing avoids this by standardizing the whole organization on a single service and software suite.
Disadvantages of Utility Computing:-
1. A potential disadvantage is reliability. If a utility computing company is in financial trouble or has frequent
equipment problems, clients could get cut off from the services for which they're paying.
2. Utility computing systems can also be attractive targets for hackers. A hacker might want to access services
without paying for them or snoop around and investigate client files. Much of the responsibility of keeping the
system safe falls to the provider.
The term elastic computing has become popular when discussing cloud computing. The Amazon elastic compute
cloud platform makes extensive use of virtualization based on the Xen hypervisor. Reserving and booting a
server instance on the Amazon EC2 cloud provisions and starts a virtual machine on one of Amazon's servers.
The configuration of the required virtual machine can be chosen from a set of options.
The user of the virtual instance is unaware of, and oblivious to, which physical server the instance has been booted
on, as well as the resource characteristics of the physical machine. An elastic multi-server environment is one
which is completely virtualized, with all hardware resources running under a set of cooperating virtual machine
monitors and in which provisioning of virtual machines is largely automated and can be dynamically controlled
according to demand. In general, any multi-server environment can be made elastic using virtualization in much
the same manner as has been done in Amazon's cloud, and this is what many enterprise virtualization projects
attempt to do. The key success factor in achieving such elasticity is the degree of automation that can be
achieved across multiple VMMs working together to maximize utilization. The scale of such operations is also
important, which in the case of Amazon's cloud runs into tens of thousands of servers, if not more. The larger
the scale, the greater the potential for amortizing demand efficiently across the available capacity while also
giving users an illusion of infinite computing resources. Technology to achieve elastic computing at scale is,
today, largely proprietary and in the hands of the major cloud providers. Some automated provisioning
technology is available in the public domain or commercially off the shelf and is being used by many
enterprises in their internal data center automation efforts. Apart from many startup companies, VMware's
VirtualCenter product suite aims to provide this capability through its vCloud architecture. We shall discuss the
features of an elastic data center in more detail later in this chapter; first we cover virtual machine migration,
which is a prerequisite for many of these capabilities.
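Much of the automation referred to above comes down to deciding, across a pool of physical hosts, where each new virtual machine should be placed so that overall utilization stays high. The first-fit placement sketch below is a deliberately simplified illustration of that decision, not how any real provider schedules virtual machines.

from dataclasses import dataclass, field

@dataclass
class Host:
    """A physical server managed by a virtual machine monitor."""
    name: str
    capacity_cpus: int
    vms: list = field(default_factory=list)

    def free_cpus(self):
        return self.capacity_cpus - sum(self.vms)

def place_vm(hosts, vcpus):
    """First-fit placement: pack VMs so that hosts stay well utilized."""
    for host in hosts:
        if host.free_cpus() >= vcpus:
            host.vms.append(vcpus)
            return host.name
    return None  # no capacity left: the pool itself must be grown

if __name__ == "__main__":
    pool = [Host("host-a", 16), Host("host-b", 16)]
    for request in [8, 4, 6, 4, 2]:
        placed = place_vm(pool, request)
        print(request, "vCPU VM ->", placed or "need more hosts")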
Advantages of elastic computing
Low to no upfront infrastructure investment, just-in-time deployment, and a more efficient resource utilization
model are all benefits of the cloud, and it is these very drivers that are creating significant demand for
cloud-based services. These are also the major advantages of elastic computing.
Disadvantages of elastic computing
Despite its many benefits, cloud computing is not without its disadvantages. As you begin to explore your
managed data service solution options, it is important to understand that certain constraints may accompany
some of the advantages. Only by understanding both sides of the cloud computing spectrum can you select the
solution that is right for you.
As you explore your cloud computing options, a few disadvantages to be aware of include:
More elasticity means less control: While public clouds are great for quickly scaling your resources up and
down, companies that require complete and total control over their data and applications will need to avoid
the public cloud. Alternative solutions include hybrid clouds, private clouds, and colocation.
Differentiate between cloud computing and Grid Computing: