Chapter 1
Evolution of Modern Computing
In IBM terminology, VTAM is access method software allowing application programs to read and
write data to and from external devices. It is called 'virtual' because it was introduced at the time when IBM
was introducing virtual storage by upgrading the operating systems of the System/360 series to virtual
storage versions. VTAM was supposed to be the successor to the older telecommunications access methods,
such as Basic Telecommunications Access Method (BTAM) and Telecommunications Access Method
(TCAM), which were maintained for compatibility reasons.
Originally, VTAM was provided free of charge like most systems software of that time. However,
VTAM 2 was the last version to be freely available. ACF/VTAM (Advanced Communication
Function/Virtual Telecommunications Access Method) was introduced in 1976 and was provided for a
license fee. The major new feature of ACF/VTAM was the Multisystem Networking Facility, which
introduced "implementation of intersystem communication among multiple S/370s."
VTAM has been renamed to be the SNA Services feature of Communications Server for OS/390.
This software package also provides TCP/IP functions. VTAM supports several network protocols,
including SDLC, Token Ring, start-stop, Bisync, local (channel attached) 3270 devices,[5] and later TCP/IP.
VTAM became part of IBM's strategic Systems Network Architecture (SNA) which in turn became part of
the more comprehensive Systems Application Architecture (SAA). Terminals communicated with the
mainframe using the systems network architecture (SNA) protocol, instead of the ubiquitous TCP/IP
protocol of today. While these mainframe computers had limited CPU power by modern standards, their I/O
bandwidth was (and is, to date) extremely generous relative to their CPU power. Consequently, mainframe
applications were built using batch architecture to minimize utilization of the CPU during data entry or
retrieval. Thus, data would be written to disk as soon as it was captured and then processed by scheduled
background programs, in sharp contrast to the complex business logic that gets executed during online
transactions on the web today. In fact, for many years, moving from a batch model to an online one was
considered a major revolution in IT architecture, and large systems migration efforts were undertaken to
achieve this; it is easy to see why: In a batch system, if one deposited money in a bank account it would
usually not show up in the balance until the next day after the end of day batch jobs had run! Further, if there
was incorrect data entry, a series of corrective measures had to be triggered later, rather than the data
being validated immediately at the point of entry.
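To make the contrast concrete, the following is a minimal Python sketch (not from the original text; the file name and record layout are hypothetical) of the batch pattern described above: records captured online are simply written to disk, and balances change only when the scheduled end-of-day job runs.

# Minimal sketch of the batch model described above (hypothetical record layout).
# Deposits captured during the day are only appended to a transaction file;
# balances change only when the end-of-day batch job runs.

from collections import defaultdict

TX_FILE = "deposits_today.txt"          # hypothetical capture file

def capture_deposit(account: str, amount: float) -> None:
    """Online data entry: write the record to disk, do no business logic."""
    with open(TX_FILE, "a") as f:
        f.write(f"{account},{amount}\n")

def end_of_day_batch(balances: dict) -> dict:
    """Scheduled job: validate and post every captured record in one pass."""
    with open(TX_FILE) as f:
        for line in f:
            account, amount = line.strip().split(",")
            amount = float(amount)
            if amount <= 0:               # validation happens here, not at entry time
                print(f"rejected record: {line.strip()}")
                continue
            balances[account] += amount
    return balances

if __name__ == "__main__":
    capture_deposit("ACC-001", 250.0)     # shows up in the balance only after the batch run
    print(end_of_day_batch(defaultdict(float)))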
Typically, legacy programs written in COBOL, PL/1, and assembler language use VTAM to
communicate with interactive devices and their users. Programs that use VTAM macro instructions are
generally exchanging text strings (for example, online forms and the user's form input) and the most
common interactive device used with VTAM programs was the 3270 Information Display System.
MVS (Multiple Virtual Storage) is an operating system from IBM that continues to run on many of
IBM's mainframe and large server computers. MVS has been said to be the operating system that keeps the
world going and the same could be said of its successor systems, OS/390 and z/OS. The payroll, accounts
receivable, transaction processing, database management, and other programs critical to the world’s largest
businesses are usually run on an MVS or successor system. Although MVS has often been seen as a monolithic,
centrally-controlled information system, IBM has in recent years repositioned it (and successor systems) as a
"large server" in a network-oriented distributed environment, using a 3-tier application model. The follow-on
version of MVS, OS/390, no longer included the "MVS" in its name. Since MVS represents a certain epoch and
culture in the history of computing and since many older MVS systems still operate, the term "MVS" will
probably continue to be used for some time. Since OS/390 also comes with UNIX user and programming
interfaces built in, it can be used as both an MVS system and a UNIX system at the same time.
A more recent evolution of MVS is z/OS, an operating system for IBM's zSeries mainframes.
The Virtual Storage in MVS refers to the use of virtual memory in the operating system. Virtual storage or
memory allows a program to have access to the maximum amount of memory in a system even though this
memory is actually being shared among more than one application program. The operating system translates
the program's virtual address into the real physical memory address where the data is actually located. The
Multiple in MVS indicates that a separate virtual memory is maintained for each of multiple task partitions.
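As an illustration (not from the original text; page size and frame numbers are invented), the following Python sketch mimics the idea of per-task-partition page tables: each partition gets its own virtual-to-real mapping, so the same virtual address in two tasks can refer to different real frames.

# Illustrative sketch of "multiple virtual storage": one page table per task,
# so identical virtual addresses map to different real frames.

PAGE_SIZE = 4096          # hypothetical page size

class AddressSpace:
    def __init__(self, page_table: dict):
        self.page_table = page_table      # virtual page number -> real frame number

    def translate(self, virtual_addr: int) -> int:
        vpn, offset = divmod(virtual_addr, PAGE_SIZE)
        frame = self.page_table[vpn]      # a real OS would page-fault on a miss
        return frame * PAGE_SIZE + offset

task_a = AddressSpace({0: 7, 1: 3})
task_b = AddressSpace({0: 12, 1: 5})

# The same virtual address resolves to different real addresses in each task.
print(task_a.translate(100))   # 7 * 4096 + 100
print(task_b.translate(100))   # 12 * 4096 + 100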
Job Control Language (JCL) is a scripting language used on IBM mainframe operating systems to
instruct the system on how to run a batch job or start a subsystem. There are actually two IBM JCLs: one for
the operating system lineage that begins with DOS/360 and whose latest member is z/VSE; and the other for
the lineage from OS/360 to z/OS. They share some basic syntax rules and a few basic concepts, but are
otherwise very different. In the early mainframe architectures (through the mid/late 80s), application data
was stored either in structured files, or in database systems based on the hierarchical or networked data
model. Typical examples include the hierarchical IMS database from IBM, or the IDMS network database,
managed now by Computer Associates. The relational (RDBMS) model was published and prototyped in the
70s and debuted commercially in the early 80s with IBM's SQL/DS on the VM/CMS operating system.
However, relational databases came into mainstream use only after the mid-80s with the advent of IBM's
DB2 on the mainframe and Oracle's implementation for the emerging UNIX platform.
IMS (Information Management System) is a database and transaction management system that
was first introduced by IBM in 1968. Since then, IMS has gone through many changes in adapting to new
programming tools and environments. IMS is one of two major legacy database and transaction management
subsystems from IBM that run on mainframe MVS (now z/OS) systems. The other is CICS. It is claimed
that, historically, application programs that use either (or both) IMS or CICS services have handled and
continue to handle most of the world's banking, insurance, and order entry transactions. IMS consists of two
major components, the IMS Database Management System (IMS DB) and the IMS Transaction
Management System (IMS TM). In IMS DB, the data is organized into a hierarchy. The data in each level is
dependent on the data in the next higher level. The data is arranged so that its integrity is ensured, and the
storage and retrieval process is optimized. IMS TM controls I/O (input/output) processing, provides
formatting, logging, and recovery of messages, maintains communications security, and oversees the
scheduling and execution of programs. TM uses a messaging mechanism for queuing requests. IMS's original
programming interface was DL/1 (Data Language/1). Today, IMS applications and databases can be
connected to CICS applications and DB2 databases. Java programs can access IMS databases and services.
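To illustrate the hierarchical organization of IMS DB (a conceptual sketch only; the segment names and fields are invented, and this is not the DL/1 API), the fragment below models parent-child segments in which each record depends on its parent at the next higher level.

# Conceptual sketch of a hierarchical (IMS-like) structure: child segments
# exist only under a parent, and retrieval walks the hierarchy top-down.
# Segment names and fields are invented for illustration.

customer_db = {
    "CUST-100": {                               # root segment
        "name": "Acme Corp",
        "orders": {                             # child segments depend on the customer
            "ORD-1": {"items": ["bolt", "nut"]},
            "ORD-2": {"items": ["washer"]},
        },
    }
}

def get_order(customer_id: str, order_id: str):
    """Access is always through the parent, mirroring hierarchical retrieval."""
    customer = customer_db.get(customer_id)
    if customer is None:
        return None
    return customer["orders"].get(order_id)

print(get_order("CUST-100", "ORD-2"))   # {'items': ['washer']}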
Customer Information Control System (CICS) is a transaction server that runs primarily on
IBM mainframe systems under z/OS and z/VSE. CICS is middleware designed to support rapid, high-volume
online transaction processing. A CICS transaction is a unit of processing initiated by a single request that
may affect one or more objects. This processing is usually interactive (screen-oriented), but background
transactions are possible. CICS provides services that extend or replace the functions of the operating system
and are more efficient than the generalized services in the operating system and simpler for programmers to
use, particularly with respect to communication with diverse terminal devices. Applications developed for
CICS may be written in a variety of programming languages and use CICS-supplied language extensions to
interact with resources such as files, database connections, terminals, or to invoke functions such as web
services. CICS manages the entire transaction such that if for any reason a part of the transaction fails all
recoverable changes can be backed out. CICS is also widely used by many smaller organizations. CICS is
used in bank-teller applications, ATM systems, industrial production control systems, insurance
applications, and many other types of interactive applications.
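The back-out behaviour described above can be sketched as follows (a simplified illustration, not CICS's actual API): all recoverable changes made within a unit of work are either committed together or discarded if any step fails.

# Simplified sketch of an atomic unit of work: either every change is applied
# (commit) or none are (back-out). This is an illustration, not the CICS API.

class UnitOfWork:
    def __init__(self, store: dict):
        self.store = store
        self.pending = {}                 # recoverable changes, not yet visible

    def update(self, key, value):
        self.pending[key] = value

    def commit(self):
        self.store.update(self.pending)   # make all changes visible at once
        self.pending.clear()

    def backout(self):
        self.pending.clear()              # discard all recoverable changes

accounts = {"A": 100, "B": 50}
uow = UnitOfWork(accounts)
uow.update("A", accounts["A"] - 30)       # debit
uow.update("B", accounts["B"] + 30)       # credit

try:
    # ... if any part of the transaction had failed here, we would back out ...
    uow.commit()
except Exception:
    uow.backout()

print(accounts)   # {'A': 70, 'B': 80}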
The storage subsystem in mainframes, called the virtual storage access method (VSAM), had built-in
support for a variety of file access and indexing mechanisms, as well as sharing of data between concurrent
users using record-level locking mechanisms. Early file-structure-based data storage, including networked
and hierarchical databases, rarely included support for concurrency control beyond simple locking. The need
for transaction control, i.e., maintaining consistency of a logical unit of work made up of multiple updates,
led to the development of transaction-processing monitors (TP monitors), such as CICS (customer
information control system). CICS leveraged facilities of the VSAM layer and implemented commit and roll
back protocols to support atomic transactions in a multi-user environment. CICS is still in use in conjunction
with DB2 relational databases on IBM z-series mainframes. At the same time, the need for speed continued
to see the exploitation of so called direct access methods where transaction control is left to application
logic.
The term Virtual Storage Access Method (VSAM) applies to both a data set type and the access
method used to manage various user data types. Using VSAM, an enterprise can organize records in a file in
physical sequence (the sequential order that they were entered), logical sequence using a key (for example,
the employee ID number), or by the relative record number on direct access storage devices (DASD).
There are three types of VSAM data sets:
1. Entry Sequenced Data Set(ESDS)
2. Key Sequenced Data Set(KSDS)
3. Relative Record Data Set(RRDS)
VSAM records can be of fixed or variable length. The three data set types correspond, respectively, to the sequential (entry-order), keyed, and relative-record organizations described above.
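A rough Python analogy of the three organizations is shown below (purely illustrative; VSAM itself is accessed through access-method services, not this code, and the record layout is invented): entry-sequenced access reads records in arrival order, key-sequenced access looks records up by key, and relative-record access addresses a record by its slot number.

# Rough analogy of the three VSAM organizations (illustrative only).

records = [
    {"empid": "E03", "name": "Rao"},
    {"empid": "E01", "name": "Iyer"},
    {"empid": "E02", "name": "Das"},
]

# ESDS-like: records are kept and browsed in the order they were entered.
print([rec["empid"] for rec in records])          # ['E03', 'E01', 'E02']

# KSDS-like: an index on a key (here the employee ID) allows direct keyed access.
index = {rec["empid"]: rec for rec in records}
print(index["E02"])                               # direct retrieval by key

# RRDS-like: a record is addressed by its relative record number (slot).
print(records[1])                                 # second record entered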
CLIENT-SERVER ARCHITECTURE:--
The microprocessor revolution of the 80s brought PCs to business desktops as well as homes. At the same
time minicomputers such as the VAX family and RISC-based systems running the UNIX operating system
and supporting the C programming language became available. It was now conceivable to move some data
processing tasks away from expensive mainframes to exploit the seemingly powerful and inexpensive
desktop CPUs. As an added benefit corporate data became available on the same desktop computers that
were beginning to be used for word processing and spreadsheet applications using emerging PC-based
office-productivity tools. In contrast, terminals were difficult to use and typically found only in 'data
processing rooms'. Moreover, relational databases, such as Oracle, became available on minicomputers,
overtaking the relatively lukewarm adoption of DB2 in the mainframe world.
Finally, networking using TCP/IP rapidly became a standard, meaning that networks of PCs and
minicomputers could share data. Corporate data processing rapidly moved to exploit these new technologies.
Figure 1.2 shows the architecture of client-server systems. First, the forms architecture for minicomputer-
based data processing became popular. At first this architecture involved the use of terminals to access
server-side logic in C, mirroring the mainframe architecture; later, PC-based forms applications provided
graphical user interfaces (GUIs) as opposed to the terminal-based, character-oriented user interfaces (CUIs).
The GUI forms model was the
first client-server architecture. The forms architecture evolved into the more general client-server
architecture, wherein significant processing logic executes in a client application, such as a desktop PC:
Therefore the client-server architecture is also referred to as fat-client architecture, as shown in Figure
1.2. The client application (or 'fat client') directly makes calls (using SQL) to the relational database using
networking protocols such as SQL/Net, running over a local area (or even wide area) network using TCP/IP.
Business logic largely resides within the client application code, though some business logic can also be
implemented within the database for faster performance, using 'stored procedures'. The client-server
architecture became hugely popular: Mainframe applications which had been evolving for more than a
decade were rapidly becoming difficult to maintain, and client-server provided a refreshing and seemingly
cheaper alternative to recreating these applications for the new world of desktop computers and smaller
Unix-based servers. Further, by leveraging the computing power on desktop computers to perform
validations and other logic, online systems became possible, a big step forward for a world used to batch
processing. Lastly, graphical user interfaces allowed the development of extremely rich user interfaces,
which added to the feeling of being redeemed from the mainframe world.
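A minimal sketch of the fat-client pattern follows (illustrative only: sqlite3 stands in for a remote relational database reached over SQL*Net or a similar protocol, and the table is invented). Validation and business logic run in the client program, which then issues SQL directly against the database.

# Illustrative fat-client sketch: business logic and validation live in the
# client; the database is reached directly with SQL. sqlite3 stands in for a
# remote RDBMS accessed over a network protocol such as SQL*Net.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, amount REAL)")

def place_order(customer: str, amount: float) -> None:
    # Client-side validation: runs on the desktop PC, not on the server.
    if not customer or amount <= 0:
        raise ValueError("invalid order")
    # Direct SQL call to the database layer, one of many per screen.
    conn.execute("INSERT INTO orders (customer, amount) VALUES (?, ?)", (customer, amount))
    conn.commit()

place_order("Acme Corp", 1200.0)
print(conn.execute("SELECT * FROM orders").fetchall())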
In the early to mid-90s, the client-server revolution spawned and drove the success of a host of
application software products, such as SAP R/3, the client-server version of SAP's ERP software for core
manufacturing process automation, which was later extended to other areas of enterprise operations.
Similarly, supply chain management (SCM) products, such as those from i2, and customer relationship
management (CRM) products, such as those from Siebel, also became popular. With these products, it was
conceivable, in principle, to
replace large parts of the functionality deployed on mainframes by client-server systems, at a fraction of the
cost.
However, the client-server architecture soon began to exhibit its limitations as its usage grew beyond
small workgroup applications to the core systems of large organizations: Since processing logic on the
client directly accessed the database layer, client-server applications usually made many requests to the
server while processing a single screen. Each such request was relatively bulky as compared to the terminal-
based model where only the input and final result of a computation were transmitted. In fact, CICS and IMS
even today support changed-data-only modes of terminal images, where only those bytes changed by a user
are transmitted over the network. Such frugal network architectures enabled globally distributed terminals to
connect to a central mainframe even though network bandwidths were far lower than they are today. Thus,
while the client-server model worked fine over a local area network, it created problems when client-server
systems began to be deployed on wide area networks connecting globally distributed offices. As a result,
many organizations were forced to create regional data centers, each replicating the same enterprise
application, albeit with local data. This structure itself led to inefficiencies in managing global
software upgrades, not to mention the additional complications posed by having to upgrade the client
applications on each desktop machine as well.
Finally, it also became clear over time that application maintenance was far costlier when user
interface and business logic code was intermixed, as almost always became the case in the fat client-side
applications. Lastly, and in the long run most importantly, the client-server model did not scale;
organizations such as banks and stock exchanges where very high volume processing was the norm could
not be supported by the client-server model.
Thus, the mainframe remained the only means to achieve large-throughput, high-performance
business processing.
CLUSTER COMPUTING:
There are many applications that require high-performance computing. Some examples:
Modeling, simulation and analysis of complex systems like climate, galaxies, molecular structures, nuclear explosions, etc.
Business and Internet applications such as e-commerce (e.g. Amazon) and Web servers (e.g. Yahoo, Google), file servers, databases, etc.
Dedicated parallel computers are very expensive.
Also, supercomputers are not easily extendible.
Cost-effective approaches are: Cluster Computing, Grid Computing and Cloud Computing.
Cluster Computing is useful if an application has one or more of the following characteristics:
Large runtimes
Real-time constraints
Large memory usage
High I/O usage
Fault tolerance
Introduction
The first inspiration for cluster computing was developed in the 1960s by IBM as an alternative way of linking
large mainframes to provide a more cost-effective form of commercial parallelism [1]. At that time, IBM's
Houston Automatic Spooling Priority (HASP) system and its successor, the Job Entry Subsystem (JES), allowed
the distribution of work to a user-constructed mainframe cluster. IBM still supports clustering of mainframes
through their Parallel Sysplex system, which allows the hardware, operating system, middleware, and
system management software to provide dramatic performance and cost improvements while permitting
large mainframe users to continue to run their existing applications.
However, cluster computing did not gain momentum until the convergence of three important trends
in the 1980s: high-performance microprocessors, high-speed networks, and standard tools for high
performance distributed computing. A possible fourth trend is the increasing need of computing power for
computational science and commercial applications coupled with the high cost and low accessibility of
traditional supercomputers. These four building blocks are also known as killer-microprocessors, killer-
networks, killer-tools, and killer-applications, respectively. The recent advances in these technologies and
their availability as cheap and commodity components are making clusters or networks of computers such as
Personal Computers (PCs), workstations, and Symmetric Multiprocessors (SMPs) an appealing solution
for cost-effective parallel computing. Clusters, built using commodity-off-the-shelf (COTS) hardware
components as well as free, or commonly used, software, are playing a major role in redefining the concept
of supercomputing. And consequently, they have emerged as mainstream parallel and distributed platforms
for high-performance, high-throughput and high-availability computing.
Examples include clusters of IBM, Sun, and DEC workstations connected by a 10 Mbps Ethernet LAN, HP clusters, etc.
Components of a Cluster
Price/Performance: The reason for the growth in use of clusters is that they have significantly
reduced the cost of processing power.
Availability: Single points of failure can be eliminated; if any one system component goes down, the
system as a whole stays highly available.
Scalability: HPC clusters can grow in overall capacity because processors and nodes can be added
as demand increases.
Cluster Categorization
High-availability
High-availability clusters (also known as Failover Clusters) are implemented primarily for the purpose
of improving the availability of services that the cluster provides.
They operate by having redundant nodes, which are then used to provide service when system
components fail.
The most common size for an HA cluster is two nodes, which is the minimum requirement to
provide redundancy.
HA cluster implementations attempt to use redundancy of cluster components to eliminate single points
of failure.
There are commercial implementations of High-Availability clusters for many operating systems. The
Linux-HA project is one commonly used free software HA package for the Linux operating system.
Load-balancing
Load-balancing is when multiple computers are linked together to share computational workload or
function as a single virtual computer.
Physically they are multiple machines, but from the user's side they function as a single virtual machine.
Requests initiated by users are managed by, and distributed among, all the standalone computers that
form the cluster.
This results in balanced computational work among the different machines, improving the performance
of the cluster system (a minimal sketch of this idea follows this list).
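The sketch below is illustrative only (node names are invented): user requests are distributed round-robin across the standalone machines that form the cluster, which appears to the user as one machine.

# Minimal round-robin load balancer: user requests are spread across the
# standalone nodes of the cluster.

from itertools import cycle

class LoadBalancer:
    def __init__(self, nodes):
        self._nodes = cycle(nodes)        # rotate through the available nodes

    def dispatch(self, request: str) -> str:
        node = next(self._nodes)
        return f"{node} handled {request}"

lb = LoadBalancer(["node-1", "node-2", "node-3"])     # hypothetical node names
for i in range(5):
    print(lb.dispatch(f"request-{i}"))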
High-Performance
Started from 1994.
Cluster Classification
Open Cluster – All nodes can be seen from outside; hence they need more IP addresses and cause more
security concerns. But they are more flexible and are used for internet/web/information server tasks.
Closed Cluster – They hide most of the cluster behind the gateway node. Consequently, they need fewer
IP addresses and provide better security. They are good for computing tasks.
Benefits
A clustered system offers many valuable benefits to a modern high performance computing infrastructure
including:
High processing capacity — by combining the power of multiple servers, clustered systems can tackle
large and complex workloads. For example, an organization can reduce the time for key engineering simulation
jobs from days to hours, thereby shortening the time-to-market for its new product.
Resource consolidation — A single cluster can accommodate multiple workloads and can vary the
processing power assigned to each workload as required; this makes clusters ideal for resource consolidation
and optimizes resource utilization.
Optimal use of resources — Individual systems typically handle a single workload and must be sized to
accommodate expected peak demands for that workload; this means they typically run well below capacity
but can still "run out" if demand exceeds capacity—even if other systems are idle. Because clustered
systems share enormous processing power across multiple workloads, they can handle a demand peak—
even an unexpected one—by temporarily increasing the share of processing for that workload, thereby
taking advantage of unused capacity.
Geographic server consolidation — Some organizations may share processing power around the world, for
example by diverting daytime US transaction processing to systems in Japan that are relatively idle overnight.
24 x 7 availability with failover protection — because processing is spread across multiple machines,
clustered systems are highly fault-tolerant: if one system fails, the others keep working.
Disaster recovery — Clusters can span multiple geographic sites so even if an entire site falls victim to a
power failure or other disaster, the remote machines keep working.
Horizontal and vertical scalability without downtime — as the business demands grow, additional
processing power can be added to the cluster without interrupting operations.
Centralized system management — Many available tools enable deployment, maintenance and monitoring
of large, distributed clusters from a single point of control.
Disadvantages:
Administration complexity: managing, monitoring, and upgrading many nodes is more complex than administering a single machine.
Cluster Applications
Google Search Engine.
Earthquake Simulation.
Weather Forecasting.
GRID COMPUTING–
Grid is an infrastructure that involves the integrated and collaborative use of computers, networks,
databases and scientific instruments owned and managed by multiple organizations. Grid applications often
involve large amounts of data and/or computing resources that require secure resource sharing across
organizational boundaries.
Grid computing is a form of distributed computing whereby a "super and virtual computer" is
composed of a cluster of networked, loosely coupled computers, acting in concert to perform very large
tasks. Grid computing (Foster and Kesselman, 1999) is a growing technology that facilitates the execution
of large-scale, resource-intensive applications on geographically distributed computing resources. It facilitates
flexible, secure, coordinated large-scale resource sharing among dynamic collections of individuals,
institutions, and resources. It enables communities ("virtual organizations") to share geographically distributed
resources as they pursue common goals.
Grid is a shared collection of reliable resources (tightly coupled clusters) and unreliable resources (loosely
coupled machines), along with interactively communicating researchers from different virtual organizations
(doctors, biologists, physicists). The Grid system controls and coordinates the integrity of the Grid by balancing
the usage of reliable and unreliable resources among its participants, providing better quality of service.
Grid computing is a method of harnessing the power of many computers in a network to solve
problems requiring a large number of processing cycles and involving huge amounts of data. Most
organizations today deploy firewalls around their computer networks to protect their sensitive proprietary
data. But the central idea of grid computing, to enable resource sharing across organizational boundaries,
makes mechanisms such as firewalls difficult to use.
Grid Topologies
Types Of Grids
Computational Grid:- "A computational grid is a hardware and software infrastructure that provides
dependable, consistent, pervasive, and inexpensive access to high-end computational capabilities." It provides
users with compute power for solving jobs, with mechanisms that can intelligently and transparently select the
computing resources capable of running a user's jobs, while allowing users to manage those computing
resources independently.
Data Grid:-A data grid is a grid computing system that deals with data — the controlled sharing and
management of large amounts of distributed data. Data Grid is the storage component of a grid
environment. Scientific and engineering applications require access to large amounts of data, and often this
data is widely distributed. A data grid provides seamless access to the local or remote data required to
complete compute intensive calculations.
Example :
A high-level view of activities involved within a seamless and scalable Grid environment is shown in
Figure 2. Grid resources are registered within one or more Grid information services. The end users submit
their application requirements to the Grid resource broker, which then discovers suitable resources by
querying the Information services, schedules the application jobs for execution on these resources and then
monitors their processing until they are completed. A more complex scenario would involve more
requirements and therefore, Grid environments involve services such as security, information, directory,
resource allocation, application development, execution management, resource aggregation, and scheduling.
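The broker workflow described above can be sketched roughly as follows (an illustration only; the resource attributes and the matching rule are invented): resources register with an information service, and the broker queries it to pick a suitable resource for each submitted job.

# Rough sketch of the Grid broker workflow: resources register with an
# information service; the broker queries it and schedules each job on a
# suitable resource. Attributes and matching rules are invented.

information_service = []    # registered Grid resources

def register(name: str, cpus: int, site: str) -> None:
    information_service.append({"name": name, "cpus": cpus, "site": site})

def broker_submit(job: dict):
    """Discover a resource that satisfies the job's requirements and schedule it."""
    candidates = [r for r in information_service if r["cpus"] >= job["cpus_needed"]]
    if not candidates:
        return None
    chosen = max(candidates, key=lambda r: r["cpus"])   # simple selection policy
    return f"job '{job['name']}' scheduled on {chosen['name']} at {chosen['site']}"

register("cluster-A", cpus=64, site="site-1")
register("pc-pool-B", cpus=8, site="site-2")
print(broker_submit({"name": "climate-sim", "cpus_needed": 32}))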
Figure 3 shows the hardware and software stack within a typical Grid architecture. It consists of four
layers: fabric, core middleware, user-level middleware, and applications and portals layers.
Grid Fabric layer consists of distributed resources such as computers, networks, storage devices and
scientific instruments. The computational resources represent multiple architectures such as clusters,
supercomputers, servers and ordinary PCs which run a variety of operating systems (such as UNIX variants
or Windows). Scientific instruments such as telescopes and sensor networks provide real-time data that can
be transmitted directly to computational sites or stored in a database.
Core Grid middleware offers services such as remote process management, co-allocation of resources,
storage access, information registration and discovery, security, and aspects of Quality of Service (QoS)
such as resource reservation and trading. These services abstract the complexity and heterogeneity of the
fabric level by providing a consistent method for accessing distributed resources.
User-level Grid middleware utilizes the interfaces provided by the low-level middleware to provide higher
level abstractions and services. These include application development environments, programming tools
and resource brokers for managing resources and scheduling application tasks for execution on global
resources.
Grid applications and portals are typically developed using Grid-enabled programming environments and
interfaces and brokering and scheduling services provided by user-level middleware. An example
application, such as parameter simulation or a grand-challenge problem, would require computational
power, access to remote datasets, and may need to interact with scientific instruments. Grid portals offer
Web-enabled application services, where users can submit and collect results for their jobs on remote
resources through the Web.
Benefits
Exploit underutilized resources
Resource load balancing
Virtualize resources across an enterprise
Data Grids, Compute Grids
Enable collaboration for virtual organizations
Business benefits
Improve efficiency by improving computational capabilities
Bring together not only IT resources but also people.
Create flexible, resilient operational infrastructures
Address rapid fluctuations in customer demands.
Technology benefits
Federate data and distribute it globally.
Support large multi-disciplinary collaboration across organizations and businesses.
Enable recovery from failure
Ability to run large-scale applications comprising thousands of computers, for a wide range of applications.
Reduces signal latency – the delay that builds up as data are transmitted over the Internet.
It is now clear that silicon-based processor chips are reaching their physical limits. Processing speed is
constrained by the speed of light, and the density of transistors packaged in a processor is constrained by
thermodynamic limitations. A viable solution to overcome this limitation is to connect multiple processors
working in coordination with each other to solve "Grand Challenge" problems. The first steps in this direction
led to the development of parallel computing, which encompasses techniques, architectures, and systems for
performing multiple activities in parallel. As we already discussed, the term parallel computing has blurred
its edges with the term distributed computing and is often used in place of the latter term. In this section, we
refer to its proper characterization, which involves the introduction of parallelism within a single computer
by coordinating the activity of multiple processors together.
Processing of multiple tasks simultaneously on multiple processors is called parallel processing. The parallel
program consists of multiple active processes (tasks) simultaneously solving a given problem. A given task
is divided into multiple sub tasks using a divide-and-conquer technique, and each sub task is processed on a
different central processing unit (CPU). Programming on a multi-processor system using the divide-and-
conquer technique is called parallel programming. Many applications today require more computing power
than a traditional sequential computer can offer. Parallel processing provides a cost-effective solution to this
problem by increasing the number of CPUs in a computer and by adding an efficient communication
system between them. The workload can then be shared between different processors. This setup results in
higher computing power and performance than a single-processor system offers. The development of parallel
processing is being influenced by many factors. The prominent among them include the following:
• Computational requirements are ever increasing in the areas of both scientific and business computing.
The technical computing problems, which require high-speed computational power, are related to life
sciences, aerospace, geographical information systems, mechanical design and analysis, and the like.
• Sequential architectures are reaching physical limitations as they are constrained by the speed of light and
thermodynamics laws. The speed at which sequential CPUs can operate is reaching saturation point (no
more vertical growth), and hence an alternative way to get high computational speed is to connect multiple
CPUs (opportunity for horizontal growth).
• Hardware improvements in pipelining, superscalar execution, and the like are non-scalable and require
sophisticated compiler technology. Developing such compiler technology is a difficult task.
• Vector processing works well for certain kinds of problems. It is suitable mostly for scientific problems
(involving lots of matrix operations) and graphical processing. It is not useful for other areas, such as
databases.
• The technology of parallel processing is mature and can be exploited commercially; there is already
significant R&D work on development tools and environments.
• Significant development in networking technology is paving the way for heterogeneous computing.
The core elements of parallel processing are CPUs. Based on the number of instruction and data streams that
can be processed simultaneously, computing systems are classified into the following four categories: SISD, SIMD, MISD, and MIMD.
An SISD computing system is a uniprocessor machine capable of executing a single instruction, which
operates on a single data stream (see Figure 2.2). In SISD, machine instructions are processed sequentially;
hence computers adopting this model are popularly called sequential computers. Most conventional
computers are built using the SISD model. All the instructions and data to be processed have to be stored in
primary memory. The speed of the processing element in the SISD model is limited by the rate at which the
computer can transfer information internally. Dominant representative SISD systems are the IBM PC,
Macintosh, and workstations.
An SIMD computing system is a multiprocessor machine capable of executing the same instruction on all
the CPUs but operating on different data streams (see Figure 2.3). Machines based on an SIMD model are
well suited to scientific computing since they involve lots of vector and matrix operations. For instance,
statements such as Ci = Ai * Bi can be passed to all the processing elements (PEs); organized data elements of
vectors A and B can be divided into multiple sets (N sets for N-PE systems); and each PE can process one
data set. Dominant representative SIMD systems are Cray's vector processing machine and Thinking
Machines' Connection Machine (CM).
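The statement Ci = Ai * Bi above maps naturally onto array (data-parallel) operations. The sketch below is illustrative only; it uses NumPy on an ordinary CPU rather than a true SIMD machine, but it shows the same element-wise product expressed as a single operation applied across all data elements.

# The SIMD-style operation Ci = Ai * Bi expressed as a single array operation.
# NumPy applies the same multiplication across all elements, analogous to the
# same instruction being broadcast to all processing elements.

import numpy as np

A = np.array([1.0, 2.0, 3.0, 4.0])
B = np.array([10.0, 20.0, 30.0, 40.0])

C = A * B                     # one instruction, many data elements
print(C)                      # [ 10.  40.  90. 160.]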
An MISD computing system is a multiprocessor machine in which different PEs perform different operations
on the same data set. Machines built using the MISD model are not useful in most applications; a few machines
have been built, but none of them are available commercially. They became more of an intellectual exercise
than a practical configuration.
An MIMD computing system is a multiprocessor machine capable of executing multiple instructions on multiple data streams; MIMD machines are broadly categorized into shared-memory and distributed-memory models. In the distributed memory MIMD model, all PEs have a local memory. Systems based on this model are also
called loosely coupled multiprocessor systems. The communication between PEs in this model takes place
through the interconnection network (the interprocess communication channel, or IPC). The network
connecting PEs can be configured as a tree, mesh, cube, and so on. Each PE operates asynchronously, and if
communication/synchronization among tasks is necessary, they can do so by exchanging messages between
them. The shared-memory MIMD architecture is easier to program but is less tolerant to failures and harder
to extend with respect to the distributed memory MIMD model. Failures in a shared-memory MIMD affect
the entire system, whereas this is not the case of the distributed model, in which each of the PEs can be
easily isolated. Moreover, shared memory MIMD architectures are less likely to scale because the addition
of more PEs leads to memory contention. This is a situation that does not happen in the case of distributed
memory, in which each PE has its own memory. As a result, distributed memory MIMD architectures are
most popular today.
A sequential program is one that runs on a single processor and has a single line of control. To make many
processors collectively work on a single program, the program must be divided into smaller independent
chunks so that each processor can work on separate chunks of the problem. The program decomposed in
this way is a parallel program. A wide variety of parallel programming approaches are available. The most
prominent among them are the following:
• Data parallelism
• Process parallelism
• Farmer-and-worker model
These three models are all suitable for task-level parallelism. In the case of data parallelism, the divide-and-
conquer technique is used to split data into multiple sets, and each data set is processed on different PEs
using the same instruction. This approach is highly suitable to processing on machines based on the SIMD
model. In the case of process parallelism, a given operation has multiple (but distinct) activities that can be
processed on multiple processors. In the case of the farmer- and-worker model, a job distribution approach is
used: one processor is configured as master and all other remaining PEs are designated as slaves; the master
assigns jobs to slave PEs and, on completion, they inform the master, which in turn collects results. These
approaches can be utilized in different levels of parallelism.
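A compact sketch of the farmer-and-worker (master-slave) model follows, using Python's multiprocessing pool as the master that farms sub-tasks out to worker processes and collects their results. The work function (summing squares over a chunk) is just a placeholder for a real sub-task produced by divide-and-conquer.

# Farmer-and-worker sketch: the master (Pool) assigns sub-tasks to worker
# processes and collects results when they complete.

from multiprocessing import Pool

def work(chunk):
    return sum(x * x for x in chunk)          # placeholder sub-task

if __name__ == "__main__":
    data = list(range(1_000))
    chunks = [data[i:i + 250] for i in range(0, len(data), 250)]   # divide
    with Pool(processes=4) as master:                              # farmer
        partial_results = master.map(work, chunks)                 # workers
    print(sum(partial_results))                                    # conquer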
In the previous section, we discussed techniques and architectures that allow introduction of parallelism
within a single machine or system and how parallelism operates at different levels of the computing stack. In this
section, we extend these concepts and explore how multiple activities can be performed by leveraging
systems composed of multiple heterogeneous machines and systems. We discuss what is generally referred
to as distributed computing and more precisely introduce the most common guidelines and patterns for
implementing distributed computing systems from the perspective of the software designer.
Distributed computing studies the models, architectures, and algorithms used for building and managing
distributed systems. As a general definition of the term distributed system, we use the one proposed by
Tanenbaum et al. [1]:
A distributed system is a collection of independent computers that appears to its users as a single coherent
system. This definition is general enough to include various types of distributed computing systems that are
especially focused on unified usage and aggregation of distributed resources. A distributed system is one in
which components located at networked computers communicate and coordinate their actions only by
passing messages. As specified in this definition, the components of a distributed system communicate with
some sort of message passing. This is a term that encompasses several communication models.
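Because the components coordinate only by passing messages, the interaction can be illustrated with a small sketch (a toy example using TCP sockets on the local machine; the port number and message format are arbitrary choices, not from the text): one component sends a request, the other replies.

# Toy illustration of components coordinating only by message passing:
# one component sends a request over TCP, the other replies.

import socket
import threading

HOST, PORT = "127.0.0.1", 50507     # arbitrary local endpoint

srv = socket.socket()
srv.bind((HOST, PORT))
srv.listen(1)

def server():
    conn, _ = srv.accept()
    with conn:
        request = conn.recv(1024).decode()
        conn.sendall(f"reply to: {request}".encode())

threading.Thread(target=server, daemon=True).start()

with socket.socket() as client:
    client.connect((HOST, PORT))
    client.sendall(b"compute task 42")
    print(client.recv(1024).decode())

srv.close()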
Components of a distributed system
A distributed system is the result of the interaction of several components that
traverse the entire computing stack from hardware to software. It emerges from the collaboration of several
elements that—by working together—give users the illusion of a single coherent system. Figure 2.10
provides an overview of the different layers that are involved in providing the services of a distributed
system.
At the very bottom layer, computer and network hardware constitute the physical infrastructure; these
components are directly managed by the operating system, which provides the basic services for inter
process communication (IPC), process scheduling and management, and resource management in terms of
file system and local devices. Taken together these two layers become the platform on top of which
specialized software is deployed to turn a set of networked computers into a distributed system. The use of
well-known standards at the operating system level and even more at the hardware and network levels
allows easy harnessing of heterogeneous components and their organization into a coherent and uniform
system. For example, network connectivity between different devices is controlled by standards, which
allow them to interact seamlessly. At the operating system level, IPC services are implemented on top of
standardized communication protocols such as Transmission Control Protocol/Internet Protocol (TCP/IP), User
Datagram Protocol (UDP) or others. The middleware layer leverages such services to build a uniform
environment for the development and deployment of distributed applications. By relying on the services
offered by the operating system, the middleware develops its own protocols, data formats, and programming
language or frameworks for the development of distributed applications. All of them constitute a uniform
interface to distributed application developers that is completely independent from the underlying operating
system and hides all the heterogeneities of the bottom layers. The top of the distributed system stack is
represented by the applications and services designed and developed to use the middleware. These can serve
several purposes and often expose their
features in the form of graphical user interfaces (GUIs) accessible locally or through the Internet via a Web
browser. For example, in the case of a cloud computing system, the use of Web technologies is strongly
preferred, not only to interface distributed applications with the end user but also to provide platform
services aimed at building distributed systems. A very good example is constituted by Infrastructure-as-a-
Service (IaaS) providers such as Amazon Web Services (AWS), which provide facilities for creating virtual
machines, organizing them together into a cluster, and deploying applications and systems on top. Figure
2.11 shows an example of how the general reference architecture of a distributed system is contextualized in
the case of a cloud computing system.
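For example, provisioning a virtual machine through such an IaaS provider can be as simple as the following sketch using the AWS SDK for Python (boto3), which is not mentioned in the text. The AMI ID and instance type are placeholders, and valid AWS credentials are assumed; the snippet is only meant to show the shape of such a platform-service call.

# Sketch of requesting a virtual machine from an IaaS provider (AWS EC2 via
# boto3). AMI ID and instance type are placeholders; valid AWS credentials
# and permissions are assumed to be configured in the environment.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder machine image
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)

instance_id = response["Instances"][0]["InstanceId"]
print(f"launched virtual machine {instance_id}")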
The emergence of any technology and its wider acceptance is the result of a series of developments that
precede it. The invention of a new technology may not be accepted immediately by the larger society, whether
business or otherwise. Before the advent and acceptance of any new technology there is a lead-up period in
which various concepts and parts of the idea exist in different forms and expressions, followed by a sudden
boom when someone eventually discovers the right combination and the concept becomes a concrete reality,
accepted widely and to an unprecedented degree and depth.
1. Mainframe computing
Mainframes provided computing to business users for the first time. A mainframe computer is housed in
a computer center, and a unit of work to be carried out on it is called a job. Jobs are prepared and submitted to
the computer, and the outputs produced by processing are collected. Jobs are grouped into batches, and
processing is carried out daily, weekly or monthly depending on the nature of the job. For instance, payroll
applications run weekly, whereas accounts payable and receivable jobs are submitted on a monthly basis.
Advantages
1. Mainframe computing brought computing to the business domain for the first time. It helped businesses to
carry out mundane and routine jobs such as payroll, accounts, and inventory, thus sparing employees from
tedious work.
Disadvantages
1. Mainframe computing represented a centralized model of computing. It was available in one location, and
anyone who needed it had to go to the computer center to use it.
2. PersonalComputing
Personal computing or desktop computing heralded a new direction in computing by providing computers to
each employee on their desktop or workspace. It decentralized computing and empowered every employee
with the required computing at her disposal. It consisted of a personal computer small enough to fit conveniently
in an individual workspace. Every category of employee started using computers in their domains: accounts,
inventory, payroll and more.
Advantages
● Less expensive, easy to upgrade, and fewer accessories needed
Disadvantages
● Lack of portability, power use, and the need for monitors and peripherals
3. NetworkComputing
One of the drawbacks of desktop computing is that information sharing with other users is a tedious process.
You need to copy and carry it on a secondary storage device to share it. In a workplace environment, where
people have to produce and share information, this becomes a challenge. Network computing offered a
solution to overcome this. Since networked computers can share information, it is possible to use them in
the workplace so that workers can seamlessly exchange the information they need or want to share with others.
Networked computers on a Local Area Network (LAN) achieved this. In the networked computing model, a
relatively powerful computer (the server) is loaded with all the software needed, and each user is provided with
a connected terminal to access it and work.
4. InternetComputing
While network computing such as a LAN connects users within an office or institution, Internet computing
is used to connect organizations located in different geographical locations.
5. GridComputing
There are occasions wherein the computing power available within an enterprise is not sufficient to carry out
the computing task on hand. It may also be possible that data required for the processing is generated at
various geographical locations. In such cases Grid Computing is used. Grid computing requires the use of
software that can divide and farm out pieces of a program as one large system image to several thousand
computers.
6. CloudComputing
While grid computing helped to gather computing power from other institutions, it could be used only by
participating organizations or privileged people. Grids also lacked a commercial model. Cloud computing uses
many features of earlier systems such as the Grid, but extends computing to a larger population by way of
pay-per-use.
Utility computing
Utility computing can be defined as the provision of computational and storage resources as a metered
service, similar to those provided by a traditional public utility company. This, of course, is not a new
idea. This form of computing is growing in popularity, however, as companies have begun to extend the
model to a cloud computing paradigm providing virtual servers that IT departments and users can access
on demand. Early enterprise adopters used utility computing mainly for non-mission-critical needs, but
that is quickly changing as trust and reliability issues are resolved.
Some people think cloud computing is the next big thing in the world of IT. Others believe it
is just another variation of the utility computing model that has been repackaged in this decade as
something new and cool. However, it is not just the buzzword "cloud computing" that is causing confusion
among the masses. Currently, with so few cloud computing vendors actually practicing this form of
technology and also almost every analyst from every research organization in the country defining the
term differently, the meaning of the term has become very nebulous. Even among those who think they
understand it, definitions vary, and most of those definitions are hazy at best. To clear
the haze and make some sense of the new concept, this book will attempt to help you understand just what
cloud computing really means, how disruptive to your business it may become in the future, and what
its advantages and disadvantages are.
As we said previously, the term "the cloud" is often used as a metaphor for the Internet and has
become a familiar cliché. However, when the cloud is combined with computing, it causes a lot of
confusion. Market research analysts and technology vendors alike tend to define cloud computing very
narrowly, as a new type of utility computing that basically uses virtual servers that have been made
available to third parties via the Internet. Others tend to define the term using a very broad, all-
encompassing application of the virtual computing platform. They contend that anything beyond the
firewall perimeter is in the cloud. A more tempered view of cloud computing considers it the delivery of
computational resources from a location other than the one from which you are computing.
Forrester: Cloud computing: "A pool of abstracted, highly scalable, and managed infrastructure capable of
hosting end-customer applications and billed by consumption."
Cloud computing is using the internet to access someone else's software running on someone else's hardware
in someone else's data center. -- Lewis Cunningham
Many people mistakenly believe that cloud computing is nothing more than the Internet given a
different name. Many drawings of Internet-based systems and services depict the Internet as a cloud, and
people refer to applications running on the Internet as running in the cloud, so the confusion is
understandable. The Internet has many of the characteristics of what is now being called cloud computing.
When you store your photos online instead of on your home computer, or use webmail or a social
networking site, you are using a cloud computing service. If you are an organization, and you want to use,
for example, an online invoicing service instead of updating the in-house one you have been using for many
years, that online invoicing service is a cloud computing service.
Cloud computing is the delivery of computing services over the Internet. Cloud services allow
individuals and businesses to use software and hardware that are managed by third parties at remote
locations. Examples of cloud services include online file storage, social networking sites, webmail, and
online business applications. The cloud computing model allows access to information and computer
resources from anywhere that a network connection is available. Cloud computing provides a shared pool of
resources, including data storage space, networks, computer processing power, and specialized corporate
and user applications.
Cloud Architecture
Cloud Service Models
Cloud Deployment Models
Essential Characteristics of Cloud Computing
Types of cloud:
1. Public/External cloud.
2. Hybrid/Integrated cloud.
3. Private/Internal cloud.
4. Community/Vertical clouds.
1. Public/External cloud:
The cloud infrastructure is made available to the general public or a large industry group and is owned by an
organization selling cloud services. A public cloud (also called an external cloud) is one based on the standard
cloud computing model, in which a service provider makes resources, such as applications and storage,
available to the general public over the Internet. Public cloud services may be free or offered on a pay-per-usage
model.
A public cloud is hosted, operated, and managed by a third-party vendor from one or more data centers.
In a public cloud, security management and day-to-day operations are relegated to the third-party
vendor, who is responsible for the public cloud service offering.
Hence, the customer of the public cloud service offering has a low degree of control and oversight of the
physical and logical security aspects of a private cloud.
The main benefits of using a public cloud service are:
Easy and inexpensive set-up, because hardware, application and bandwidth costs are covered by
the provider.
Scalability to meet needs.
No wasted resources, because you pay for what you use.
Examples of public clouds include:
Amazon Elastic Compute Cloud (EC2)
IBM's Blue Cloud
Google App Engine
Windows Azure Services Platform
2. Hybrid/Integrated cloud:--
A hybrid cloud is a composition of at least one private cloud and at least one public cloud. A hybrid cloud is
typically offered in one of two ways: a vendor has a private cloud and forms a partnership with a public cloud
provider, or a public cloud provider forms a partnership with a vendor that provides private cloud platforms. A
hybrid cloud is a cloud computing environment in which an organization provides and manages some resources
in‐house and has others provided externally.
For example, an organization might use a public cloud service, such as Amazon Simple Storage
Service (Amazon S3) for archived data, but continue to maintain in-house storage for operational
customer data.
3. Private/Internal cloud:--
The cloud infrastructure is operated solely for a single organization. It may be managed by the organization or a
third party, and may exist on-premises or off-premises. Private cloud (also called internal cloud or corporate
cloud) is a marketing term for a proprietary computing architecture that provides hosted services to a limited
number of people behind a firewall.
Marketing media that uses the words "private cloud" is designed to appeal to an organization that needs
or wants more control over their data than they can get by using a third‐party hosted service such as
Amazon's Elastic Compute Cloud (EC2) or Simple Storage Service (S3).
A variety of private cloud patterns have emerged:
Dedicated: Private clouds hosted within a customer‐owned data center or at a collocation facility, and
operated by internal IT departments
Community: Private clouds located at the premises of a third party; owned, managed, and operated by a
vendor who is bound by custom SLAs and contractual clauses with security and compliance requirements
Managed: Private cloud infrastructure owned by a customer and managed by a vendor
4. Community/Vertical clouds
Community clouds are a deployment pattern suggested by NIST, where semi‐private clouds will be formed
to meet the needs of a set of related stakeholders or constituents that have common requirements or
interests.
The cloud infrastructure is shared by several organizations and supports a specific community that has
shared concerns (e.g., mission, security requirements, policy, or compliance considerations). It may be
managed by the organizations or a third party and may exist on-premises or off-premises.
A community cloud may be private for its stakeholders, or may be a hybrid that integrates the respective
private clouds of the members, yet enables them to share and collaborate across their clouds by exposing
data or resources into the community cloud.
ESSENTIAL CHARACTERISTICS:--
Broad network access:-- Access to resources in the cloud is available over the network using
standard methods, in a manner that provides platform-independent access to clients of all types.
This includes a mixture of heterogeneous operating systems, and thick and thin platforms such as
laptops, mobile phones, and PDAs.
Resource pooling:-- The provider's computing resources are pooled to serve multiple consumers
using a multi-tenant model, with different physical and virtual resources dynamically assigned and
reassigned according to consumer demand.
Rapid elasticity:-- Capabilities can be rapidly and elastically provisioned, in some cases
automatically, to quickly scale out, and rapidly released to quickly scale in. To the consumer, the
capabilities available for provisioning often appear to be unlimited and can be purchased in any
quantity at any time.
Measured service:-- Cloud systems automatically control and optimize resource usage by
leveraging a metering capability at some level of abstraction appropriate to the type of service.
Resource usage can be monitored, controlled, and reported, providing transparency for both the
provider and consumer of the service (a minimal sketch of elasticity and metering follows this list).
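The interplay of rapid elasticity and measured service can be sketched as below (an illustration only; the thresholds, price, and load metric are invented): usage is metered per instance-hour, and capacity is scaled out or in automatically based on the observed load.

# Illustrative sketch of measured service plus rapid elasticity: usage is
# metered per instance-hour, and instances are added or released automatically
# based on observed load. Thresholds and price are invented.

PRICE_PER_INSTANCE_HOUR = 0.10     # hypothetical rate
SCALE_OUT_AT, SCALE_IN_AT = 0.80, 0.30

def autoscale(instances: int, cpu_utilization: float) -> int:
    """Scale out when busy, scale in when idle (never below one instance)."""
    if cpu_utilization > SCALE_OUT_AT:
        return instances + 1
    if cpu_utilization < SCALE_IN_AT and instances > 1:
        return instances - 1
    return instances

instances, metered_hours = 1, 0.0
for hour, load in enumerate([0.85, 0.90, 0.60, 0.20, 0.10]):   # observed load per hour
    instances = autoscale(instances, load)
    metered_hours += instances
    print(f"hour {hour}: load={load:.2f} -> {instances} instance(s)")

print(f"bill: {metered_hours * PRICE_PER_INSTANCE_HOUR:.2f} (pay per use)")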
BENEFITS OF CLOUD COMPUTING:--
Business Benefits of Cloud Computing
Technical Benefits of Cloud Computing
Business Benefits
Almost zero upfront infrastructure investment
Just-in-time infrastructure
More efficient resource utilization
Usage-based costing
Reduced time to market
Technical Benefits
Automation – "scriptable infrastructure"
Auto-scaling
Proactive scaling
More efficient development lifecycle
Improved testability
Disaster recovery and business continuity.
Figure.b
SOA allows for different component and client construction, as well as access to each using different protocols.
By contrast, a compound business process that uses choreography has no central coordination function. In
choreography, each Web service that is part of a business process is aware of when to process a message and
with which client or component it needs to interact. Choreography is a collaborative effort where the logic of
the business process is pushed out to the members who are responsible for determining which operations to
execute and when to execute them, the structure of the messages to be passed and their timing, and other factors.
Figure.b illustrates the nature of choreography.
What isn't clear from Figure.b but is shown in Figure.c (orchestration) and Figure.d (choreography) is
that business processes are conducted using a sequence, in parallel, or simply by being invoked (called to). An
execution language like WS-BPEL provides commands for defining logic using conditional statements, loops,
variables, fault handlers, and other constructs. Because a business process is a collection of activity graphs,
complex processes are often shown as part of Unified Modeling Language (UML) diagrams. UML is the
modeling language of the Object Management Group that provides a method for creating visual models for
software in the form of 14 types of diagrams. Some of the diagram types are structure, behavior, class,
component, object, interaction, state, and sequence.
FIGURE.C
An orchestrated business process uses a central controlling service or element, referred to as the orchestrator,
conductor, or less frequently, the coordinator.
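The difference between the two styles can be sketched in code (a conceptual illustration only; plain functions stand in for Web services, and the service names are invented): under orchestration a central controller invokes each service and decides the flow, whereas under choreography each service reacts to the message it receives and forwards work to the next participant on its own.

# Conceptual sketch: orchestration uses a central controller; choreography
# lets each participant decide, on receiving a message, what to do next.
# Plain functions stand in for Web services.

def check_credit(order):   return {**order, "credit_ok": True}
def reserve_stock(order):  return {**order, "reserved": True}
def ship(order):           return {**order, "shipped": True}

# Orchestration: the orchestrator holds the process logic and calls each service.
def orchestrator(order):
    order = check_credit(order)
    order = reserve_stock(order)
    return ship(order)

# Choreography: each service knows which participant to notify next.
def credit_service(order):   return stock_service(check_credit(order))
def stock_service(order):    return shipping_service(reserve_stock(order))
def shipping_service(order): return ship(order)

print(orchestrator({"id": 1}))
print(credit_service({"id": 2}))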
An ESB may be part of a network OS or may be implemented using a set of middleware products. An ESB creates
a virtual environment layered on top of an enterprise messaging system where services are advertised and
accessed. Think of an ESB as a message transaction system. IBM's WebSphere ESB, for example, supports
standards such as WS-Policy and Kerberos security, and it runs on the WebSphere Application Server. It is
interoperable with Open SCA, and WebSphere ESB contains both a Service Federation Management tool and an
integrated Registry and Repository function.
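The following Python sketch captures, in toy form, two of the ESB ideas just described: services advertise themselves on a bus, and clients reach them through the bus rather than directly. It is an in-memory illustration only; the class and service names are invented and no real middleware product (WebSphere ESB or otherwise) is being modeled.

```python
class MiniESB:
    """Toy enterprise service bus: a service registry plus message routing."""

    def __init__(self):
        self._registry = {}  # advertised service name -> endpoint (a callable here)

    def advertise(self, name, endpoint):
        """A service publishes (advertises) itself on the bus."""
        self._registry[name] = endpoint

    def send(self, service_name, message):
        """A client sends a message to a named service via the bus, not to the service directly."""
        endpoint = self._registry.get(service_name)
        if endpoint is None:
            raise LookupError(f"no service advertised as {service_name!r}")
        return endpoint(message)

bus = MiniESB()
bus.advertise("billing", lambda msg: {"invoiced": msg["amount"]})  # hypothetical service
print(bus.send("billing", {"amount": 99.0}))
```

A production ESB would add protocol translation, message transformation, and security on top of this basic advertise-and-route pattern.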
These typical features are found in ESBs, among others:
Roles: The IBM Cloud Computing Reference Architecture defines three main roles:
Cloud Service Consumer,
Cloud Service Provider, and
Cloud Service Creator.
Each role can be fulfilled by a single person, a group of people, or an organization.
The roles defined here are intended to capture the common set of roles typically encountered
in any cloud computing environment.
It is therefore important to note that, depending on a particular cloud computing scenario or
specific cloud implementation, there may be project-specific sub-roles defined.
1. What is cloud computing? What are the various driving forces for making use of cloud
computing?
Cloud computing is a technology that uses the Internet and central remote servers to maintain data and
applications. It allows consumers and businesses to use applications without installation and to
access their personal files from any computer with Internet access. More generally, it is the use of computing
resources (hardware and software) that are delivered as a service over a network (typically the Internet). The
name comes from the use of a cloud-shaped symbol as an abstraction for the complex infrastructure it
represents in system diagrams. Cloud computing entrusts remote services with a user's data, software, and computation.
Reasons to Make the Switch to Cloud Computing
Saves time: Businesses that utilize software programs for their management needs are at a disadvantage
because of the time needed to get new programs to operate at functional levels. By turning to cloud
computing, you avoid these hassles; you simply need access to a computer with Internet to view the
information you need.
Fewer glitches: Applications serviced through cloud computing require fewer versions; upgrades are
needed less frequently and are typically managed by data centers. Businesses often experience problems
with software because different packages are not designed to work together, so departments cannot
share data because they use different applications. Cloud computing enables users to integrate various
types of applications, including management systems, word processors, and e-mail. The fewer the glitches,
the more productivity can be expected from employees.
Going green: On average, individual personal computers are used at only approximately 10 to 20 percent
of their capacity. Similarly, computers are left idle for hours at a time, soaking up energy. Pooling
resources into a cloud consolidates energy use. Essentially, you save on costs by paying only for what you use
and by extending the life of your PC.
Fancy technology: Cloud computing offers customers access to more computing power than is ordinarily
available through a standard PC. Applications now use virtual power, and users can even build virtual
assistants, which automate tasks such as ordering, managing dates, and offering reminders for upcoming
meetings.
Mobilization: The services you need are available from just about anywhere in the world. Sales are
conducted over the phone, and leads are tracked using a cell phone. Cloud computing opens users up to
a whole new world of wireless devices, all of which can be used to access applications. Companies
are taking sales productivity to a whole new level while, at the same time, providing their sales
representatives with high-quality, professional devices to motivate them to do their jobs well.
Consumer trends: The most successful business practices are the ones that reflect consumer trends.
Currently, over 69 percent of Americans with Internet access use some form of cloud computing. Whether it
is Web e-mail, data storage, or software, this number continues to grow. Consumers are looking to
conduct business with a modern approach.
Social media: Social networking is the wave of the future among entrepreneurs. Companies are using
social networking sites such as Twitter, Facebook, and LinkedIn to heighten their productivity levels.
Blogs are used to communicate with customers about improvements that need to be made within
companies, and LinkedIn is a popular website used by business professionals for collaboration.
Customize: All too often, companies purchase the latest software in the hope that it will improve their sales.
Sometimes, programs do not quite meet the needs of a company; some businesses require a personalized
touch that ordinary software cannot provide. Cloud computing gives the user the opportunity to build
custom applications on a user-friendly interface. In a competitive world, your business needs to stand out
from the rest.
No need for hardware hiccups
IT staff cuts: When all the services you need are maintained by experts outside your business, there is no need
to hire new IT staff.
Low Barriers to Entry
A major benefit to cloud computing is the speed at which you can have an office up and running. Mordecai notes that he
could have a server functional for a new client within a few hours, although doing the research work to assess a particular
planner's needs and get them fully operating could take a week or two.
Improving Security
Obviously, the security of cloud computing is a major issue for anyone considering a switch.
"The data is secure because it is being accessed through encryption set up by people smarter than us," says Dave
Williams, CFP®, of Wealth Strategies Group in Cordova, Tenn. "Clients like accessing their data through a
cloud environment because they know it's secure, they know they can get access to it, and they know we are able
to bring together a lot of their records."
Increased Mobility
One of the major benefits in cloud computing for Lybarger is the instant mobility.
"I used to work with a large broker-dealer and when I was traveling, I sometimes would have difficulty getting
my computer connected to the Internet with all of the proprietary software on my laptop," he says. "There were
times when I was traveling when I wanted to be able to take care of a client's business on the spot, but I wasn't
able to. Now, I can do it in an instant."
Limitless Scalability
If you're looking to grow, the scalability of cloud computing could be a big selling point. With applications
software, you can buy only the licenses you need right now, and add more as needed. The same goes for storage
space, according to Lybarger.
Strong Compliance
Planners who are already in the cloud believe that their compliance program is stronger than it was before. For
Thornton, who is registered with the state of Georgia (not large enough to require registration with the SEC), his
business continuity plan includes an appendix that lists all the Web sites and his user names and passwords so
that, in his words, "If I get run over by a truck tomorrow, whoever comes in to take over can access my business
continuity plan and pretty much pick up where I left off."
Volatility: New cloud vendors appear almost on a daily basis. Some will be credible, well resourced, and
professional. Others, not so much. Some are adding cloud to their conventional IT services to stay in the race,
and others are new entrants that are, as they say, cloud natives, in which case they do not suffer the pains and
challenges of reengineering legacy business models and support processes for the cloud. How can a CFO
perform due diligence on a provider's viability if it's new to the market and backed by impatient startup capital
that's expecting quick and positive returns? Are you concerned about the potentially complex nest of providers
that sit behind your provider's cloud offering? That is, the cloud providers that store its data, handle its
transactions, or manage its network? In the event that your provider ceases to exist, can they offer you
protection in the form of data escrow?
The cloud ecosystem is far more complex than the on-premises world, even if it doesn't appear that way at first
blush. When you enter the cloud, have an exit strategy, and be sure you can execute it.
Legal precedent: To date, there are few legal precedents available to help shape the cloud decision-making
landscape and assist you in your decision-making processes. For example, last February, in an attempt to define
copyright in the cloud and what constitutes "fair use," Google in effect sided with ReDigi (a provider that allows
users to store, buy, sell, and stream pre-owned digital music) against Capitol Records in a record industry
lawsuit.
Enterprises should develop an effective listening strategy for the latest court proceedings and decisions in the
legal jurisdictions in which they and their major customers operate. This can be as easy as creating your own
Google Alert for keywords that are relevant to your company, industry, or regulatory environment.
Alternatively, ask your legal advisers what notification services they can offer you, or just subscribe to online
services such as findlaw.com.
Learning from these proceedings and early court decisions will help you avoid pitfalls that your competitors
may encounter.
Legislative and regulatory maturity: Lawyers, auditors, legislators, and regulators are still coming to grips
with cloud in its various forms. Navigating the complexities associated with the legislation and regulations that
can affect you and your provider's cloud ecosystem can be daunting, especially if you're operating across
multiple legal and international jurisdictions. For example, the US National Institute of Standards and
Technology has a well-defined Cloud Reference Architecture in which the role of Cloud Auditor is defined:
"Audits are performed to verify conformance to standards." Unfortunately, there are presently no universally
adopted standards for cloud computing, although a number of bodies (mostly sponsored by selected
vendors, such as the Cloud Security Alliance) are attempting to define them in the areas of security,
interoperability, governance, and so on.[4]
The contract: In the public cloud model, the contract between your organization and your cloud provider takes
center stage. Your contract should be balanced and reflect appropriate penalties and protections in the event of
non-performance by your provider. This may be easier said than done: you may not have sufficient financial
leverage to negotiate variations to the cloud provider's standardized contract. If the contract terms are mostly
favorable to the provider, yet the commercial benefits appear compelling to your organization, it may be worth
pricing the risk into your business case and then reassessing your position.
3. Explain cloud architectures with the help of block schematics. What are the various applications that
are provided by
i. Software as a Service (SaaS)
ii. Platform as a Service (PaaS)
iii. Infrastructure as a Service (IaaS)?
Note the definition of a common cloud management platform that delivers the business support systems (BSS) and
operational support systems (OSS) needed to deliver the different types of cloud services. The sophistication of these
BSS and OSS capabilities depends on the level of characteristics needed to deliver the cloud services. For
example, to support flexible pricing models, a public cloud service provider would need all of the BSS
capabilities along with the OSS metering capability. On the other hand, an enterprise that has chargeback
mechanisms in place will need the BSS billing capability along with the OSS metering capability.
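As a sketch of how the OSS metering capability feeds the BSS billing capability for internal chargeback, the following Python example prices metered usage records with a simple rate card. The rates, departments, and resource names are illustrative assumptions, not figures from any real BSS.

```python
# Hypothetical rate card: resource -> price per metered unit
RATE_CARD = {"cpu_hours": 0.05, "storage_gb_month": 0.02}

# Metered usage records as produced by the OSS metering capability (illustrative data)
usage_records = [
    {"department": "finance",   "resource": "cpu_hours",        "units": 800},
    {"department": "finance",   "resource": "storage_gb_month", "units": 500},
    {"department": "marketing", "resource": "cpu_hours",        "units": 120},
]

def chargeback(records, rates):
    """BSS billing step: aggregate metered usage into a per-department charge."""
    bills = {}
    for rec in records:
        cost = rec["units"] * rates[rec["resource"]]
        bills[rec["department"]] = bills.get(rec["department"], 0.0) + cost
    return bills

print(chargeback(usage_records, RATE_CARD))   # e.g. {'finance': 50.0, 'marketing': 6.0}
```

A public provider's flexible pricing would add tiers, discounts, and invoicing on top of the same metering-to-billing flow.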
Business requirements drive the cloud service offerings, and the range of offerings must support the different
requirements, including customers using cloud computing to supplement traditional IT.
Note that the cloud architecture will need a consistent capability to monitor and control heterogeneous components
across traditional IT and cloud. Furthermore, with different loosely coupled workloads emerging, the cloud
architecture will need to provide support for workload-focused offerings, including analytics, application
development/test, and collaboration/e-mail services. Technical requirements drive the underlying IT management
patterns, including a focus on handling the top adoption factors influencing cloud services, i.e., trust, security,
availability, and SLA management. Figure 3 summarizes the main capabilities in the operational support
systems. The architecture must focus on handling the major concerns of enterprises by facilitating
internal/external cloud interoperability. This requires the architecture, for example, to handle licensing and
security issues that span traditional IT, private clouds, and public clouds. Additionally, the
architecture must support a self-service paradigm for managing clouds through a portal, which requires a robust and
easy-to-use service management solution. A portal is key to accessing the catalog of services and to managing security
services. Of course, all of these services must be provided on top of a virtualized infrastructure of the underlying
IT resources that are needed to provide cloud services.
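To illustrate the self-service portal and service catalog idea described above, here is a minimal Python sketch in which a user picks an offering from a catalog and the request is fulfilled from a virtualized resource pool. The catalog entries and pool sizes are invented for the example and do not correspond to any specific management product.

```python
# Hypothetical service catalog exposed through a self-service portal
CATALOG = {
    "small_vm":  {"vcpus": 1, "ram_gb": 2},
    "medium_vm": {"vcpus": 2, "ram_gb": 8},
}

class ResourcePool:
    """Toy virtualized infrastructure pool backing the catalog."""

    def __init__(self, total_vcpus=16, total_ram_gb=64):
        self.free_vcpus, self.free_ram_gb = total_vcpus, total_ram_gb

    def provision(self, offering):
        """Fulfil a self-service request by reserving capacity from the shared pool."""
        spec = CATALOG[offering]
        if spec["vcpus"] > self.free_vcpus or spec["ram_gb"] > self.free_ram_gb:
            raise RuntimeError("insufficient capacity")
        self.free_vcpus -= spec["vcpus"]
        self.free_ram_gb -= spec["ram_gb"]
        return {"offering": offering, **spec}

pool = ResourcePool()
print(pool.provision("medium_vm"))            # self-service request fulfilled from the pool
print(pool.free_vcpus, pool.free_ram_gb)      # remaining capacity after provisioning
```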
Examples of IaaS include: Amazon CloudFormation (and underlying services such as Amazon EC2),
Rackspace Cloud, Terremark, and Google Compute Engine.
Examples of PaaS include: Amazon Elastic Beanstalk, Cloud Foundry, Heroku, Force.com, Engine Yard,
Mendix, Google App Engine, Microsoft Azure, and OrangeScape.
Examples of SaaS include: Google Apps, innkeypos, QuickBooks Online, Limelight Video Platform,
Salesforce.com, and Microsoft Office 365.
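As one concrete way to see the IaaS model in action, the sketch below requests a virtual machine from Amazon EC2 using the boto3 library. It assumes that AWS credentials and a default region are already configured in the environment, and the AMI ID is a placeholder; treat this as an illustration of "infrastructure on demand" rather than a recommended production setup.

```python
import boto3

def launch_small_vm(image_id, instance_type="t2.micro"):
    """Provision a single on-demand EC2 instance (IaaS: compute as a service)."""
    ec2 = boto3.client("ec2")            # credentials and region come from the environment
    response = ec2.run_instances(
        ImageId=image_id,                # placeholder AMI ID supplied by the caller
        InstanceType=instance_type,
        MinCount=1,
        MaxCount=1,
    )
    return response["Instances"][0]["InstanceId"]

# Example call (placeholder AMI ID; replace with a real image in your region):
# print(launch_small_vm("ami-0123456789abcdef0"))
```

PaaS and SaaS offerings sit above this layer: with PaaS the platform manages the instances for you, and with SaaS you consume only the finished application.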