Chapter 2: Review of Literature

2.1 Overview
Hemant D. Vasava 8
Review of Literature
Sharing of data: The main benefit of constructing a distributed shared
memory system is that it provides an environment in which users at one site
can access data located at other sites. For example, in a distributed
university system where each campus stores data related to its own campus,
it is possible for a user at one campus to access data at another campus.
Without this ability, transferring student records from one campus to another
would have to rely on some external mechanism coupling the existing
systems.
The facilities of a failed site are not utilized by the system until the recovery
action is completed. Once the repair or recovery mechanism finishes, the
failed site comes back into the system and participates in execution again. In
a centralized system the recovery mechanism is simple to implement, but in
distributed computing systems it is far more complicated. The ability of most
of the system to remain operational despite the failure of one site results in
increased availability, which is critical for shared data systems used in
real-time applications. For example, an airline system may lose ticket buyers
to its competitors because of a loss of access to its data.
The model defines how the different components are configured in a
distributed architecture, and according to this property several models are
available. The most prominent ones are described below. In the subsequent
Fig. 2.2 to 2.4, P denotes a processor, M denotes memory, and disks are
depicted as cylinders. Memory, disks, and topology can be configured
according to the requirements of the system's application (3,5).
In the shared memory model, different processors access a shared memory
region through a bus or a high-speed network. The main benefit of the shared
memory model is that in this kind of architecture processors can access
shared memory directly.
This shared memory architecture usually places large memory caches on
each processor, so that accesses to the shared memory are avoided
whenever possible. Even so, some of the shared data are not cached, and
those accesses go to the shared memory region. The cached data must also
be kept in a coherent view: if shared data are updated in local memory by any
processor, the copies cached by other processors must be updated or
invalidated (3). As the number of processors grows, maintaining cache
coherency becomes an overhead. Shared memory computers are therefore
not scalable beyond a certain limit and cannot accommodate more than a
small number of processors.
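The idea of several processors working on one shared memory region can be sketched with threads; this is an illustrative simulation only (the names and the use of a lock in place of bus arbitration are this sketch's assumptions, not the thesis's implementation):

```python
import threading

# Minimal sketch: several "processors" (threads) updating one shared
# memory region. The lock stands in for arbitration on the shared bus;
# without it the concurrent updates would race.
shared = {"counter": 0}          # the shared memory region
lock = threading.Lock()

def worker(n_updates: int) -> None:
    for _ in range(n_updates):
        with lock:               # serialize access, as the bus would
            shared["counter"] += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(shared["counter"])  # → 40000: every update reached the shared region
```

As the sketch suggests, all traffic funnels through one shared region, which is the scalability bottleneck the paragraph above describes.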
In the shared disk model each processor has its own local memory, and all
processors can access every disk via a bus or interconnection network, as
shown in Fig. 2.3. The shared disk architecture has two advantages over the
shared memory system. First, it is a less expensive architecture that provides
a good level of fault tolerance. Second, since each processor has its own
local memory, the memory bus does not become a bottleneck. If an individual
processor, process, or local memory fails, the other processors can take over
its task because the shared data reside on the shared disks. It is also
possible to build additional fault tolerance with a RAID architecture, which has
been found suitable in many applications (3).
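The failover property described above can be illustrated with a toy model; all names and data here are invented for illustration and are not from the thesis:

```python
# Toy sketch of the shared disk model: each processor has private local
# memory, while all shared data live on disks reachable by every
# processor, so a surviving processor can take over a failed peer's task.
shared_disk = {"orders": ["o1", "o2"]}       # data on the shared disks

class Processor:
    def __init__(self, name):
        self.name = name
        self.local_memory = {}               # private, not shared

    def run_task(self):
        # pull the shared data from the shared disk into local memory
        self.local_memory["orders"] = list(shared_disk["orders"])
        return f"{self.name} processed {len(self.local_memory['orders'])} orders"

p1, p2 = Processor("P1"), Processor("P2")
# if P1 fails, P2 takes over: everything it needs is on the shared disk
print(p2.run_task())  # → "P2 processed 2 orders"
```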
The hierarchical model combines the features of the shared disk, shared
memory, and shared nothing models. At the upper level, it contains a
high-speed interconnection network of nodes that do not share memory or
disk with each other (3,10). Thus, the top layer is a shared nothing
architecture. Each node of the system can itself be a single shared memory
network, as depicted in Fig. 2.5. Alternatively, each node could be a shared
disk structure, and each of the systems sharing a set of disks could internally
be a shared memory structure. Thus, a system could be constructed as a
hierarchy, with a shared memory model with a few processors at the base, a
shared nothing model at the top, and possibly a shared disk configuration in
the middle (11).
Fig. 2.5
The way in which the sites of the system are interconnected is known as the
topology (14). The two main types of topology are static and dynamic. Since
sites can be connected through the network in different fashions, various
topologies are defined accordingly, as shown in Fig. 2.6.
As illustrated in Fig. 2.7, migration and replication are two frequent strategies
for designing shared memory (24). In replication, multiple copies of the same
data are stored in the local memories of each node, or a copy of the
requested data is replicated to the requesting node. In the migration scheme
only a single copy of the shared data is available in the system, and it is
shipped to the requesting site. Based on these data distribution strategies,
DSM algorithms are classified into four types, as shown in Fig. 2.8 (24-25).
To minimize coherence overhead, the developer has to select a specific data
handling scheme and a pattern of shared data management.
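The difference between the two strategies can be sketched as follows; the class and method names are invented for this illustration and do not come from the systems surveyed:

```python
# Hypothetical sketch contrasting the two data distribution strategies
# described above, for a toy DSM of three nodes.
class ToyDSM:
    def __init__(self, num_nodes: int):
        # each node's local memory: name -> value
        self.memories = [{} for _ in range(num_nodes)]

    def write(self, node: int, name: str, value):
        self.memories[node][name] = value

    def replicate(self, name: str, src: int, dst: int):
        # replication: copy the data; both nodes now hold it
        self.memories[dst][name] = self.memories[src][name]

    def migrate(self, name: str, src: int, dst: int):
        # migration: ship the single copy; the source loses it
        self.memories[dst][name] = self.memories[src].pop(name)

dsm = ToyDSM(3)
dsm.write(0, "X", 42)
dsm.replicate("X", src=0, dst=1)   # nodes 0 and 1 both hold X
dsm.migrate("X", src=0, dst=2)     # node 2 holds X; node 0 no longer does
print("X" in dsm.memories[0], "X" in dsm.memories[1], "X" in dsm.memories[2])
# → False True True
```

Replication multiplies the copies that must later be kept coherent, while migration keeps a single copy at the cost of shipping it on every remote request; this is the trade-off behind the coherence overhead mentioned above.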
The horizontal line separates the execution of each process. The notation
W(X)1 denotes that the value 1 has been written to X, and R(X)1 that the
value 1 has been read from X. Process P1 writes the value 1 to location X,
which is subsequently read by process P2. This illustrates the behaviour
required by the strict consistency model.
Sequential Consistency: Although it is the most natural model, it greatly
restricts the use of many performance optimizations commonly employed by
uniprocessor hardware and compiler designers, thereby reducing the benefit
of using a multiprocessor (30). A multiprocessor system is sequentially
consistent if the result of any execution is the same as if the operations of all
the processors were executed in some sequential order, and the operations
of each individual processor appear in this sequence in the order specified by
its program, as described in Fig. 2.10. One of the two aspects of this
consistency model is to preserve program order within each individual
process's execution. The second is to maintain a single total order among the
operations of all processors, so that each operation appears to execute
atomically with respect to other memory operations.
P1:                          P2:
    FlagX = 1;                   FlagY = 1;
    if (FlagY == 0)              if (FlagX == 0)
        /* critical section */       /* critical section */
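Under sequential consistency, the two flag-setting processes above can never both enter the critical section, because program order is preserved in every legal interleaving. This can be checked exhaustively with a small simulation (a sketch written for this review, not code from the cited work):

```python
# Enumerate all sequentially consistent interleavings of the two
# flag-setting programs and check that both processes can never enter
# the critical section at the same time.
P1 = [("write", "FlagX", 1), ("read", "FlagY")]   # P1 enters CS if FlagY == 0
P2 = [("write", "FlagY", 1), ("read", "FlagX")]   # P2 enters CS if FlagX == 0

def interleavings(a, b):
    # all merges of a and b that preserve each program's own order
    if not a: yield list(b); return
    if not b: yield list(a); return
    for rest in interleavings(a[1:], b): yield [a[0]] + rest
    for rest in interleavings(a, b[1:]): yield [a[0]] + rest

both_entered = False
for order in interleavings(P1, P2):
    mem = {"FlagX": 0, "FlagY": 0}
    entered = {"FlagX": False, "FlagY": False}    # keyed by the flag read
    for op in order:
        if op[0] == "write":
            mem[op[1]] = op[2]
        elif mem[op[1]] == 0:                     # saw the flag still clear
            entered[op[1]] = True
    if entered["FlagX"] and entered["FlagY"]:
        both_entered = True
print(both_entered)  # → False: sequential consistency forbids it
```

A memory model that allowed either write to be delayed past the other process's read (as relaxed models do) could let both reads return 0 and both processes enter; sequential consistency rules this out.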
Consider the event sequence of Fig. 2.11, which is permitted in the causal
consistency model but which is not allowed in a sequentially consistent
memory or a strictly consistent memory. Here, the memory writes W(X)2 and
W(X)3 are concurrent, so it is not required that all processes see them in the
same order. If an application fails because distinct processes view such
concurrent events in different sequences, the system has not violated the
contract offered by the causal consistency model.
When a system process wants to access remote data, it has to identify the
name and location of the shared data and retrieve them from there.
Therefore, at least all shared data should be visible to all the sites. Some
naming scheme is required to uniquely identify the data contents of the
shared region and avoid conflicts (15,28). One possible solution is a unique
global name for each shared data item in the virtual memory region. A
process wishing to access data then uses the unique name to identify it, and
in the same way a cooperating process uses the unique name to identify the
data globally. The virtual memory manager at a specific node performs the
address translation of the data contents (29). This is not useful if the
granularity of the data unit is smaller than a specific unit; in that case the
requesting process must have knowledge of the remote location of the
shared data.
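A global naming scheme of this kind can be sketched as a directory mapping each unique name to the node that holds the data and a local address; the directory structure and all names below are assumptions made for illustration:

```python
# Hypothetical sketch of a global naming scheme for shared data: a
# directory maps each unique global name to (node, local address), and
# lookup performs the per-node address translation described above.
directory = {}                  # global name -> (node_id, local_address)
node_memory = {0: {}, 1: {}}    # per node: local_address -> value

def publish(name: str, node: int, addr: int, value) -> None:
    node_memory[node][addr] = value
    directory[name] = (node, addr)     # make the name globally visible

def lookup(name: str):
    node, addr = directory[name]       # name resolution
    return node_memory[node][addr]     # address translation at that node

publish("students/records", node=1, addr=0x10, value=["rec1", "rec2"])
print(lookup("students/records"))  # → ['rec1', 'rec2']
```

A requesting process needs only the unique name; where the data physically reside is resolved by the directory, which is what keeps the naming scheme conflict-free across sites.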
The DSM protocol uses two methodologies. One is directory based, where a
single directory keeps track of the sharing status of memory contents. The
second uses snooping, where every cache block is accompanied by the
sharing status of that block; all cache controllers monitor the shared bus so
they can update the sharing status of a block of data if necessary. Two write
policies are used: with write-invalidate, a processor gains exclusive access to
a block before writing by invalidating all other copies; with write-update, when
a processor writes, it updates the other shared copies of that block of data
(25).
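The two write policies can be sketched for a toy system of three caches sharing one block; the structure is invented here to illustrate the behaviour, not taken from any surveyed system:

```python
# Minimal sketch of the two write policies named above, for a toy
# system of three caches that all hold a copy of block X.
caches = [{"X": 0}, {"X": 0}, {"X": 0}]

def write_invalidate(writer: int, value) -> None:
    # invalidate every other copy, then write the now-exclusive copy
    for i, cache in enumerate(caches):
        if i != writer:
            cache.pop("X", None)
    caches[writer]["X"] = value

def write_update(writer: int, value) -> None:
    # broadcast the new value to every cache that still holds the block
    for cache in caches:
        if "X" in cache:
            cache["X"] = value

write_invalidate(0, 7)
print(caches)  # → [{'X': 7}, {}, {}]
```

Write-invalidate trades extra misses (the invalidated caches must re-fetch the block) for less bus traffic, while write-update keeps all copies warm at the cost of broadcasting every write.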
Apart from the usual methods of architecting a DSM system in hardware,
such as cache coherence configurations and network interfaces, software
based distributed shared memory systems can be implemented in a
programming library, at the operating system level, or in the underlying
memory architecture, with different software parameters chosen according to
the application. The advantage is that such systems offer a more portable
and easily upgradable software approach to DSM system design. A DSM
system is an implementation of a virtual shared memory model on top of
independent, physically distributed memory systems. It also involves various
other implementation choices depending upon its design requirements,
including the distributed shared memory algorithm and the level of
implementation (hardware, software, or a hybrid of both).
[Figure: DSM design criteria — Algorithms, Consistency Model, Environment
Support, Granularity, Implementation Level, Language, Memory Access,
Naming Scheme, Protocol, Reliability, Scaling Requirement, Semantics of
Concurrent Access, System Upgradation.]
Advantages:
Disadvantages:
In this case, by following basic rules, two processes A and B can
communicate simply by reading and writing an agreed location. The latency of
communication is close to the hardware latency of memory access on the
system, and the bandwidth available between communicating processes is
programmed to be close to that of the system's main memory (19). This
makes the communication technique more efficient. Some advantages and
disadvantages of such a distributed architecture are listed below.
Advantages:
Disadvantages:
Table: Comparison of distributed architectures

                          Single bus   Switched     NUMA     Page-based  Shared-variable  Object-based
                          multiproc.   multiproc.   system   DSM         DSM              DSM
Remote memory access
handled by:               MMU          MMU          MMU      OS          Runtime system   Runtime system
Encapsulation/methods?    No           No           No       No          No               Yes
Access using hardware?    Yes          Yes          Yes      No          No               No