Chapter 8 Concurrency-P1

Operating system design involves managing processes and threads across single-processor, multi-processor, and distributed systems. Concurrency introduces issues such as communication between processes, sharing of and competition for resources, synchronization, and processor allocation. The semaphore is a common concurrency primitive that uses wait/signal operations on an integer count to control access to shared resources and to synchronize processes.

Concurrency: Mutual Exclusion and Synchronization

Operating Systems: Internals and Design Principles
Seventh Edition
By William Stallings
• Operating system design is concerned with the management of processes and threads in:
  • Multiprogramming
  • Multiprocessing
  • Distributed processing
• Multiprogramming: the management of multiple processes within a uniprocessor system.

• Multiprocessing: the management of multiple processes within a multiprocessor.

• Distributed processing: the management of multiple processes executing on multiple, distributed computer systems.
Concurrency encompasses a host of design issues, including communication among processes, sharing of and competing for resources (such as memory, files, and I/O access), synchronization of the activities of multiple processes, and allocation of processor time to processes.

These issues arise not just in multiprocessing and distributed processing environments but even in single-processor multiprogramming systems.
Concurrency arises in three different contexts:

• Multiple applications: multiprogramming was invented to allow processing time to be shared among active applications.
• Structured applications: an extension of modular design and structured programming.
• Operating system structure: operating systems are themselves often implemented as a set of processes or threads.
Concurrency Key Terms

Table 5.1 Some Key Terms Related to Concurrency


• Interleaving and overlapping
  • can be viewed as examples of concurrent processing
  • both present the same problems

• Uniprocessor: the relative speed of execution of processes cannot be predicted; it depends on:
  • the activities of other processes
  • the way the OS handles interrupts
  • the scheduling policies of the OS
Difficulties of Concurrency

• Sharing of global resources
• Difficult for the OS to manage the allocation of resources optimally
• Difficult to locate programming errors, because results are not deterministic and reproducible

Race Condition

• Occurs when multiple processes or threads read and write data items
• The final result depends on the order of execution
• The "loser" of the race is the process that updates last and will determine the final value of the variable
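The lost-update effect can be made concrete by spelling out one unlucky interleaving by hand. In this Python sketch the two "processes" are just labeled steps executed in order by a single thread, so the outcome is deterministic:

```python
# Two "processes" each intend to increment a shared counter once.
# This spells out one possible interleaving of their read-modify-write
# sequences: both read before either writes back.
counter = 0

p1_local = counter      # P1 reads 0
p2_local = counter      # P2 reads 0 (before P1 writes back)
counter = p1_local + 1  # P1 writes 1
counter = p2_local + 1  # P2, the "loser", writes 1, overwriting P1's update

print(counter)  # 1, not the expected 2: one increment is lost
```

With two increments, the correct final value would be 2; the interleaving above silently discards one of them, which is exactly why the result depends on execution order.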
Operating System Concerns

Design and management issues raised by the existence of concurrency. The OS must:

• be able to keep track of various processes
• allocate and de-allocate resources for each active process
• protect the data and physical resources of each process against interference by other processes
• ensure that the processes and outputs are independent of the processing speed
Process Interaction

Resource Competition

• Concurrent processes come into conflict when they are competing for use of the same resource
  • for example: I/O devices, memory, processor time, the clock

In the case of competing processes, three control problems must be faced:

• the need for mutual exclusion
• deadlock
• starvation
Mutual Exclusion

Figure 5.1 Illustration of Mutual Exclusion

Requirements for mutual exclusion:

• Must be enforced
• A process that halts must do so without interfering with other processes
• No deadlock or starvation
• A process must not be denied access to a critical section when there is no other process using it
• No assumptions are made about relative process speeds or the number of processes
• A process remains inside its critical section for a finite time only
Disabling Interrupts

• In a uniprocessor system, disabling interrupts guarantees mutual exclusion
• Disadvantages:
  • the efficiency of execution could be noticeably degraded
  • this approach will not work in a multiprocessor architecture
Special Machine Instructions

• Compare&Swap instruction
  • also called a "compare and exchange instruction"
  • a compare is made between a memory value and a test value
  • if the values are the same, a swap occurs
  • the compare and the swap are carried out atomically
Figure 5.2 Hardware Support for Mutual Exclusion

Exchange Instruction


Special Machine Instruction: Advantages

• Applicable to any number of processes on either a single processor or multiple processors sharing main memory
• Simple and easy to verify
• Can be used to support multiple critical sections; each critical section can be defined by its own variable
Special Machine Instruction: Disadvantages

• Busy-waiting is employed; while a process is waiting for access to a critical section it continues to consume processor time
• Starvation is possible when a process leaves a critical section and more than one process is waiting
• Deadlock is possible
Semaphore

A variable that has an integer value upon which only three operations are defined; there is no way to inspect or manipulate semaphores other than these three operations:

1) May be initialized to a nonnegative integer value
2) The semWait operation decrements the value
3) The semSignal operation increments the value
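These three operations map directly onto Python's threading.Semaphore, where acquire plays the role of semWait and release plays the role of semSignal (and, matching the rule above, the internal count cannot be inspected). A minimal sketch with the count initialized to 0, so the consumer must block until it is signaled:

```python
import threading

s = threading.Semaphore(0)   # initialized to a nonnegative value (here 0)
events = []

def consumer():
    s.acquire()              # semWait: decrements; blocks while the count is 0
    events.append("consumed")

t = threading.Thread(target=consumer)
t.start()
events.append("produced")    # happens before the signal is issued
s.release()                  # semSignal: increments, unblocking the consumer
t.join()

print(events)  # ['produced', 'consumed']
```

Because the semaphore starts at 0, the consumer cannot run its append until the release, so the ordering is guaranteed regardless of how the threads are scheduled.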
Consequences

• There is no way to know before a process decrements a semaphore whether it will block or not
• There is no way to know which process will continue immediately on a uniprocessor system when two processes are running concurrently
• You don't know whether another process is waiting, so the number of unblocked processes may be zero or one
Semaphore Primitives

Binary Semaphore Primitives

• A queue is used to hold processes waiting on the semaphore
p

Strong Semaphores
• the process that has been blocked the longest is released from the queue first (FIFO)

Weak Semaphores
• the order in which processes are removed from the queue is not specified
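A strong semaphore can be sketched by keeping blocked waiters in an explicit FIFO queue. The class below is illustrative, built from threading primitives rather than taken from any library; note that Python's built-in threading.Semaphore leaves wake order unspecified, making it a weak semaphore by this definition:

```python
import collections
import threading

class StrongSemaphore:
    """A counting semaphore that releases blocked processes in FIFO
    order, i.e. a "strong" semaphore (sketch for illustration)."""

    def __init__(self, count=0):
        self._count = count                  # any nonnegative initial value
        self._lock = threading.Lock()        # protects count and queue
        self._queue = collections.deque()    # blocked waiters, longest-waiting first

    def sem_wait(self):
        with self._lock:
            if self._count > 0:
                self._count -= 1             # a unit is available: proceed at once
                return
            ticket = threading.Event()
            self._queue.append(ticket)       # join the back of the queue
        ticket.wait()                        # block until signaled

    def sem_signal(self):
        with self._lock:
            if self._queue:
                self._queue.popleft().set()  # wake the longest-blocked waiter
            else:
                self._count += 1

# Non-blocking usage: two units available, so two sem_wait calls succeed.
s = StrongSemaphore(2)
s.sem_wait()
s.sem_wait()
s.sem_signal()  # no one is waiting, so the count simply returns to 1
```

The FIFO popleft is the only difference from a weak semaphore; it is enough to rule out starvation among waiters, since every blocked process eventually reaches the front of the queue.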
Here processes A, B, and C depend on a result from process D.

Initially (1), A is running; B, C, and D are ready; and the semaphore count is 1, indicating that one of D's results is available.

When A issues a semWait instruction on semaphore s, the semaphore decrements to 0, and A can continue to execute; subsequently it rejoins the ready queue.

Then B runs (2), eventually issues a semWait instruction, and is blocked, allowing D to run (3).

When D completes a new result, it issues a semSignal instruction, which allows B to move to the ready queue (4).

D rejoins the ready queue and C begins to run (5) but is blocked when it issues a semWait instruction.

Similarly, A and B run and are blocked on the semaphore, allowing D to resume execution (6).

When D has a result, it issues a semSignal, which transfers C to the ready queue.

Later cycles of D will release A and B from the Blocked state.
