
Processes

Presented by Clean Code team


Under supervision of:
Dr. Ahmed Salem
Team Members
• Ahmed Khaled Elhady Mohamed
• Mohamed Abdalla Ahmed Elmayet
• Ahmed Mohamed Shawki Sherif
• Hossam Mohamed Ahmed Yousef
• Osama Mohamed Ragab
• Mohamed Waleed Mohamed Abdel-Fatah
Agenda
• Process Concept
• Process Elements
• Process Control Block
• Process State
• Process Scheduling
• Threads, Multithreading
• Multithreading models
• Benefits of Threads
Process Concept (1/3)
Program vs. Process
• A program is a passive entity: a file containing a list of instructions stored on disk, usually referred to as an executable file.
• A program becomes a process, an active entity, when the executable file is loaded into memory.
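
A minimal C sketch (not from the original slides) of this transition: fork() creates a new process, and exec loads an executable file from disk into the new process's memory, turning the passive program into an active process. The "ls" program is only an illustrative executable.

/* Sketch: a program on disk becomes a process when it is loaded and run.
 * The parent creates a child with fork(); the child replaces its memory
 * image with the "ls" executable via execlp(). */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                      /* create a new process */
    if (pid < 0) {
        perror("fork");
        exit(EXIT_FAILURE);
    }
    if (pid == 0) {
        /* child: load the ls program into memory and run it */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");                    /* only reached if exec fails */
        exit(EXIT_FAILURE);
    }
    waitpid(pid, NULL, 0);                   /* parent waits for the child */
    printf("child %d finished\n", (int)pid);
    return 0;
}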
Process Concept (2/3)
• A fundamental task of any operating system is process management.
• The OS must allocate resources to processes, enable sharing of information, protect resources, and enable synchronization among processes.
Process Elements (1/2)
The segments of a process are the following components:
• Text Section: the program code. This is typically read-only and might be shared by a number of processes.
• Data Section: contains global variables.
• Heap: contains memory dynamically allocated during run time.
• Stack: contains temporary data such as function parameters, return addresses, and local variables.
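
The following C sketch (illustrative, not from the original slides) shows where each segment comes from in practice; the variable names are arbitrary.

/* Sketch: mapping C constructs to the process segments. */
#include <stdio.h>
#include <stdlib.h>

int global_counter = 42;                     /* data section: global variable */
const char banner[] = "hello";               /* read-only data, next to text  */

int main(void) {
    int local = 7;                           /* stack: temporary data          */
    int *dynamic = malloc(sizeof(int));      /* heap: allocated at run time    */
    if (dynamic == NULL)
        return 1;
    *dynamic = 99;

    printf("data:   %p\n", (void *)&global_counter);
    printf("rodata: %p (%s)\n", (void *)banner, banner);
    printf("heap:   %p\n", (void *)dynamic);
    printf("stack:  %p\n", (void *)&local);
    printf("text:   %p (main)\n", (void *)main);  /* the code lives in the text section */

    free(dynamic);
    return 0;
}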
Process Elements (2/2)
• Process in Memory
Process Control Block (PCB) (1/3)
• Each process is represented in the OS by a
Process Control Block (PCB)
Process Control Block (PCB) (2/3)
Process Control Block components:
• Process identification information
 Process identifier: a unique numeric ID for the process
 User identifier: the user responsible for the process
• Processor state information
 Process state: running, waiting, etc.
• Program counter
 location of the next instruction to execute
Process Control Block (PCB) (3/3)
• CPU registers
 contents of all process-centric registers
• CPU scheduling information
 priorities, scheduling queue pointers
• Memory-management information
 Memory allocated to the process
• Accounting information
 CPU used, clock time elapsed since start, time limits
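
As a rough illustration, the PCB can be pictured as a C structure. The field names below are illustrative only; a real kernel structure (such as Linux's task_struct) is far larger.

/* Sketch of the kind of information a PCB holds. */
#include <stdio.h>
#include <sys/types.h>

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    /* process identification */
    pid_t pid;                    /* unique process identifier        */
    uid_t uid;                    /* user responsible for the process */

    /* processor state information */
    enum proc_state state;        /* new, ready, running, ...         */
    unsigned long program_counter;
    unsigned long registers[16];  /* saved process-centric registers  */

    /* CPU scheduling information */
    int priority;
    struct pcb *next_in_queue;    /* scheduling queue pointer         */

    /* memory-management information */
    void *memory_info;            /* e.g. page table or base/limit    */

    /* accounting information */
    unsigned long cpu_time_used;
    unsigned long start_time;
};

int main(void) {
    struct pcb p = { .pid = 1234, .uid = 1000, .state = READY, .priority = 5 };
    printf("pid %d, state %d, priority %d\n", (int)p.pid, (int)p.state, p.priority);
    return 0;
}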
Process State (1/3)
As a process executes, it changes state
• new: The process is being created
• running: Instructions are being executed
• waiting: The process is waiting for some event to occur
• ready: The process is waiting to be assigned to a
processor
• terminated: The process has finished execution
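
A small C sketch of the five-state model: the enum lists the states, and next_state applies a scheduling event to the current state. The event names are illustrative, not part of the original slides.

#include <stdio.h>

enum state { NEW, READY, RUNNING, WAITING, TERMINATED };
enum event { ADMIT, DISPATCH, TIMEOUT, WAIT_FOR_IO, IO_DONE, EXIT_PROC };

/* apply one event to a state; unrelated events leave the state unchanged */
static enum state next_state(enum state s, enum event e) {
    switch (e) {
    case ADMIT:       return (s == NEW)     ? READY      : s;
    case DISPATCH:    return (s == READY)   ? RUNNING    : s;
    case TIMEOUT:     return (s == RUNNING) ? READY      : s; /* preempted */
    case WAIT_FOR_IO: return (s == RUNNING) ? WAITING    : s;
    case IO_DONE:     return (s == WAITING) ? READY      : s;
    case EXIT_PROC:   return (s == RUNNING) ? TERMINATED : s;
    }
    return s;
}

int main(void) {
    enum state s = NEW;
    enum event script[] = { ADMIT, DISPATCH, WAIT_FOR_IO, IO_DONE, DISPATCH, EXIT_PROC };
    for (int i = 0; i < 6; i++)
        s = next_state(s, script[i]);
    printf("final state: %s\n", s == TERMINATED ? "terminated" : "not terminated");
    return 0;
}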
Process State (2/3)
• Diagram of Process State
CPU Switch From Process to Process
Process Scheduling
• Job queue – set of all processes in the system
• Ready queue – set of all processes residing in main memory, ready and waiting to execute
• Device queues – set of processes waiting for an I/O device
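
These queues are commonly implemented as linked lists of PCBs. A minimal C sketch of a ready queue, showing only the fields needed for queueing (names are illustrative):

#include <stdio.h>
#include <stdlib.h>

struct pcb {
    int pid;
    struct pcb *next;                 /* scheduling queue pointer */
};

struct queue { struct pcb *head, *tail; };

static void enqueue(struct queue *q, struct pcb *p) {
    p->next = NULL;
    if (q->tail) q->tail->next = p; else q->head = p;
    q->tail = p;
}

static struct pcb *dequeue(struct queue *q) {
    struct pcb *p = q->head;
    if (p) {
        q->head = p->next;
        if (!q->head) q->tail = NULL;
    }
    return p;
}

int main(void) {
    struct queue ready = { NULL, NULL };
    for (int i = 1; i <= 3; i++) {
        struct pcb *p = malloc(sizeof *p);
        p->pid = i;
        enqueue(&ready, p);           /* process becomes ready */
    }
    struct pcb *p;
    while ((p = dequeue(&ready)) != NULL) {   /* scheduler picks the next process */
        printf("dispatch pid %d\n", p->pid);
        free(p);
    }
    return 0;
}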
Threads (1/4)
• A thread is a basic unit of CPU utilization; it comprises a
thread ID, a program counter, a register set, and a
stack.
• It shares with other threads belonging to the same
process its code section, data section, and other
operating-system resources, such as open files and
signals.
Threads (2/4)
• Suppose, for example, that a program cannot draw pictures while reading keystrokes. The program must give its full attention to the keyboard input, lacking the ability to handle more than one event at a time.
• The ideal solution to this problem is the seamless execution of two or more sections of a program at the same time. Threads allow us to do this.
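
A minimal POSIX-threads sketch of the keyboard/drawing example above: one thread keeps "drawing" while the main thread waits for a keystroke. Illustrative only; compile with -lpthread.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <unistd.h>

static atomic_int done = 0;

static void *draw(void *arg) {
    (void)arg;
    while (!atomic_load(&done)) {
        printf("drawing a frame...\n");
        sleep(1);                       /* pretend to render */
    }
    return NULL;
}

int main(void) {
    pthread_t drawer;
    pthread_create(&drawer, NULL, draw, NULL);

    printf("press Enter to quit\n");
    getchar();                          /* main thread handles the keyboard */
    atomic_store(&done, 1);

    pthread_join(drawer, NULL);
    return 0;
}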
Threads (3/4)
Single vs. Multithreaded Process
Threads (4/4)
• Most software applications that run on modern
computers are multithreaded.
• An application typically is implemented as a separate
process with several threads of control.
 A web browser might have one thread display images or text
while another thread retrieves data from the network.
 A word processor may have a thread for displaying graphics,
another thread for responding to keystrokes from the user,
and a third thread for performing spelling and grammar
checking in the background.
Multithreaded server architecture
Finally, most operating-system kernels are now multithreaded.
Several threads operate in the kernel, and each thread
performs a specific task, such as managing devices, managing
memory, or interrupt handling.
Multithreading Models
• Ultimately, a relationship must exist between user threads and kernel threads. We look at four common models:
one-to-one model
many-to-one model
many-to-many model
two-level model
Diagrams: one-to-one model, many-to-one model, many-to-many model, two-level model.
Benefits of Threads
• Responsiveness – may allow continued execution if part of the process is blocked; especially important for user interfaces.
• Resource Sharing – threads share the resources of their process, which is easier than shared memory or message passing.
• Economy – cheaper than process creation; thread switching has lower overhead than context switching.
• Scalability – a process can take advantage of multiprocessor architectures.
• A global e-commerce company launches a massive discount
campaign for "Black Friday."
• The traffic spikes to millions of visitors in minutes, but the
problem is the existing servers can't handle the load!
Suddenly, the website crashes, customers get frustrated, and
sales are lost. The CEO exclaims: "Guys, we're losing money
every second! What's the solution?"
• At this point, the infrastructure engineer confidently steps in
and says:
• "Don't worry! With virtualization technology, we can transform
our existing hardware into a fleet of virtual servers, each
handling a portion of the load."
• Virtualization is a technology that allows users to run
multiple operating systems or applications on a single
device, meaning a single computer is divided into
several virtual machines. Each virtual machine operates
independently, even though they share the same
resources (CPU, memory, storage).
Four types of interfaces at three different levels

1. Instruction set architecture: the set of machine instructions, with two subsets:
• Privileged instructions: allowed to be executed only by the operating system.
• General instructions: can be executed by any program.
2. System calls, as offered by an operating system.
3. Library calls, known as an application programming interface (API).
Ways of virtualization

Three ways of virtualization (figure): (a) a process VM, (b) a native VMM, (c) a hosted VMM.

Differences
(a) Process VM: a separate set of instructions, with an interpreter/emulator running atop an OS.
(b) Native VMM: low-level instructions, along with a bare-bones minimal operating system.
(c) Hosted VMM: low-level instructions, but delegating most work to a full-fledged OS.
Privileged vs. Non-Privileged
Instructions in Operating Systems
• Privileged instruction: an instruction that, if and only if executed in user mode, causes a trap to the operating system.
• Nonprivileged instruction: an instruction that can be executed by any program without trapping.
Condition for virtualization
• Necessary condition
For any conventional computer, a virtual machine monitor may be constructed if the set of sensitive instructions for that computer is a subset of the set of privileged instructions.
• Problem: the condition is not always satisfied
There may be sensitive instructions that are executed in user mode without causing a trap to the operating system.
• Solutions
• Emulate all instructions
• Wrap nonprivileged sensitive instructions to divert control to the VMM
• Paravirtualization: modify the guest OS, either by preventing nonprivileged sensitive instructions or by making them nonsensitive (i.e., changing the context).
Benefits of Virtualization
• Cost Efficiency
• Dynamic Load Balancing
• Flexibility and Isolation
• Better Resource Utilization
• Improved Scalability
• Enhanced Collaboration
Containers
• Containers are a virtualization technology that isolates applications within independent runtime environments while sharing the same underlying operating system. Instead of each running a full operating system, as virtual machines do, containers run on a single operating system and run different applications within them.
How do containers work?
• Structure:
A container contains the applications and everything needed for them to run (such as
files, libraries, and tools), and operates independently from other containers and
applications.

• Isolated Environment:
Each container runs on the same underlying operating system but with complete isolation between the applications inside different containers.
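
As a rough illustration of the isolation idea only (not how a full container engine works), the Linux sketch below moves the process into its own UTS namespace so it gets a private hostname; real containers combine many namespaces with cgroups and image layering. Requires Linux and root privileges.

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    if (unshare(CLONE_NEWUTS) == -1) {        /* new UTS (hostname) namespace */
        perror("unshare (run as root on Linux)");
        exit(EXIT_FAILURE);
    }
    if (sethostname("mini-container", strlen("mini-container")) == -1) {
        perror("sethostname");
        exit(EXIT_FAILURE);
    }
    char name[64];
    gethostname(name, sizeof name);
    printf("hostname inside the namespace: %s\n", name);  /* the host's hostname is unchanged */
    return 0;
}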
Difference between Containers and Virtual Machines (VMs):

• Containers:
Containers share the underlying operating system but isolate applications. They consume fewer resources because the underlying operating system is shared.
• Virtual Machines (VMs):
Each virtual machine has its own complete operating system, requiring more resources.
VMs and cloud computing
Cloud Computing refers to the delivery of computing services, including
servers, storage, databases, networking, software, and analytics, over the
internet ("the cloud"). Instead of owning and managing physical hardware and
software, users can access these resources on-demand from cloud providers.
• Three types of cloud services:
• Infrastructure-as-a-Service: covering the basic infrastructure
• Platform-as-a-Service: covering system-level services
• Software-as-a-Service: containing actual applications
Client Role
• A major task of client machines is to provide the means for users to interact with remote servers.
• There are roughly two ways in which this interaction takes place: client-specific counterparts and thin-client solutions.
Web browser rendering is the process by which a browser translates the HTML, CSS, JavaScript, and other resources of a web page into the visual display you see on your screen. It involves several steps that occur behind the scenes.
Distribution Transparency
• Client software is more than just a user interface; it often handles parts of the processing and data levels in client-server applications.
• Client software aims to provide distribution transparency, ensuring the client is unaware of remote communication complexities.
• The client stub ensures seamless interactions by masking differences in machine architectures and communication processes.
• Key transparency features include:
1. Access Transparency: Ensures local calls are transformed into messages for remote servers and
vice versa, without exposing the complexity to the client.
2. Location, Migration, and Relocation Transparency: Middleware may rebind clients to servers
if the server changes location, maintaining continuity.
3. Failure Transparency: Middleware re-establishes server connections or uses cached data during failures to maintain functionality.
4. Concurrency Transparency: Intermediate servers such as transaction monitors manage simultaneous requests with minimal client-side involvement.
5. Replication Transparency: Requests to replicated servers are managed by client-side software to ensure consistent responses.
Server Role
A server is a process that provides specific services to a collection of clients. Its general workflow involves
waiting for a client request, processing it, and then waiting for the next request.
Types of Servers
1. Iterative Servers:
• Handle one request at a time, completing it before attending to the next.
• Simple in design but limited in handling concurrent requests efficiently.
2. Concurrent Servers:
o Use a dispatcher to assign incoming requests to separate threads or processes, enabling simultaneous handling
of multiple requests.
o Ideal for operations involving blocking tasks (e.g., disk I/O or communication with other servers).
o Multithreaded or process-forking servers are common implementations.

• Observation: Concurrent servers are the standard because they handle multiple client requests effectively,
even under heavy load.
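
A minimal C sketch of a concurrent, thread-per-request server: the main loop plays the dispatcher role and hands each accepted connection to a worker thread. The port number (9000) and the echo behaviour are illustrative only.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static void *worker(void *arg) {
    int client = *(int *)arg;
    free(arg);
    char buf[512];
    ssize_t n;
    while ((n = read(client, buf, sizeof buf)) > 0)
        write(client, buf, (size_t)n);           /* echo back to the client */
    close(client);
    return NULL;
}

int main(void) {
    int listener = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(9000);

    int yes = 1;
    setsockopt(listener, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof yes);
    if (bind(listener, (struct sockaddr *)&addr, sizeof addr) == -1 ||
        listen(listener, 16) == -1) {
        perror("bind/listen");
        return 1;
    }

    for (;;) {                                    /* dispatcher loop */
        int *client = malloc(sizeof *client);
        *client = accept(listener, NULL, NULL);
        if (*client == -1) { free(client); continue; }
        pthread_t tid;
        pthread_create(&tid, NULL, worker, client);
        pthread_detach(tid);                      /* don't wait for the worker */
    }
}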
Contacting Servers: End Points
Clients communicate with servers through end points (ports) on the server's machine:
• Preassigned End Points: well-known services such as FTP (port 21) and HTTP (port 80) use fixed ports globally assigned by IANA.
• Dynamically Assigned End Points: services without fixed ports rely on the operating system or a daemon to assign and track active ports. A client first queries the daemon for the correct end point and then contacts the specific server.
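
A matching C sketch of a client contacting a server at a known end point (IP address plus port). The address 127.0.0.1:9000 matches the echo-server sketch above; a well-known service would instead use an IANA-assigned port such as 21 or 80.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in server = {0};
    server.sin_family = AF_INET;
    server.sin_port = htons(9000);                 /* the server's end point */
    inet_pton(AF_INET, "127.0.0.1", &server.sin_addr);

    if (connect(sock, (struct sockaddr *)&server, sizeof server) == -1) {
        perror("connect");
        return 1;
    }
    const char *msg = "hello server\n";
    write(sock, msg, strlen(msg));

    char buf[512];
    ssize_t n = read(sock, buf, sizeof buf - 1);
    if (n > 0) {
        buf[n] = '\0';
        printf("server replied: %s", buf);
    }
    close(sock);
    return 0;
}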
Interrupting a Server
Servers need to handle interruptions effectively, such as canceling ongoing operations
like file uploads. Two main approaches are used:

1. Abrupt Exit
• Client disconnects, forcing the server to terminate the session.
• Simple but inefficient.
2. Out-of-Band Data
• Client sends a special interruption signal to notify the server.
• Two implementations:
1. Separate Control Endpoint: a dedicated channel for urgent signals.
2. Same Connection: uses the existing connection (e.g., the TCP urgent flag).
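
A small C sketch of the "same connection" approach: after connecting normally, the client marks one byte as urgent with MSG_OOB, which TCP carries with the URG flag; the server side can catch it via SIGURG or sockatmark(). The address, port, and the '!' byte are illustrative only.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in server = {0};
    server.sin_family = AF_INET;
    server.sin_port = htons(9000);
    inet_pton(AF_INET, "127.0.0.1", &server.sin_addr);
    if (connect(sock, (struct sockaddr *)&server, sizeof server) == -1) {
        perror("connect");
        return 1;
    }
    char urgent = '!';
    /* the urgent byte travels on the same connection, flagged as out-of-band */
    if (send(sock, &urgent, 1, MSG_OOB) == -1)
        perror("send MSG_OOB");
    close(sock);
    return 0;
}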
The differences between stateless and stateful servers
• Stateless server: keeps no information about its clients between requests and can change its own state without informing them.
• Stateful server: maintains persistent information about its clients across requests (e.g., a file server that records which clients have which files open).
Object servers
• Object servers are specialized servers designed to host and manage distributed objects.
• Object servers provide the environment and mechanisms needed to execute and manage objects.
• These objects encapsulate data (state) and methods (code) and are the primary building blocks of services in distributed systems.

• Why use object servers?


• Flexibility: Easy addition or modification of services.
• Reuse: Objects can be reused across servers/applications.
• Encapsulation: Promotes better organization and security.
• Threading Policies: Advanced threading for optimized performance.
Object Server Cycle
When a client sends a request to an object server:
1. Request Arrival:
o The request arrives at the server via the network stack, handled by the Local OS.
2. Request Demultiplexing:
o The Request Demultiplexer routes the request to the appropriate Object Adapter
based on object identifiers.
3. Activation Policy Handling:
o The Object Adapter checks the server's activation policy and determines if the
object is already active or needs to be instantiated.
4. Object Stub Invocation:
o The Object's Stub (Skeleton) processes the request and calls the appropriate
method on the object.
5. Response Generation:
o The invoked object processes the request, generates a response, and sends it back
to the client via the stub.
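
A very rough, single-process C sketch of steps 2-4 of this cycle: a request carries an object identifier and a method name, a demultiplexer looks the object up in a registry, and a function-pointer table plays the role of the stub/skeleton. All names are illustrative; a real object server adds adapters, activation policies, and marshalling.

#include <stdio.h>
#include <string.h>

struct request { int object_id; const char *method; int arg; };

typedef int (*method_fn)(int state, int arg);

static int add(int state, int arg) { return state + arg; }
static int mul(int state, int arg) { return state * arg; }

struct object {
    int id;
    int state;                       /* encapsulated data (state)        */
    const char *method_names[2];     /* tiny stand-in for a skeleton     */
    method_fn methods[2];
};

static struct object registry[] = {
    { 1, 10, { "add", "mul" }, { add, mul } },
    { 2, 7,  { "add", "mul" }, { add, mul } },
};

static int dispatch(struct request *req) {       /* request demultiplexer */
    for (size_t i = 0; i < sizeof registry / sizeof registry[0]; i++) {
        struct object *obj = &registry[i];
        if (obj->id != req->object_id) continue;
        for (int m = 0; m < 2; m++)
            if (strcmp(obj->method_names[m], req->method) == 0)
                return obj->methods[m](obj->state, req->arg);   /* invoke */
    }
    return -1;                                    /* unknown object or method */
}

int main(void) {
    struct request req = { 1, "mul", 3 };
    printf("reply: %d\n", dispatch(&req));        /* 10 * 3 = 30 */
    return 0;
}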
Server Clusters
• A server cluster is a group of interconnected machines (or nodes) that collectively provide services to clients.
These machines are connected through a network, typically with high bandwidth and low latency in local-
area clusters. Server clusters are widely used to ensure scalability, reliability, and load balancing for
various applications.
Code Migration
• Code migration in distributed systems involves transferring executable code from one machine to
another. Unlike traditional systems that primarily pass data, code migration includes moving entire
programs, sometimes even during execution.

• Benefits of Code Migration:


1. Improved Resource Allocation:
o Dynamically redistribute workloads to make better use of computational resources.
2. Energy Efficiency:
o Reduce energy consumption in data centers by consolidating workloads onto fewer machines.
3. System Flexibility:
o Enable adaptive systems that respond to changes in load, usage patterns, or hardware
availability.
Thanks for Listening
