CMP 312
Core OS tasks
OS Architecture
Types of OS
Example of OS
It's worth noting that this overview provides a simplified timeline, and
there are numerous other operating systems and variations that have
played significant roles in the history and evolution of computing.
1. Batch Operating System:
A batch operating system has (1. Single user, 2. Single task, 3. Single machine).
A batch operating system is a type of operating system that processes a series of jobs or
tasks without requiring user intervention. In a batch processing environment, users
submit their jobs to the operating system, typically in the form of batch files or job
control language (JCL). The operating system then executes these jobs one after
another, in a sequential manner, without user interaction.
Here are the key characteristics and features of a batch operating system:
Job Submission: Users submit their jobs as a batch, usually providing instructions and
data files necessary for the job's execution. These jobs are typically stored on external
storage media, such as punch cards, magnetic tapes, or disk files.
Job Scheduling: The operating system has a job scheduler that determines the order in
which the submitted jobs are executed. It considers factors such as priority, resource
availability, and job dependencies to optimize the overall system performance.
Job Execution: The batch operating system takes each job from the batch queue and
allocates the necessary system resources for its execution. It loads the job into memory,
sets up the environment, and initiates its execution.
No User Interaction: Once a job starts executing, there is no user interaction or
intervention until the job completes or encounters an error. The operating system
executes the job using the predefined instructions and processes the data files associated
with it.
Job Completion and Output: After a job completes, the operating system typically
generates output files containing the results or reports of the job's execution. These
output files are often stored for further processing or delivered to the user.
Job Control Language (JCL): A batch operating system often uses a specific language,
such as JCL, to define the job control statements and provide instructions to the
operating system. JCL specifies parameters, file names, resource requirements, and
other details necessary for the proper execution of jobs.
Batch operating systems are commonly used in scenarios where large volumes of
similar or repetitive tasks need to be processed efficiently. For example, payroll
processing, billing systems, and data processing applications often employ batch
operating systems. They maximize the utilization of computing resources by allowing
the system to process multiple jobs without requiring constant user input, thus
improving overall efficiency and throughput.
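To make the sequential, hands-off nature of batch processing concrete, here is a minimal C sketch (not part of the original notes): the jobs, their names, and their run functions are hypothetical stand-ins for submitted work, and the "scheduler" is simple FIFO submission order.
```c
#include <stdio.h>

/* A hypothetical batch job: a name plus the work to run, with no user input. */
struct job {
    const char *name;
    void (*run)(void);
};

static void payroll(void) { printf("  processing payroll records...\n"); }
static void billing(void) { printf("  generating customer bills...\n"); }

int main(void) {
    /* Jobs submitted as a batch; they execute one after another to completion. */
    struct job batch[] = { {"PAYROLL", payroll}, {"BILLING", billing} };
    int n = sizeof batch / sizeof batch[0];

    for (int i = 0; i < n; i++) {
        printf("starting job %s\n", batch[i].name);
        batch[i].run();                 /* runs with no user interaction */
        printf("job %s complete, output written\n", batch[i].name);
    }
    return 0;
}
```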
2. Time Sharing:
Time sharing has (1. Single system, 2. Multiple tasks, 3. Multiple users).
The operating system allocates a small time slice to the first task or user.
The task executes for that allocated time slice.
At the end of the time slice, an interrupt is generated, indicating that the time slice
has expired.
The operating system's scheduler then selects the next task to run, based on
predefined scheduling algorithms.
The context of the current task is saved, including the values of registers and
program counters.
The context of the next task is restored, and its execution resumes from where it
was interrupted.
This process continues, with tasks being rapidly switched and executed in a
round-robin fashion.
Concurrency Support: Time sharing systems enable the execution of multiple tasks
concurrently, allowing users to run different applications simultaneously without
interference.
Time sharing is a fundamental concept in modern operating systems and plays a crucial
role in providing efficient and interactive computing experiences. It allows for the
illusion of parallel execution on single or limited computing resources, facilitating
multitasking and improving overall system performance.
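The round-robin switching described in the steps above can be modelled with a short C simulation. This is a teaching sketch only (the task list, quantum size, and remaining-time fields are assumptions), not how a real kernel implements time slicing.
```c
#include <stdio.h>

#define QUANTUM 2   /* time slice, in arbitrary ticks */

struct task {
    const char *name;
    int remaining;  /* ticks of work left */
};

int main(void) {
    struct task tasks[] = { {"A", 5}, {"B", 3}, {"C", 4} };
    int n = sizeof tasks / sizeof tasks[0];
    int unfinished = n;

    /* Round-robin: each ready task gets up to one quantum, then we "switch". */
    while (unfinished > 0) {
        for (int i = 0; i < n; i++) {
            if (tasks[i].remaining == 0)
                continue;
            int run = tasks[i].remaining < QUANTUM ? tasks[i].remaining : QUANTUM;
            tasks[i].remaining -= run;
            printf("task %s runs %d tick(s), %d left\n",
                   tasks[i].name, run, tasks[i].remaining);
            if (tasks[i].remaining == 0)
                unfinished--;        /* task finished, leaves the ready queue */
        }
    }
    return 0;
}
```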
Network OS: The connections used by a network operating system (NOS) are typically wired (LAN connections), made using cables. It emerged with the ability to support multitasking, multiprogramming, multiple systems, and multiple users.
Resource Sharing: One of the primary functions of a NOS is to enable the sharing of
network resources among users and computers. This includes shared files, printers,
scanners, and other peripheral devices. The NOS provides mechanisms for users to
access and utilize these shared resources efficiently.
File and Print Services: A NOS typically offers file and print services, allowing users
to access and manage files stored on remote servers within the network. Users can
create, modify, and share files across the network. Additionally, NOS facilitates
centralized printing, where users can send print jobs to shared printers on the network.
User and Group Management: NOS provides user authentication and authorization
mechanisms, allowing administrators to create user accounts, assign access rights, and
manage user privileges within the network. It also supports the creation of user groups
to simplify the management of permissions and access control.
Network Management: NOS provides tools and utilities for network administrators to
manage and monitor the network infrastructure. This includes monitoring network
performance, configuring network devices, troubleshooting connectivity issues, and
generating network usage reports.
Network Operating Systems play a crucial role in managing and coordinating the
activities of multiple computers within a network, allowing for efficient resource
sharing, centralized management, and secure communication.
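As a small illustration of the user and group management idea above, the C sketch below checks whether a user may access a shared resource based on group membership. The resource, group, and user names are invented for the example; a real NOS uses far richer access-control data.
```c
#include <stdio.h>
#include <string.h>

/* Hypothetical access-control entry: which group may use a shared resource. */
struct share {
    const char *resource;
    const char *allowed_group;
};

static int can_access(const struct share *s, const char *user_group) {
    return strcmp(s->allowed_group, user_group) == 0;
}

int main(void) {
    struct share printer = { "shared-printer-1", "staff" };

    printf("user in 'staff':    %s\n",
           can_access(&printer, "staff")    ? "access granted" : "access denied");
    printf("user in 'students': %s\n",
           can_access(&printer, "students") ? "access granted" : "access denied");
    return 0;
}
```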
Here are some key features and characteristics of a Distributed Operating System:
Here are some key features and characteristics of a Real-Time Operating System:
Response Time: RTOS aims to provide quick response times for time-critical events.
It minimizes interrupt latency and context-switching overhead to ensure that the
system can respond rapidly to external stimuli and events.
Timing Services: RTOS includes timing services, such as timers and clock
management, to accurately measure and control time intervals. These services are
essential for scheduling tasks, synchronizing operations, and meeting time constraints.
Interrupt Handling: RTOS efficiently handles interrupts and prioritizes them based on
their urgency. It provides mechanisms for rapid and predictable interrupt response,
allowing critical tasks to preempt lower-priority tasks and ensuring that important
events are handled promptly.
Resource Management: RTOS manages system resources, such as CPU, memory, and
peripherals, to ensure efficient utilization. It provides mechanisms for resource
allocation, sharing, and synchronization, allowing tasks to access and utilize resources
without conflicts.
Fault Tolerance: RTOS incorporates fault tolerance mechanisms to handle errors and
exceptions that may occur during real-time operations. It includes features such as
error handling, exception handling, and system recovery techniques to maintain
system integrity and reliability.
Certification and Standards: Depending on the application domain, some RTOS may
undergo certification processes to ensure compliance with industry-specific standards,
such as DO-178C for avionics or IEC 61508 for industrial automation.
Note: Real-Time Operating Systems are widely used in applications where precise
timing, responsiveness, and reliability are critical, such as aerospace and defense
systems, industrial automation, robotics, medical devices, and automotive systems.
These systems require the ability to perform time-sensitive tasks with minimal delay
or jitter, making RTOS an essential component for achieving predictable and
deterministic behavior.
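To illustrate the priority-based preemption idea (critical tasks preempting lower-priority ones), here is a minimal, hypothetical C sketch of selecting the highest-priority ready task. The task names and priorities are invented, and this models only the dispatch decision, not a real RTOS kernel.
```c
#include <stdio.h>

struct rt_task {
    const char *name;
    int priority;   /* higher number = more urgent */
    int ready;      /* 1 if ready to run */
};

/* A fixed-priority scheduler always dispatches the highest-priority ready task. */
static const struct rt_task *pick_next(const struct rt_task *t, int n) {
    const struct rt_task *best = NULL;
    for (int i = 0; i < n; i++)
        if (t[i].ready && (!best || t[i].priority > best->priority))
            best = &t[i];
    return best;
}

int main(void) {
    struct rt_task tasks[] = {
        { "logging",        1, 1 },
        { "sensor-read",    5, 1 },
        { "emergency-stop", 9, 0 },   /* becomes ready when its interrupt fires */
    };
    int n = sizeof tasks / sizeof tasks[0];

    printf("next: %s\n", pick_next(tasks, n)->name);  /* sensor-read */
    tasks[2].ready = 1;                               /* interrupt arrives */
    printf("next: %s\n", pick_next(tasks, n)->name);  /* emergency-stop preempts */
    return 0;
}
```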
CMP 312
Recap
Function of OS
System Architecture
Process Management: The OS manages and schedules processes (or tasks) running on the
computer. It allocates system resources such as CPU time, memory, and input/output devices
to ensure efficient multitasking and optimal performance.
File System Management: The OS provides a file system that organizes and manages files
and directories on storage devices. It handles tasks such as file creation, deletion, and access
permissions, ensuring data integrity and efficient storage utilization.
Device Management: It manages input and output devices, such as keyboards, mice, printers,
and storage devices. The OS provides drivers and protocols to enable communication
between software and hardware components, allowing applications to interact with devices
seamlessly.
User Interface: The OS provides a user interface (UI) that allows users to interact with the
computer system. This can be through a command-line interface (CLI), graphical user
interface (GUI), or a combination of both, enabling users to execute programs, access files,
and configure system settings.
Security: The OS incorporates security measures to protect the computer system from
unauthorized access, viruses, malware, and other threats. It includes user authentication
mechanisms, access control policies, and often provides firewall and antivirus functionality.
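The sketch below shows two of the functions listed above from a program's point of view, assuming a POSIX system such as Linux: file system management through open()/write(), and process management through fork(). Error handling is kept minimal for brevity.
```c
#include <stdio.h>
#include <unistd.h>     /* fork, write, close, getpid */
#include <fcntl.h>      /* open flags */
#include <sys/wait.h>   /* wait */

int main(void) {
    /* File system management: ask the OS to create and write a file. */
    int fd = open("demo.txt", O_CREAT | O_WRONLY | O_TRUNC, 0644);
    if (fd >= 0) {
        write(fd, "hello from the OS demo\n", 23);
        close(fd);
    }

    /* Process management: ask the OS to create a second process. */
    pid_t pid = fork();
    if (pid == 0) {
        printf("child process, pid %d\n", getpid());
        _exit(0);
    } else if (pid > 0) {
        wait(NULL);                       /* parent waits for the child */
        printf("parent process, pid %d\n", getpid());
    }
    return 0;
}
```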
Different types of operating systems exist, such as Windows, macOS, Linux, and mobile
operating systems like Android and iOS, each tailored for specific devices or platforms. They
provide a foundation upon which software applications can run, manage resources efficiently,
and enable users to interact with computers and devices effectively.
Types of OS
1. Mac OS
2. Windows OS
3. Linux OS
4. Chrome OS
5. Android OS
6. Java OS
7. Symbian OS
8. Embedded OS
Process management
Recap
Programs
OS Process
Thread
Program:
A program is a passive set of instructions stored on disk or other storage; it becomes a process only when the operating system loads it into memory and begins executing it.
OS Process:
In the context of computing, a process refers to an instance of a computer program that is
being executed or run by the operating system. It is the fundamental unit of work in a
computer system, representing a running program along with its associated resources.
When a program is launched, the operating system creates a corresponding process to manage its execution. Each process has its own memory space, which includes variables, data, and instructions specific to that process. It also includes other resources such as open files, network connections, and input/output devices.
Processes are managed by the operating system scheduler, which allocates CPU time and system resources in a fair and efficient manner. The scheduler determines the order and duration in which processes are executed, allowing multiple programs to run concurrently on a single computer system.
Processes can interact with each other through inter-process communication mechanisms
provided by the operating system, such as shared memory, pipes, sockets, or message
passing. This enables processes to exchange data, coordinate activities, and collaborate in
various ways.
Each process is assigned a unique identifier called a process ID(PID), which helps track and
manage them. Processes can have different states, such as running, waiting, or terminated
depending on their current status.
In summary, a process represents the execution of a program, including its code, data, and resources. It is a fundamental concept in computer systems, allowing for multitasking and efficient utilization of computing resources.
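To tie the ideas above together (each process has its own PID, and processes can exchange data through OS-provided mechanisms such as pipes), here is a short C sketch assuming a POSIX system; the message text and buffer sizes are illustrative choices.
```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>     /* fork, pipe, getpid, read, write */
#include <sys/wait.h>

int main(void) {
    int fds[2];
    if (pipe(fds) != 0)          /* OS-provided inter-process channel */
        return 1;

    pid_t pid = fork();          /* OS creates a new process with its own PID */
    if (pid == 0) {
        /* child: send a message to the parent through the pipe */
        close(fds[0]);
        char msg[64];
        snprintf(msg, sizeof msg, "hello from child PID %d", getpid());
        write(fds[1], msg, strlen(msg) + 1);
        close(fds[1]);
        _exit(0);
    }

    /* parent: receive the child's message */
    close(fds[1]);
    char buf[64] = {0};
    read(fds[0], buf, sizeof buf - 1);
    printf("parent PID %d received: %s\n", getpid(), buf);
    close(fds[0]);
    wait(NULL);
    return 0;
}
```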
A process life cycle consists of the different stages that a program passes through from the time of execution (launching) to termination.
1. New: this is the initial stage when a process is first created. The necessary resources
are allocated to the process, and it awaits admission into the system.
2. Ready: In this state, the process is waiting to be assigned to a processor for execution. It has all the necessary resources, and once the processor becomes available, it can transition to the running state.
3. Running: The process is being executed by the processor. It is actively using CPU time to perform its tasks. Depending on the scheduling algorithm employed by the operating system, the process may be preempted and moved back to the ready state if another higher-priority process needs the CPU.
4. Wait: A process in this state is unable to proceed until a certain event occurs. This event could be waiting for user input, waiting for a resource to become available, or waiting for the completion of an I/O operation. Once the event occurs, the process can transition back to the ready state.
5. Terminate: When a process completes its execution or is explicitly terminated by the operating system or user, it enters the terminated state. In this state, the process is removed from the system, and its resources are deallocated.
[Process state diagram: New -> Ready -> Running -> Termination; an interrupt returns a Running process to Ready, and a Running process waiting on I/O moves to Wait, returning to Ready on the I/O response.]
The I/O response condition can occur when a running process needs user intervention in order to continue.
Example: consider a program installation that needs user input before it can continue.
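The life-cycle stages above can be modelled as a small state machine. The following C sketch is a toy illustration: the state names follow the list above, and the chosen transition sequence (admit, dispatch, block on I/O, resume, finish) is just one possible path.
```c
#include <stdio.h>

enum pstate { NEW, READY, RUNNING, WAITING, TERMINATED };

static const char *state_name(enum pstate s) {
    static const char *names[] = { "new", "ready", "running", "wait", "terminated" };
    return names[s];
}

int main(void) {
    /* A simplified walk through the life cycle: admit, dispatch,
       block on I/O, resume after the I/O response, then finish. */
    enum pstate path[] = { NEW, READY, RUNNING, WAITING, READY, RUNNING, TERMINATED };
    int n = sizeof path / sizeof path[0];

    for (int i = 0; i < n; i++)
        printf("%s%s", state_name(path[i]), i + 1 < n ? " -> " : "\n");
    return 0;
}
```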
Process Control Block (PCB): Simply put, the PCB holds the statistics and information of a process or task; it is responsible for recording this information, including the values held in the registers.
A Process Control Block (PCB), also known as a task control block or process
descriptor, is a data structure used by operating systems to manage individual
processes or tasks. It contains essential information about a specific process and helps
the operating system keep track of its execution.
The PCB is created by the operating system when a new process is initiated and is
associated with that process throughout its lifetime. It serves as a central repository of
information related to the process. Here are some key components typically found in a
PCB:
1. Process Identifier (PID): The unique number assigned to the process, which the operating system uses to identify and track it.
2. Process State: Indicates the current state of the process, such as running, waiting, ready, or terminated. The state is updated as the process progresses through its execution.
3. Program Counter (PC): Keeps track of the address of the next instruction to be
executed within the process. When a process is interrupted or scheduled for execution,
the PC value is saved in the PCB.
4. CPU Registers: These registers store the current values of the processor's registers
that are being used by the process. This includes the general-purpose registers, stack
pointer, and other relevant registers.
5. Memory Management Information: Tracks the memory allocation and usage details of
the process. It includes information such as the base address, limit, and page tables
associated with the process's memory segments.
6. Process Priority: Represents the priority assigned to the process by the operating
system's scheduling algorithm. It determines the order in which processes are
executed.
7. I/O Information: Contains details about the I/O devices the process is using or waiting
for. This information helps the operating system manage and coordinate the process's
interaction with external devices.
8. Accounting Information: Includes statistical data about the process, such as CPU
usage, execution time, and memory usage. This information aids in performance
monitoring and resource allocation decisions.
The PCB is crucial for context switching, where the operating system switches
between different processes, allowing multitasking and efficient resource utilization.
When a process is interrupted or scheduled out, the CPU state is saved into the PCB,
and the state of the next process to be executed is restored from its PCB.
Overall, the PCB provides a comprehensive snapshot of a process's essential attributes
and facilitates efficient process management by the operating system.
The Process Control Block is typically stored in the operating system's memory and is
associated with each active process. When a process is scheduled for execution, the
operating system uses the information in the PCB to set up the CPU and manage the
process's execution. The PCB is updated as the process progresses, reflecting changes
in its state, resource utilization, and execution context.
Overall, the Process Control Block provides the necessary data and control
information for the operating system to effectively manage and coordinate the
execution of processes within the system.
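The components listed above map naturally onto a data structure. Below is a simplified, hypothetical C struct for a PCB; real operating systems keep far more detail (Linux's task_struct, for instance), so treat this purely as a teaching sketch with invented field sizes.
```c
#include <stdio.h>
#include <stdint.h>

enum proc_state { P_NEW, P_READY, P_RUNNING, P_WAITING, P_TERMINATED };

/* Simplified Process Control Block: one entry per process. */
struct pcb {
    int             pid;             /* process identifier              */
    enum proc_state state;           /* current life-cycle state        */
    uint64_t        program_counter; /* next instruction to execute     */
    uint64_t        registers[16];   /* saved general-purpose registers */
    uint64_t        mem_base;        /* memory management information   */
    uint64_t        mem_limit;
    int             priority;        /* scheduling priority             */
    int             open_files[8];   /* I/O information (descriptors)   */
    uint64_t        cpu_time_used;   /* accounting information          */
};

int main(void) {
    struct pcb p = { .pid = 42, .state = P_READY, .priority = 3 };
    printf("pid %d, state %d, priority %d\n", p.pid, p.state, p.priority);
    return 0;
}
```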
Thread:
A thread is the smallest unit of execution within a process; a process can contain several threads that run concurrently.
Example: in Microsoft Word there are numerous functions that can be carried out, such as typing, printing, saving, redo, and undo; each of these can run as a separate thread while sharing the memory and resources allocated to the Word process.
1. Relationship with Processes: A process can have one or multiple threads. Threads
within the same process share the same memory space and resources, such as files and
open network connections. Each thread has its own program counter, stack, and
thread-specific data, but they can access shared data within the process.
2. Lightweight: Threads are often referred to as "lightweight processes" because they are
more lightweight and faster to create and manage compared to full-fledged processes.
Creating a new thread within a process is quicker and requires fewer resources than
creating a new process.
4. Shared Resources: Threads within a process share the same resources, such as
memory, files, and I/O devices. However, this shared access must be carefully
managed to avoid conflicts and ensure data integrity. Synchronization mechanisms,
like locks or semaphores, are commonly used to coordinate access to shared
resources.
5. Communication: Threads within the same process can communicate with each other
more easily than processes, as they can directly access shared memory. This allows
for efficient data sharing and coordination between threads within an application.
6. Benefits and Use Cases: Threads are commonly used in situations where parallelism
and concurrent execution are required, such as multi-threaded server applications,
multimedia processing, and computationally intensive tasks. By dividing a task into
multiple threads, it becomes possible to execute different parts of the task
simultaneously, potentially reducing execution time and improving responsiveness.
7. Thread States: Threads have different states, such as running, ready, waiting, or
terminated. The operating system's scheduler manages the state transitions and
decides which threads to execute based on scheduling algorithms and priorities.
It's important to note that threads are executed within the context of a process, while
processes are separate entities with their own memory space. Threads provide a way
to achieve concurrency and parallelism within a single process, allowing for more
efficient and responsive applications.
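As a concrete example of threads sharing one process's memory (in the spirit of the Microsoft Word example above), here is a short POSIX threads sketch in C; compile with -pthread. The "tasks" passed to each thread and the shared counter are invented for the illustration.
```c
#include <stdio.h>
#include <pthread.h>

static int shared_counter = 0;                       /* shared by all threads    */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    const char *task = arg;                          /* e.g. "save" or "print"   */
    pthread_mutex_lock(&lock);                       /* coordinate shared access */
    shared_counter++;
    printf("thread doing '%s', counter = %d\n", task, shared_counter);
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, "save document");
    pthread_create(&t2, NULL, worker, "print document");
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("both threads shared the same counter: %d\n", shared_counter);
    return 0;
}
```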
Recap
Multitasking VS Multiprogramming
OS Scheduling Concept
Job queue
Ready queue
Device queue
Scheduler
Long term
Short Term
Medium Term
OS scheduling is the activity of the process manager that handles the admission of processes into the CPU and their removal from it, based on a given strategy.
The same scheduler admits processes into and removes them from the CPU; that is to say, it handles both execution and termination.
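The queues and scheduler levels listed in the recap can be pictured as simple data structures: jobs enter a job queue (the long-term scheduler admits them), ready processes wait in a ready queue (the short-term scheduler dispatches them), and blocked processes sit in device queues. The C sketch below is a hypothetical model of the ready queue only, with FIFO order assumed for simplicity.
```c
#include <stdio.h>

#define MAX 8

/* A FIFO ready queue of process IDs; the short-term scheduler
   dispatches from the head and admitted processes join the tail. */
struct ready_queue {
    int pids[MAX];
    int head, tail, count;
};

static void admit(struct ready_queue *q, int pid) {
    if (q->count < MAX) {
        q->pids[q->tail] = pid;
        q->tail = (q->tail + 1) % MAX;
        q->count++;
    }
}

static int dispatch(struct ready_queue *q) {
    if (q->count == 0)
        return -1;                      /* nothing ready to run */
    int pid = q->pids[q->head];
    q->head = (q->head + 1) % MAX;
    q->count--;
    return pid;
}

int main(void) {
    struct ready_queue rq = {0};
    admit(&rq, 101);
    admit(&rq, 102);
    printf("dispatch pid %d\n", dispatch(&rq));   /* 101 runs first */
    printf("dispatch pid %d\n", dispatch(&rq));   /* then 102       */
    return 0;
}
```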
Multitasking VS Multiprogramming
1. Multitasking
2. Multiprogramming