Von Neumann Architecture
The Von Neumann architecture is a computer design model that uses a processing unit and a single storage structure, separate from the processor, to hold both instructions and data. It is named after the mathematician and early computer scientist John von Neumann. The term “stored-program computer” is generally used to mean a computer of this design, although, as modern computers are usually of this type, the term has fallen into disuse.
The earliest computing machines had fixed programs. Some very simple computers still use this
design, either for simplicity or training purposes. For example, a desk calculator is a fixed
program computer. It can do basic mathematics, but it cannot be used as a word processor or to
run any applications. To change the program of such a machine, you have to re-wire, re-structure,
or even re-design the machine.
The idea of the stored-program computer changed all that. By defining an instruction set architecture and expressing the computation as a series of instructions (the program), the machine becomes much more flexible. By treating those instructions in the same way as data, a stored-program machine can easily change the program, and can do so under program control. A stored-program design also lets programs modify themselves while running.
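As a rough illustration of this idea, here is a minimal Python sketch (the toy instruction format and the addresses are invented for the example): instructions and data live in the same memory list, so the program can be changed, or can change itself, simply by writing to memory.

    # A toy memory holding both instructions and data in one store.
    # Instructions are (opcode, operand) pairs; data cells are plain integers.
    memory = [
        ("LOAD", 4),    # 0: load the value at address 4
        ("ADD", 5),     # 1: add the value at address 5
        ("STORE", 3),   # 2: write the result to address 3
        0,              # 3: data cell for the result
        10,             # 4: data operand
        32,             # 5: data operand
    ]

    # Because instructions are stored like any other data, the program can be
    # altered under program control -- here it overwrites one of its own steps.
    memory[1] = ("ADD", 3)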
Memory Address Register:
The MAR is the register of a computer’s Control Unit that contains the address of the memory location to fetch from, or store to, in the computer storage.
Memory Data Register:
The MDR is the register of a computer’s Control Unit that contains the data to store in the
computer storage (e.g. RAM, ROM), or the data after a fetch from the computer storage.
Instruction Register:
In computing, an instruction register is the part of a CPU’s control unit that stores the
instruction currently being executed. In simple processors each instruction to be executed is
loaded into the instruction register which holds it while it is decoded, prepared and ultimately
executed, which can take several steps.
Program Counter:
The Program Counter is a register in a computer processor which indicates where the computer is
in its instruction sequence. The PC holds the address of the next instruction to be executed. The
PC is automatically incremented for each instruction cycle so that instructions are normally
retrieved sequentially from memory.
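How these four registers cooperate can be sketched as a fetch-decode-execute loop. The Python below is only an illustrative simulation; the three-instruction “machine” (LOAD, ADD, HALT) and the memory contents are invented for the example.

    # Toy fetch-decode-execute cycle showing PC, MAR, MDR and IR working together.
    memory = {
        0: ("LOAD", 10),    # put memory[10] into the accumulator
        1: ("ADD", 11),     # add memory[11] to the accumulator
        2: ("HALT", None),  # stop
        10: 7,              # data
        11: 35,             # data
    }

    pc = 0    # Program Counter: address of the next instruction
    acc = 0   # accumulator of this toy CPU

    while True:
        mar = pc               # MAR receives the address to fetch from
        mdr = memory[mar]      # MDR receives the word read from storage
        ir = mdr               # IR holds the instruction while it is decoded
        pc += 1                # PC is incremented for the next instruction cycle

        opcode, operand = ir   # decode
        if opcode == "LOAD":
            mar = operand      # address of the data word
            mdr = memory[mar]  # data arrives through the MDR
            acc = mdr
        elif opcode == "ADD":
            mar = operand
            mdr = memory[mar]
            acc += mdr
        elif opcode == "HALT":
            break

    print(acc)   # prints 42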
Operating Systems
An operating system (OS) is a computer program that manages the hardware and software
resources of a computer. At the foundation of all system software, the OS performs basic tasks
such as controlling and allocating memory, prioritizing system requests, controlling input and
output devices, facilitating networking, and managing files. It also may provide a graphical user
interface for higher level functions.
An Operating System is a program that controls the execution of application programs and acts as an interface between the user and the computer hardware. It provides an environment in which a user can execute programs in a convenient and efficient manner.
Functions of an OS:
1. File Management
The OS manages secondary memory. It handles the creation and deletion of files and folders, as well as the storing, retrieving, naming and protection of files (a sketch of how programs request such services appears after this list).
2. Device Management
The OS manages the peripheral devices. It handles input, output, and any failures of the devices.
3. Memory Management
The OS manages the primary memory (RAM). It ensures that sufficient memory is available for program execution.
4. Security Management
The OS manages the security of the data. It protects data from catastrophic failures and corruption, and prevents illegal access.
5. Process Management
The OS manages the processes, which are the sequences of instructions executed by the
CPU. It is responsible for the creation, deletion, suspension, resumption, scheduling and
synchronization of processes.
6. Providing a User Interface
The OS provides a friendly environment in which the user can use the system’s resources effectively.
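Application programs reach most of these services through system calls. The short Python sketch below (the file name and the printed message are made up; it uses only the standard os, sys and subprocess modules) touches the file-management and process-management interfaces that the OS exposes:

    import os
    import subprocess
    import sys

    # File management: ask the OS to create, rename and delete a file.
    with open("example.txt", "w") as f:        # "example.txt" is an invented name
        f.write("hello")
    os.rename("example.txt", "renamed.txt")
    os.remove("renamed.txt")

    # Process management: ask the OS to create a child process and wait for it.
    result = subprocess.run(
        [sys.executable, "-c", "print('child process finished')"],
        capture_output=True, text=True,
    )
    print(result.stdout.strip())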
Real-Time OS:
A real-time OS is required to run real-time applications. It may support multiple simultaneous tasks or a single task. A real-time application is one that must respond to certain inputs extremely quickly (within milliseconds or microseconds).
Ex: Medical diagnosis equipment, life support systems, machinery, scientific instruments, and industrial systems.
Single User/ Single Tasking OS:
These OSs allow a single user to perform only one task at a time. A new process can be executed only after the current process completes.
Ex: MS-DOS, Palm OS
Adv – Less memory space required
A powerful or expensive computer is not required
Disadv – Only one user at a time
Only one task at a time
Single User/ Multitasking OS:
These OSs allow a single user to perform multiple tasks at a time. Multitasking increases productivity, as multiple jobs can be accomplished in a shorter time.
Ex: Microsoft Windows, Macintosh OS
Adv – Simultaneous task execution
Increased productivity
Easy switching between multiple programs
Disadv – Only one user supported
Larger memory space required
Increased complexity
Multi-user/ Multitasking OS:
These OSs allow multiple users to use programs that run simultaneously on a single network server, termed a terminal server.
Ex: UNIX, VMS, MVS (mainframe OSs)
Adv – Multiple users can access the terminal server via terminal clients
Multiple programs can be run by multiple users
Changes need only be made on the server rather than on every individual system
Disadv – Increased memory space required
If the network connection to the server is broken, no work can be done.
Evolution of Operating Systems
Operating systems and computer architecture have had a great deal of influence on each other. Operating systems were developed to facilitate the use of the hardware. As operating systems were designed and used, it became obvious that changes in the design of the hardware could simplify them.
Simple Batch Systems
When punched cards were used for user jobs, processing of a job involved physical actions by the system operator, e.g., loading cards, pressing switches, etc. These actions wasted a lot of CPU time. To speed up processing, jobs with similar needs were batched together and run as a group. Batch processing was implemented by a component of batch processing systems, called the batch monitor, which resided in the computer’s memory. The remaining memory was used to process a user job.
In a simple batch system, users left jobs with the operator and came back the next day for the results. Users had no interaction with the computer during program execution. The delay between job submission and completion was considerable in a batch-processed system, as a number of programs were put in a batch and the entire batch had to be processed before the results were printed.
Multiprogramming
Even though disks are faster than card readers and printers, they are still two orders of magnitude slower than the CPU. It is thus useful to have several programs ready to run in main memory, waiting their turn on the CPU. When one program needs input/output from disk, it is suspended and another program whose data is already in main memory is taken up for execution. This is called multiprogramming.
Multiprogramming increases CPU utilization by organizing jobs such that the CPU always has a
job to execute. Multiprogramming is the first instance where the OS must make a decision for the
user. It ensures concurrent operation of the CPU and the I/O subsystem, and it ensures that the CPU is allocated to a program only while that program is not performing an I/O operation.
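The switching behaviour described above can be sketched in a few lines of Python. This is a toy simulation (the job names, burst lengths and the instant “completion” of I/O are all invented): whenever the running job requests I/O it is suspended, and the CPU is given to another job that is already in memory.

    from collections import deque

    def job(bursts):
        # A toy job: alternates CPU bursts with I/O requests.
        # 'bursts' is a list of CPU-step counts; after every burst except the
        # last one the job asks for I/O and must be suspended.
        for i, steps in enumerate(bursts):
            for _ in range(steps):
                yield "CPU"            # the job is computing
            if i < len(bursts) - 1:
                yield "IO"             # the job now waits for the disk

    # Two resident jobs with invented CPU-burst patterns.
    ready = deque([("A", job([2, 2])), ("B", job([3, 1]))])

    while ready:
        name, j = ready.popleft()      # pick a job whose data is in main memory
        for action in j:
            print(name, action)
            if action == "IO":
                ready.append((name, j))  # suspend it and switch to another job
                break
        # if the loop ends without a break, the job has finished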
Time Sharing System
Multiprogramming features were superimposed on batch processing to ensure good utilization of the CPU, but from the point of view of a user the service was poor, as the response time, i.e., the time elapsed between submitting a job and getting the results, was unacceptably high.
A time-sharing system provides a mechanism for concurrent execution, which requires sophisticated CPU scheduling schemes. To ensure orderly execution, the system must provide mechanisms for job synchronization and communication, and must ensure that jobs do not get stuck.
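A common way to keep response time low for every user is round-robin scheduling: each job receives the CPU for one short time slice and is then preempted. The Python sketch below is illustrative only; the quantum and the job lengths are invented.

    from collections import deque

    QUANTUM = 2                                   # invented time slice, in ticks
    ready = deque([("editor", 3), ("compiler", 5), ("mail", 1)])  # (name, ticks left)

    clock = 0
    while ready:
        name, remaining = ready.popleft()
        run = min(QUANTUM, remaining)             # run for at most one quantum
        clock += run
        remaining -= run
        print(f"t={clock:2d}  {name} ran {run} tick(s), {remaining} left")
        if remaining > 0:
            ready.append((name, remaining))       # preempted: back of the queue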
Distributed Systems
A recent trend in computer systems is to distribute computation among several processors. The processors in a distributed system may vary in size and function, and are referred to by a number of different names, such as sites, nodes, computers and so on. The processors communicate with one another over communication lines, such as an Ethernet.