Unit-I
Historical Development
Introduction to Computer
The word “computer” comes from the word “compute”, which means “to calculate”. Hence, a
computer can be considered a calculating device that can perform arithmetic operations at high
speed. A computer is often referred to as a “data processor” because it can store, process, and
retrieve data whenever desired (see figure 1).
[Input data flows into the computer (data processor), which produces information as output.]

Figure 1: A computer converts data into information
Characteristics of Computer
The main characteristics of a computer are given below:
a) Automatic: It carries out a job normally without any human intervention.
b) Speed: It can perform several billion, or even trillion, simple arithmetic operations per
second.
c) Accuracy: It performs every calculation with the same accuracy.
d) Diligence: It is free from monotony, tiredness and lack of concentration.
e) Versatility: It can perform a wide variety of tasks.
f) Storage: It has a large amount of memory to hold a very large amount of data.
g) Power of Remembrance: A computer can remember and recall data because of its
capacity to hold data.
h) No Feelings: A computer completely lacks emotions. It cannot make judgments based
on feelings.
History (Evolution) of Computer
The history of computers can be classified into different eras:
Prepared By: Er. Krishna Prd. Neupane, MSc. in Computer Engineering & B.E. in Electronics and Communication Engineering
a) First Era (Pre-Mechanical Era): In this era, simple tools were used for calculation.
Counting and record keeping were done using stones. Around 500 BC, a simple counting
tool called the Abacus was invented in China. It could perform all the arithmetic operations:
addition, subtraction, multiplication and division. The invention of logarithms by John
Napier allowed multiplication and division to be reduced to addition and subtraction; the
logarithm values were carved (cut) on ivory (bone) sticks, which are now called Napier's Bones.
b) Second Era (Mechanical Era): This period is called mechanical because the machines
were based on moving parts and had no logical control over their operation. In 1642, at the
age of 19, Blaise Pascal of France created a gear driven adding machine named the
Pascaline as an aid for his father, who was a tax collector. Later, in 1671, Leibniz of
Germany invented the first calculator for multiplication. It was similar to Pascal's
calculating machine but more reliable and accurate. In 1822, Charles Babbage proposed a
steam driven calculating machine the size of a room, which he called the Difference
Engine. This machine was able to compute tables of numbers. His next brainstorm he
called the Analytical Engine. This device had several features also found in modern
computers, including provisions for inputting data, storing information, performing
arithmetic calculations and printing out results. The Analytical Engine provided the
foundation for the modern computer. Therefore, Charles Babbage is known as the father of
the modern computer. In 1842, Lady Augusta Ada wrote the first program for Babbage's
engine, and so Ada earned her spot in history as the first computer programmer.
c) Third Era (Electro-Mechanical Era): The Mechanical Era ended when physics paved
the way for electrical innovation. The beads of the abacus were replaced by bits in the
modern computer. Essentially, a bit (binary digit) is a small electrical charge that
represents a 1 or a 0. Since both electrical and mechanical components were used, this
era is known as the electro-mechanical era. In 1890, the American scientist Herman
Hollerith developed the punched card, which was used as an input medium for computers.
In 1939, Atanasoff and Berry developed a computer called the ABC (Atanasoff-Berry
Computer). In this computer, 45 vacuum tubes were used for performing the internal logic
operations and capacitors were used for internal data storage. In 1944, the American
professor Howard Aiken designed the first fully automatic calculating machine, named
the Mark-I. This calculating machine operated under the control of given instructions.
d) Fourth Era (Electronic Era): This era is fully driven by electronic devices as
components of computers. The first electronic computer, ENIAC (Electronic Numerical
Integrator And Calculator), was built in 1946. It contained about 18,000 vacuum tubes,
occupied more than 1,500 square feet and weighed 30 tons. The ENIAC was programmed
by physically connecting electrical wires in the proper order, so it was very difficult to
detect errors and to change the program, and it could store only a limited amount of data.
Altering and entering a program into the ENIAC was very tedious. To overcome this
problem, John Von Neumann presented a new concept of the stored program, realized in
the EDVAC (Electronic Discrete Variable Automatic Computer). According to the Von
Neumann theory, data and program can be stored in the same memory of the computer,
and the computer performs the operations automatically. In 1951, UNIVAC-1 (Universal
Automatic Computer), often called the first commercially available digital computer, was
developed. In 1952, the IBM (International Business Machine) corporation introduced the
701 commercial computer, an improved version of the UNIVAC-1.
Generations of Computers
The term generation indicates the stages of evolution or development of computers based on the
type of technology used in the construction of computer over a period of time. The computers in
electronic era are divided into different generations:
a) First Generation (Vacuum Tubes): In this generation, vacuum tubes were used for
controlling electrical current flow between electrodes, and magnetic drums were used for
memory. Punched cards were used for input and printouts for output. Computers of this
generation used machine language and had limited storage capacity. Examples: ENIAC,
EDVAC, UNIVAC-I, IBM 650 etc.
b) Second Generation (Transistors): Transistor technology was used in this generation.
The transistor is smaller in size, faster and more reliable than the vacuum tube; therefore,
transistor technology replaced vacuum tube technology in computers. Computers of this
generation used assembly language and had larger storage capacity. Transistor technology
reduced both the size and the price of a computer. Examples: IBM 1401, IBM 7030,
GE 635 (General Electric 635) etc.
c) Third Generation (Integrated Circuits): In this generation, integrated circuits (ICs)
were used for the memory and processing units. Many transistors are placed on a single
IC, which drastically improved the speed and efficiency of computers. Small Scale
Integration (SSI, around 10 transistors on a single IC) and Medium Scale Integration
(MSI, around 100 transistors on a single IC) technologies were used for the memory and
processor units. The keyboard was used for input and the monitor for output. These
computers started to use operating systems (OS) and high level languages.
Examples: IBM 360, PDP-8 series (Programmable Data Processor-8 series) etc.
Prepared By: Er. Krishna Prd. Neupane, MSc. in Computer Engineering & B.E. in Electronics and Communication Engineering
Page 3
d) Fourth Generation (Microprocessors): In this generation, Large Scale Integration
(LSI, thousands of transistors on a single chip) and Very Large Scale Integration (VLSI,
hundreds of thousands of transistors on a single chip) technologies were used for
processor units. These developments led to the creation of microprocessors. These
computers support versatile inputs and outputs and also use fourth generation languages
(4GL). Examples: IBM PCs, Intel PCs, Macintosh PCs etc.
e) Fifth Generation (Artificial Intelligence): Computers based on Artificial Intelligence
(AI) are known as fifth generation computers. These computers use Ultra Large Scale
Integration (ULSI) technology and are hence capable of performing billions of operations
per second. AI uses expert systems: software packages that enable thinking and decision
making on the basis of rules set up by human specialists. It uses knowledge based
problem solving techniques and AI programming tools (PROLOG and LISP).
Types of Computers
Computers can be classified by their processing speed, the amount of data they can hold, their
purpose and their working principle. Depending upon their operating (working) principle, which
determines the form of input data and instructions that they accept and process, computers are
divided into three categories:
a) Analog Computers: The word analog means continuously varying in quantity. Analog
computers accept input data in continuous form, and the output is obtained in the form of
a graph. Voltage, current, sound etc. are examples of analog signals; these values increase
or decrease continuously. Analog computers have small memory and fewer functions.
They are very fast in processing, but their output is not very accurate. Examples: the
Polish analog computer AKAT-1, the Differential Analyzer etc.
b) Digital Computers: The word digital means discrete (individually separate and distinct).
It refers to the binary system, which consists of only two digits, i.e. 0 and 1. Digital data
consists of binary data represented by OFF (low or 0) and ON (high or 1) electric pulses.
In digital computers, quantities are counted rather than measured. Accurate results and
high speed data processing are the main features of digital computers. Examples:
calculators, personal computers, digital watches etc.
c) Hybrid Computers: The hybrid computer combines the best features of both analog and
digital computers. In these computers, users can process both continuous (analog) and
discrete (digital) data. These are special purpose computers for scientific fields and are
very fast and accurate. Examples: computers in ICUs (Intensive Care Units), computers
used in missiles etc.
Unit-II
Introduction to Computer Systems
Fundamental concepts of Computers
A computer is an electronic machine that is used to solve different kinds of problems.
Generally, a computer system is divided into two subsystems:
i. Computer Software: A set of instructions given to the computer is known as
computer software. The software tells the computer what to do and how to
perform the given task for the user. Although the range of software is vast and
varied, most software can be divided into two major categories:
[Users interact with application software, which runs on top of system software, which in turn controls the hardware.]

Figure: Software and Hardware Relationship

System Software: It is a term referring to any computer software whose
purpose is to help run the computer system. Most system software is
responsible for directly controlling individual hardware components of the
computer system. Specific kinds of system software include the operating
system (OS), device drivers and utility programs (formatting a disk,
removing bugs from a program). Examples: DOS (Disk Operating System),
Windows, Linux etc.
Application Software: It is a set of one or more programs which solves a
specific problem. It includes programs that do real work for users. Examples:
Word Processors, Spreadsheets etc.
ii. Computer Hardware: The physical parts of a computer are known as computer
hardware. We can touch, see and feel the hardware. The hardware components are
keyboard, mouse, hard disk, CPU, printer etc.
Firmware
Firmware is a type of software that provides control, monitoring and data manipulation of
engineered products and systems. Typical examples of devices containing firmware are
embedded systems (such as traffic lights, consumer appliances, remote controls and
digital watches).
Block Diagram of Digital Computer
A digital computer is capable of performing various tasks and can be illustrated with the
help of following block diagram:
[The input unit feeds data to the CPU, which consists of the control unit, the arithmetic
logic unit (ALU) and the register array (R); the CPU exchanges data and instructions with
the memory unit (primary and secondary memory) and sends results to the output unit.]

Figure: Block diagram of a digital computer
The different components of a typical digital computer and their major functions are
described below.
i. Input Unit: The input unit is a device that is used to feed the data and instructions
into the computer. Keyboard and mouse are commonly used as input devices.
ii. Output Unit: An output unit provides the information and results of a
computation to the outside world. The output device gives the desired result to the
user. Examples: Monitor, Printer etc.
iii. Central Processing Unit (CPU): The CPU is a major component of any
computer. It performs all the processing related activities. It receives data and
instructions from the outside world, stores them temporarily, processes the data
as per the instructions and sends the result to the outside world as information.
The CPU is a combination of three components: the Arithmetic and Logic Unit
(ALU), the Control Unit (CU) and the Register Array (R).
a. Arithmetic Logic Unit (ALU): It contains electronic circuits necessary to
perform all the arithmetic and logical operations. The arithmetic
operations include addition, subtraction, multiplication, division etc.
Similarly, logical operations include logical AND, OR, complement, Shift
etc.
b. Control Unit (CU): It controls all the other units in the computer. It controls
the flow of data and instructions between memory (or I/O) and the ALU. The
main tasks of the CU are to fetch, decode and execute instructions, and to
control and synchronize the working of the other units.
c. Register Array (R): The CPU contains special purpose temporary storage
locations called registers. Registers quickly accept, store and transfer data
and instructions. A number of registers, such as the Accumulator, Stack
Pointer, Address Register etc., are contained in the CPU and form the
register array.
iv. Memory Unit: It is the location where data and instructions are stored. Memory
is the main storage unit in a computer. Memory can be divided into two parts:
a. Primary Memory: It is the main memory of the computer. It stores and
provides information very quickly. Example: RAM, ROM etc.
b. Secondary Memory: It has very large storage capacity and used as a
backup memory for future reference. Example: Hard Disk, Floppy Disk
etc.
Memory
It is one of the major components of computer system. It can be internal or external. CPU
registers, cache memory, buffer, RAM, ROM, Hard Disk, Floppy Disk, Magnetic tape etc. are all
memory devices used for storing data and instructions. Memory can be divided into two parts:
i. Primary Memory or Main Memory: Primary memory can be volatile or non-volatile.
RAM is volatile, which means it loses all its data when the computer is turned off.
ROM holds its data even when the computer is turned off and hence is non-volatile.
a. RAM (Random Access Memory): It is called “random access” because any
data in the memory can be accessed directly, in any order. RAM is also known
as read/write memory. RAM can be further divided into DRAM and SRAM:
DRAM (Dynamic RAM): It is the most common type of RAM used to
store data and instructions in a computer. Each memory cell consists of a
transistor and capacitor pair, which requires constant refreshing.
SRAM (Static RAM): It uses multiple transistors (as flip-flops), typically
four to six per memory cell, but has no capacitor and therefore needs no
refreshing circuits.
b. ROM (Read Only Memory): The ROM contains instructions or programs that
are permanently stored on the chip by the manufacturer. The instructions
stored in ROM can only be read; they cannot be modified. The programs
stored in a ROM are called firmware. The ROM contains the Basic
Input/Output System (BIOS), a set of instructions that is automatically
activated when the computer is turned on. ROM can be further divided into
PROM, EPROM and EEPROM:
PROM (Programmable ROM): PROM is a blank chip on which the user
can write his own program instructions and data, but only once; after that
they cannot be changed.
EPROM (Erasable PROM): It is similar to the PROM, but the program
can be erased and the chip reprogrammed by exposing it to high intensity
ultraviolet light for 10 to 20 minutes.
EEPROM (Electrically EPROM): It is a special type of PROM that can
be erased electrically, byte by byte, by applying a relatively high voltage,
within milliseconds, and can be reprogrammed up to about 10,000 times.
ii. Secondary Memory or Auxiliary or Backup Memory: Secondary memory is
also known as auxiliary memory or simply storage. It is non-volatile and can
store data on a long term basis for future use. In magnetic storage, the presence
of a magnetic field represents a ‘1’ bit and its absence represents a ‘0’ bit. Some
of the main secondary memory devices are:
a. Magnetic Storage: It refers to storing information or data on magnetized
material. The surface of a magnetic storage medium is coated with millions of
tiny iron particles so that data can be stored on them. There are two types of
magnetic storage:
i. Magnetic Disk: Magnetic disks are commonly used in computers as
secondary memory. A magnetic disk is a circular metal or plastic disk
coated on both sides with magnetic recording material (ferrous oxide).
The data is recorded on both sides of the disk as magnetic fields.
There are two fundamental types of magnetic disks:
Floppy Disk: The floppy disk is a thin and flexible plastic disk
coated on both sides with magnetic recording material (ferrous or
iron oxide).
Hard Disk: It is the most commonly used storage device in personal
computers and laptops. It includes the disk platters and the motor
that rotates them; the hard disk and its drive form a single unit.
ii. Magnetic Tape: It is the oldest and a very popular storage medium used
to store large amounts of data and instructions permanently. The
magnetic tape is a plastic ribbon, 0.25 inch to 1 inch wide, coated on one
side with magnetic material (ferrous oxide or iron oxide).
b. Optical Storage: Today, the most widely used and reliable storage devices are
the optical storages. These devices use laser technology. The most popular optical
storage devices are CD-ROM, DVD-ROM etc.
c. External Storage: The storage devices which can be connected to the system
externally are called external storage devices. Some of the commonly used
external devices are: Flash Memory, Portable Hard Disk etc.
Cache Memory
Cache is a relatively small but very fast memory located between the CPU and the main
memory. The main objective of cache memory is to reduce the average cost (time or energy)
of accessing data from the main memory. There are different levels of cache, for example
the L1, L2 and L3 caches.
Buffer
A buffer is a memory area used to maintain an uninterrupted flow of information, especially
when a faster device transfers data to a slower device.
Peripheral Devices
A peripheral device is generally defined as any auxiliary device such as a computer mouse or
keyboard that connects to and works with the computer in some way. Peripheral devices provide
the means of communication between the computer and the user. These devices are also called
input-output (I/O) devices. There are three different types of peripherals:
a. Input Devices: These devices interact with or send data from the user to the computer.
For examples: mice, keyboards, digitizer, joystick, electronic pen etc.
b. Output Devices: These devices provide output to the user from the computer. For
examples: monitors, printers, projector, plotter etc.
c. Input/Output Devices: These devices perform both input and output functions. For
examples: touchscreen, headset, network card etc.
Unit-III
Programming Preliminaries
Introduction to Program
Basic commands that instruct the computer system to do something are called
instructions. An organized list of instructions that causes the computer to behave in a
predetermined manner is a program.
Introduction to Programming Languages
Just as we use natural languages such as Nepali and English to communicate with each
other, we use a language to communicate with a computer. Such a language, which
instructs the computer to perform a user specified task, is called a programming
language. Therefore, a programming language is a standardized communication
technique for describing instructions for a computer.
Programming languages are classified mainly into two categories on the basis of how
instructions are created:
a. Low Level Language: These languages are much closer to the hardware. A program
cannot be run on different hardware; it is specific to the hardware it was written for. Low
level languages are divided into two types:
i. Machine Language: It is the lowest level language because it uses strings of 0's
and 1's, in the form of voltages, to give instructions and data to the computer.
No translator is needed to compile or assemble this language. However, machine
language is very difficult to learn, and debugging errors is a tough task. For
example: 1000001 is such a sequence of 1's and 0's.
ii. Assembly Language: An assembly language uses short English abbreviations
called mnemonics rather than sequences of 0's and 1's. It is easier to understand
than machine language. For example: ADD A, B adds the values of A and B and
stores the result in A. To translate assembly language into machine code,
translation software called an assembler is required.
b. High Level Language: A language is called a high level language if its syntax is close
to human language. Most high level languages use English like statements, which makes
programming easier. The term is sometimes used to refer to all languages above
assembly language. They include:
i. Procedural Oriented Languages: These languages are very close to human
languages. Because of their English like syntax, it takes much less time to write
code and debug errors. They are called procedural because the procedures (or
functions) of the program, i.e. the instructions of the language, are processed
step by step. Examples: FORTRAN, C, C++ etc.
ii. Problem Oriented Languages: These languages are closer to human languages
than procedural languages, making the computer process information much as a
human would. These languages are especially focused on database management
systems. Examples: SQL, ORACLE etc.
iii. Natural Languages: Natural languages are those which we use in our daily
activities. The main objectives of natural language programming are to make the
connection between human and computer more natural and to make the machine
smarter. Examples: PROLOG (Programming Logic) and LISP (List Processing).
Generation of Programming Language
Generation of programming language is based on increasing power of programming styles.
There are basically five programming language generations:
1. First Generation Language (1GL): Machine Language
2. Second Generation Language (2GL): Assembly Language
3. Third Generation Language (3GL): Procedural Oriented Language
4. Fourth Generation Language (4GL): Problem Oriented Language
5. Fifth Generation Language (5GL): Natural Language
Program Design Methodology
The development of a program is not as straightforward as it seems at first sight. The programmer
must have a detailed knowledge of the programming language and must understand the facts,
problems, objectives and users of the program. The program designing (or development) process
follows almost the same steps as any problem solving task. There are five major steps in
program design methodology. They are:
1. Defining the problem: It involves problem analysis, specifying the input, process and
output required.
2. Planning the solution: It is a structure or detail design phase. Programmer plans the
solution of the given problem using standard program development tools.
3. Coding the program: On the basis of planned solution, programmer codes the computer
program on the computer.
4. Testing the program: This phase covers the testing and modification of solution.
5. Documenting the program: In this phase, detail documentation of the solution is
presented.
The commonly used program development tools are algorithm, flowchart, and pseudo code.
One or more of these tools can be used while designing the program.
Stages of Software Development
Software development is a process of splitting software development work into distinct phases
(or stages) containing activities with the intent of better planning and management. It is often
considered a subset of the Systems Development Life Cycle (SDLC). Basic software
development process can be described by waterfall model. It includes following stages:
1. Requirements Analysis: Information is collected from end users and stakeholders and
analyzed to decide which ideas and requirements will be put into action in the software.
In this phase a detailed blueprint of the subsequent phases is developed.
2. Software Design: In this phase the design of the system is prepared. The system analyst
prepares the logical design, and the designer designs both the back end and the front end.
3. Implementation: It is the realization and execution of a plan, design, specification or
policy. It turns the design into a real, working system.
4. Testing and Verification: Software testing involves the execution of a software
component or system component to evaluate one or more properties of interest.
5. Deployment: Once the functional and non-functional testing is done, the product is
deployed in the customer environment or released into the market.
6. Maintenance: It is the modification of a software product after delivery to correct faults,
to improve performance or other attributes. A common perception of maintenance is that
it merely involves fixing defects.
The waterfall model is a sequential development approach, in which development is seen as
flowing steadily downwards (like a waterfall), through several phases as depicted below:
Analysis → Design → Implementation → Testing & Verification → Deployment → Maintenance

Figure: Waterfall software development model
Text Editor
A text editor is a type of program used for editing plain text files. Such programs are sometimes
known as "notepad" software. Text editors are provided with operating systems and software
development packages, and can be used to change configuration files, documentation files and
programming language source code.
Assembler
A computer cannot understand a program written in any language other than its machine
language. Programs written in other languages must be translated into machine language.
Such translation is performed with the help of software. A program which translates an assembly
language program into a machine language program is called an assembler.
Compiler
It is a program which translates a high level language program into a machine language program.
A compiler is more intelligent than an assembler. It checks all kinds of limits, ranges, errors etc.
But its program run time is more and occupies a larger part of the memory. It has slow speed.
Because a compiler goes through the entire program and then translates the entire program into
machine codes.
Interpreter
An interpreter is a program which translates the statements of a program into machine code,
one statement at a time. It reads one statement of the program, translates it and executes it;
then it reads the next statement, translates it and executes it, and proceeds in this way until
all the statements are translated and executed. A compiler, on the other hand, goes through
the entire program and then translates it all into machine code; compiled code therefore runs
5 to 25 times faster than interpreted code.
The machine code produced by a compiler is saved permanently for future use; the machine
code produced by an interpreter is not saved. An interpreter is a small program compared to a
compiler. It occupies less memory space, so it can be used in a smaller system which has
limited memory.
Linker
In high level languages, some built in header files or libraries are stored. These libraries are
predefined and these contain basic functions which are essential for executing the program.
These functions are linked to the libraries by a program called Linker.
Source File → Compiler → Object File → Linker (+ Library) → Executable Program

Figure: Compilation Process
Algorithm
In simple words an algorithm is a step-by-step procedure for solving a problem. Algorithms can
be expressed in any language, from natural languages like English to programming languages
like C. We use algorithms every day. Designing an algorithm is one of the principal challenges in
programming. An algorithm must always terminate after a finite number of steps.
Important characteristics of algorithms are:
Finiteness
Definiteness
Language independent
Input
Output
Flowchart
Instead of directly converting an algorithm into a program, an intermediate step called a
flowchart is often used before the program is developed. The pictorial representation of an
algorithm is called a flowchart. Some of the symbols used in flowcharts are:
Oval: Start or Stop
Circle: Connector
Rectangle: Process
Parallelogram: Input or Output
Diamond: Decision
Arrow: Flow

Figure: Flowchart symbols
Examples of Algorithm and Flowchart
1) Write an algorithm and draw a flowchart to add two numbers given by the user.
Answer:
Algorithm:
Step 1: Start
Step 2: Declare variables a, b and sum
Step 3: Read the values of a and b
Step 4: Add a and b and assign the result to sum; sum ← a + b
Step 5: Print sum
Step 6: Stop

Flowchart:
[Start → Declare variables a, b and sum → Read a and b → sum ← a + b → Print sum → End]
2) Write an algorithm and draw a flowchart to find whether a number given by the user is odd or even.
Answer:
Note: Odd or even is determined by dividing the given number by 2. If the remainder is
zero the number is even, and if the remainder is 1 the number is odd.
Algorithm:
Step 1: Start
Step 2: Declare variables num and rem
Step 3: Read num
Step 4: Calculate the remainder rem = num % 2
Step 5: If (rem == 0) then print that the number is even and go to step 6,
else print that the number is odd and go to step 6
Step 6: Stop

Flowchart:
[Start → Declare variables num and rem → Read num → rem = num % 2 →
Is rem == 0? Yes: Print Even; No: Print Odd → End]
3) Develop an algorithm and draw flowchart for finding the sum of the first 100 natural
numbers.
Answer:
Note: Here first 100 natural numbers are added. That means, sum=1+2+3+…. +100
Algorithm:
Step 1: Start
Step 2: Declare variables i and sum
Step 3: Initialize i = 1 and sum = 0
Step 4: Repeat steps 5 and 6 while i <= 100
Step 5: Calculate sum = sum + i
Step 6: Increment i by i = i + 1
Step 7: Print sum
Step 8: Stop

Flowchart:
[Start → Declare variables i and sum → Initialize i = 1 and sum = 0 →
Is i <= 100? Yes: sum = sum + i, i = i + 1, repeat the test; No: Print sum → End]
Pseudo Code
Pseudo means not genuine, false or fake. Therefore, pseudo code is not true program
code, but code that gives some reflection of, and a guideline for, the actual code. Pseudo code
is false code only in the sense that it is not the programming language code used to direct
the actions of the computer. Pseudo code is a natural language construction modeled to look
like statements available in many programming languages. Generally, pseudo code is a
mixture of structured English and code. Structured English means writing procedures in
English using programming language structures (like IF, ELSE, DO, WHILE etc.).
For example:
IF coffee
    THEN we drink coffee
ELSE
    we drink tea
Good Luck