
NOTES

BCA-104: LOGICAL ORGANIZATION OF COMPUTER-I

External Marks: 80 Internal Marks: 20

UNIT - I Information Representation: Number Systems, Binary Arithmetic, Fixed-point and Floating
point representation of numbers, BCD Codes, Error detecting and correcting codes, Character
Representation – ASCII, EBCDIC, Unicode

Introduction:

Computer Logical Organization refers to the level of abstraction above the digital logic level, but below
the operating system level. At this level, the major components are functional units or subsystems that
correspond to specific pieces of hardware built from the lower level building blocks.

In the modern world of electronics, the term digital is generally associated with computers because it is derived from the way computers perform operations, by counting digits. For many years, digital electronics was applied only in computer systems, but nowadays it is used in many other applications. The following are some of the areas in which digital
electronics is heavily used.

 Industrial process control


 Military system
 Television
 Communication system
 Medical equipment
 Radar
 Navigation

Signal
A signal can be defined as a physical quantity that carries some information. It is a function of one or
more independent variables. Signals are of two types.

 Analog Signal
 Digital Signal

Analog Signal
An analog signal is defined as a signal having continuous values; it can take an infinite number of
different values. Most of the quantities observed in nature are analog.
Examples of analog signals include the following.

 Temperature
 Pressure
 Distance
 Sound
 Voltage
 Current
 Power

Graphical representation of Analog Signal (Temperature)

The circuits that process analog signals are called analog circuits or analog systems. Examples of
analog systems include the following.

 Filter
 Amplifiers
 Television receiver
 Motor speed controller

Disadvantages of Analog Systems

 Less accuracy
 Less versatility
 Greater effect of noise
 More distortion
 Greater effect of weather

Digital Signal
A digital signal is defined as a signal that has only a finite number of distinct values; digital signals
are not continuous. In a digital electronic calculator, for example, the input is given with the help of
switches and is converted into an electrical signal that has two discrete values or levels. One of
these is called the low level and the other the high level, and the signal is always at one of these two
levels. This type of signal is called a digital signal. Examples of digital signals include the following.
 Binary Signal
 Octal Signal
 Hexadecimal Signal

Graphical representation of the Digital Signal (Binary)

The circuits that process digital signals are called digital systems or digital circuits. Examples of
digital systems include the following.

 Registers
 Flip-flop
 Counters
 Microprocessors

Advantages of Digital Systems

 More accuracy
 More versatility
 Less distortion
 Easier communication
 Easy storage of information

Comparison of Analog and Digital Signal

1. An analog signal has an infinite number of possible values, whereas a digital signal has a finite number of values.

2. An analog signal is continuous in nature, whereas a digital signal is discrete in nature.

3. Analog signals are generated by transducers and signal generators, whereas digital signals are generated by analog-to-digital (A/D) converters.

4. Example of an analog signal: sine wave, triangular wave. Example of a digital signal: binary signal.

Digital Number System

A digital system can understand only a positional number system, in which there are a few symbols called
digits and these symbols represent different values depending on the position they occupy in the number.
The value of each digit in a number can be determined using:
 The digit
 The position of the digit in the number
 The base of the number system (where the base is defined as the total number of digits available in
the number system)

Number systems are classified into:

(a) Positional Number System

(b) Non-Positional Number System

(A) Positional Number System


Positional Number System uses digits for the representation. Positional Number System is further
classified into:

(1) Binary Number System

Binary Number System is a base 2 number system having only 2 digits 0 and 1.

(2) Octal Number System

Octal Number System is a base 8 number system having eight digits from 0 to 7.

(3) Decimal Number System

Decimal Number System is a base 10 number system having ten digits from 0 to 9.

(4) Hexadecimal Number System

Hexadecimal Number System is a base 16 number system having sixteen digits, 0 to F. The digits 0 to 9 are
represented as in the decimal number system, and the values 10 to 15 are represented by the letters A to F.

(B) Non-Positional Number System

A non-positional number system does not use positional digits for representation; instead, it uses
symbols whose values do not depend on their position.

The Roman numeral system is a good example of a non-positional number system.
Binary Number System
The binary number system uses only two digits: 0 and 1. The numbers in this system have a base of 2.
Digits 0 and 1 are called bits and 8 bits together make a byte. The data in computers is stored in terms of
bits and bytes. The binary number system does not deal with other digits such as 2, 3, 4, 5 and so on. For
example, 10001₂, 111101₂ and 1010101₂ are numbers in the binary number system.

Octal Number System


The octal number system uses eight digits, 0 to 7, with a base of 8. The advantage of this
system is that it has fewer digits than several other systems, hence there are fewer
computational errors. The digits 8 and 9 are not included in the octal number system. Like binary,
the octal number system has been used in minicomputers, but with digits from 0 to 7. For example, 35₈, 23₈ and
141₈ are numbers in the octal number system.

Decimal Number System


The decimal number system uses ten digits, 0 to 9, with 10 as the base. The
decimal number system is the system that we generally use to represent numbers in real life. If a
number is written without a base, its base is taken to be 10. For example, 723₁₀, 32₁₀ and 4257₁₀ are
numbers in the decimal number system.

Hexadecimal Number System


The hexadecimal number system uses sixteen symbols: the digits 0 to 9 and the letters A, B, C, D, E and F, with
16 as the base. Here, A to F of the hexadecimal system represent the decimal values 10 to 15
respectively. This system is used in computers to shorten the long strings of the
binary system. For example, 7B3₁₆, 6F₁₆ and 4B2A₁₆ are numbers in the hexadecimal
number system.
Conversion Rules of Number Systems
A number can be converted from one number system to another number system. Like binary numbers can
be converted to octal numbers and vice versa, octal numbers can be converted to decimal numbers and
vice versa and so on. Let us see the steps required in converting number systems.

Conversion of Binary / Octal / Hexadecimal Number Systems to Decimal Number System

To convert a number from the binary/octal/hexadecimal system to the decimal system, we use the
following steps. The steps are shown with an example of a number in the binary system.

Example: Convert 100111₂ into the decimal system.

Solution:

Step 1: Identify the base of the given number. Here, the base of 100111₂ is 2.

Step 2: Multiply each digit of the given number, starting from the rightmost digit, with increasing powers of
the base. The exponents should start with 0 and increase by 1 every time as we move from right to left.
Since the base is 2 here, we multiply the digits of the given number by 2^0, 2^1, 2^2, and so on from right to
left.
Step 3: We just simplify each of the above products and add them.

The resulting sum is the decimal equivalent of the given number. The whole process can be written
compactly as follows.

100111₂ = (1×2^5) + (0×2^4) + (0×2^3) + (1×2^2) + (1×2^1) + (1×2^0)

= (1×32) + (0×16) + (0×8) + (1×4) + (1×2) + (1×1)

= 32 + 0 + 0 + 4 + 2 + 1

= 39
Thus, 100111₂ = 39₁₀.
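The same positional expansion can be sketched in Python; the function name to_decimal and its digit-string interface are illustrative choices, not part of the notes.

def to_decimal(digits, base):
    """Multiply each digit by increasing powers of the base, starting
    from the rightmost digit (exponent 0), and add the products."""
    value = 0
    for position, digit in enumerate(reversed(digits)):
        value += int(digit, base) * (base ** position)
    return value

print(to_decimal("100111", 2))   # 39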

Conversion of Decimal Number System to Binary / Octal / Hexadecimal Number System

To convert a number from the decimal number system to the binary/octal/hexadecimal number system, we
use the following steps. The steps are shown with an example of converting a number from the decimal
system to the octal system.

Example: Convert 4320₁₀ into the octal system.

Solution:

Step 1: Identify the base of the required number. Since we have to convert the given number into the
octal system, the base of the required number is 8.

Step 2: Divide the given number by the base of the required number and note down the quotient and the
remainder in the quotient-remainder form. Repeat this process (dividing the quotient again by the base)
until we get the quotient less than the base.

Step 3: The equivalent number in the octal number system is obtained by reading the last quotient and all
the remainders from bottom to top.
Therefore, 4320₁₀ = 10340₈.
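A minimal Python sketch of this repeated-division method (the name from_decimal is illustrative):

def from_decimal(number, base):
    """Repeatedly divide by the base; the remainders, read from the last
    division back to the first, form the converted number."""
    digits = "0123456789ABCDEF"
    result = ""
    while number > 0:
        number, remainder = divmod(number, base)
        result = digits[remainder] + result
    return result if result else "0"

print(from_decimal(4320, 8))   # 10340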

Conversion from One Number System to Another Number System


To convert a number from one of the binary/octal/hexadecimal systems to one of the other systems, we
first convert it into the decimal system, and then we convert it to the required systems by using the above-
mentioned processes.

Example: Convert 1010111100₂ to the hexadecimal system.

Solution:

Step 1: Convert this number to the decimal number system as explained in the above process.
Thus, 1010111100₂ = 700₁₀ → (1).

Step 2: Convert the above number (which is in the decimal system), into the required number system.

Here, we have to convert 700₁₀ into the hexadecimal system using the above-mentioned process. It should
be noted that in the hexadecimal system, the values 11 and 12 are written as B and C respectively.

Thus, 700₁₀ = 2BC₁₆ → (2).

From equations (1) and (2), 1010111100₂ = 2BC₁₆.
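Assuming the two helper functions sketched earlier (to_decimal and from_decimal), the two-step conversion looks like this:

decimal_value = to_decimal("1010111100", 2)    # step 1: binary -> decimal, gives 700
hex_value = from_decimal(decimal_value, 16)    # step 2: decimal -> hexadecimal, gives '2BC'
print(decimal_value, hex_value)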


Number Systems Examples
 Example 1: Convert 300₁₀ into the binary system (base 2).

Solution: 300₁₀ is in the decimal system. We divide 300 by 2 and note down the quotient and the
remainder. We repeat this process for every quotient until we get a quotient that is less than 2.

The equivalent number in the binary system is obtained by reading all the remainders and just the last
quotient from bottom to top as shown above.

Thus, 300₁₀ = 100101100₂.

 Example 2: Convert 5BC₁₆ into the decimal system.

Solution: 5BC₁₆ is in the hexadecimal system. We know that B = 11 and C = 12 in the hexadecimal
system, so we get the equivalent number in the decimal system using the following process:
Thus, 5BC₁₆ = 1468₁₀.

 Example 3: Convert 144₈ into the hexadecimal system.

Solution: The base of 144₈ is 8. First, we convert this number into the decimal system as follows:

Thus, 144₈ = 100₁₀ → (1). Now we convert this into the hexadecimal system as follows:
Thus, 100₁₀ = 64₁₆ → (2).

From equations (1) and (2), we can conclude that 144₈ = 64₁₆.
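The three worked examples can be checked with Python's built-in conversions:

print(bin(300))            # 0b100101100 -> 300 (decimal) = 100101100 (binary)
print(int("5BC", 16))      # 1468        -> 5BC (hexadecimal) = 1468 (decimal)
print(hex(int("144", 8)))  # 0x64        -> 144 (octal) = 64 (hexadecimal)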

Complement Arithmetic
Complements are used in digital computers to simplify the subtraction operation and for logical
manipulation. For each radix-r system (where the radix r is the base of the number system) there are two
types of complements.

1. Radix Complement: the radix complement is referred to as the r's complement.

2. Diminished Radix Complement: the diminished radix complement is referred to as the (r-1)'s complement.

Binary system complements


As the binary system has base r = 2, the two types of complements for the binary system are the 2's
complement and the 1's complement.

1's complement

The 1's complement of a number is found by changing all 1's to 0's and all 0's to 1's. This is called
taking the complement, or 1's complement. An example of the 1's complement is as follows.
2's complement

The 2's complement of a binary number is obtained by adding 1 to the least significant bit (LSB) of the 1's
complement of the number.
2's complement = 1's complement + 1
Example of 2's Complement is as follows.
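Both complements can be computed on bit strings with a short Python sketch (the function names are illustrative):

def ones_complement(bits):
    """Change all 1's to 0's and all 0's to 1's."""
    return "".join("1" if b == "0" else "0" for b in bits)

def twos_complement(bits):
    """2's complement = 1's complement + 1, kept to the same bit width."""
    width = len(bits)
    value = (int(ones_complement(bits), 2) + 1) % (2 ** width)
    return format(value, "0" + str(width) + "b")

print(ones_complement("1011000"))   # 0100111
print(twos_complement("1011000"))   # 0101000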

Binary Arithmetic
Binary arithmetic is an essential part of all digital computers and many other digital systems.

Binary Addition
It is the key to binary subtraction, multiplication and division. There are four rules of binary addition.

In the fourth case, the binary addition produces a sum of two (1 + 1 = 10), i.e. 0 is written in the given column and
a 1 is carried over to the next column.

Example − Addition
Binary Subtraction
Subtraction and borrow are two words that will be used very frequently for binary subtraction.
There are four rules of binary subtraction.

Example − Subtraction

Binary Multiplication
Binary multiplication is similar to decimal multiplication. It is simpler than decimal multiplication
because only 0s and 1s are involved. There are four rules of binary multiplication.

Example − Multiplication
Binary Division
Binary division is similar to decimal division and follows the long-division procedure.

Example − Division
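The four operations can be tried quickly in Python by converting the bit strings to integers and formatting the results back in binary; the operand values below are arbitrary examples:

a, b = "1101", "101"            # 13 and 5 in decimal
x, y = int(a, 2), int(b, 2)

print(bin(x + y))               # 0b10010   (13 + 5 = 18)
print(bin(x - y))               # 0b1000    (13 - 5 = 8)
print(bin(x * y))               # 0b1000001 (13 * 5 = 65)
print(bin(x // y), bin(x % y))  # 0b10 0b11 (quotient 2, remainder 3)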

Fixed-point and Floating point representation of numbers


What is a Fixed Point
There are three sections in fixed point representation. They are the singed field, integer field, and the
fractional field. Assume a number such as 1000.100. The 1 in the leftmost end is the signed field. It
signifies whether the number is negative or positive. After that, the 000 is the integer field. The ‘.’ is the
radix or decimal point. The number after the radix point is the fractional field.

In fixed point representation, the number of digits before and after the radix cannot be changed. Assume a
number like + 20.05. Considering two digits in front of the radix and two digits after the radix, the
minimum number that can be represented is -99.99 and the maximum number is +99.99. In this scenario,
a number such as 20.223 cannot be represented as it has three digits after the radix point. As an
alternative, the number can be represented as 20.22. This is called precision reduction. It is not the actual
value, just an approximation.

Overall, fixed point representation offers better performance. On the other hand, it can be
used to represent only a limited range of values.
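A fixed-point value with two digits after the radix point can be modelled by storing a scaled integer; this is only a sketch of the idea, and the names SCALE, to_fixed and from_fixed are illustrative:

SCALE = 100                       # two decimal digits after the radix point

def to_fixed(x):
    """Round a real number to the nearest representable fixed-point value."""
    return round(x * SCALE)

def from_fixed(f):
    return f / SCALE

stored = to_fixed(20.223)         # stored as the integer 2022
print(from_fixed(stored))         # 20.22 -> precision reduction, as described above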

What is Floating Point


Floating point representation can be used to overcome the limitations of fixed point representation.
Therefore, most modern computers use floating point representation to store fractional numbers in
memory. It can represent very large and very small numbers precisely. It is based on the scientific
notation.

Figure 2: Scientific Notation

A number in floating point representation is as follows.

± mantissa × 10^exponent

The sign indicates whether the number is negative or positive. The mantissa is the significand, or
fraction, and 10 is the base of the decimal system.

For example, 22.33 can be represented as 2.233 × 10^1, 0.2233 × 10^2, 0.02233 × 10^3, etc. They all represent
the same number; floating point representation is not always unique.
Similarly, floating point representation can be applied to binary numbers. The formula is as follows, with 2 as the base.

± mantissa × 2^exponent
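Python's math.frexp splits a number into exactly this mantissa-and-exponent form with base 2 (it returns a mantissa m with 0.5 <= |m| < 1 such that x = m * 2^e), which makes the idea easy to see:

import math

x = 22.33
m, e = math.frexp(x)    # decompose x as m * 2**e
print(m, e)             # 0.6978125 5
print(m * 2 ** e)       # 22.33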


Difference between Fixed Point and Floating Point
Definition
Fixed point is a representation of real data type for a number that has a fixed number of digits after the
radix point. Floating point is a formulaic representation of real numbers as an approximation so as to
support a tradeoff between range and precision.

Number Representation
While fixed point can be used to represent a limited range of values, floating point can be used to
represent a wide range of values.

Performance
Fixed point arithmetic generally offers higher performance than floating point arithmetic.

Flexibility
Floating point representation is more flexible than fixed point representation.

Conclusion
Fixed point and floating point are two methods of representing numbers. The difference between fixed
point and floating point is that fixed point has a specific number of digits reserved for the integer part and
fractional part while floating point does not have a specific number of digits reserved for the integer part
and fractional part.

BINARY CODED DECIMAL (BCD)


The binary coded decimal (BCD) is a type of binary code used to represent a given decimal number in
an equivalent binary form. Its main advantage is that it allows easy conversion to decimal digits for
printing or display and faster calculations.
The most common BCD code is the 8421 BCD code. In this code, the BCD equivalent of a decimal number is
written by replacing each decimal digit in the integer and fractional parts with its four-bit binary equivalent
(or nibble). Here 8, 4, 2 and 1 represent the weights of the different bits in each four-bit group, starting from
the most significant bit (MSB, at the extreme left) and proceeding towards the least significant bit (LSB).

This feature makes it a weighted code, whose main characteristic is that each binary digit in the four-bit
group representing a given decimal digit is assigned a weight, and for each group of four bits, the sum of
the weights of those binary digits whose value is 1 equals the decimal digit they represent.
For example, looking at the table, the decimal digit 9 in 8421 BCD is 1001. The weight assigned to the
first 1 is 8 and to the second 1 is 1; adding 8 and 1 gives the required decimal digit, 9.
The 4221 BCD and 5421 BCD codes are other weighted BCD codes shown in the table. The numbers 4, 2, 2, 1 in
4221 BCD and 5, 4, 2 and 1 in 5421 BCD represent the weights of the corresponding bits.
Now let us consider some examples, where we convert the given decimal numbers to BCD.
The 8421 BCD code for 9.2 is 1001.0010.

The 4221 BCD code for 9.2 is 1111.0010.


The 5421 BCD code for 9.2 is 1100.0010.
BCD code is useful for outputting to displays that are always numeric (0 to 9), such as those found in
digital clocks or digital voltmeters.
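The 8421 BCD encoding described above is easy to sketch in Python; the function name to_bcd_8421 is illustrative:

def to_bcd_8421(number_string):
    """Replace every decimal digit with its 4-bit binary equivalent,
    keeping the radix point in place."""
    return ".".join(
        "".join(format(int(d), "04b") for d in part)
        for part in number_string.split(".")
    )

print(to_bcd_8421("9.2"))   # 1001.0010
print(to_bcd_8421("39"))    # 00111001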

Error detecting and correcting codes

Introduction
In digital systems, analog signals are converted into digital sequences (in the form of bits). This
sequence of bits is called a “data stream”. Even a change in the position of a single bit can lead to a
catastrophic (major) error in the data output. Errors occur in almost all electronic devices, and we use
error detection and correction techniques to obtain the exact or an approximate output.

What is an Error
The data can be corrupted during transmission (from source to receiver). It may be affected by
external noise or other physical imperfections. In this case, the input data is not the same as the
received output data. This mismatched data is called an “error”.

Data errors can cause the loss of important or secure data. Even a single changed bit may affect
the whole system's performance. Generally, data transfer in digital systems takes place bit by bit, and
a data error appears as a 0 changed to a 1 or a 1 changed to a 0.

Types Of Errors
In a data sequence, if a 1 is changed to a 0 or a 0 is changed to a 1, it is called a “bit error”.

There are generally three types of errors that occur in data transmission from transmitter to receiver. They are

• Single bit errors

• Multiple bit errors

• Burst errors

Single Bit Data Errors


A change in one bit of the whole data sequence is called a “single bit error”. Single bit errors are
rare in serial communication systems. They occur mainly in parallel communication systems, where
each bit is transferred on its own line and any one of those lines may be noisy.
Multiple Bit Data Errors
If two or more bits of the data sequence change between transmitter and receiver, it is called a
“multiple bit error”. This type of error occurs in both serial and parallel data communication
networks.

Burst Errors
A change in a set of consecutive bits in the data sequence is called a “burst error”. The length of a
burst error is measured from the first changed bit to the last changed bit.
For example, if the error spans the fourth bit to the sixth bit, the bits in between are also
considered to be in error. These burst changes from transmitter to receiver may cause a major error
in the data sequence. This type of error occurs in serial communication, and such errors are difficult
to solve.

Error Detecting Codes


In a digital communication system, errors are transferred from one point to another along with the
data. If these errors are not detected and corrected, data will be lost. For effective
communication, data should be transferred with high accuracy. This can be achieved by first
detecting the errors and then correcting them.

Error detection is the process of detecting errors that are present in the data transmitted from
transmitter to receiver in a communication system. We use redundancy codes to detect these
errors, by adding them to the data as it is transmitted from the source (transmitter). These codes are called
“error detecting codes”.

Types of Error detection


1. Parity Checking
2. Cyclic Redundancy Check (CRC)
3. Longitudinal Redundancy Check (LRC)
4. Check Sum

Parity Checking
A parity bit is an additional bit added to the data at the transmitter before transmission. Before
adding the parity bit, the number of 1's (or 0's) in the data is counted. Based on this count, an extra
bit is added to the actual information/data. The addition of the parity bit changes the size of the
data string.

This means that if we have 8-bit data, then after adding a parity bit the binary string becomes a
9-bit string.

Parity check is also called “Vertical Redundancy Check (VRC)”.

There are two types of parity used in error detection:


 Even parity
 Odd parity

Even Parity
 If the data has an even number of 1's, the parity bit is 0. Example: data 10000001 -> parity bit 0
 If the data has an odd number of 1's, the parity bit is 1. Example: data 10010001 -> parity bit 1

Odd Parity
 If the data has an odd number of 1's, the parity bit is 0. Example: data 10011101 -> parity bit 0
 If the data has an even number of 1's, the parity bit is 1. Example: data 10010101 -> parity bit 1

NOTE: At the receiver, the count of 1's includes the parity bit as well.

The circuit that adds a parity bit to the data at the transmitter is called a “parity generator”. The parity
bit is transmitted along with the data and checked at the receiver. If the parity computed at the
receiver does not match the parity bit received from the transmitter, an error is detected. The circuit
that checks the parity at the receiver is called a “parity checker”.

Messages with even parity and odd parity
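A small Python sketch of a parity generator covering both cases (the function name parity_bit is illustrative):

def parity_bit(data, even=True):
    """Return the parity bit so that the total number of 1's in the data
    plus the parity bit is even (even parity) or odd (odd parity)."""
    ones = data.count("1")
    return str(ones % 2) if even else str((ones + 1) % 2)

print(parity_bit("10000001"))               # 0 (even parity)
print(parity_bit("10010001"))               # 1 (even parity)
print(parity_bit("10011101", even=False))   # 0 (odd parity)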

Cyclic Redundancy Check (CRC)


A cyclic code is a linear (n, k) block code with the property that every cyclic shift of a codeword
results in another codeword. Here k is the length of the message at the transmitter (the number of
information bits), n is the total length of the message after adding the check bits (actual data plus
check bits), and n - k is the number of check bits.
The codes used for cyclic redundancy check, and thereby for error detection, are known as CRC codes
(cyclic redundancy check codes). CRC codes are shortened cyclic codes. These codes are used for
error detection and encoding. They are easily implemented using shift registers with feedback
connections, which is why they are widely used for error detection in digital communication. CRC
codes provide an effective and high level of protection.

CRC Code Generation


Based on the desired number of check bits, we append some zeros (0) to the actual data. This new
binary data sequence is divided by a divisor word of length n + 1, where n is the number of check bits to
be added. The remainder obtained as a result of this modulo-2 division is appended to the dividend bit
sequence to form the cyclic code. The generated codeword is completely divisible by the divisor used
in generating the code. This codeword is then transmitted.

Example

At the receiver side, we divide the received codeword by the same divisor to check the data. For an
error-free reception, the remainder is 0. If the remainder is non-zero, there is an error in the
received code/data sequence. The probability of error detection depends upon the number of check
bits (n) used to construct the cyclic code. For single-bit and two-bit errors, the probability of
detection is 100%.

For a burst error of length n or less, the probability of detection is 100%.

For a burst error of length equal to n + 1, the probability of detection reduces to 1 - (1/2)^(n-1).

For a burst error of length greater than n + 1, the probability of detection is 1 - (1/2)^n.
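The modulo-2 division used in CRC generation and checking can be sketched in Python; the data word 100100 and the divisor 1101 below are arbitrary illustrative choices, not values from the notes:

def mod2_div(bits, divisor):
    """Modulo-2 (XOR) division; returns the remainder as a bit string."""
    bits = [int(b) for b in bits]
    div = [int(b) for b in divisor]
    for i in range(len(bits) - len(div) + 1):
        if bits[i] == 1:
            for j in range(len(div)):
                bits[i + j] ^= div[j]
    return "".join(str(b) for b in bits[-(len(div) - 1):])

def crc_encode(data, divisor):
    """Append n zero check bits, divide, and append the remainder."""
    check_bits = len(divisor) - 1
    remainder = mod2_div(data + "0" * check_bits, divisor)
    return data + remainder

codeword = crc_encode("100100", "1101")
print(codeword)                    # 100100001
print(mod2_div(codeword, "1101"))  # 000 -> zero remainder, no error detected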

Longitudinal Redundancy Check


In the longitudinal redundancy check method, a block of bits is arranged in a table format (rows and
columns) and a parity bit is calculated for each column separately. The set of these parity bits is
sent along with the original data bits.

Longitudinal redundancy check is a column-by-column parity computation, as we calculate the parity of each
column individually.

This method can easily detect burst errors and single bit errors, but it fails to detect two bit errors
that occur in the same vertical slice (column).
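A column-wise parity computation over a block of equal-length words might look like this in Python (the sample block is an arbitrary illustration):

def lrc(blocks):
    """Even parity computed separately for every column of the block."""
    width = len(blocks[0])
    return "".join(
        str(sum(int(block[col]) for block in blocks) % 2)
        for col in range(width)
    )

data_block = ["11010011", "10101001", "00111100"]
print(lrc(data_block))   # 01000110 -> parity row sent along with the block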


Check Sum
Checksums are similar to parity bits, except that the number of bits in the sum is larger than a single
parity bit and the result at the receiver is constrained to be zero: if the recomputed result is
non-zero, an error is detected. A checksum of a message is an arithmetic sum of codewords of a certain
length. The sum is stated by means of 1's complement and stored or transferred as a code extension of
the actual codeword. At the receiver, a new checksum is calculated from the received bit sequence.

The checksum method includes parity bits, check digits and the longitudinal redundancy check (LRC).
For example, if we have to transfer and detect errors for a long data sequence (also called a data
string), we divide it into shorter words and store the data in words of the same width. Each incoming
word is added to the data already stored, and at every instance the newly accumulated value is called
the “checksum”.
At the receiver side, if the checksum computed from the received bits is the same as that of the
transmitter, no error is found.

We can also find the checksum by adding all the data bytes. For example, suppose we have 4 bytes of data:
25h, 62h, 3Fh and 52h.

Adding all the bytes, we get 118h.

Dropping the carry, we get 18h.

Taking the 2's complement of 18h gives E8h.

This is the checksum of the transmitted 4 bytes of data.

At the receiver side, to check whether the data was received without error, just add the checksum to
the sum of the data bytes (we get 200h). Dropping the carry gives 00h, which means the result is
constrained to zero, so there is no error in the data.
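The worked example above can be reproduced directly in Python:

data = [0x25, 0x62, 0x3F, 0x52]

total = sum(data)                  # 0x118
total &= 0xFF                      # drop the carry -> 0x18
checksum = (-total) & 0xFF         # 2's complement -> 0xE8
print(hex(checksum))               # 0xe8

# Receiver check: data plus checksum, with the carry dropped, must be zero.
print(hex((sum(data) + checksum) & 0xFF))   # 0x0 -> no error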

In general, there are five types of checksum methods:

 Integer addition checksum


 One’s complement checksum
 Fletcher Checksum
 Adler Checksum
 ATN Checksum (AN/466)

So far we have discussed error detection codes. However, to receive an exact and error-free data
sequence, detecting the errors in the data is not enough; we also need to correct the data by
eliminating the errors, if any. To do this, we use some other codes.

Error Correcting Codes


The codes that are used for both error detection and error correction are called “error correcting
codes”. The error correction techniques are of two types. They are:

 Single bit error correction


 Burst error correction
The process of correcting single bit errors is called “single bit error correction”. The
method of detecting and correcting burst errors in the data sequence is called “burst error
correction”.

Hamming code, or Hamming distance code, is the most widely used error correcting code in
communication networks and digital systems.

Hamming Code
This error detecting and correcting code technique was developed by R. W. Hamming. The code not
only identifies the erroneous bit in the data sequence but also corrects it. It uses a
number of parity bits located at certain positions in the codeword. The number of parity bits depends
upon the number of information bits. The Hamming code uses the relation between the redundancy bits
and the data bits, and it can be applied to any number of data bits.

What is a Redundancy Bit?


Redundancy refers to the extra bits added for checking, i.e. the difference between the number of
transmitted bits and the number of bits in the actual data sequence. These redundancy bits are used in a
communication system to detect and correct errors, if any.

How does the Hamming code actually correct errors?


In the Hamming code, the redundancy bits are placed at certain calculated positions in order to detect
and correct errors. The number of bit positions in which two codewords differ is called the “Hamming distance”.

To understand the working of the error detection and correction mechanism of the Hamming
code, let us look at the following stages.
Number of parity bits
As we learned earlier, the number of parity bits to be added to a data string depends upon the number
of information bits of the data string which is to be transmitted. Number of parity bits will be
calculated by using the data bits. This relation is given below.

2^P >= n + P + 1

Here, n represents the number of bits in the data string.

P represents the number of parity bits.

For example, if we have 4 bit data string, i.e. n = 4, then the number of parity bits to be added can be
found by using trial and error method. Let’s take P = 2, then

2^P = 2^2 = 4 and n + P + 1 = 4 + 2 + 1 = 7

This does not satisfy the inequality (4 < 7).

So let’s try P = 3, then

2^P = 2^3 = 8 and n + P + 1 = 4 + 3 + 1 = 8

So we can say that 3 parity bits are required to transfer the 4 bit data with single bit error correction.
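The same trial-and-error search for P can be written as a small Python helper (the name parity_bits_needed is illustrative):

def parity_bits_needed(n):
    """Smallest P such that 2^P >= n + P + 1, where n is the number of data bits."""
    p = 1
    while 2 ** p < n + p + 1:
        p += 1
    return p

print(parity_bits_needed(4))   # 3
print(parity_bits_needed(8))   # 4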

Where to Place these Parity Bits?


After calculating the number of parity bits required, we should know the appropriate positions to
place them in the information string, to provide single bit error correction.

In the above considered example, we have 4 data bits and 3 parity bits. So the total codeword to be
transmitted is of 7 bits (4 + 3). We generally represent the data sequence from right to left, as shown
below.

bit 7, bit 6, bit 5, bit 4, bit 3, bit 2, bit 1

The parity bits have to be located at the positions that are powers of 2, i.e. at positions 1, 2, 4, 8, 16, and so on.
Therefore, the codeword after including the parity bits will look like this:

D7, D6, D5, P4, D3, P2, P1


Here P1, P2 and P4 are the parity bits, and D3, D5, D6 and D7 are the data bits.

Constructing a Bit Location Table


In the Hamming code, each parity bit checks certain bit positions and helps in finding errors in the whole
codeword, so we must determine the value of each parity bit before it can be assigned.

By calculating the parity bits and inserting them into the data bits, we can achieve error correction through
the Hamming code.

Let’s understand this clearly, by looking into an example.

Example:

Encode the data 1101 in even parity using the Hamming code.

Step 1

Calculate the required number of parity bits.

Let P = 2, then

2^P = 2^2 = 4 and n + P + 1 = 4 + 2 + 1 = 7.

2 parity bits are not sufficient for 4 bit data.

So let’s try P = 3, then

2^P = 2^3 = 8 and n + P + 1 = 4 + 3 + 1 = 8
Therefore 3 parity bits are sufficient for 4 bit data.

The total bits in the code word are 4 + 3 = 7

Step 2

Constructing bit location table

Step 3

Determine the parity bits.

For P1: bit positions 3, 5 and 7 contain three 1's, so for even parity, P1 = 1.

For P2: bit positions 3, 6 and 7 contain two 1's, so for even parity, P2 = 0.
For P3 (the parity bit at position 4): bit positions 5, 6 and 7 contain two 1's, so for even parity, P3 = 0.

By inserting the parity bits at their respective positions, the codeword is formed and transmitted. With the
data bits placed at positions 3, 5, 6 and 7 as in the bit location table, the codeword is 1010101.

NOTE: If the parity checks recomputed at the receiver are all zero (e.g. 000), then there is no error in the received Hamming codeword.
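A short Python sketch of the (7, 4) Hamming encoding and single-bit correction follows. It assumes, as in the bit location table above, that the data bits are placed at positions 3, 5, 6 and 7 (in order) and the parity bits at positions 1, 2 and 4, with even parity; the function names are illustrative.

def hamming_7_4_encode(data):
    """Place 4 data bits at positions 3, 5, 6, 7 and compute even-parity
    bits for positions 1, 2, 4; returns the codeword from bit 7 to bit 1."""
    code = [0] * 8                      # index 0 unused; positions 1..7
    for pos, bit in zip((3, 5, 6, 7), data):
        code[pos] = int(bit)
    for p in (1, 2, 4):
        covered = [i for i in range(1, 8) if i & p and i != p]
        code[p] = sum(code[i] for i in covered) % 2
    return "".join(str(code[i]) for i in range(7, 0, -1))

def hamming_7_4_correct(codeword):
    """Recompute the parity checks; a non-zero syndrome is the position of
    the single erroneous bit, which is then flipped."""
    bits = [0] + [int(b) for b in reversed(codeword)]   # bits[1..7]
    syndrome = sum(p for p in (1, 2, 4)
                   if sum(bits[i] for i in range(1, 8) if i & p) % 2)
    if syndrome:
        bits[syndrome] ^= 1
    return "".join(str(bits[i]) for i in range(7, 0, -1))

print(hamming_7_4_encode("1101"))      # 1010101
print(hamming_7_4_correct("1000101"))  # bit 5 was flipped -> corrected to 1010101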

To represent alphabets and numbers in binary form, we use alphanumeric codes.

Alpha Numeric Codes


Alphanumeric codes are basically binary codes used to represent alphanumeric data.
As these codes represent data as characters, alphanumeric codes are also called “character codes”.
These codes can represent all types of data, including alphabets, numbers, punctuation marks and
mathematical symbols, in a form acceptable to computers. They are implemented in I/O
devices such as keyboards, monitors and printers.
In earlier days, punched cards were used to represent alphanumeric codes.

They are:
 ASCII code
 EBCDIC
 UNICODE

ASCII CODE
ASCII stands for American Standard Code for Information Interchange. It is the world's most popular
and widely used alphanumeric code. It was developed and first published in 1967. ASCII
is a 7-bit code, which means it provides 2^7 = 128 characters. These include

26 lowercase letters (a-z), 26 uppercase letters (A-Z), 33 special characters and symbols (such as
! @ # $), 33 control characters, and 10 digits (0-9).

In this 7-bit code there are two parts: the leftmost 3 bits and the rightmost 4 bits. The leftmost 3 bits are
known as the “ZONE bits” and the rightmost 4 bits are known as the “NUMERIC bits”.
The 8-bit ASCII code can represent 256 (2^8) characters. It is called USACC-II or ASCII-8.

Example:
If we want to print the name LONDAN, what is its ASCII code?

The ASCII-7 equivalent of L = 100 1100

The ASCII-7 equivalent of O = 100 1111

The ASCII-7 equivalent of N = 100 1110

The ASCII-7 equivalent of D = 100 0100

The ASCII-7 equivalent of A = 100 0001

The ASCII-7 equivalent of N = 100 1110

The output of LONDAN in ASCII code is 1001100 1001111 1001110 1000100 1000001 1001110.
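Python's ord() gives the same 7-bit ASCII values, so the example can be checked directly:

for ch in "LONDAN":
    print(ch, format(ord(ch), "07b"))   # e.g. L 1001100

print("".join(format(ord(ch), "07b") for ch in "LONDAN"))   # full bit sequence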

UNICODE
The drawbacks of the ASCII and EBCDIC codes are that they are not compatible with all languages and
do not have a sufficient set of characters to represent all types of data. Unicode was developed to
overcome these drawbacks.
Unicode is the newest of these coding techniques. In Unicode, every character has its own distinct
code value. It is the most advanced and comprehensive coding scheme, with the ability to
represent text of any type, so it is known as the “universal code”. It was originally a 16-bit code, with which
2^16 = 65,536 different characters can be represented.
Unicode was developed by the combined effort of the Unicode Consortium and the ISO (International
Organization for Standardization).

EBCDIC CODE
EBCDIC stands for Extended Binary Coded Decimal Interchange Code. This code was developed by
IBM. It is an 8-bit code, so it can represent 2^8 = 256 characters. These include all the letters,
symbols and digits: 26 lowercase letters (a-z), 26 uppercase letters (A-Z), special characters and
symbols (such as ! @ # $), control characters and the 10 digits (0-9).

In the EBCDIC code, the decimal digits are represented by their 8421 BCD code preceded by 1111.
EBCDIC Codes
