COA - Unit 2 Data Representation 1


Computer Organization and Architecture
UNIT 2 : Data representation
Processor basics
Data Representation

Data representation is of two types:


• Fixed Point Representation
• Floating point Representation
Binary fixed-point format
• b1 b2 b3 … bn, where each bi is 0 or 1 and a binary point is present
in some fixed but implicit position.
Fixed point number system
• The base (radix) point is fixed and assumed to be to the right of the
rightmost digit (least significant bit). Example: 6₁₀ = 0110₂
• Because the radix point is fixed, the system is referred to as a fixed-point
number system
• Both positive and negative numbers can be represented
Floating-point number
• consists of a pair of fixed-point numbers (M, E), which
denote the number M × B^E, where B is a predetermined
base. Example: 0.0001₂ = 1.0 × 2⁻⁴
• Allows the representation of numbers having both
integer part and fractional part
Word length
• Information is represented in a digital computer by means of
binary words, where a word is a unit of information of some
fixed length n.
• An n-bit word allows up to 2ⁿ different items to be represented.

• 8-bit words are called bytes.


• Word size is typically a multiple of 8.
• Common CPU word sizes are 8, 16, 32, and 64 bits.
For example, with n = 4, we can encode the 10 decimal digits as
follows:
0 = 0000
1 = 0001
2 = 0010
3 = 0011
4 = 0100
5 = 0101
6 = 0110
7 = 0111
8 = 1000
9 = 1001
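The 4-bit encoding table above can be generated directly; a minimal sketch using Python's built-in `format`, which zero-pads each digit to 4 binary places:

```python
# Print the 4-bit binary encoding of each decimal digit 0-9.
for d in range(10):
    print(d, "=", format(d, "04b"))  # e.g. 9 = 1001
```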
Data representation
• Word is often restricted to mean a 32-bit (4 bytes) word.
• Fixed-point numbers come in lengths of 1, 2, 4, or more bytes.
• Floating-point numbers also come in several lengths, the shortest
(single precision) number being one word (32 bits) long.
• The ARM6 has instructions of length 4 bytes only
• The 68020's instructions range in length from 2 to 10 bytes.
Bytes of a word
Little- and big-endian byte ordering
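The byte-ordering idea can be demonstrated with a minimal sketch using Python's standard `struct` module, packing the 32-bit value 0x12345678 both ways:

```python
import struct

value = 0x12345678
little = struct.pack("<I", value)  # little-endian: least significant byte first
big    = struct.pack(">I", value)  # big-endian: most significant byte first

print(little.hex())  # 78563412
print(big.hex())     # 12345678
```

The same value stored in memory therefore appears as different byte sequences on little- and big-endian machines.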
Fixed-Point Numbers

Factors should be taken into account:


• The number types to be represented; for example, integers or real numbers.
• The range of values (number magnitudes) likely to be encountered.
• The precision of the numbers, which refers to the maximum accuracy of the
representation.
• The cost of the hardware required to store and process the numbers.
Fixed Representation

• It’s the representation for integers only, where the radix point is always
fixed, i.e., to the right of the rightmost digit.
• Numbers are represented as signed or unsigned integers.
• Unsigned integers represent positive numbers only.
• Negative numbers can be represented in various ways:
• Sign - Magnitude Representation
• One’s Complement (1’s) Representation
• Two’s Complement (2’s) Representation
Sign - Magnitude Representation

• The most significant (leftmost) bit in the word is the sign bit.
• If the sign bit is 0, the number is positive; if the sign bit is 1, the number is
negative.
• One drawback of sign-magnitude numbers is that addition and subtraction
must consider both the signs of the numbers and their relative magnitudes.
• Another drawback is that there are two representations for 0 (zero), i.e., +0 and -0.
Example: Find the sign-magnitude representation of -6
Step 1: Find binary representation of 6 using 8 bits
6₁₀ = 0000 0110₂
Step 2: If the number you want to represent is negative, set the leftmost
(most significant) bit to 1:
1000 0110₂
Q. Find the 8-bit signed number for the following numbers
• +6
• -14
• +24
• -64
• For unsigned 8-bit binary numbers, the decimal range is 0 to
255 [2⁸ = 256]
• For sign-magnitude 8-bit binary numbers, the decimal range is
-127 to 0 for negative numbers and 0 to +127 for positive
numbers
• Maximum positive number : 01111111 = +127
• Most negative number : 11111111 = -127
• Problem : Two different representations of 0, +0 = 0000 0000
and -0 = 1000 0000
One’s Complement (1’s) Representation

• The number that results when we change all 1’s to 0’s and the 0’s
to 1’s.
Example: 1s complement of 0101 is 1010 .
• The process of forming the 1’s complement of a given number is
equivalent to subtracting that number from 2ⁿ - 1
• For a 4-bit number: 2⁴ - 1 = 16 - 1 = 15₁₀ = 1111₂ (i.e., subtract the
given number from 1111)


Disadvantages in 1’s complement
representation

Two different representations of 0


• +0 – 0000
• -0 – 1111
Advantage: Subtraction can be done using addition
Two’s Complement (2’s) Representation

2’s Complement from 1’s complement:


• 2’s complement of a number is obtained by adding 1 to 1’s complement
of that number. (1’s complement number + 1)
Example: 2’s complement of 0101 is 1010(1’s complement) +1 = 1011
• The MSB / leftmost bit is 0 for positive numbers and 1 for negative
numbers.
Advantage: Unique representation of 0 (zero)
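Both complement operations can be sketched directly from the definitions above (helper names are mine): 1's complement flips every bit, which equals subtracting from 2ⁿ - 1, and 2's complement adds 1 to that result.

```python
# 1's complement: flip all bits (equivalently, subtract from 2**n - 1).
def ones_complement(x, bits=4):
    return (2 ** bits - 1) - x

# 2's complement: 1's complement plus 1.
def twos_complement(x, bits=4):
    return ones_complement(x, bits) + 1

x = 0b0101                                # 5
print(format(ones_complement(x), "04b"))  # 1010
print(format(twos_complement(x), "04b"))  # 1011
```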
Sign extension

• In 2’s complement, adding two binary numbers may produce an extra
bit (a carry); to represent the result, an extra bit must be
allocated.
• Padding 0 or 1 at the MSB (copying the sign bit) is called sign extension
• +20 = 010100 = 8-bit positive number: 0001 0100
• -20 = 101100 = 8-bit negative number: 1110 1100
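The padding rule can be sketched as a small helper (the name `sign_extend` is mine): copy the sign bit of the narrow value into the new high-order bits.

```python
# Widen a 2's-complement value by replicating its sign bit.
def sign_extend(value, from_bits, to_bits):
    sign = (value >> (from_bits - 1)) & 1
    if sign:  # fill the new high bits with 1's for a negative number
        value |= ((1 << (to_bits - from_bits)) - 1) << from_bits
    return format(value, f"0{to_bits}b")

print(sign_extend(0b010100, 6, 8))  # 00010100  (+20)
print(sign_extend(0b101100, 6, 8))  # 11101100  (-20)
```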
Floating-point number

• A floating-point number contains a mantissa M, an exponent E,


and a base B.
• It is so called because the decimal or binary point floats over the
number, depending on the exponent value
• The mantissa M is also referred to as the significand or fraction.
IEEE 32-bit floating-point number representation:
• S = 1 bit sign
• E = 8 bits of exponent
• M = 23 bits of "mantissa"

IEEE 64-bit floating-point number representation:
• S = 1 bit sign
• E = 11 bits of exponent
• M = 52 bits of "mantissa"

Floating point format

• Normalized form: the binary point is shifted to the right of the
first (leading 1) bit, and the number is multiplied by the
correct scaling factor to keep the same value
• The scaling factor is a power of 2
Single-Precision representation
Double-Precision representation

• E’ = E + 1023 (excess-1023 format)
• E’ range: 0 to 2047
• E’ = 0 is reserved for exact zeros
• E’ = 2047 is reserved for infinity
• For normal numbers 0 < E’ < 2047, so the E range is -1022 <= E <= 1023
• Example: 1 000 000 000 000 000 000 = 1.0 x 10**18
• IEEE single precision: 1 01111111 10000000000000000000000
-309.1875 = single & double precision
representation ?
Step 1: Find the binary equivalent => -100110101.0011
Step 2: Normalize the number
-100110101.0011 = -1.001101010011x28
17.125 = single and double precision
representation?
12.5 ?
-127.1075?
41.625?
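The worked example and exercises above can be checked mechanically; a sketch using Python's standard `struct` module to extract the IEEE 754 bit patterns of -309.1875 (the helper name `float_bits` is mine):

```python
import struct

# Return the raw IEEE 754 bit string of x ("f" = single, "d" = double).
def float_bits(x, fmt, nbits):
    (raw,) = struct.unpack({"f": ">I", "d": ">Q"}[fmt],
                           struct.pack(">" + fmt, x))
    return format(raw, f"0{nbits}b")

single = float_bits(-309.1875, "f", 32)
double = float_bits(-309.1875, "d", 64)
print(single[0], single[1:9], single[9:])    # sign, exponent, fraction
# 1 10000111 00110101001100000000000  (E' = 127 + 8 = 135)
```

The double-precision exponent field is 1023 + 8 = 1031 = 10000000111₂, with the same fraction bits 001101010011 followed by zeros.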
• The Institute of Electrical and Electronics Engineers (IEEE)
sponsored a standard format for 32-bit and larger floating-point
numbers, known as the IEEE 754 standard [IEEE 1985].
• It has been widely adopted by computer manufacturers.
• Besides specifying the permissible formats for M, E, and B, the
IEEE standard prescribes methods for handling round-off errors,
overflow, underflow, and other exceptional conditions
• The IEEE 754 floating-point number format comprises a
23-bit mantissa field M, an 8-bit exponent field E, and a sign
bit S. The base B is two.
Hexadecimal Number System
Binary to Hexadecimal Conversion
For the integer part:
– Scan the binary number from right to left.
– Translate each group of four bits into the corresponding hexadecimal digit.
• Add leading zeros if necessary.
For the fractional part:
– Scan the binary number from left to right.
– Translate each group of four bits into the corresponding hexadecimal digit.
• Add trailing zeros if necessary.
Examples
• 1. (1011 0100 0011)2 = (B43)16
• 2. (10 1010 0001)2 = (2A1)16 //two leading 0’s added
• 3. (.1000 010)2 = (.84)16 //A trailing 0 is added
• 4. (101 . 0101 111)2 = (5.5E)16 //A leading 0 and a
trailing zero are added
Hexadecimal to Binary Conversion

• Translate every hexadecimal digit into its 4-bit binary equivalent.


Examples:
• (3A5)16 = (0011 1010 0101)2
• (12.3D)16 = (0001 0010 . 0011 1101)2
• (1.8)16 = (0001 . 1000)2
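The grouping rules above can be sketched as two small helpers (names are mine): the integer part is padded with leading zeros, the fractional part with trailing zeros, and each 4-bit group maps to one hex digit.

```python
# Integer part: pad on the LEFT to a multiple of 4 bits, then group.
def bin_int_to_hex(bits):
    bits = bits.rjust(-(-len(bits) // 4) * 4, "0")
    return "".join(format(int(bits[i:i + 4], 2), "X")
                   for i in range(0, len(bits), 4))

# Fractional part: pad on the RIGHT to a multiple of 4 bits, then group.
def bin_frac_to_hex(bits):
    bits = bits.ljust(-(-len(bits) // 4) * 4, "0")
    return "".join(format(int(bits[i:i + 4], 2), "X")
                   for i in range(0, len(bits), 4))

print(bin_int_to_hex("101101000011"))  # B43
print(bin_int_to_hex("1010100001"))    # 2A1  (two leading 0's added)
print(bin_frac_to_hex("1000010"))      # 84   (a trailing 0 added)
```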
Computer Arithmetic
Fixed and floating point operations

• Addition and subtraction of signed numbers


• Fast adders

• Multiplication algorithms of signed and unsigned numbers


• Booth algorithm

• Integer division
Fixed point Arithmetic

• Booth’s multiplication technique,


• Fast multiplication techniques and
• Binary division techniques

https://www.cs.umd.edu/~meesh/411/CA-online/chapter/81/index.html
Floating point Arithmetic unit

• Arithmetic operations on floating point numbers consist of addition,


subtraction, multiplication and division.
• The operations are done with algorithms similar to those used on sign
magnitude integers (because of the similarity of representation) — example,
only add numbers of the same sign.
• If the numbers are of opposite sign, must do subtraction.
ADDITION

• Example on decimal value given in scientific notation:

3.25 x 10 ** 3
+ 2.63 x 10 ** -1
—————–
first step: align decimal points
second step: add
3.25       x 10 ** 3
+  0.000263 x 10 ** 3
——————–
3.250263 x 10 ** 3
(presumes use of infinite precision, without regard for accuracy)
 
third step:  normalize the result (already normalized!)
 
Example on floating pt. value given in binary:
 
.25 =    0 01111101 00000000000000000000000
100 =    0 10000101 10010000000000000000000
To add these fl. pt. representations,
 
step 1:  align radix points
 
shifting the mantissa left by 1 bit decreases the exponent by 1
 
shifting the mantissa right by 1 bit increases the exponent by 1
 
we want to shift the mantissa right, because the bits that fall off the end should come from the least significant end
of the mantissa
 
-> choose to shift the .25, since we want to increase its exponent.
-> shift by  10000101
-01111101
———
00001000    (8) places.
 
0 01111101 00000000000000000000000 (original value)
0 01111110 10000000000000000000000 (shifted 1 place)
(note that hidden bit is shifted into msb of mantissa)
0 01111111 01000000000000000000000 (shifted 2 places)
0 10000000 00100000000000000000000 (shifted 3 places)
0 10000001 00010000000000000000000 (shifted 4 places)
0 10000010 00001000000000000000000 (shifted 5 places)
 
0 10000011 00000100000000000000000 (shifted 6 places)
0 10000100 00000010000000000000000 (shifted 7 places)
0 10000101 00000001000000000000000 (shifted 8 places)
step 2: add (don’t forget the hidden bit for the 100)
 
0 10000101 1.10010000000000000000000  (100)
+    0 10000101 0.00000001000000000000000  (.25)
—————————————
0 10000101 1.10010001000000000000000
 
step 3:  normalize the result (get the “hidden bit” to be a 1)
It already is for this example.

result is 0 10000101 10010001000000000000000
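The result above can be verified with a minimal sketch using Python's standard `struct` module: packing 100 + 0.25 as single precision should reproduce exactly that bit pattern.

```python
import struct

# Extract the IEEE 754 single-precision bits of 100 + 0.25 = 100.25.
(raw,) = struct.unpack(">I", struct.pack(">f", 100.0 + 0.25))
bits = format(raw, "032b")
print(bits[0], bits[1:9], bits[9:])
# 0 10000101 10010001000000000000000
```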


SUBTRACTION

• Same as addition as far as alignment of radix points


• Then the algorithm for subtraction of sign mag. numbers takes over.
• before subtracting,
• compare magnitudes (don’t forget the hidden bit!)
• change sign bit if order of operands is changed.
• don’t forget to normalize number afterward.
MULTIPLICATION
Example on decimal values given in scientific notation:
 3.0 x 10 ** 1
x  0.5 x 10 ** 2
—————–
 Algorithm:  multiply mantissas
add exponents
3.0 x 10 ** 1
x  0.5 x 10 ** 2
—————–
1.50 x 10 ** 3
Example in binary:     Consider a mantissa that is only 4 bits.
0 10000100 0100
x 1 00111100 1100

Add exponents:
always add true exponents (otherwise the bias gets added in
twice)
DIVISION

• It is similar to multiplication.
• do unsigned division on the mantissas (don’t forget the hidden bit)
• subtract TRUE exponents
• The organization of a floating point adder unit follows the algorithm above.
• The floating point multiplication algorithm is similar; an algorithm based
on the steps discussed before can be used for division.
Rounding
• The floating point arithmetic operations discussed above may produce a result with more digits than
can be represented in 1.M.
• In such cases, the result must be rounded to fit into the available number of M positions.
• The extra bits that are used in intermediate calculations to improve the precision of the result are
called guard bits. 
• It is only a tradeoff of hardware cost (keeping extra bits) and speed versus accumulated rounding error,
because finally these extra bits have to be rounded off to conform to the IEEE standard.
Rounding Methods:
• Truncate
–   Remove all digits beyond those supported
–   1.00100 -> 1.00
• Round up to the next value
–   1.00100 -> 1.01
• Round down to the previous value
–   1.00100 -> 1.00
–   Differs from truncate for negative numbers
• Round-to-nearest-even
–   Rounds to the even value (the one with an LSB of 0)
–   1.00100 -> 1.00
–   1.01100 -> 1.10
–   Produces zero average bias
–   Default mode

• A product may have twice as many digits as the multiplier and multiplicand
–   1.11 x 1.01 = 10.0011
• For round-to-nearest-even, we need to know the value to the right of the LSB (round
bit) and whether any other digits to the right of the round digit are 1’s (the sticky bit is
the OR of these digits).
• The IEEE standard requires the use of 3 extra bits of less significance than the 24 bits
(of mantissa) implied in the single precision representation – guard bit, round bit and
sticky bit.
• When a mantissa is to be shifted in order to align radix points, the bits that fall off the
least significant end of the mantissa go into these extra bits (guard, round, and sticky
bits).
• These bits can also be set by the normalization step in multiplication, and by extra bits
of quotient (remainder) in division.
• The guard and round bits are just 2 extra bits of precision that are used in calculations.
• The sticky bit is an indication of what is/could be in lesser significant bits that are not
kept. If a value of 1 ever is shifted into the sticky bit position, that sticky bit remains a
1 (“sticks” at 1), despite further shifts.
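The guard/round/sticky mechanism during mantissa alignment can be sketched as follows (the helper name `shift_right` is mine; mantissas are modeled as bit strings): the most significant lost bit becomes the guard, the next the round bit, and sticky ORs together everything beyond.

```python
# Shift a mantissa (bit string) right, capturing guard, round, and sticky.
def shift_right(mantissa_bits, shift):
    kept = mantissa_bits[:-shift] if shift else mantissa_bits
    lost = mantissa_bits[-shift:] if shift else ""
    guard  = lost[0] if len(lost) > 0 else "0"   # first bit shifted off
    rnd    = lost[1] if len(lost) > 1 else "0"   # second bit shifted off
    sticky = "1" if "1" in lost[2:] else "0"     # OR of all remaining bits
    return kept, guard, rnd, sticky

print(shift_right("10010011", 3))  # ('10010', '0', '1', '1')
```

Once set, the sticky bit stays 1 under further shifts, which is exactly the "sticks at 1" behavior described above.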
