COA - Unit 2: Data Representation
UNIT 2 : Data representation
Processor basics
Data Representation
• The most significant (leftmost) bit in the word is the sign bit.
• If the sign bit is 0, the number is positive; if the sign bit is 1, the number is
negative.
• One drawback of sign-magnitude numbers is that addition and subtraction
must consider both the signs of the numbers and their relative magnitudes.
• Another drawback is that there are two representations of 0 (zero), i.e., +0 and -0.
Example: Find the sign-magnitude representation of -6
Step 1: Find binary representation of 6 using 8 bits
6₁₀ = 00000110₂
Step 2: If the number you want to represent is negative, set the leftmost bit (most
significant bit) to 1
10000110₂
Q. Find the 8-bit signed number for the following numbers
• +6
• -14
• +24
• -64
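The exercise values above can be checked with a short sketch (Python; the helper name `sign_magnitude` is illustrative, not from the text):

```python
def sign_magnitude(value, bits=8):
    """Encode a decimal integer as a sign-magnitude bit string."""
    magnitude = abs(value)
    assert magnitude < 2 ** (bits - 1), "magnitude does not fit"
    sign = '1' if value < 0 else '0'           # MSB is the sign bit
    return sign + format(magnitude, f'0{bits - 1}b')

print(sign_magnitude(+6))    # 00000110
print(sign_magnitude(-14))   # 10001110
print(sign_magnitude(+24))   # 00011000
print(sign_magnitude(-64))   # 11000000
```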
• For unsigned 8-bit binary numbers, the decimal range is 0 to
255 [2^8 = 256]
• For sign-magnitude 8-bit binary numbers, the decimal range is
-127 to -0 for negative numbers and +0 to +127 for positive
numbers
• Maximum positive number : 01111111 = +127
• Most negative number : 11111111 = -127
• Problem : Two different representations of 0, +0 = 00000000
and -0 = 10000000
One’s Complement (1’s) Representation
• The number that results when we change all 1’s to 0’s and the 0’s
to 1’s.
Example: 1s complement of 0101 is 1010 .
• The process of forming the 1's complement of a given number is
equivalent to subtracting that number from 2^n - 1.
• For a 4-bit number, 2^4 - 1 = 16 - 1 = 15₁₀ = 1111₂, so the 1's complement
is obtained by subtracting the number from 1111₂.
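A minimal sketch of this equivalence (Python; `ones_complement` is an illustrative name):

```python
def ones_complement(n, bits=4):
    """Form the 1's complement by subtracting n from 2**bits - 1 (flips every bit)."""
    return format((2 ** bits - 1) - n, f'0{bits}b')

print(ones_complement(0b0101))      # 1010
print(ones_complement(0b0101, 8))   # 11111010
```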
IEEE 64-bit (double-precision) floating-point number representation
• S = 1 sign bit
• E = 11 bits of exponent
• M = 52 bits of "mantissa"
• E' = E + 1023
• Excess-1023 format
• E' range: 0 to 2047
• E' = 0 : exact zeros
• E' = 2047 : infinity
• For normalized numbers 0 < E' < 2047, so the E range is -1022 <= E <= 1023
• 1 000 000 000 000 000 000 = 1.0 × 10^18
• IEEE : 1 01111111 10000000000000000000000
-309.1875 = single & double precision
representation ?
Step 1 : find binary equivalent => -100110101.0011
Step 2: Normalize the number
-100110101.0011 = -1.001101010011x28
17.125 = single and double precision
representation?
12.5 ?
-127.1075?
41.625?
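Python's standard `struct` module can expose the actual bit patterns, which is handy for checking the exercises above (a sketch; the function names are mine, not from the text):

```python
import struct

def single_bits(x):
    """IEEE 754 single-precision bit pattern: sign | exponent | fraction."""
    (raw,) = struct.unpack('>I', struct.pack('>f', x))
    b = format(raw, '032b')
    return f'{b[0]} {b[1:9]} {b[9:]}'

def double_bits(x):
    """IEEE 754 double-precision bit pattern: sign | exponent | fraction."""
    (raw,) = struct.unpack('>Q', struct.pack('>d', x))
    b = format(raw, '064b')
    return f'{b[0]} {b[1:12]} {b[12:]}'

print(single_bits(-309.1875))  # 1 10000111 00110101001100000000000
print(double_bits(-309.1875))
print(single_bits(17.125))     # 0 10000011 00010010000000000000000
```

For -309.1875 = -1.001101010011 × 2^8, the single-precision biased exponent is 127 + 8 = 135 = 10000111, matching the normalization in Step 2 above.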
• the Institute of Electrical and Electronics Engineers (IEEE)
sponsored a standard format for 32-bit and larger floating-point
numbers, known as the IEEE 754 standard [IEEE 1985].
• It has been widely adopted by computer manufacturers.
• Besides specifying the permissible formats for M, E, and B, the
IEEE standard prescribes methods for handling round-off errors,
overflow, underflow, and other exceptional conditions
• The IEEE 754 single-precision floating-point number format comprises a
23-bit mantissa field M, an 8-bit exponent field E, and a sign
bit S. The base B is two.
Hexadecimal Number System
Binary to Hexadecimal Conversion
For the integer part:
– Scan the binary number from right to left.
– Translate each group of four bits into the corresponding hexadecimal digit.
• Add leading zeros if necessary.
For the fractional part:
– Scan the binary number from left to right.
– Translate each group of four bits into the corresponding hexadecimal digit.
• Add trailing zeros if necessary.
Examples
• 1. (1011 0100 0011)2 = (B43)16
• 2. (10 1010 0001)2 = (2A1)16 //two leading 0’s added
• 3. (.1000 010)2 = (.84)16 //A trailing 0 is added
• 4. (101 . 0101 111)2 = (5.5E)16 //A leading 0 and a
trailing zero are added
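The grouping procedure described above can be sketched as follows (Python; `bin_to_hex` is an illustrative helper):

```python
def bin_to_hex(int_part, frac_part=''):
    """Convert binary strings (integer and fractional parts) to hexadecimal."""
    # Integer part: add leading zeros up to a multiple of 4 bits.
    int_part = int_part.zfill(-(-len(int_part) // 4) * 4) if int_part else '0'
    # Fractional part: add trailing zeros up to a multiple of 4 bits.
    frac_part = frac_part.ljust(-(-len(frac_part) // 4) * 4, '0')
    digits = '0123456789ABCDEF'
    hex_int = ''.join(digits[int(int_part[i:i + 4], 2)]
                      for i in range(0, len(int_part), 4))
    hex_frac = ''.join(digits[int(frac_part[i:i + 4], 2)]
                       for i in range(0, len(frac_part), 4))
    return hex_int + ('.' + hex_frac if hex_frac else '')

print(bin_to_hex('101101000011'))     # B43
print(bin_to_hex('1010100001'))       # 2A1
print(bin_to_hex('101', '0101111'))   # 5.5E
```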
Hexadecimal to Binary Conversion
• Replace each hexadecimal digit with its 4-bit binary equivalent.
Fixed point Arithmetic
https://www.cs.umd.edu/~meesh/411/CA-online/chapter/81/index.html
Floating point Arithmetic unit
3.25 x 10 ** 3
+ 2.63 x 10 ** -1
—————–
first step: align decimal points
second step: add
3.25 x 10 ** 3
+ 0.000263 x 10 ** 3
——————–
3.250263 x 10 ** 3
(presumes use of infinite precision, without regard for accuracy)
third step: normalize the result (already normalized!)
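The three steps above can be sketched with exact decimal arithmetic (Python, using the standard `decimal` module; the variable names are illustrative):

```python
from decimal import Decimal

# Operands as (significand, exponent) pairs: 3.25 x 10**3 and 2.63 x 10**-1
a = (Decimal('3.25'), 3)
b = (Decimal('2.63'), -1)

# Step 1: align decimal points by rescaling the smaller-exponent operand.
shift = a[1] - b[1]                        # 3 - (-1) = 4 places
b_aligned = (b[0] / Decimal(10) ** shift, a[1])

# Step 2: add the aligned significands.
total = (a[0] + b_aligned[0], a[1])
print(f'{total[0]} x 10 ** {total[1]}')    # 3.250263 x 10 ** 3

# Step 3: normalize -- already normalized here, as in the example.
```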
Example on floating pt. value given in binary:
.25 = 0 01111101 00000000000000000000000
100 = 0 10000101 10010000000000000000000
To add these fl. pt. representations,
step 1: align radix points
shifting the mantissa left by 1 bit decreases the exponent by 1
shifting the mantissa right by 1 bit increases the exponent by 1
we want to shift the mantissa right, because the bits that fall off the end should come from the least significant end
of the mantissa
-> choose to shift the .25, since we want to increase its exponent.
-> shift by 10000101
-01111101
———
00001000 (8) places.
0 01111101 00000000000000000000000 (original value)
0 01111110 10000000000000000000000 (shifted 1 place)
(note that hidden bit is shifted into msb of mantissa)
0 01111111 01000000000000000000000 (shifted 2 places)
0 10000000 00100000000000000000000 (shifted 3 places)
0 10000001 00010000000000000000000 (shifted 4 places)
0 10000010 00001000000000000000000 (shifted 5 places)
0 10000011 00000100000000000000000 (shifted 6 places)
0 10000100 00000010000000000000000 (shifted 7 places)
0 10000101 00000001000000000000000 (shifted 8 places)
step 2: add (don’t forget the hidden bit for the 100)
0 10000101 1.10010000000000000000000 (100)
+ 0 10000101 0.00000001000000000000000 (.25)
—————————————
0 10000101 1.10010001000000000000000
step 3: normalize the result (get the “hidden bit” to be a 1)
It already is for this example.
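The whole walkthrough can be reproduced in software by extracting the fields, aligning, adding, and reassembling (a sketch under the single-precision layout above, not production float code):

```python
import struct

def fields(x):
    """Split a single-precision float into (sign, biased exponent, 24-bit mantissa)."""
    (raw,) = struct.unpack('>I', struct.pack('>f', x))
    return raw >> 31, (raw >> 23) & 0xFF, (raw & 0x7FFFFF) | 0x800000  # restore hidden bit

# Step 1: align radix points -- shift the smaller operand's mantissa right.
_, e_big, m_big = fields(100.0)      # biased exponent 10000101 (133)
_, e_small, m_small = fields(0.25)   # biased exponent 01111101 (125)
m_small >>= e_big - e_small          # shift right 133 - 125 = 8 places

# Step 2: add the aligned mantissas (hidden bits included).
m_sum = m_big + m_small

# Step 3: normalize if the sum carried past the hidden-bit position.
if m_sum >> 24:
    m_sum >>= 1
    e_big += 1

# Reassemble: the 24-bit mantissa is scaled by 2**-23 relative to the exponent.
result = m_sum * 2.0 ** (e_big - 127 - 23)
print(result)   # 100.25
```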
MULTIPLICATION
• Do unsigned multiplication on the mantissas (don't forget the hidden bit).
• Add exponents: always add TRUE exponents (otherwise the bias gets added in
twice).
DIVISION
• It is similar to multiplication.
• do unsigned division on the mantissas (don’t forget the hidden bit)
• subtract TRUE exponents
• The organization of a floating point adder unit and the algorithm is given below.
• The floating point multiplication algorithm is given below. A similar
algorithm based on the steps discussed before can be used for division.
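The exponent rules above (add true exponents for multiplication, subtract them for division) can be illustrated with Python's standard `math.frexp`/`math.ldexp`, which expose a float's mantissa and true exponent (a sketch; function names are mine):

```python
import math

def fp_multiply(a, b):
    """Multiply by combining mantissas and adding TRUE (unbiased) exponents."""
    m_a, e_a = math.frexp(a)   # a = m_a * 2**e_a with 0.5 <= |m_a| < 1
    m_b, e_b = math.frexp(b)
    return math.ldexp(m_a * m_b, e_a + e_b)   # add true exponents

def fp_divide(a, b):
    """Divide mantissas and subtract TRUE exponents."""
    m_a, e_a = math.frexp(a)
    m_b, e_b = math.frexp(b)
    return math.ldexp(m_a / m_b, e_a - e_b)   # subtract true exponents

print(fp_multiply(3.0, 4.0))   # 12.0
print(fp_divide(100.0, 0.25))  # 400.0
```

Working with true exponents directly is why no bias correction appears here; hardware operating on biased exponent fields must subtract (or add back) the bias once.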
Rounding
• The floating point arithmetic operations discussed above may produce a result with more digits than
can be represented in 1.M.
• In such cases, the result must be rounded to fit into the available number of M positions.
• The extra bits that are used in intermediate calculations to improve the precision of the result are
called guard bits.
• It is only a tradeoff of hardware cost (keeping extra bits) and speed versus accumulated rounding error,
because finally these extra bits have to be rounded off to conform to the IEEE standard.
Rounding Methods:
• Truncate
– Remove all digits beyond those supported
– 1.00100 -> 1.00
• A product may have twice as many digits as the multiplier and multiplicand
– 1.11 x 1.01 = 10.0011
• For round-to-nearest-even, we need to know the value to the right of the LSB (round
bit) and whether any other digits to the right of the round digit are 1’s (the sticky bit is
the OR of these digits).
• The IEEE standard requires the use of 3 extra bits of less significance than the 24 bits
(of mantissa) implied in the single precision representation – guard bit, round bit and
sticky bit.
• When a mantissa is to be shifted in order to align radix points, the bits that fall off the
least significant end of the mantissa go into these extra bits (guard, round, and sticky
bits).
• These bits can also be set by the normalization step in multiplication, and by extra bits
of quotient (remainder) in division.
• The guard and round bits are just 2 extra bits of precision that are used in calculations.
• The sticky bit is an indication of what is/could be in lesser significant bits that are not
kept. If a value of 1 ever is shifted into the sticky bit position, that sticky bit remains a
1 (“sticks” at 1), despite further shifts.
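A minimal sketch of round-to-nearest-even using the round and sticky bits described above (Python; `round_nearest_even` is an illustrative name):

```python
def round_nearest_even(mantissa_bits, keep):
    """Round a bit string to `keep` bits using round/sticky bits (round-to-nearest-even)."""
    kept = int(mantissa_bits[:keep], 2)
    dropped = mantissa_bits[keep:]
    if not dropped:
        return format(kept, f'0{keep}b')
    round_bit = dropped[0] == '1'        # first bit beyond the LSB
    sticky = '1' in dropped[1:]          # OR of all remaining dropped bits
    if round_bit and (sticky or kept & 1):   # above half, or a tie with odd LSB
        kept += 1                            # note: may carry out to keep+1 bits
    return format(kept, f'0{keep}b')

print(round_nearest_even('100101', 3))  # 101  (dropped bits > half: round up)
print(round_nearest_even('100100', 3))  # 100  (tie, LSB even: stays)
print(round_nearest_even('101100', 3))  # 110  (tie, LSB odd: rounds to even)
```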