M. Morris Mano
Preface
This book deals with computer architecture as well as computer organization
and design. Computer architecture is concerned with the structure and behav
ior of the various functional modules of the computer and how they interact
to provide the processing needs of the user. Computer organization is con
cerned with the way the hardware components are connected together to form
a computer system. Computer design is concerned with the development of
the hardware for the computer taking into consideration a given set of specifications.
The book provides the basic knowledge necessary to understand the
hardware operation of digital computers and covers the three subjects associ
ated with computer hardware. Chapters 1 through 4 present the various digital
components used in the organization and design of digital computers. Chap
ters 5 through 7 show the detailed steps that a designer must go through in
order to design an elementary basic computer. Chapters 8 through 10 deal with
the organization and architecture of the central processing unit. Chapters 11
and 12 present the organization and architecture of input-output and memory.
Chapter 13 introduces the concept of multiprocessing. The plan of the book is
to present the simpler material first and introduce the more advanced subjects
later. Thus, the first seven chapters cover material needed for the basic under
standing of computer organization, design, and programming of a simple
digital computer. The last six chapters present the organization and architec
ture of the separate functional units of the digital computer with an emphasis
on more advanced topics.
The material in the third edition is organized in the same manner as in the
second edition and many of the features remain the same. The third edition,
however, offers several improvements over the second edition. All chapters
except two (6 and 10) have been completely revised to bring the material up to
date and to clarify the presentation. Two new chapters were added: chapter 9
on pipeline and vector processing, and chapter 13 on multiprocessors. Two
sections deal with the reduced instruction set computer (RISC). Chapter 5 has
been revised completely to simplify and clarify the design of the basic com
puter. New problems have been formulated for eleven of the thirteen chapters.
The physical organization of a particular computer including its registers,
the data flow, the microoperations, and control functions can be described
symbolically by means of a hardware description language. In this book we
develop a simple register transfer language and use it to specify various com
puter operations in a concise and precise manner. The relation of the register
transfer language to the hardware organization and design of digital computers
is fully explained.
The book does not assume prior knowledge of computer hardware and
the material can be understood without the need of prerequisites. However,
some experience in assembly language programming with a microcomputer
will make the material easier to understand. Chapters 1 through 3 can be
skipped if the reader is familiar with digital logic design.
The following is a brief description of the subjects that are covered in each
chapter with an emphasis on the revisions that were made in the third edition.
Chapter 1 introduces the fundamental knowledge needed for the design
of digital systems constructed with individual gates and flip-flops. It covers
Boolean algebra, combinational circuits, and sequential circuits. This provides
the necessary background for understanding the digital circuits to be
presented.
Chapter 2 explains in detail the logical operation of the most common
standard digital components. It includes decoders, multiplexers, registers,
counters, and memories. These digital components are used as building blocks
for the design of larger units in the chapters that follow.
Chapter 3 shows how the various data types found in digital computers
are represented in binary form in computer registers. Emphasis is on the
representation of numbers employed in arithmetic operations, and on the
binary coding of symbols used in data processing.
Chapter 4 introduces a register transfer language and shows how it is
used to express microoperations in symbolic form. Symbols are defined for
arithmetic, logic, and shift microoperations. A composite arithmetic logic shift
unit is developed to show the hardware design of the most common micro
operations.
Chapter 5 presents the organization and design of a basic digital com
puter. Although the computer is simple compared to commercial computers, it
nevertheless encompasses enough functional capabilities to demonstrate the
power of a stored program general purpose device. Register transfer language
is used to describe the internal operation of the computer and to specify the
requirements for its design. The basic computer uses the same set of instruc
tions as in the second edition but its hardware organization and design has
been completely revised. By going through the detailed steps of the design
presented in this chapter, the student will be able to understand the inner
workings of digital computers.
Chapter 6 utilizes the twenty-five instructions of the basic computer to
illustrate techniques used in assembly language programming. Programming
examples are presented for a number of data processing tasks. The relationship
between binary programs and symbolic code is explained by examples. The
basic operations of an assembler are presented to show the translation from
symbolic code to an equivalent binary program.
Chapter 7 introduces the concept of microprogramming. A specific micro
programmed control unit is developed to show by example how to write
microcode for a typical set of instructions. The design of the control unit is
carried out in detail, including the hardware for the microprogram sequencer.
Chapter 8 deals with the central processing unit (CPU). An execution unit
with common buses and an arithmetic logic unit is developed to show the
general register organization of a typical CPU. The operation of a memory stack
is explained and some of its applications are demonstrated. Various instruction
formats are illustrated together with a variety of addressing modes. The most
common instructions found in computers are enumerated with an explanation
of their function. The last section introduces the reduced instruction set com
puter (RISC) concept and discusses its characteristics and advantages.
Chapter 9 on pipeline and vector processing is a new chapter in the third
edition. (The material on arithmetic operations from the second edition has
been moved to Chapter 10.) The concept of pipelining is explained and the way
it can speed up processing is illustrated with several examples. Both arithmetic
and instruction pipelines are considered. It is shown how RISC processors can
achieve single-cycle instruction execution by using an efficient instruction
pipeline together with the delayed load and delayed branch techniques. Vector
processing is introduced and examples are shown of floating-point operations
using pipeline procedures.
Chapter 10 presents arithmetic algorithms for addition, subtraction, mul
tiplication, and division and shows the procedures for implementing them with
digital hardware. Procedures are developed for signed-magnitude and
signed-2's complement fixed-point numbers, for floating-point binary
numbers, and for binary coded decimal (BCD) numbers. The algorithms are
presented by means of flowcharts that use the register transfer language to
specify the sequence of microoperations and control decisions required for their
implementation.
Chapter 11 discusses the techniques that computers use to communicate
with input and output devices. Interface units are presented to show the way
that the processor interacts with external peripherals. The procedure for
asynchronous transfer of either parallel or serial data is explained. Four modes
of transfer are discussed: programmed I/O, interrupt-initiated transfer, direct
memory access, and the use of input-output processors. Specific examples
illustrate procedures for serial data transmission.
Chapter 12 introduces the concept of memory hierarchy, composed of
cache memory, main memory, and auxiliary memory such as magnetic disks.
The organization and operation of associative memories is explained in detail.
The concept of memory management is introduced through the presentation of
the hardware requirements for a cache memory and a virtual memory system.
Chapter 13 presents the basic characteristics of multiprocessors. Various
interconnection structures are presented. The need for interprocessor arbitra
tion, communication, and synchronization is discussed. The cache coherence
problem is explained together with some possible solutions.
Every chapter includes a set of problems and a list of references. Some of
the problems serve as exercises for the material covered in the chapter. Others
are of a more advanced nature and are intended to provide practice in solving
problems associated with computer hardware architecture and design. A solu
tions manual is available for the instructor from the publisher.
The book is suitable for a course in computer hardware systems in an
electrical engineering, computer engineering, or computer science depart
ment. Parts of the book can be used in a variety of ways: as a first course in
computer hardware by covering Chapters 1 through 7; as a course in computer
organization and design with previous knowledge of digital logic design by
reviewing Chapter 4 and then covering Chapters 5 through 13; as a course in
computer organization and architecture that covers the five functional units of
digital computers including control (Chapter 7), processing unit (Chapters 8
and 9), arithmetic operations (Chapter 10), input-output (Chapter 11), and
memory (Chapter 12). The book is also suitable for self-study by engineers and
scientists who need to acquire the basic knowledge of computer hardware
architecture.
Acknowledgments
My thanks go to those who reviewed the text, particularly Professor Thomas
L. Casavant of the University of Iowa; Professor Murray R. Berkowitz of George
Mason University; Professor Cem Ersoy of Brooklyn Polytechnic University;
Professor Upkar Varshney of the University of Missouri, Kansas City; Professor
Karan Watson of Texas A&M University, and Professor Scott F. Midkiff of the
Virginia Polytechnic Institute.
M. Morris Mano
Contents

Preface
CHAPTER ONE
Digital Logic Circuits
1-1  Digital Computers
1-2  Logic Gates
1-3  Boolean Algebra
     Complement of a Function
1-4  Map Simplification
     Product-of-Sums Simplification
     Don't-Care Conditions
1-5  Combinational Circuits
     Half-Adder
     Full-Adder
1-6  Flip-Flops
     SR Flip-Flop
     D Flip-Flop
     JK Flip-Flop
     T Flip-Flop
     Edge-Triggered Flip-Flops
     Excitation Tables
1-7  Sequential Circuits
     Flip-Flop Input Equations
     State Table
     State Diagram
     Design Example
     Design Procedure
Problems
References
CHAPTER TWO
Digital Components
2-1  Integrated Circuits
2-2  Decoders
     NAND Gate Decoder
     Decoder Expansion
     Encoders
2-3  Multiplexers
2-4  Registers
     Register with Parallel Load
2-5  Shift Registers
     Bidirectional Shift Register with Parallel Load
2-6  Binary Counters
     Binary Counter with Parallel Load
2-7  Memory Unit
     Random-Access Memory
     Read-Only Memory
     Types of ROMs
Problems
References
CHAPTER THREE
Data Representation
3-1  Data Types
     Number Systems
     Octal and Hexadecimal Numbers
     Decimal Representation
     Alphanumeric Representation
3-2  Complements
     (r-1)'s Complement
     (r's) Complement
     Subtraction of Unsigned Numbers
3-3  Fixed-Point Representation
     Integer Representation
     Arithmetic Addition
     Arithmetic Subtraction
     Overflow
     Decimal Fixed-Point Representation
3-4  Floating-Point Representation
3-5  Other Binary Codes
     Gray Code
     Other Decimal Codes
     Other Alphanumeric Codes
3-6  Error Detection Codes
Problems
References
CHAPTER FOUR
Register Transfer and Microoperations
4-1  Register Transfer Language
4-2  Register Transfer
4-3  Bus and Memory Transfers
     Three-State Bus Buffers
     Memory Transfer
4-4  Arithmetic Microoperations
     Binary Adder
     Binary Adder-Subtractor
     Binary Incrementer
     Arithmetic Circuit
4-5  Logic Microoperations
     List of Logic Microoperations
     Hardware Implementation
     Some Applications
4-6  Shift Microoperations
     Hardware Implementation
4-7  Arithmetic Logic Shift Unit
Problems
References
CHAPTER FIVE
Basic Computer Organization and Design
5-1  Instruction Codes
     Stored Program Organization
     Indirect Address
5-2  Computer Registers
     Common Bus System
5-3  Computer Instructions
     Instruction Set Completeness
5-4  Timing and Control
5-5  Instruction Cycle
     Fetch and Decode
     Determine the Type of Instruction
     Register-Reference Instructions
5-6  Memory-Reference Instructions
     AND to AC
     ADD to AC
     LDA: Load to AC
     STA: Store AC
     BUN: Branch Unconditionally
     BSA: Branch and Save Return Address
     ISZ: Increment and Skip If Zero
     Control Flowchart
5-7  Input-Output and Interrupt
     Input-Output Configuration
     Input-Output Instructions
     Program Interrupt
     Interrupt Cycle
5-8  Complete Computer Description
5-9  Design of Basic Computer
     Control Logic Gates
     Control of Registers and Memory
     Control of Single Flip-Flops
     Control of Common Bus
5-10 Design of Accumulator Logic
     Control of AC Register
     Adder and Logic Circuit
Problems
References
CHAPTER SIX
Programming the Basic Computer
6-1  Introduction
6-2  Machine Language
6-3  Assembly Language
     Rules of the Language
     An Example
     Translation to Binary
6-4  The Assembler
     Representation of Symbolic Program in Memory
     First Pass
     Second Pass
6-5  Program Loops
6-6  Programming Arithmetic and Logic Operations
     Multiplication Program
     Double-Precision Addition
     Logic Operations
     Shift Operations
6-7  Subroutines
     Subroutines Parameters and Data Linkage
6-8  Input-Output Programming
     Character Manipulation
     Program Interrupt
Problems
References
CHAPTER SEVEN
Microprogrammed Control
7-1  Control Memory
7-2  Address Sequencing
     Conditional Branching
     Mapping of Instruction
     Subroutines
7-3  Microprogram Example
     Computer Configuration
     Microinstruction Format
     Symbolic Microinstructions
     The Fetch Routine
     Symbolic Microprogram
     Binary Microprogram
7-4  Design of Control Unit
     Microprogram Sequencer
Problems
References
CHAPTER EIGHT
Central Processing Unit
8-1  Introduction
8-2  General Register Organization
     Control Word
     Examples of Microoperations
8-3  Stack Organization
     Register Stack
     Memory Stack
     Reverse Polish Notation
     Evaluation of Arithmetic Expressions
8-4  Instruction Formats
     Three-Address Instructions
     Two-Address Instructions
     One-Address Instructions
     Zero-Address Instructions
     RISC Instructions
8-5  Addressing Modes
     Numerical Example
8-6  Data Transfer and Manipulation
     Data Transfer Instructions
     Data Manipulation Instructions
     Arithmetic Instructions
     Logical and Bit Manipulation Instructions
     Shift Instructions
8-7  Program Control
     Status Bit Conditions
     Conditional Branch Instructions
     Subroutine Call and Return
     Program Interrupt
     Types of Interrupts
8-8  Reduced Instruction Set Computer (RISC)
     CISC Characteristics
     RISC Characteristics
     Overlapped Register Windows
     Berkeley RISC I
Problems
References
CHAPTER NINE
Pipeline and Vector Processing
9-1  Parallel Processing
9-2  Pipelining
     General Considerations
9-3  Arithmetic Pipeline
9-4  Instruction Pipeline
     Example: Four-Segment Instruction Pipeline
     Data Dependency
     Handling of Branch Instructions
9-5  RISC Pipeline
     Example: Three-Segment Instruction Pipeline
     Delayed Load
     Delayed Branch
9-6  Vector Processing
     Vector Operations
     Matrix Multiplication
     Memory Interleaving
     Supercomputers
9-7  Array Processors
     Attached Array Processor
     SIMD Array Processor
Problems
References
CHAPTER TEN
Computer Arithmetic
10-1  Introduction
10-2  Addition and Subtraction
      Addition and Subtraction with Signed-Magnitude Data
      Hardware Implementation
      Hardware Algorithm
      Addition and Subtraction with Signed-2's Complement Data
10-3  Multiplication Algorithms
      Hardware Implementation for Signed-Magnitude Data
      Hardware Algorithm
      Booth Multiplication Algorithm
      Array Multiplier
10-4  Division Algorithms
      Hardware Implementation for Signed-Magnitude Data
      Divide Overflow
      Hardware Algorithm
      Other Algorithms
10-5  Floating-Point Arithmetic Operations
      Basic Considerations
      Register Configuration
      Addition and Subtraction
      Multiplication
      Division
10-6  Decimal Arithmetic Unit
      BCD Adder
      BCD Subtraction
10-7  Decimal Arithmetic Operations
      Addition and Subtraction
      Multiplication
      Division
      Floating-Point Operations
Problems
References
CHAPTER ELEVEN
Input-Output Organization
11-1  Peripheral Devices
      ASCII Alphanumeric Characters
11-2  Input-Output Interface
      I/O Bus and Interface Modules
      I/O versus Memory Bus
      Isolated versus Memory-Mapped I/O
      Example of I/O Interface
11-3  Asynchronous Data Transfer
      Strobe Control
      Handshaking
      Asynchronous Serial Transfer
      Asynchronous Communication Interface
      First-In, First-Out Buffer
11-4  Modes of Transfer
      Example of Programmed I/O
      Interrupt-Initiated I/O
      Software Considerations
11-5  Priority Interrupt
      Daisy-Chaining Priority
      Parallel Priority Interrupt
      Priority Encoder
      Interrupt Cycle
      Software Routines
      Initial and Final Operations
11-6  Direct Memory Access (DMA)
      DMA Controller
      DMA Transfer
11-7  Input-Output Processor (IOP)
      CPU-IOP Communication
      IBM 370 I/O Channel
      Intel 8089 IOP
11-8  Serial Communication
      Character-Oriented Protocol
      Transmission Example
      Data Transparency
      Bit-Oriented Protocol
Problems
References
CHAPTER TWELVE
Memory Organization
12-1  Memory Hierarchy
12-2  Main Memory
      RAM and ROM Chips
      Memory Address Map
      Memory Connection to CPU
12-3  Auxiliary Memory
      Magnetic Disks
      Magnetic Tape
12-4  Associative Memory
      Hardware Organization
      Match Logic
      Read Operation
      Write Operation
12-5  Cache Memory
      Associative Mapping
      Direct Mapping
      Set-Associative Mapping
      Writing into Cache
      Cache Initialization
12-6  Virtual Memory
      Address Space and Memory Space
      Address Mapping Using Pages
      Associative Memory Page Table
      Page Replacement
12-7  Memory Management Hardware
      Segmented-Page Mapping
      Numerical Example
      Memory Protection
Problems
References
CHAPTER THIRTEEN
Multiprocessors
13-1  Characteristics of Multiprocessors
13-2  Interconnection Structures
      Time-Shared Common Bus
      Multiport Memory
      Crossbar Switch
      Multistage Switching Network
      Hypercube Interconnection
13-3  Interprocessor Arbitration
      System Bus
      Serial Arbitration Procedure
      Parallel Arbitration Logic
      Dynamic Arbitration Algorithms
13-4  Interprocessor Communication and Synchronization
      Interprocessor Synchronization
      Mutual Exclusion with a Semaphore
13-5  Cache Coherence
      Conditions for Incoherence
      Solutions to the Cache Coherence Problem
Problems
References
Index
CHAPTER ONE
Digital Logic
Circuits
IN THIS CHAPTER
1-1 Digital Computers
1-2 Logic Gates
1-3 Boolean Algebra
1-4 Map Simplification
1-5 Combinational Circuits
1-6 Flip-Flops
1-7 Sequential Circuits

1-1 Digital Computers
The digital computer is a digital system that performs various computational
tasks. The word digital implies that the information in the computer is represented
by variables that take a limited number of discrete values. These values
are processed internally by components that can maintain a limited number of
discrete states. The decimal digits 0, 1, 2, ..., 9, for example, provide 10
discrete values. The first electronic digital computers, developed in the late
1940s, were used primarily for numerical computations. In this case the discrete
elements are the digits. From this application the term digital computer has
emerged. In practice, digital computers function more reliably if only two
states are used. Because of the physical restriction of components, and because
human logic tends to be binary (i.e., true-or-false, yes-or-no statements),
digital components that are constrained to take discrete values are further
constrained to take only two values and are said to be binary.
Digital computers use the binary number system, which has two digits:
0 and 1. A binary digit is called a bit. Information is represented in digital
computers in groups of bits. By using various coding techniques, groups of bits
can be made to represent not only binary numbers but also other discrete
symbols, such as decimal digits or letters of the alphabet. By judicious use of
binary arrangements and by using various coding techniques, the groups of
bits are used to develop complete sets of instructions for performing various
types of computations.
In contrast to the common decimal numbers that employ the base 10
system, binary numbers use a base 2 system with two digits: 0 and 1. The
decimal equivalent of a binary number can be found by expanding it into a
power series with a base of 2. For example, the binary number 1001011 repre
sents a quantity that can be converted to a decimal number by multiplying each
bit by the base 2 raised to an integer power as follows:

1 x 2^6 + 0 x 2^5 + 0 x 2^4 + 1 x 2^3 + 0 x 2^2 + 1 x 2^1 + 1 x 2^0 = 75

The seven bits 1001011 represent a binary number whose decimal equivalent
is 75. However, this same group of seven bits represents the letter K when used
in conjunction with a binary code for the letters of the alphabet. It may also
represent a control code for specifying some decision logic in a particular digital
computer. In other words, groups of bits in a digital computer are used to
represent many different things. This is similar to the concept that the same
letters of an alphabet are used to construct different languages, such as English
and French.
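To make the power-series conversion concrete, the short Python sketch below (an illustration added here, not part of the text) evaluates the bit string 1001011 by multiplying each bit by the corresponding power of 2, reproducing the decimal value 75.

# Convert a binary string to its decimal value by summing bit * 2**position,
# mirroring the power-series expansion described above.
def binary_to_decimal(bits):
    value = 0
    for position, bit in enumerate(reversed(bits)):
        value += int(bit) * 2 ** position
    return value

print(binary_to_decimal("1001011"))  # prints 75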
A computer system is sometimes subdivided into two functional entities:
hardware and software. The hardware of the computer consists of all the
electronic components and electromechanical devices that comprise the phys
ical entity of the device. Computer software consists of the instructions and
data that the computer manipulates to perform various data-processing tasks.
A sequence of instructions for the computer is called a program. The data that
are manipulated by the program constitute the data base.
A computer system is composed of its hardware and the system software
available for its use. The system software of a computer consists of a collection
of programs whose purpose is to make more effective use of the computer. The
programs included in a systems software package are referred to as the
operating system. They are distinguished from application programs written by the
user for the purpose of solving particular problems. For example, a high-level
language program written by a user to solve particular data-processing needs
is an application program, but the compiler that translates the high-level
language program to machine language is a system program. The customer
who buys a computer system would need, in addition to the hardware, any
available software needed for effective operation of the computer. The system
software is an indispensable part of a total computer system. Its function is to
compensate for the differences that exist between user needs and the capability
of the hardware.
The hardware of the computer is usually divided into three major parts,
as shown in Fig. 1-1. The central processing unit (CPU) contains an arithmetic
Figure 1-1 Block diagram of a digital computer.
and logic unit for manipulating data, a number of registers for storing data, and
control circuits for fetching and executing instructions. The memory of a
computer contains storage for instructions and data. It is called a random-access
memory (RAM) because the CPU can access any location in memory at
random and retrieve the binary information within a fixed interval of time. The
input and output processor (IOP) contains electronic circuits for communicating
and controlling the transfer of information between the computer and the
outside world. The input and output devices connected to the computer
include keyboards, printers, terminals, magnetic disk drives, and other
communication devices.
This book provides the basic knowledge necessary to understand the
hardware operations of a computer system. The subject is sometimes considered
from three different points of view, depending on the interest of the
investigator. When dealing with computer hardware it is customary to distinguish
between what is referred to as computer organization, computer design,
and computer architecture.
Computer organization is concerned with the way the hardware components
operate and the way they are connected together to form the computer
system. The various components are assumed to be in place and the task is to
investigate the organizational structure to verify that the computer parts
operate as intended.
Computer design is concerned with the hardware design of the computer.
Once the computer specifications are formulated, it is the task of the designer
to develop hardware for the system. Computer design is concerned with the
determination of what hardware should be used and how the parts should be
connected. This aspect of computer hardware is sometimes referred to as
computer implementation.
Computer architecture is concerned with the structure and behavior of the
computer as seen by the user. It includes the information formats, the instruction
set, and techniques for addressing memory. The architectural design of
a computer system is concerned with the specifications of the various func
tional modules, such as processors and memories, and structuring them to
gether into a computer system.
The book deals with all three subjects associated with computer hard
ware. In Chapters 1 through 4 we present the various digital components used
in the organization and design of computer systems. Chapters 5 through 7
cover the steps that a designer must go through to design and program an
elementary digital computer. Chapters 8 and 9 deal with the architecture of the
central processing unit. In Chapters 11 and 12 we present the organization and
architecture of the input-output processor and the memory unit.
1-2 Logic Gates
Binary information is represented in digital computers by physical quantities
called signals . Electrical signals such as voltages exist throughout the computer
in either one of two recognizable states. The two states represent a binary
variable that can be equal to 1 or 0. For example, a particular digital computer
may employ a signal of 3 volts to represent binary 1 and 0.5 volt to represent
binary 0. The input terminals of digital circuits accept binary signals of 3 and
0.5 volts and the circuits respond at the output terminals with signals of 3 and
0.5 volts to represent binary input and output corresponding to 1 and 0,
respectively.
Binary logic deals with binary variables and with operations that assume
a logical meaning. It is used to describe, in algebraic or tabular form, the
manipulation and processing of binary information. The manipulation of binary
information is done by logic circuits called gates. Gates are blocks of
hardware that produce signals of binary 1 or 0 when input logic requirements
are satisfied. A variety of logic gates are commonly used in digital computer
systems. Each gate has a distinct graphic symbol and its operation can be
described by means of an algebraic expression. The input-output relationship
of the binary variables for each gate can be represented in tabular form by a
truth table.
The names, graphic symbols, algebraic functions, and truth tables of
eight logic gates are listed in Fig. 1-2. Each gate has one or two binary input
variables designated by A and B and one binary output variable designated by
x. The AND gate produces the AND logic function: that is, the output is 1 if
input A and input B are both equal to 1; otherwise, the output is 0. These
conditions are also specified in the truth table for the AND gate. The table
shows that output x is 1 only when both input A and input B are 1. The algebraic
operation symbol of the AND function is the same as the multiplication symbol
of ordinary arithmetic. We can either use a dot between the variables or
concatenate the variables without an operation symbol between them. AND
gates may have more than two inputs, and by definition, the output is 1 if and
only if all inputs are 1.

Figure 1-2 Digital logic gates. The eight gates, with their algebraic functions, are:
AND (x = A · B), OR (x = A + B), inverter (x = A'), buffer (x = A),
NAND (x = (AB)'), NOR (x = (A + B)'), exclusive-OR or XOR (x = A ⊕ B = A'B + AB'),
and exclusive-NOR or equivalence (x = (A ⊕ B)' = A'B' + AB).
The OR gate produces the inclusive-OR function; that is, the output is 1
if input A or input B or both inputs are 1; otherwise, the output is 0. The
algebraic symbol of the OR function is +, similar to arithmetic addition. OR
gates may have more than two inputs, and by definition, the output is 1 if any
input is 1.
The inverter circuit inverts the logic sense of a binary signal. It produces
the NOT, or complement, function. The algebraic symbol used for the logic
complement is either a prime or a bar over the variable symbol. In this book
we use a prime for the logic complement of a binary variable, while a bar over
the letter is reserved for designating a complement microoperation as defined
in Chap. 4.
The small circle in the output of the graphic symbol of an inverter desig
nates a logic complement. A triangle symbol by itself designates a buffer
circuit. A buffer does not produce any particular logic function since the binary
value of the output is the same as the binary value of the input. This circuit
is used merely for power amplification. For example, a buffer that uses 3 volts
for binary 1 will produce an output of 3 volts when its input is 3 volts. However,
the amount of electrical power needed at the input of the buffer is much less
than the power produced at the output of the buffer. The main purpose of the
buffer is to drive other gates that require a large amount of power.
The NAND function is the complement of the AND function, as indicated
by the graphic symbol, which consists of an AND graphic symbol followed by
a small circle. The designation NAND is derived from the abbreviation of
NOT-AND. The NOR gate is the complement of the OR gate and uses an OR
graphic symbol followed by a small circle. Both NAND and NOR gates may
have more than two inputs, and the output is always the complement of the
AND or OR function, respectively.
The exclusive-OR gate has a graphic symbol similar to the OR gate except
for the additional curved line on the input side. The output of this gate is 1 if
any input is 1 but excludes the combination when both inputs are 1. The
exclusive-OR function has its own algebraic symbol or can be expressed in
terms of AND, OR, and complement operations as shown in Fig. 1-2. The
exclusive-NOR is the complement of the exclusive-OR, as indicated by the
small circle in the graphic symbol. The output of this gate is 1 only if both inputs
are equal to 1 or both inputs are equal to 0. A more fitting name for the
exclusive-OR operation would be an odd function; that is, its output is 1 if an
odd number of inputs are 1. Thus in a three-input exclusive-OR (odd) function,
the output is 1 if only one input is 1 or if all three inputs are 1. The exclusive-OR
and exclusive-NOR gates are commonly available with two inputs, and only
seldom are they found with three or more inputs.
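The behavior of the eight gates of Fig. 1-2 can also be written as one-line expressions on binary values. The Python sketch below is an added illustration (not from the book) in which each gate is modeled as a function of inputs that are 0 or 1.

# Each gate of Fig. 1-2 modeled as a function of binary inputs (0 or 1).
AND  = lambda a, b: a & b
OR   = lambda a, b: a | b
NOT  = lambda a: 1 - a            # inverter: complement of the input
BUF  = lambda a: a                # buffer: output equals input
NAND = lambda a, b: 1 - (a & b)
NOR  = lambda a, b: 1 - (a | b)
XOR  = lambda a, b: a ^ b         # exclusive-OR: 1 when the inputs differ
XNOR = lambda a, b: 1 - (a ^ b)   # equivalence: 1 when the inputs are equal

# Example: print the truth table of the NAND gate.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, NAND(a, b))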
1-3 Boolean Algebra
Boolean algebra is an algebra that deals with binary variables and logic oper
ations. The variables are designated by letters such as A, B, x, and y. The three
basic logic operations are AND, OR, and complement. A Boolean function can
be expressed algebraically with binary variables, the logic operation symbols,
parentheses, and equal sign. For a given value of the variables, the Boolean
function can be either 1 or 0. Consider, for example, the Boolean function
F = x + y'z
The function F is equal to 1 if x is 1 or if both y' and z are equal to 1; F is equal
to 0 otherwise. But saying that y' = 1 is equivalent to saying that y = 0 since
y' is the complement of y. Therefore, we may say that F is equal to 1 if x = 1
or if yz = 01. The relationship between a function and its binary variables can
be represented in a truth table. To represent a function in a truth table we need
a list of the 2^n combinations of the n binary variables. As shown in Fig. 1-3(a),
there are eight possible distinct combinations for assigning bits to the three
variables x, y, and z. The function F is equal to 1 for those combinations where
x = 1 or yz = 01; it is equal to 0 for all other combinations.
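The 2^n combinations can also be generated programmatically. The Python sketch below (added here for illustration, not part of the text) tabulates F = x + y'z over all eight combinations and reproduces the truth table of Fig. 1-3(a).

from itertools import product

# F = x + y'z written with bitwise operators on 0/1 values; (1 - y) is y'.
def F(x, y, z):
    return x | ((1 - y) & z)

print(" x y z | F")
for x, y, z in product((0, 1), repeat=3):   # all 2**3 combinations
    print(f" {x} {y} {z} | {F(x, y, z)}")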
A Boolean function can be transformed from an algebraic expression into
a logic diagram composed of AND, OR, and inverter gates. The logic diagram
for F is shown in Fig. 1-3(b). There is an inverter for input y to generate its
complement y'. There is an AND gate for the term y'z, and an OR gate is used
to combine the two terms. In a logic diagram, the variables of the function are
taken to be the inputs of the circuit, and the variable symbol of the function
is taken as the output of the circuit.
The purpose of Boolean algebra is to facilitate the analysis and design of
digital circuits. It provides a convenient tool to:
1. Express in algebraic form a truth table relationship between binary
variables.
2. Express in algebraic form the input-output relationship of logic
diagrams.
3. Find simpler circuits for the same function.

Figure 1-3 Truth table and logic diagram for F = x + y'z: (a) truth table; (b) logic diagram.

x y z | F
0 0 0 | 0
0 0 1 | 1
0 1 0 | 0
0 1 1 | 0
1 0 0 | 1
1 0 1 | 1
1 1 0 | 1
1 1 1 | 1
A Boolean function specified by a truth table can be expressed algebraically in
many different ways. By manipulating a Boolean expression according to
Boolean algebra rules, one may obtain a simpler expression that will require
fewer gates. To see how this is done, we must first study the manipulative
capabilities of Boolean algebra.
Table 1-1 lists the most basic identities of Boolean algebra. All the iden
tities in the table can be proven by means of truth tables. The first eight
identities show the basic relationship between a single variable and itself, or
in conjunction with the binary constants 1 and 0. The next five identities (9
through 13) are similar to ordinary algebra. Identity 14 does not apply in
ordinary algebra but is very useful in manipulating Boolean expressions.
Identities 15 and 16 are called DeMorgan's theorems and are discussed below.
The last identity states that if a variable is complemented twice, one obtains
the original value of the variable.
TABLE 1-1 Basic Identities of Boolean Algebra

(1)  x + 0 = x                      (2)  x · 0 = 0
(3)  x + 1 = 1                      (4)  x · 1 = x
(5)  x + x = x                      (6)  x · x = x
(7)  x + x' = 1                     (8)  x · x' = 0
(9)  x + y = y + x                  (10) xy = yx
(11) x + (y + z) = (x + y) + z      (12) x(yz) = (xy)z
(13) x(y + z) = xy + xz             (14) x + yz = (x + y)(x + z)
(15) (x + y)' = x'y'                (16) (xy)' = x' + y'
(17) (x')' = x
The identities listed in the table apply to single variables or to Boolean
functions expressed in terms of binary variables. For example, consider the
following Boolean algebra expression:
AB' + C'D + AB' + C'D
By letting x = AB' + C'D the expression can be written as x + x. From identity
5 in Table 1-1 we find that x + x = x. Thus the expression can be reduced to
only two terms:
AB' + C'D + AB' + C'D = AB' + C'D
DeMorgan's theorem is very important in dealing with NOR and NAND
gates. It states that a NOR gate that performs the (x + y)' function is equivalent
to the function x'y'. Similarly, a NAND function can be expressed by either
(xy)' or (x' + y'). For this reason the NOR and NAND gates have two distinct
graphic symbols, as shown in Figs. 1-4 and 1-5. Instead of representing a NOR
gate with an OR graphic symbol followed by a circle, we can represent it by
an AND graphic symbol preceded by circles in all inputs. The invert-AND
symbol for the NOR gate follows from DeMorgan's theorem and from the
convention that small circles denote complementation. Similarly, the NAND
gate has two distinct symbols, as shown in Fig. 1-5.
To see how Boolean algebra manipulation is used to simplify digital
circuits, consider the logic diagram of Fig. 1-6(a). The output of the circuit can
be expressed algebraically as follows:
F = ABC + ABC' + A'C
Each term corresponds to one AND gate, and the OR gate forms the logical
sum of the three terms. Two inverters are needed to complement A' and C'.
The expression can be simplified using Boolean algebra.
F = ABC + ABC' + A'C = AB(C + C') + A'C = AB + A'C

Note that (C + C') = 1 by identity 7 and AB · 1 = AB by identity 4 in Table 1-1.
The logic diagram of the simplified expression is drawn in Fig. 1-6(b). It
requires only four gates rather than the six gates used in the circuit of Fig.
1-6(a). The two circuits are equivalent and produce the same truth table rela
tionship between inputs A, B, C and output F.
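The claimed equivalence can be confirmed by an exhaustive check over the eight input combinations. The Python sketch below is an added illustration (not the book's) comparing the original and simplified expressions.

from itertools import product

# Original circuit:   F = ABC + ABC' + A'C
original   = lambda A, B, C: (A & B & C) | (A & B & (1 - C)) | ((1 - A) & C)
# Simplified circuit: F = AB + A'C
simplified = lambda A, B, C: (A & B) | ((1 - A) & C)

# Equivalent circuits agree on every input combination.
assert all(original(A, B, C) == simplified(A, B, C)
           for A, B, C in product((0, 1), repeat=3))
print("ABC + ABC' + A'C equals AB + A'C for all inputs")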
Figure 1-4 Two graphic symbols for NOR gate: (a) OR-invert and (b) invert-AND, both producing (x + y + z)' = x'y'z'.

Figure 1-5 Two graphic symbols for NAND gate: (a) AND-invert and (b) invert-OR, both producing (xyz)' = x' + y' + z'.
Figure 1-6 Two logic diagrams for the same Boolean function: (a) F = ABC + ABC' + A'C; (b) F = AB + A'C.

Complement of a Function
The complement of a function F when expressed in a truth table is obtained
by interchanging 1's and 0's in the values of F in the truth table. When the
function is expressed in algebraic form, the complement of the function can be
derived by means of DeMorgan's theorem. The general form of DeMorgan's
theorem can be expressed as follows:
(x1 + x2 + x3 + ··· + xn)' = x1' x2' x3' ··· xn'
(x1 x2 x3 ··· xn)' = x1' + x2' + x3' + ··· + xn'
From the general DeMorgan's theorem we can derive a simple procedure for
obtaining the complement of an algebraic expression. This is done by changing
all OR operations to AND operations and all AND operations to OR operations
and then complementing each individual letter variable. As an example, con
sider the following expression and its complement:
F = AB + C'D' + B'D
F' = (A' + B')(C + D)(B + D')
The complement expression is obtained by interchanging AND and OR oper
ations and complementing each individual variable. Note that the complement
of C' is C.
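A complement obtained this way can be checked by confirming that F and F' disagree on every input combination. The Python sketch below (an added illustration, not from the text) does this for the example above.

from itertools import product

# F  = AB + C'D' + B'D
F      = lambda A, B, C, D: (A & B) | ((1 - C) & (1 - D)) | ((1 - B) & D)
# F' = (A' + B')(C + D)(B + D'), obtained by DeMorgan's theorem
F_comp = lambda A, B, C, D: ((1 - A) | (1 - B)) & (C | D) & (B | (1 - D))

# F' is the complement of F exactly when the two differ on all 2**4 combinations.
assert all(F(*v) != F_comp(*v) for v in product((0, 1), repeat=4))
print("F' is the complement of F on all 16 input combinations")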
1-4 Map Simplification
The complexity of the logic diagram that implements a Boolean function is
related directly to the complexity of the algebraic expression from which the
function is implemented. The truth table representation of a function is unique,
but the function can appear in many different forms when expressed alge
braically. The expression may be simplified using the basic relations of Boolean
algebra. However, this procedure is sometimes difficult because it lacks specific
rules for predicting each succeeding step in the manipulative process. The map
method provides a simple, straightforward procedure for simplifying Boolean
expressions. This method may be regarded as a pictorial arrangement of the
truth table which allows an easy interpretation for choosing the minimum
number of terms needed to express the function algebraically. The map
method is also known as the Karnaugh map or K-map.
Each combination of the variables in a truth table is called a minterm. For
example, the truth table of Fig. 1-3 contains eight minterms. When expressed
in a truth table a function of n variables will have 2^n minterms, equivalent to
the 2^n binary numbers obtained from n bits. A Boolean function is equal to 1
for some minterms and to 0 for others. The information contained in a truth
table may be expressed in compact form by listing the decimal equivalent of
those minterms that produce a 1 for the function. For example, the truth table
of Fig. 1-3 can be expressed as follows:
F(x, y, z) = Σ(1, 4, 5, 6, 7)
The letters in parentheses list the binary variables in the order that they appear
in the truth table. The symbol Σ stands for the sum of the minterms that follow
in parentheses. The minterms that produce 1 for the function are listed in their
decimal equivalent. The minterms missing from the list are the ones that
produce 0 for the function.
The map is a diagram made up of squares, with each square representing
one minterm. The squares corresponding to minterms that produce 1 for the
function are marked by a 1 and the others are marked by a 0 or are left empty.
By recognizing various patterns and combining squares marked by 1's in the
map, it is possible to derive alternative algebraic expressions for the function,
from which the most convenient may be selected.
The maps for functions of two, three, and four variables are shown in Fig.
1-7. The number of squares in a map of n variables is 2^n. The 2^n minterms are
listed by an equivalent decimal number for easy reference.

Figure 1-7 Maps for two-, three-, and four-variable functions: (a) two-variable map; (b) three-variable map; (c) four-variable map.

The minterm numbers are assigned in an orderly arrangement such that adjacent squares
represent minterms that differ by only one variable. The variable names are
listed across both sides of the diagonal line in the corner of the map. The O's
and 1's marked along each row and each column designate the value of the
variables. Each variable under brackets contains half of the squares in the map
where that variable appears unprimed. The variable appears with a prime
(complemented) in the remaining half of the squares.
The minterm represented by a square is determined from the binary
assignments of the variables along the left and top edges in the map. For
example, minterm 5 in the three-variable map is 101 in binary, which may be
obtained from the 1 in the second row concatenated with the 01 of the second
column. This minterm represents a value for the binary variables A, B, and C,
with A and C being unprimed and B being primed (i.e., AB'C). On the other
hand, minterm 5 in the four-variable map represents a minterm for four
variables. The binary number contains the four bits 0101, and the corresponding
term it represents is A'BC'D.
Minterms of adjacent squares in the map are identical except for one
variable, which appears complemented in one square and uncomplemented
in the adjacent square. According to this definition of adjacency, the squares
at the extreme ends of the same horizontal row are also to be considered
adjacent. The same applies to the top and bottom squares of a column. As a
A Boolean function represented by a truth table is plotted into the map
by inserting 1's in those squares where the function is 1. The squares contain
ing 1's are combined in groups of adjacent squares. These groups must contain
a number of squares that is an integral power of 2. Groups of combined
adjacent squares may share one or more squares with one or more groups.
Each group of squares represents an algebraic term, and the OR of those terms
gives the simplified algebraic expression for the function. The following exam
ples show the use of the map for simplifying Boolean functions.
In the first example we will simplify the Boolean function
F(A, B, C) = Σ(3, 4, 6, 7)
The three-variable map for this function is shown in Fig. 1-8. There are four
squares marked with 1's, one for each minterm that produces 1 for the func
tion. These squares belong to minterms 3, 4, 6, and 7 and are recognized from
Fig. 1-7(b). Two adjacent squares are combined in the third column. This
column belongs to both B and C and produces the term BC. The remaining two
squares with 1's in the two corners of the second row are adjacent and belong
to row A and the two columns of C', so they produce the term AC'. The
simplified algebraic expression for the function is the OR of the two terms:
F = BC + AC'
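The result read off the map can be checked against the minterm list. The Python sketch below (added for illustration, not part of the book) confirms that BC + AC' is 1 exactly on minterms 3, 4, 6, and 7.

# The minterm index of (A, B, C) is the value of the binary number ABC.
minterms = {3, 4, 6, 7}

# Expression read off the map: F = BC + AC'
F = lambda A, B, C: (B & C) | (A & (1 - C))

for m in range(8):
    A, B, C = (m >> 2) & 1, (m >> 1) & 1, m & 1
    assert F(A, B, C) == (1 if m in minterms else 0)
print("BC + AC' matches the minterm list (3, 4, 6, 7)")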
The second example simplifies the following Boolean function:
F(A, B, C) = Σ(0, 2, 4, 5, 6)
The five minterms are marked with 1's in the corresponding squares of the
three-variable map shown in Fig. 1-9. The four squares in the first and fourth
columns are adjacent and represent the term C'. The remaining square marked
with a 1 belongs to minterm 5 and can be combined with the square of minterm
4 to produce the term AB'. The simplified function is

F = C' + AB'
Figure 1-8 Map for F(A, B, C) = Σ(3, 4, 6, 7).
Figure 1-9 Map for F(A, B, C) = Σ(0, 2, 4, 5, 6).
The third example needs a four-variable map.

F(A, B, C, D) = Σ(0, 1, 2, 6, 8, 9, 10)

The area in the map covered by this four-variable function consists of the
squares marked with 1's in Fig. 1-10. The function contains 1's in the four
corners that, when taken as a group, give the term B'D'. This is possible
because these four squares are adjacent when the map is considered with top
and bottom or left and right edges touching. The two 1's on the left of the top
row are combined with the two 1's on the left of the bottom row to give the
term B'C'. The remaining 1 in the square of minterm 6 is combined with
minterm 2 to give the term A'CD'. The simplified function is

F = B'D' + B'C' + A'CD'

Figure 1-10 Map for F(A, B, C, D) = Σ(0, 1, 2, 6, 8, 9, 10).
Product-of-Sums Simplification
The Boolean expressions derived from the maps in the preceding examples
were expressed in sum-of-products form. The product terms are AND terms
and the sum denotes the ORing of these terms. It is sometimes convenient to
obtain the algebraic expression for the function in a product-of-sums form. The
sums are OR terms and the product denotes the ANDing of these terms. With
a minor modification, a product-of-sums form can be obtained from a map.
The procedure for obtaining a product-of-sums expression follows from
the basic properties of Boolean algebra. The 1's in the map represent the
minterms that produce 1 for the function. The squares not marked by 1
represent the minterms that produce 0 for the function. If we mark the empty
squares with 0's and combine them into groups of adjacent squares, we obtain
the complement of the function, F'. Taking the complement of F' produces an
expression for F in product-of-sums form. The best way to show this is by
example.
We wish to simplify the following Boolean function in both sum-of-products
form and product-of-sums form:

F(A, B, C, D) = Σ(0, 1, 2, 5, 8, 9, 10)
The 1's marked in the map of Fig. 1-11 represent the minterms that produce
a 1 for the function. The squares marked with 0's represent the minterms not
included in F and therefore denote the complement of F. Combining the
squares with 1's gives the simplified function in sum-of-products form:

F = B'D' + B'C' + A'C'D
If the squares marked with 0's are combined, as shown in the diagram, we
obtain the simplified complemented function:

F' = AB + CD + BD'

Taking the complement of F', we obtain the simplified function in product-of-sums
form:

F = (A' + B')(C' + D')(B' + D)
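Both simplified forms describe the same function. The Python sketch below (an added illustration, not the book's) checks that the sum-of-products and product-of-sums expressions agree with the minterm list Σ(0, 1, 2, 5, 8, 9, 10).

from itertools import product

minterms = {0, 1, 2, 5, 8, 9, 10}    # minterm index is the binary number ABCD

# Sum-of-products form:  F = B'D' + B'C' + A'C'D
sop = lambda A, B, C, D: ((1 - B) & (1 - D)) | ((1 - B) & (1 - C)) | ((1 - A) & (1 - C) & D)
# Product-of-sums form:  F = (A' + B')(C' + D')(B' + D)
pos = lambda A, B, C, D: ((1 - A) | (1 - B)) & ((1 - C) | (1 - D)) & ((1 - B) | D)

for m in range(16):
    A, B, C, D = (m >> 3) & 1, (m >> 2) & 1, (m >> 1) & 1, m & 1
    expected = 1 if m in minterms else 0
    assert sop(A, B, C, D) == pos(A, B, C, D) == expected
print("Both forms realize the sum of minterms (0, 1, 2, 5, 8, 9, 10)")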
Figure 1-11 Map for F(A, B, C, D) = Σ(0, 1, 2, 5, 8, 9, 10).
The logic diagrams of the two simplified expressions are shown in Fig. 1-12.
The sum-of-products expression is implemented in Fig. 1-12(a) with a group
of AND gates, one for each AND term. The outputs of the AND gates are
connected to the inputs of a single OR gate. The same function is implemented
in Fig. 1-12(b) in product-of-sums form with a group of OR gates, one for each
OR term. The outputs of the OR gates are connected to the inputs of a single
AND gate. In each case it is assumed that the input variables are directly
available in their complement, so inverters are not included. The pattern
established in Fig. 1-12 is the general form by which any Boolean function is
implemented when expressed in one of the standard forms. AND gates are
connected to a single OR gate when in sum-of-products form. OR gates are
connected to a single AND gate when in product-of-sums form.
A sum-of-products expression can be implemented with NAND gates as
shown in Fig. 1-13(a). Note that the second NAND gate is drawn with the
graphic symbol of Fig. 1-5(b). There are three lines in the diagram with small
circles at both ends. Two circles in the same line designate double complementation,
and since (x')' = x, the two circles can be removed and the resulting
diagram is equivalent to the one shown in Fig. 1-12(a). Similarly, a product-of
sums expression can be implemented with NOR gates as shown in Fig. 1-13(b).
The second NOR gate is drawn with the graphic symbol of Fig. 1-4(b). Again
the two circles on both sides of each line may be removed, and the diagram
so obtained is equivalent to the one shown in Fig. 1-12(b).
Don't-Care Conditions
The 1's and 0's in the map represent the minterms that make the function equal
to 1 or 0. There are occasions when it does not matter if the function produces
0 or 1 for a given minterm. Since the function may be either 0 or 1, we say that
we don't care what the function output is to be for this minterm. Minterms that
may produce either 0 or 1 for the function are said to be don't-care conditions
and are marked with an x in the map. These don't-care conditions can be used
to provide further simplification of the algebraic expression.
Figure 1-12 Logic diagrams with AND and OR gates: (a) sum of products, F = B'D' + B'C' + A'C'D; (b) product of sums, F = (A' + B')(C' + D')(B' + D).
Figure 1-13 Logic diagrams with NAND or NOR gates: (a) with NAND gates; (b) with NOR gates.
When choosing adjacent squares for the function in the map, the x's may
be assumed to be either 0 or 1, whichever gives the simplest expression. In
addition, an x need not be used at all if it does not contribute to the simplifi
cation of the function. In each case, the choice depends only on the simplifi
cation that can be achieved. As an example, consider the following Boolean
function together with the don't-care minterms:

F(A, B, C) = Σ(0, 2, 6)
d(A, B, C) = Σ(1, 3, 5)

The minterms listed with F produce a 1 for the function. The don't-care minterms
listed with d may produce either a 0 or a 1 for the function. The remaining
minterms, 4 and 7, produce a 0 for the function. The map is shown in Fig. 1-14.
The minterms of F are marked with 1's, those of d are marked with x's, and
the remaining squares are marked with 0's. The 1's and x's are combined in
any convenient manner so as to enclose the maximum number of adjacent
squares. It is not necessary to include all or any of the x's, but all the 1's must
be included. By including the don't-care minterms 1 and 3 with the 1's in the
first row we obtain the term A'. The remaining 1 for minterm 6 is combined
with minterm 2 to obtain the term BC'. The simplified expression is

F = A' + BC'

Note that don't-care minterm 5 was not included because it does not contribute
to the simplification of the expression. Note also that if don't-care minterms
1 and 3 were not included with the 1's, the simplified expression for F would
have been

F = A'C' + BC'
This would require two AND gates and an OR gate, as compared to the ex
pression obtained previously, which requires only one AND and one OR gate.
Figure 1-14 Example of map with don't-care conditions.
The function is determined completely once the x's are assigned to the
1's or 0's in the map. Thus the expression

F = A' + BC'

represents the Boolean function

F(A, B, C) = Σ(0, 1, 2, 3, 6)

It consists of the original minterms 0, 2, and 6 and the don't-care minterms 1
and 3. Minterm 5 is not included in the function. Since minterms 1, 3, and 5
were specified as being don't-care conditions, we have chosen minterms 1 and
3 to produce a 1 and minterm 5 to produce a 0. This was chosen because this
assignment produces the simplest Boolean expression.
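The don't-care assignment can be stated as a simple check: F = A' + BC' must be 1 on every required minterm and 0 on minterms 4 and 7, while its value on the don't-care minterms is free. The Python sketch below (added for illustration, not from the text) verifies this.

ones       = {0, 2, 6}    # minterms that must produce 1
dont_cares = {1, 3, 5}    # minterms whose value is free

# Expression chosen from the map: F = A' + BC'
F = lambda A, B, C: (1 - A) | (B & (1 - C))

for m in range(8):
    A, B, C = (m >> 2) & 1, (m >> 1) & 1, m & 1
    if m in ones:
        assert F(A, B, C) == 1
    elif m not in dont_cares:          # minterms 4 and 7 must produce 0
        assert F(A, B, C) == 0
print("F = A' + BC' covers the 1-minterms and excludes the 0-minterms")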
1-5 Combinational Circuits
A combinational circuit is a connected arrangement of logic gates with a set of
inputs and outputs. At any given time, the binary values of the outputs are a
function of the binary combination of the inputs. A block diagram of a combi
national circuit is shown in Fig. 1-15. The n binary input variables come from
an external source, the m binary output variables go to an external destination,
and in between there is an interconnection of logic gates. A combinational
circuit transforms binary information from the given input data to the required
output data. Combinational circuits are employed in digital computers for
generating binary control decisions and for providing digital components
required for data processing.
A combinational circuit can be described by a truth table showing the
binary relationship between the n input variables and the m output variables.
The truth table lists the corresponding output binary values for each of the 2^n
input combinations. A combinational circuit can also be specified with m
Boolean functions, one for each output variable. Each output function is
expressed in terms of the n input variables.
Figure 1-15 Block diagram of a combinational circuit: n input variables, m output variables.
The analysis of a combinational circuit starts with a given logic circuit
diagram and culminates with a set of Boolean functions or a truth table. If the
digital circuit is accompanied by a verbal explanation of its function, the
Boolean functions or the truth table is sufficient for verification. If the function
of the circuit is under investigation, it is necessary to interpret the operation of
the circuit from the derived Boolean functions or the truth table. The success
of such investigation is enhanced if one has experience and familiarity with
digital circuits. The ability to correlate a truth table or a set of Boolean functions
with an information-processing task is an art that one acquires with experience.
The design of combinational circuits starts from the verbal outline of the
problem and ends in a logic circuit diagram. The procedure involves the
following steps:
1. The problem is stated.
2. The input and output variables are assigned letter symbols.
3. The truth table that defines the relationship between inputs and outputs
is derived.
4. The simplified Boolean functions for each output are obtained.
5. The logic diagram is drawn.
To demonstrate the design of combinational circuits, we present two
examples of simple arithmetic circuits. These circuits serve as basic building
blocks for the construction of more complicated arithmetic circuits.
Half-Adder
The most basic digital arithmetic circuit is the addition of two binary digits. A
combinational circuit that performs the arithmetic addition of two bits is called
a half-adder. One that performs the addition of three bits (two significant bits
and a previous carry) is called a full-adder. The name of the former stems from
the fact that two half-adders are needed to implement a full-adder.
The input variables of a half-adder are called the augend and addend
bits. The output variables are the sum and carry. It is necessary to specify two
output variables because the sum of 1 + 1 is binary 10, which has two digits.
We assign symbols x and y to the two input variables, and S (for sum) and C
(for carry) to the two output variables. The truth table for the half-adder is
shown in Fig. 1-16(a). The C output is 0 unless both inputs are 1. The S output
represents the least significant bit of the sum. The Boolean functions for the
two outputs can be obtained directly from the truth table:

S = x'y + xy' = x ⊕ y
C = xy

The logic diagram is shown in Fig. 1-16(b). It consists of an exclusive-OR gate
and an AND gate.

    x  y  |  C  S
    0  0  |  0  0
    0  1  |  0  1
    1  0  |  0  1
    1  1  |  1  0
    (a) Truth table          (b) Logic diagram

Figure 1-16  Half-adder.
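Since the two output equations are so simple, they translate directly into code. The following Python sketch (ours, for illustration only) evaluates S and C for the four input combinations and reproduces the truth table of Fig. 1-16(a):

def half_adder(x, y):
    s = x ^ y              # S = x XOR y
    c = x & y              # C = xy
    return s, c

for x in (0, 1):
    for y in (0, 1):
        s, c = half_adder(x, y)
        print(x, y, c, s)  # C is 1 only when both inputs are 1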
Full-Adder
A full-adder is a combinational circuit that forms the arithmetic sum of three
input bits. It consists of three inputs and two outputs. Two of the input
variables, denoted by x and y, represent the two significant bits to be added.
The third input, z, represents the carry from the previous lower significant
position. Two outputs are necessary because the arithmetic sum of three binary
digits ranges in value from 0 to 3, and binary 2 or 3 needs two digits. The two
outputs are designated by the symbols S (for sum) and C (for carry). The binary
variable S gives the value of the least significant bit of the sum. The binary
variable C gives the output carry. The truth table of the full-adder is shown in
Table 1-2. The eight rows under the input variables designate all possible
combinations that the binary variables may have. The values of the output
variables are determined from the arithmetic sum of the input bits. When all
input bits are 0, the output is 0. The S output is equal to 1 when only one input
is equal to 1 or when all three inputs are equal to 1. The C output has a carry
of 1 if two or three inputs are equal to 1.
TABLE 1-2  Truth Table for Full-Adder

        Inputs        Outputs
     x    y    z    |   C    S
     0    0    0    |   0    0
     0    0    1    |   0    1
     0    1    0    |   0    1
     0    1    1    |   1    0
     1    0    0    |   0    1
     1    0    1    |   1    0
     1    1    0    |   1    0
     1    1    1    |   1    1

The maps of Fig. 1-17 are used to find algebraic expressions for the two
output variables. The 1's in the squares for the maps of S and C are determined
directly from the minterms in the truth table. The squares with 1's for the S
output do not combine in groups of adjacent squares. But since the output is
1 when an odd number of inputs are 1, S is an odd function and represents
the exclusive-OR relation of the variables (see the discussion at the end of Sec.
1-2). The squares with 1's for the C output may be combined in a variety of
ways. One possible expression for C is
C = xy + (x'y + xy')z
Realizing that x'y + xy' = x ⊕ y and including the expression for output S, we
obtain the two Boolean expressions for the full-adder:
S = x ⊕ y ⊕ z
C = xy + (x ⊕ y)z
The logic diagram of the full-adder is drawn in Fig. 1-18. Note that the full
adder circuit consists of two half-adders and an OR gate. When used in
subsequent chapters, the full-adder (FA) will be designated by a block diagram
as shown in Fig. 1-18(b).
Figure 1-17  Maps for full-adder.
S = x'y'z + x'yz' + xy'z' + xyz = x ⊕ y ⊕ z
C = xy + xz + yz = xy + (x'y + xy')z
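The decomposition of Fig. 1-18(a) into two half-adders and an OR gate can also be checked with a short sketch (again ours, not the book's): the first half-adder adds x and y, the second adds the intermediate sum to z, and the output carry is the OR of the two intermediate carries.

def half_adder(x, y):
    return x ^ y, x & y              # (sum, carry)

def full_adder(x, y, z):
    s1, c1 = half_adder(x, y)        # x + y
    s, c2 = half_adder(s1, z)        # (x XOR y) + z
    return s, c1 | c2                # C = xy + (x XOR y)z

# Verify against Table 1-2: C and S are the two bits of the arithmetic sum x + y + z.
for m in range(8):
    x, y, z = (m >> 2) & 1, (m >> 1) & 1, m & 1
    s, c = full_adder(x, y, z)
    assert (c << 1) | s == x + y + z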
Figure 1-18  Full-adder circuit: (a) logic diagram; (b) block diagram.
1-6  Flip-Flops
The digital circuits considered thus far have been combinational, where the
outputs at any given time are entirely dependent on the inputs that are present
at that time. Although every digital system is likely to have a combinational
circuit, most systems encountered in practice also include storage elements,
which require that the system be described in terms of sequential circuits. The
most common type of sequential circuit is the synchronous type. Synchronous
sequential circuits employ signals that affect the storage elements only at
discrete instants of time. Synchronization is achieved by a timing device called
a clock pulse generator that produces a periodic train of clock pulses. The clock
pulses are distributed throughout the system in such a way that storage
elements are affected only with the arrival of the synchronization pulse.
Clocked synchronous sequential circuits are the type most frequently encountered
in practice. They seldom manifest instability problems and their timing
is easily broken down into independent discrete steps, each of which may be
considered separately.
The storage elements employed in clocked sequential circuits are called
flip-flops. A flip-flop is a binary cell capable of storing one bit of information.
It has two outputs, one for the normal value and one for the complement value
of the bit stored in it. A flip-flop maintains a binary state until directed by a
clock pulse to switch states. The difference among various types of flip-flops
is in the number of inputs they possess and in the manner in which the inputs
affect the binary state. The most common types of flip-flops are presented
below.
SR Flip-Flop
The graphic symbol of the SR flip-flop is shown in Fig. 1-19(a). It has three
inputs, labeled S (for set), R (for reset), and C (for clock). It has an output Q
and sometimes the flip-flop has a complemented output, which is indicated
with a small circle at the other output terminal. There is an arrowhead-shaped
symbol in front of the letter C to designate a dynamic input. The dynamic
indicator symbol denotes the fact that the flip-flop responds to a positive
transition (from 0 to 1) of the input clock signal.

    S  R  |  Q(t + 1)
    0  0  |  Q(t)        No change
    0  1  |  0           Clear to 0
    1  0  |  1           Set to 1
    1  1  |  ?           Indeterminate

Figure 1-19  SR flip-flop: (a) graphic symbol; (b) characteristic table.
The operation of the SR flip-flop is as follows. If there is no signal at the
clock input C, the output of the circuit cannot change irrespective of the values
at inputs S and R. Only when the clock signal changes from 0 to 1 can the
output be affected according to the values in inputs S and R. If S = 1 and
R = 0 when C changes from 0 to 1, output Q is set to 1. If S = 0 and R = 1
when C changes from 0 to 1, output Q is cleared to 0. If both S and R are 0 during the
clock transition, the output does not change. When both S and R are equal to
1, the output is unpredictable and may go to either 0 or 1, depending on
internal timing delays that occur within the circuit.
The characteristic table shown in Fig. 1-19(b) summarizes the operation
of the SR flip-flop in tabular form. The S and R columns give the binary values
of the two inputs. Q(t) is the binary state of the Q output at a given time
(referred to as present state). Q(t + 1) is the binary state of the Q output after
the occurrence of a clock transition (referred to as next state). If S = R = 0, a
clock transition produces no change of state [i.e., Q(t + 1) = Q(t)]. If S = 0
and R = 1, the flip-flop goes to the 0 (clear) state. If S = 1 and R = 0, the
flip-flop goes to the 1 (set) state. The SR flip-flop should not be pulsed when
S = R = 1 since it produces an indeterminate next state. This indeterminate
condition makes the SR flip-flop difficult to manage and therefore it is seldom
used in practice.
D Flip-Flop
The D (data) flip-flop is a slight modification of the SR flip-flop. An SR flip-flop
is converted to a D flip-flop by inserting an inverter between S and R and
assigning the symbol D to the single input. The D input is sampled during the
occurrence of a clock transition from 0 to 1. If D = 1, the output of the flip-flop
goes to the 1 state, but if D = 0, the output of the flip-flop goes to the 0 state .
    D  |  Q(t + 1)
    0  |     0         Clear to 0
    1  |     1         Set to 1

Figure 1-20  D flip-flop: (a) graphic symbol; (b) characteristic table.

The graphic symbol and characteristic table of the D flip-flop are shown
in Fig. 1-20. From the characteristic table we note that the next state Q(t + 1)
is determined from the D input. The relationship can be expressed by a
characteristic equation:
Q(t + 1) = D
This means that the Q output of the flip-flop receives its value from the D input
every time that the clock signal goes through a transition from 0 to 1.
Note that no input condition exists that will leave the state of the D
flip-flop unchanged. Although a D flip-flop has the advantage of having only
one input (excluding C), it has the disadvantage that its characteristic table does
not have a "no change" condition Q(t + 1) = Q(t) . The "no change" condition
can be accomplished either by disabling the clock signal or by feeding the
output back into the input, so that clock pulses keep the state of the flip-flop
unchanged.
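A clocked D flip-flop is also easy to model in software. The sketch below (an illustration of ours, not from the text) updates the stored bit only on a 0-to-1 clock transition, and the last line shows the feedback arrangement that holds the state unchanged:

class DFlipFlop:
    def __init__(self):
        self.q = 0                 # stored bit
        self._clk = 0              # previous clock level

    def tick(self, d, clk):
        if self._clk == 0 and clk == 1:    # positive transition: Q(t + 1) = D
            self.q = d
        self._clk = clk
        return self.q

ff = DFlipFlop()
ff.tick(1, 1)                      # clock rises with D = 1, so Q becomes 1
ff.tick(0, 0)                      # clock returns to 0; Q is unaffected
ff.tick(ff.q, 1)                   # feeding Q back into D keeps the state at 1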
JK Flip-Flop
A JK flip-flop is a refinement of the SR flip-flop in that the indeterminate
condition of the SR type is defined in the JK type. Inputs J and K behave like
inputs S and R to set and clear the flip-flop, respectively. When inputs J and
K are both equal to 1, a clock transition switches the outputs of the flip-flop
to their complement state.
The graphic symbol and characteristic table of the JK flip-flop are shown
in Fig. 1-21 . The J input is equivalent to the S (set) input of the SR flip-flop,
and the K input is equivalent to the R (clear) input. Instead of the indeterminate
condition, the JK flip-flop has a complement condition Q(t + 1) = Q ' (t) when
both J and K are equal to 1 .
T Flip-Flop
Another type of flip-flop found in textbooks is the T (toggle) flip-flop. This
flip-flop, shown in Fig. 1-22, is obtained from a JK type when inputs J and K
are connected to provide a single input designated by T. The T flip-flop
therefore has only two conditions. When T = 0 (J = K = 0) a clock transition
does not change the state of the flip-flop. When T = 1 (J = K = 1) a clock
transition complements the state of the flip-flop. These conditions can be
expressed by a characteristic equation:

Q(t + 1) = Q(t) ⊕ T

    J  K  |  Q(t + 1)
    0  0  |  Q(t)         No change
    0  1  |  0            Clear to 0
    1  0  |  1            Set to 1
    1  1  |  Q'(t)        Complement

Figure 1-21  JK flip-flop: (a) graphic symbol; (b) characteristic table.

    T  |  Q(t + 1)
    0  |  Q(t)            No change
    1  |  Q'(t)           Complement

Figure 1-22  T flip-flop: (a) graphic symbol; (b) characteristic table.
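The characteristic tables can be restated as next-state functions. The short Python sketch below (our own notation) encodes Q(t + 1) for the D, JK, and T flip-flops and can be checked row by row against Figs. 1-20 through 1-22:

def next_d(q, d):                  # D flip-flop: Q(t + 1) = D
    return d

def next_jk(q, j, k):              # JK flip-flop: Q(t + 1) = JQ' + K'Q
    return (j & (q ^ 1)) | ((k ^ 1) & q)

def next_t(q, t):                  # T flip-flop: Q(t + 1) = Q XOR T
    return q ^ t

assert next_jk(0, 1, 1) == 1 and next_jk(1, 1, 1) == 0   # complement when J = K = 1
assert next_t(1, 0) == 1 and next_t(1, 1) == 0           # toggle only when T = 1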
Edge-Triggered Flip-Flops
The most common type of flip-flop used to synchronize the state change during
a clock pulse transition is the edge-triggered flip-flop. In this type of flip-flop,
output transitions occur at a specific level of the clock pulse. When the pulse
input level exceeds this threshold level, the inputs are locked out so that the
flip-flop is unresponsive to further changes in inputs until the clock pulse
returns to 0 and another pulse occurs. Some edge-triggered flip-flops cause a
transition on the rising edge of the clock signal (positive-edge transition), and
others cause a transition on the falling edge (negative-edge transition).
Figure 1-23(a) shows the clock pulse signal in a positive-edge-triggered
D flip-flop. The value in the D input is transferred to the Q output when the
clock makes a positive transition. The output cannot change when the clock
is in the 1 level, in the 0 level, or in a transition from the 1 level to the 0 level.
Figure 1-23  Edge-triggered flip-flop: (a) positive-edge-triggered D flip-flop;
(b) negative-edge-triggered D flip-flop.
The effective positive clock transition includes a minimum time called the setup
time in which the D input must remain at a constant value before the transition,
and a definite time called the hold time in which the D input must not change
after the positive transition. The effective positive transition is usually a very
small fraction of the total period of the clock pulse.
Figure 1-23(b) shows the corresponding graphic symbol and timing dia
gram for a negative-edge-triggered D flip-flop. The graphic symbol includes a
small negation circle in front of the dynamic indicator at the C input. This
denotes a negative-edge-triggered behavior. In this case the flip-flop responds
to a transition from the 1 level to the 0 level of the clock signal.
Another type of flip-flop used in some systems is the master-slave flip
flop. This type of circuit consists of two flip-flops. The first is the master, which
responds to the positive level of the clock, and the second is the slave, which
responds to the negative level of the clock. The result is that the output changes
during the 1-to-0 transition of the clock signal. The trend is away from the use
of master-slave flip-flops and toward edge-triggered flip-flops.
Flip-flops available in integrated circuit packages will sometimes provide
special input terminals for setting or clearing the flip-flop asynchronously.
These inputs are usually called "preset" and "clear." They affect the flip-flop
on a negative level of the input signal without the need of a clock pulse. These
inputs are useful for bringing the flip-flop to an initial state prior to its clocked
operation.
Excitation Tables
The characteristic tables of flip-flops specify the next state when the inputs and
the present state are known. During the design of sequential circuits we usually
know the required transition from present state to next state and wish to find
the flip-flop input conditions that will cause the required transition. For this
reason we need a table that lists the required input combinations for a given
change of state. Such a table is called a flip-flop excitation table.
Table 1-3 lists the excitation tables for the four types of flip-flops. Each
table consists of two columns, Q(t) and Q(t + 1), and a column for each input
to show how the required transition is achieved. There are four possible
transitions from present state Q(t) to next state Q(t + 1). The required input
conditions for each of these transitions are derived from the information
available in the characteristic tables. The symbol x in the tables represents a
don't-care condition; that is, it does not matter whether the input to the
flip-flop is 0 or 1 .
TABLE 1-3  Excitation Table for Four Flip-Flops

    SR flip-flop                  D flip-flop
    Q(t)  Q(t + 1)   S   R        Q(t)  Q(t + 1)   D
     0       0       0   x         0       0       0
     0       1       1   0         0       1       1
     1       0       0   1         1       0       0
     1       1       x   0         1       1       1

    JK flip-flop                  T flip-flop
    Q(t)  Q(t + 1)   J   K        Q(t)  Q(t + 1)   T
     0       0       0   x         0       0       0
     0       1       1   x         0       1       1
     1       0       x   1         1       0       1
     1       1       x   0         1       1       0
The reason for the don't-care conditions in the excitation tables is that
there are two ways of achieving the required transition. For example, in a JK
flip-flop, a transition from present state of 0 to a next state of 0 can be achieved
by having inputs J and K equal to 0 (to obtain no change) or by letting J = 0
and K = 1 to clear the flip-flop (although it is already cleared). In both cases
J must be 0, but K is 0 in the first case and 1 in the second. Since the required
transition will occur in either case, we mark the K input with a don't-care x
and let the designer choose either 0 or 1 for the K input, whichever is more
convenient.
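The table can be applied mechanically: given a required transition Q(t) → Q(t + 1), read off the input values, treating a don't-care as a free choice. The small Python dictionary below is an illustrative rendering of the JK entries of Table 1-3 (None stands for the don't-care x), not something that appears in the book:

JK_EXCITATION = {                  # (Q(t), Q(t+1)) -> (J, K)
    (0, 0): (0, None),
    (0, 1): (1, None),
    (1, 0): (None, 1),
    (1, 1): (None, 0),
}

j, k = JK_EXCITATION[(0, 1)]       # to force a transition from 0 to 1
print(j, k)                        # 1 None: J must be 1, K is a don't-care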
1-7  Sequential Circuits
A sequential circuit is an interconnection of flip-flops and gates. The gates by
themselves constitute a combinational circuit, but when included with the
flip-flops, the overall circuit is classified as a sequential circuit. The block
diagram of a clocked sequential circuit is shown in Fig. 1-24. It consists of a
combinational circuit and a number of clocked flip-flops. In general, any
number or type of flip-flops may be included. As shown in the diagram, the
combinational circuit block receives binary signals from external inputs and
from the outputs of flip-flops. The outputs of the combinational circuit go to
external outputs and to inputs of flip-flops. The gates in the combinational
circuit determine the binary value to be stored in the flip-flops after each clock
transition. The outputs of flip-flops, in turn, are applied to the combinational
circuit inputs and determine the circuit's behavior. This process demonstrates
that the external outputs of a sequential circuit are functions of both external
inputs and the present state of the flip-flops. Moreover, the next state of
flip-flops is also a function of their present state and external inputs. Thus a
sequential circuit is specified by a time sequence of external inputs, external
outputs, and internal flip-flop binary states.
Flip-Flop Input Equations
An example of a sequential circuit is shown in Fig. 1-25. It has one input
variable x, one output variable y, and two clocked D flip-flops. The AND gates,
OR gates, and inverter form the combinational logic part of the circuit. The
interconnections among the gates in the combinational circuit can be specified
by a set of Boolean expressions. The part of the combinational circuit that
generates the inputs to flip-flops are described by a set of Boolean expressions
called flip-flop input equations. We adopt the convention of using the flip-flop
input symbol to denote the input equation variable name and a subscript to
Figure 1-24  Block diagram of a clocked synchronous sequential circuit.

Figure 1-25  Example of a sequential circuit.
designate the symbol chosen for the output of the flip-flop. Thus, in Fig. 1-25,
we have two input equations, designated DA and DB. The first letter in each
symbol denotes the D input of a D flip-flop. The subscript letter is the symbol
name of the flip-flop. The input equations are Boolean functions for flip-flop
input variables and can be derived by inspection of the circuit. Since the output
of the OR gate is connected to the D input of flip-flop A, we write the first input
equation as
DA = Ax + Bx
where A and B are the outputs of the two flip-flops and x is the external input.
The second input equation is derived from the single AND gate whose output
is connected to the D input of flip-flop B:
DB = A'x
The sequential circuit also has an external output, which is a function of
the input variable and the state of the flip-flops. This output can be specified
algebraically by the expression
y = Ax' + Bx'
From this example we note that a flip-flop input equation is a Boolean
expression for a combinational circuit. The subscripted variable is a binary
variable name for the output of a combinational circuit. This output is always
connected to a flip-flop input.
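Because the input equations are ordinary Boolean expressions, the circuit of Fig. 1-25 can be simulated directly. The sketch below (our own, for illustration) applies DA = Ax + Bx and DB = A'x on each clock transition and computes the output y = Ax' + Bx':

def step(a, b, x):
    da = (a & x) | (b & x)                 # DA = Ax + Bx
    db = (a ^ 1) & x                       # DB = A'x
    y = (a & (x ^ 1)) | (b & (x ^ 1))      # y = Ax' + Bx'
    return da, db, y                       # next A, next B, present output

a, b = 0, 0
for x in (1, 1, 0):                        # a sample input sequence
    a_next, b_next, y = step(a, b, x)
    print(a, b, x, "->", a_next, b_next, "y =", y)
    a, b = a_next, b_next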
State Table
The behavior of a sequential circuit is determined from the inputs, the outputs,
and the state of its flip-flops. Both the outputs and the next state are a function
of the inputs and the present state. A sequential circuit is specified by a state
table that relates outputs and next states as a function of inputs and present
states. In clocked sequential circuits, the transition from present state to next
state is activated by the presence of a clock signal.
The state table for the circuit of Fig. 1-25 is shown in Table 1-4. The table
consists of four sections, labeled present state, input, next state, and output. The
present-state section shows the states of flip-flops A and B at any given time
t . The input section gives a value of x for each possible present state. The
next-state section shows the states of the flip-flops one clock period later at time
t + 1. The output section gives the value of y for each present state and input
condition.
The derivation of a state table consists of first listing all possible binary
combinations of present state and inputs. In this case we have eight binary
combinations from 000 to 111. The next-state values are then determined from
the logic diagram or from the input equations. The input equation for flip-flop
A is
DA = Ax + Bx
The next-state value of each flip-flop is equal to its D input value in the present
state. The transition from present state to next state occurs after application of
a clock signal. Therefore, the next state of A is equal to 1 when the present state
and input values satisfy the conditions Ax = 1 or Bx = 1, which makes DA
equal to 1. This is shown in the state table with three 1's under the column for
next state of A. Similarly, the input equation for flip-flop B is
DB = A'x
The next state of B in the state table is equal to 1 when the present state of A
is 0 and input x is equal to 1. The output column is derived from the output
equation
y = Ax' + Bx'
TABLE 1-4  State Table for Circuit of Fig. 1-25

    Present state    Input    Next state    Output
       A    B          x        A    B         y
       0    0          0        0    0         0
       0    0          1        0    1         0
       0    1          0        0    0         1
       0    1          1        1    1         0
       1    0          0        0    0         1
       1    0          1        1    0         0
       1    1          0        0    0         1
       1    1          1        1    0         0
The state table of any sequential circuit is obtained by the procedure used
in this example. In general, a sequential circuit with m flip-flops, n input
variables, and p output variables will contain m columns for present state, n
columns for inputs, m columns for next state, and p columns for outputs. The
present state and input columns are combined and under them we list the 2^(m+n)
binary combinations from 0 through 2^(m+n) - 1. The next-state and output
columns are functions of the present state and input values and are derived
directly from the circuit or the Boolean equations that describe the circuit.
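The same derivation can be automated: enumerate the 2^(m+n) combinations of present state and input, evaluate the input equations to obtain the next state, and evaluate the output equation. The few lines below (an illustration using the equations of the circuit of Fig. 1-25) print the contents of Table 1-4:

print("A B x | A+ B+ | y")
for a in (0, 1):
    for b in (0, 1):
        for x in (0, 1):
            a_next = (a & x) | (b & x)             # DA = Ax + Bx
            b_next = (a ^ 1) & x                   # DB = A'x
            y = (a & (x ^ 1)) | (b & (x ^ 1))      # y = Ax' + Bx'
            print(a, b, x, "|", a_next, b_next, "|", y)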
State Diagram
The information available in a state table can be represented graphically in a
state diagram. In this type of diagram, a state is represented by a circle, and
the transition between states is indicated by directed lines connecting the
circles. The state diagram of the sequential circuit of Fig. 1-25 is shown in Fig.
1-26. The state diagram provides the same information as the state table and
is obtained directly from Table 1-4. The binary number inside each circle
identifies the state of the flip-flops. The directed lines are labeled with two
binary numbers separated by a slash. The input value during the present state
is labeled first and the number after the slash gives the output during the
present state. For example, the directed line from state 00 to 01 is labeled 1/0,
meaning that when the sequential circuit is in the present state 00 and the input
Figure 1-26  State diagram of sequential circuit.
is 1, the output is 0. After a clock transition, the circuit goes to the next state
01 . The same clock transition may change the input value. If the input changes
to 0, the output becomes 1, but if the input remains at 1, the output stays at
0. This information is obtained from the state diagram along the two directed
lines emanating from the circle representing state 01. A directed line connect
ing a circle with itself indicates that no change of state occurs.
There is no difference between a state table and a state diagram except
in the manner of representation. The state table is easier to derive from a given
logic diagram and the state diagram follows directly from the state table. The
state diagram gives a pictorial view of state transitions and is the form suitable
for human interpretation of the circuit operation. For example, the state dia
gram of Fig. 1-26 clearly shows that starting from state 00, the output is 0 as
long as the input stays at 1. The first 0 input after a string of 1's gives an output
of 1 and transfers the circuit back to the initial state 00.
Design Example
The procedure for designing sequential circuits will be demonstrated by a
specific example. The design procedure consists of first translating the circuit
specifications into a state diagram. The state diagram is then converted into a
state table. From the state table we obtain the information for obtaining the
logic circuit diagram.
We wish to design a clocked sequential circuit that goes through a se
quence of repeated binary states 00, 01, 10, and 11 when an external input x
is equal to 1. The state of the circuit remains unchanged when x = 0. This type
of circuit is called a 2-bit binary counter because the state sequence is identical
to the count sequence of two binary digits. Input x is the control variable that
specifies when the count should proceed.
Figure 1-27  State diagram for binary counter.

The binary counter needs two flip-flops to represent the two bits. The
state diagram for the sequential circuit is shown in Fig. 1-27. The diagram is
drawn to show that the states of the circuit follow the binary count as long as
x = 1. The state following 11 is 00, which causes the count to be repeated. If
x = 0, the state of the circuit remains unchanged. This sequential circuit has
no external outputs, and therefore only the input value is labeled in the
diagram. The state of the flip-flops is considered as the outputs of the counter.
We have already assigned the symbol x to the input variable. We now
assign the symbols A and B to the two flip-flop outputs. The next state of A
and B, as a function of the present state and input x, can be transferred from
the state diagram into a state table. The first five columns of Table 1-5 constitute
the state table. The entries for this table are obtained directly from the state
diagram.
The excitation table of a sequential circuit is an extension of the state table.
This extension consists of a list of flip-flop input excitations that will cause the
required state transitions. The flip-flop input conditions are a function of the
type of flip-flop used. If we employ JK flip-flops, we need columns for the J
and K inputs of each flip-flop. We denote the inputs of flip-flop A by JA and
KA, and those of flip-flop B by JB and KB.

TABLE 1-5  Excitation Table for Binary Counter

    Present state   Input   Next state        Flip-flop inputs
       A    B         x       A    B       JA    KA    JB    KB
       0    0         0       0    0        0     x     0     x
       0    0         1       0    1        0     x     1     x
       0    1         0       0    1        0     x     x     0
       0    1         1       1    0        1     x     x     1
       1    0         0       1    0        x     0     0     x
       1    0         1       1    1        x     0     1     x
       1    1         0       1    1        x     0     x     0
       1    1         1       0    0        x     1     x     1

The excitation table for the JK flip-flop specified in Table 1-3 is now used
to derive the excitation table of the sequential circuit. For example, in the first
row of Table 1-5, we have a transition for flip-flop A from 0 in the present state
to 0 in the next state. In Table 1-3 we find that a transition of states from Q(t) = 0
to Q(t + 1) = 0 in a JK flip-flop requires that input J = 0 and input K = x. So
0 and x are copied in the first row under JA and KA, respectively. Since the first
row also shows a transition for flip-flop B from 0 in the present state to 0 in
the next state, 0 and x are copied in the first row under JB and KB. The second
row of Table 1-5 shows a transition for flip-flop B from 0 in the present state
to 1 in the next state. From Table 1-3 we find that a transition from Q(t) = 0
to Q(t + 1) = 1 requires that input J = 1 and input K = x. So 1 and x are
copied in the second row under JB and KB, respectively. This process is contin-
ued for each row of the table and for each flip-flop, with the input conditions
as specified in Table 1-3 being copied into the proper row of the particular
flip-flop being considered.
Let us now consider the information available in an excitation table such
as Table 1-5. We know that a sequential circuit consists of a number of flip-flops
and a combinational circuit. From the block diagram of Fig. 1-24, we note that
the outputs of the combinational circuit must go to the four flip-flop inputs JA,
KA, JB, and KB. The inputs to the combinational circuit are the external input
x and the present-state values of flip-flops A and B. Moreover, the Boolean
functions that specify a combinational circuit are derived from a truth table that
shows the input-output relationship of the circuit. The entries that list the
combinational circuit inputs are specified under the "present state" and "in
put" columns in the excitation table. The combinational circuit outputs are
specified under the "flip-flop inputs" columns. Thus an excitation table trans
forms a state diagram to a truth table needed for the design of the combina
tional circuit part of the sequential circuit.
The simplified Boolean functions for the combinational circuit can now
be derived. The inputs are the variables A, B, and x. The outputs are the
variables JA, KA, JB, and KB. The information from the excitation table is trans
ferred into the maps of Fig. 1-28, where the four simplified flip-flop input
equations are derived:
JA = Bx
KA = Bx
JB = x
KB = x
Figure 1-28  Maps for combinational circuit of counter.

Figure 1-29  Logic diagram of a 2-bit binary counter.

The logic diagram is drawn in Fig. 1-29 and consists of two JK flip-flops and
an AND gate. Note that inputs J and K determine the next state of the counter
when a clock signal occurs. If both J and K are equal to 0, a clock signal will
have no effect; that is, the state of the flip-flops will not change. Thus when
x = 0, all four inputs of the flip-flops are equal to 0 and the state of the flip-flops
remains unchanged even though clock pulses are applied continuously.
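As a final check of the design (an illustrative sketch of ours, not part of the text), the two JK flip-flops with JA = KA = Bx and JB = KB = x can be stepped through a few clock pulses; with x = 1 the state advances 00, 01, 10, 11, 00, and with x = 0 it holds:

def next_jk(q, j, k):                      # JK characteristic: Q(t + 1) = JQ' + K'Q
    return (j & (q ^ 1)) | ((k ^ 1) & q)

def clock(a, b, x):
    ja = ka = b & x                        # JA = KA = Bx
    jb = kb = x                            # JB = KB = x
    return next_jk(a, ja, ka), next_jk(b, jb, kb)

a, b = 0, 0
for x in (1, 1, 1, 1, 0):                  # four counting pulses, then one with x = 0
    a, b = clock(a, b, x)
    print(a, b)                            # prints 0 1, 1 0, 1 1, 0 0, 0 0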
Design Procedure
The design of sequential circuits follows the outline described in the preceding
example. The behavior of the circuit is first formulated in a state diagram. The
number of flip-flops needed for the circuit is determined from the number of
bits listed within the circles of the state diagram. The number of inputs for the
circuit is specified along the directed lines between the circles. We then assign
letters to designate all flip-flops and input and output variables and proceed
to obtain the state table.
For m flip-flops and n inputs, the state table will consist of m columns for
the present state, n columns for the inputs, and m columns for the next state.
The number of rows in the table will be up to 2^(m+n), one row for each binary
combination of present state and inputs. For each row we list the next state as
specified by the state diagram. Next, the flip-flop type to be used in the circuit
is chosen. The state table is then extended into an excitation table by including
columns for each input of each flip-flop. The excitation table for the type of
flip-flop in use can be found in Table 1-3. From the information available in this
table and by inspecting present state-to-next state transitions in the state table,
we obtain the information for the flip-flop input conditions in the excitation
table.
The truth table for the combinational circuit part of the sequential circuit
is available in the excitation table. The present-state and input columns consti
tute the inputs in the truth table. The flip-flop input conditions constitute the
outputs in the truth table. By means of map simplification we obtain a set of
flip-flop input equations for the combinational circuit. Each flip-flop input
equation specifies a logic diagram whose output must be connected to one of
the flip-flop inputs. The combinational circuit so obtained, together with the
flip-flops, constitutes the sequential circuit.
The outputs of flip-flops are often considered to be part of the outputs
of the sequential circuit. However, the combinational circuit may also contain
external outputs. In such a case the Boolean functions for the external outputs
are derived from the state table by combinational circuit design techniques.
A set of flip-flop input equations specifies a sequential circuit in algebraic
form. The procedure for obtaining the logic diagram from a set of flip-flop input
equations is a straightforward process. First draw the flip-flops and label all
their inputs and outputs. Then draw the combinational circuit from the
Boolean expressions given by the flip-flop input equations. Finally, connect
outputs of flip-flops to inputs in the combinational circuit and outputs of the
combinational circuit to flip-flop inputs.
PROBLEMS
1-1. Determine by means of a truth table the validity of DeMorgan's theorem for
     (ABC)' = A' + B' + C'.
1-2. List the truth table of a three-variable exclusive-OR (odd) function:
     x = A ⊕ B ⊕ C.
1-3. Simplify the following expressions using Boolean algebra.
     a. A + AB
     b. AB + AB'
     c. A'BC + AC
     d. A'B + ABC' + ABC
1-4. Simplify the following expressions using Boolean algebra.
     a. AB + A(CD + CD')
     b. (BC' + A'D)(AB' + CD')
1-5. Using DeMorgan's theorem, show that:
     a. (A + B)'(A' + B')' = 0
     b. A + A'B + A'B' = 1
1-6. Given the Boolean expression F = x'y + xyz':
     a. Derive an algebraic expression for the complement F'.
     b. Show that F · F' = 0.
     c. Show that F + F' = 1.
1-7. Given the Boolean function of three variables F = xy'z + x'y'z + xyz:
     a. List the truth table of the function.
     b. Draw the logic diagram using the original Boolean expression.
     c. Simplify the algebraic expression using Boolean algebra.
     d. List the truth table of the function from the simplified expression and
        show that it is the same as the truth table in part (a).
     e. Draw the logic diagram from the simplified expression and compare the
        total number of gates with the diagram of part (b).
1-8. Simplify the following Boolean functions using three-variable maps.
     a. F(x, y, z) = Σ(0, 1, 5, 7)
     b. F(x, y, z) = Σ(1, 2, 3, 6, 7)
     c. F(x, y, z) = Σ(3, 5, 6, 7)
     d. F(A, B, C) = Σ(0, 2, 3, 4, 6)
1-9. Simplify the following Boolean functions using four-variable maps.
     a. F(A, B, C, D) = Σ(4, 6, 7, 15)
     b. F(A, B, C, D) = Σ(3, 7, 11, 13, 14, 15)
     c. F(A, B, C, D) = Σ(0, 1, 2, 4, 5, 7, 11, 15)
     d. F(A, B, C, D) = Σ(0, 2, 4, 5, 6, 7, 8, 10, 13, 15)
1-10.
Simplify the following expressions in (1) sum-of-products form and (2 )
product-of-sums form.
a.
x'z' + y 'z' + yz'
b. AC'
1-11.
+
B'D
+
+
A ' CD
xy
+
ABCD
Simplify the following Boolean function in sum-of-products form by means
of a four-variable map. Draw the logic diagram with (a) AND-OR gates; (b)
NAND gates.
F (A , B, C, D) = I (0, 2, 8, 9, 10, 11, 14 , 15)
1-12.
Simplify the following Boolean function in product-of-sums form by means
of a four-variable map. Draw the logic diagram with (a) OR-AND gates; (b)
NOR gates.
F(w, x, y , z) = I (2 , 3, 4 , 5 , 6, 7 , 11, 14 , 15)
1-13. Simplify the Boolean function F together with the don't-care conditions d in
      (1) sum-of-products form and (2) product-of-sums form.
      F(w, x, y, z) = Σ(0, 1, 2, 3, 7, 8, 10)
      d(w, x, y, z) = Σ(5, 6, 11, 15)
1-14. Using Table 1-2, derive the Boolean expression for the S (sum) output of the
      full-adder in sum-of-products form. Then by algebraic manipulation show
      that S can be expressed as the exclusive-OR of the three input variables:
      S = x ⊕ y ⊕ z
1-15. A majority function is generated in a combinational circuit when the output
      is equal to 1 if the input variables have more 1's than 0's. The output is 0
      otherwise. Design a three-input majority function.
1-16. Design a combinational circuit with three inputs x, y, z and three outputs
      A, B, C. When the binary input is 0, 1, 2, or 3, the binary output is one greater
      than the input. When the binary input is 4, 5, 6, or 7, the binary output is
      one less than the input.
1-17. Show that a JK flip-flop can be converted to a D flip-flop with an inverter
      between the J and K inputs.
1-18. Using the information from the characteristic table of the JK flip-flop listed
      in Fig. 1-21(b), derive the excitation table for the JK flip-flop and compare
      your answer with Table 1-3.
1-19. A sequential circuit has two D flip-flops A and B, two inputs x and y, and
      one output z. The flip-flop input equations and the circuit output are as
      follows:
      DA = x'y + xA
      DB = x'B + xA
      z = B
      a. Draw the logic diagram of the circuit.
      b. Tabulate the state table.
1-20. Design a 2-bit count-down counter. This is a sequential circuit with two
      flip-flops and one input x. When x = 0, the state of the flip-flops does not
      change. When x = 1, the state sequence is 11, 10, 01, 00, 11, and repeat.
1-21. Design a sequential circuit with two JK flip-flops A and B and two inputs E
      and x. If E = 0, the circuit remains in the same state regardless of the value
      of x. When E = 1 and x = 1, the circuit goes through the state transitions
      from 00 to 01 to 10 to 11 back to 00, and repeat. When E = 1 and x = 0, the
      circuit goes through the state transitions from 00 to 11 to 10 to 01 back to 00,
      and repeat.
CHAPTER TWO
Digital Components
IN THIS CHAPTER
2-1  Integrated Circuits
2-2  Decoders
2-3  Multiplexers
2-4  Registers
2-5  Shift Registers
2-6  Binary Counters
2-7  Memory Unit

2-1  Integrated Circuits
Digital circuits are constructed with integrated circuits. An integrated circuit
(abbreviated IC) is a small silicon semiconductor crystal, called a chip, contain-
ing the electronic components for the digital gates. The various gates are
interconnected inside the chip to form the required circuit. The chip is mounted
in a ceramic or plastic container, and connections are welded by thin gold wires
to external pins to form the integrated circuit. The number of pins may range
from 14 in a small IC package to 100 or more in a larger package. Each IC has
a numeric designation printed on the surface of the package for identification.
Each vendor publishes a data book or catalog that contains the exact descrip-
tion and all the necessary information about the ICs that it manufactures.
As the technology of ICs has improved, the number of gates that can be
put in a single chip has increased considerably. The differentiation between
those chips that have a few internal gates and those having hundreds or
thousands of gates is made by a customary reference to a package as being
either a small-, medium-, or large-scale integration device.
Small-scale integration (SSI) devices contain several independent gates in
a single package. The inputs and outputs of the gates are connected directly
to the pins in the package. The number of gates is usually less than 10 and is
limited by the number of pins available in the IC.
Medium-scale integration (MSI) devices have a complexity of approximately
10 to 200 gates in a single package. They usually perform specific elementary
digital functions such as decoders, adders, and registers.
Large-scale integration (LSI) devices contain between 200 and a few thousand gates in a single package. They include digital systems, such as proces
sors, memory chips, and programmable modules.
Very-large-scale integration (VLSI) devices contain thousands of gates
within a single package. Examples are large memory arrays and complex
microcomputer chips. Because of their small size and low cost, VLSI devices
have revolutionized the computer system design technology, giving designers
the capability to create structures that previously were not economical.
Digital integrated circuits are classified not only by their logic operation
but also by the specific circuit technology to which they belong. The circuit
technology is referred to as a digital logic family . Each logic family has its own
basic electronic circuit upon which more complex digital circuits and functions
are developed. The basic circuit in each technology is either a NAND, a NOR,
or an inverter gate. The electronic components that are employed in the
construction of the basic circuit are usually used for the name of the technol
ogy. Many different logic families of integrated circuits have been introduced
commercially. The following are the most popular.
TTL
Transistor-transistor logic
ECL
Emitter-coupled logic
MOS
Metal-oxide semiconductor
CMOS
Complementary metal-oxide semiconductor
TTL is a widespread logic family that has been in operation for many
years and is considered as standard. ECL has an advantage in systems requir
ing high-speed operation. MOS is suitable for circuits that need high compo
nent density, and CMOS is preferable in systems requiring low power
consumption.
The transistor-transistor logic family was an evolution of a previous
technology that used diodes and transistors for the basic NAND gate. This
technology was called DTL, for "diode-transistor logic." Later the diodes were
replaced by transistors to improve the circuit operation and the name of the
logic family was changed to "transistor-transistor logic." This is the reason for
mentioning the word "transistor" twice. There are several variations of the TTL
family besides the standard TTL, such as high-speed TTL, low-power TTL,
Schottky TTL, low-power Schottky TTL, and advanced Schottky TTL. The
power supply voltage for TTL circuits is 5 volts, and the two logic levels are
approximately 0 and 3.5 volts.
The emitter-coupled logic (ECL) family provides the highest-speed digital
circuits in integrated form. ECL is used in systems such as supercomputers and
signal processors where high speed is essential. The transistors in ECL gates
operate in a nonsaturated state, a condition that allows the achievement of
propagation delays of 1 to 2 nanoseconds.
The metal-oxide semiconductor (MOS) is a unipolar transistor that
depends on the flow of only one type of carrier, which may be electrons
(n-channel) or holes (p-channel). This is in contrast to the bipolar transistor
used in TTL and ECL gates, where both carriers exist during normal operation.
A p-channel MOS is referred to as PMOS and an n-channel as NMOS. NMOS
is the one that is commonly used in circuits with only one type of MOS
transistor. The complementary MOS (CMOS) technology uses PMOS and
NMOS transistors connected in a complementary fashion in all circuits. The
most important advantages of CMOS over bipolar are the high packing density
of circuits, a simpler processing technique during fabrication, and a more
economical operation because of low power consumption.
Because of their many advantages, integrated circuits are used exclu
sively to provide various digital components needed in the design of computer
systems. To understand the organization and design of digital computers it is
very important to be familiar with the various components encountered in
integrated circuits. For this reason, the most basic components are introduced
in this chapter with an explanation of their logical properties. These compo
nents provide a catalog of elementary digital functional units commonly used
as basic building blocks in the design of digital computers.
2-2  Decoders
Discrete quantities of information are represented in digital computers with
binary codes. A bin