Chapter 1
An Invitation to Design Verification

Chapter Highlights
• Verification methodology
From the makeup of a project team, we can see the importance of design verification. A
typical project team usually has an equal number of design engineers and verification
engineers. Sometimes verification engineers outnumber design engineers by as many
as two to one. This stems from the fact that to verify a design, one must first understand the
specifications as well as the design and, more important, devise a different design
approach from the specifications. It should be emphasized that the approach of the verifi-
cation engineer is different from that of the design engineer. If the verification engineer fol-
lows the same design style as the design engineer, both would commit the same errors and
not much would be verified.
From the project development cycle, we can understand the difficulty of design verifica-
tion. Statistical data show that around 70% of the project development cycle is devoted
to design verification. A design engineer usually constructs the design based on cases
representative of the specifications. However, a verification engineer must verify the
design under all cases, and there are an infinite number of cases (or so it seems). Even with
a heavy investment of resources in verification, it is not unusual for a reasonably complex
chip to go through multiple tape-outs before it can be released for revenue.
The impact of thorough design verification cannot be overstated. A faulty chip not only
drains budget through respin costs, it also delays time-to-market, impacts revenue,
shrinks market share, and drags the company into playing the catch-up game. Therefore,
until people can design perfect chips, or chip fabrication becomes inexpensive and has a
fast turnaround time, design verification is here to stay.
Design verification is the reverse process of design. Design verification starts with an
implementation and confirms that the implementation meets its specifications. Thus, at
every step of design, there is a corresponding verification step. For example, a design step
that turns a functional specification into an algorithmic implementation requires a verifi-
cation step to ensure that the algorithm performs the functionality in the specification.
Similarly, a physical design that produces a layout from a gate netlist has to be verified to
ensure that the layout corresponds to the gate netlist. In general, design verification
encompasses many areas, such as functional verification, timing verification, layout verifi-
cation, and electrical verification, just to name a few. In this book we study only functional
verification and refer to it as design verification. Figure 1.2 shows the relationship between
the design process and the verification process.
[Figure 1.2: The relationship between the design process and the verification process. The design flow descends from functional specification (higher abstraction, fewer details) through algorithmic description, RTL, gate netlist, and transistor netlist to physical layout (lower abstraction, more details); verification, such as equivalence checking between the specifications, a gate netlist, and a layout, asks "are they equivalent?"]
On a finer scope, design verification can be further classified into two types. The first type
verifies that two versions of a design are functionally equivalent. This type of verification is
called equivalence checking. One common scenario of equivalence checking is comparing
two versions of circuits at the same abstraction level. For instance, compare the gate
netlist of a prescan circuit with its postscan version to ensure that the two are equivalent
under normal operating mode.
In another scenario, the two versions of the design differ with regard to abstraction level. For
example, one version of the design is at the level of specification and the other version is at the
gate netlist level. When the two versions differ substantially with regard to level of abstrac-
tion, they may not be functionally equivalent, because the lower level implementation
may contain more details that are allowed, but are unspecified, at the higher level. For
example, an implementation may contain timing constraints that are not part of the origi-
nal specification. In this situation, instead of verifying the functional equivalence of the
two versions, we verify that the implementation satisfies the specifications. Note
that equivalence checking is a two-way verification, whereas this is a one-way verification,
because the specification need not satisfy unspecified features in the implementation. This
type of verification is known as implementation verification, property checking, or model
checking. Based on the terminology of property checking, the specifications are properties
that the implementation must satisfy. Based on the terminology of model checking, the
implementation or design is a model of the circuit and the specifications are properties.
Hence, model checking means checking the model against the properties.
One way to eliminate errors of this type is to automate the transformation from specifications
to implementation so that no manual effort is involved. However, this is not always feasible.
First, specifications written in a natural language cannot be automatically synthesized into
levels of lower abstraction. Second, even if the specifications are written in a precise mathematical
language, few synthesis software programs can produce implementations that
meet all requirements. Usually, the software program synthesizes from a set of functional
specifications but fails to meet timing requirements.
Another method—the more widely used method—to uncover errors of this type is through
redundancy. That is, the same specifications are implemented two or more times using
different approaches, and the results of the approaches are compared. In theory, the more
times and the more different ways the specifications are implemented, the higher the con-
fidence produced by the verification. In practice, more than two approaches are rarely used,
because more errors can be introduced with each alternative implementation, and the costs
and time can become prohibitive.
The design process can be regarded as a path that transforms a set of specifications into an
implementation. The basic principle behind verification consists of two steps. During the
first step, there is a transformation from specifications to an implementation. Let us call
this step verification transformation. During the second step, the result from the verifica-
tion is compared with the result from the design to detect any errors. This is illustrated in
Figure 1.3 (A). Oftentimes, the verification transformation takes place in the head of a
verification engineer, and its result takes the form of properties deduced from the specifications.
For instance, the expected result for a simulation input vector is calculated by a
verification engineer based on the specifications, and is thus an alternative implementation.
Obviously, if verification engineers go through the exact same procedures as the design
engineers, both are likely to arrive at the same conclusions and commit the same
errors. Therefore, the more different the
design and verification paths, the higher confidence the verification produces. One way to
achieve high confidence is for verification engineers to transform specifications into an
implementation model in a language different from the design language. This language is
called verification language, as a counterpart to design language. Examples of verification
languages include Vera, C/C++, and e. A possible verification strategy is to use C/C++ for
the verification model and Verilog/VHSIC Hardware Description Language (VHDL) for the
design model.
During the second step of verification, two forms of implementation are compared. This is
achieved by expressing the two forms of implementation in a common intermediate form
so that equivalency can be checked efficiently. Sometimes, a comparison mechanism can
be sophisticated; for example, comparing two networks whose arriving packets may be
out of order. In this case, a common approach is to sort the arriving packets in a predefined way.
Here we see that the classic simulation-based verification paradigm fits the verification
principle. A simulation-based verification paradigm consists of four components: the cir-
cuit, test patterns, reference output, and a comparison mechanism. The circuit is simu-
lated on the test patterns and the result is compared with the reference output. The
implementation result from the design path is the circuit, and the implementation results
from the verification path are the test patterns and the reference output. The reason for
considering the test patterns and the reference output as implementation results from the
verification path is that, during the process of determining the reference output from the
test patterns, the verification engineer transforms the test patterns based on the specifica-
tions into the reference output, and this process is an implementation process. Finally, the
comparison mechanism samples the simulation results and determines their equality
with the reference output. The principle behind simulation-based verification is illus-
trated in Figure 1.3 (C).
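To make these four components concrete, the following is a minimal test bench sketch; the adder module, its ports, and the reference value are hypothetical:

module tb;
    reg  [7:0] a, b;         // test patterns: results of the verification path
    reg  [8:0] expected;     // reference output, derived from the specifications
    wire [8:0] sum;          // output of the circuit: result of the design path

    adder dut (.a(a), .b(b), .sum(sum));   // the circuit under test

    initial begin
        // apply one test pattern together with its reference output
        a = 8'd100; b = 8'd55; expected = 9'd155;
        #10;
        // comparison mechanism: sample the simulation result and check equality
        if (sum !== expected)
            $display("MISMATCH: sum=%0d, expected=%0d", sum, expected);
        $finish;
    end
endmodule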
As discussed earlier, the first type of error is introduced during an implementation pro-
cess. The second type of error exists in the specifications. It can be unspecified functional-
ity, conflicting requirements, and unrealized features. The only way to detect this type of
error is through redundancy, because the specification is already at the top of the abstraction
hierarchy and thus there is no reference model against which to check. Holding a design
review meeting and having a team of engineers go over the design architecture is a form of
verification through redundancy at work. Besides checking with redundancy directly,
examining the requirements in the application environment in which the design will
Figure 1.3 The basic principle of design verification. (A) The basic methodology of verification by
redundancy. (B) A variant of the basic methodology adapted in model checking. (C) Simulation methodology
cast in the form of verification by redundancy.
reside when it has become a product also detects bugs during specification, because the
environment dictates how the design should behave and thus serves as a complementary
form of design specification. Therefore, verifying the design requirements against the
environment is another form of verification through redundancy. Furthermore, some of
these types of errors will eventually be uncovered as the design takes a more concrete
form. For example, at a later stage of implementation, conflicting requirements will
surface as inconsistencies, and features will emerge as unrealizable given the available
technologies and affordable resources.
Besides a test plan, a verification methodology has to decide what language or languages
should be used during verification. Verilog or VHDL are common design languages. Verifi-
cation code often demands a language of its own, because the code is usually of a higher
level and does not have to conform to strict coding style guidelines like design code. For
example, arithmetic operators occur often in verification code but seldom in design code.
In contrast, design code usually has to be synthesizable, but verification code
does not. Hence, an ideal verification language resembles a software language more than a
hardware language. Popular verification languages include Vera, e, C/C++, and Java.
To study verification methodologies further, let’s group verification methodologies into two
categories: simulation-based and formal method-based verification. The distinguishing
factor between the two categories is the existence or absence of vectors. Simulation-based
verification methodologies rely on vectors, whereas formal method-based verification
methodologies do not. Another way to distinguish simulation-based and formal method
based verification is that simulation is input oriented (for example, the designer supplies
input tests) and formal method verification is output oriented (for example, the designer
supplies the output properties to be verified). There is a hybrid category called semiformal
verification that takes in input vectors and verifies formally around the neighborhood of
the vectors. Because the semiformal methodology is a combination of simulation-based
and formal technology, it is not described separately here.
In simulation-based verification, the variant, or the alternative design, manifests in the
reference output. The design is placed under a test bench, input stimuli are applied to the
test bench, and output from the design is compared with the reference output. A test bench
consists of code that supports operations of the design; sometimes it also generates input
stimuli and compares the output with the reference output. The input stimuli can
be generated prior to simulation and read into the design from a database during
simulation, or they can be generated during a simulation run. Similarly, the reference output
can be generated either in advance or on the fly. In the latter case, a reference model is
simulated in lock step with the design, and results from both models are compared.
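A sketch of the on-the-fly arrangement follows; the design and ref_model modules and their ports are hypothetical:

module tb_lockstep;
    reg        clk;
    reg  [7:0] stim;
    wire [7:0] out_design, out_ref;

    design    dut (.clk(clk), .in(stim), .out(out_design)); // design under test
    ref_model rm  (.clk(clk), .in(stim), .out(out_ref));    // reference model

    always #5 clk = ~clk;              // free-running clock

    initial begin
        clk = 0;
        repeat (100) begin
            @(posedge clk);
            stim = $random;            // stimuli generated during the run
        end
        $finish;
    end

    // compare the two models in lock step, once per cycle
    always @(negedge clk)
        if (out_design !== out_ref)
            $display("mismatch at %0t: design=%h, reference=%h",
                     $time, out_design, out_ref);
endmodule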
Before a design is simulated, it is run through a linter program that checks for static errors,
potential errors, and coding style guideline violations. A linter takes the design as its only
input, so it is restricted to errors that can be found independent of input stimuli; errors that
require input vectors to be exposed will escape linting. It is nonetheless useful for gleaning
easy-to-find bugs. Errors are static if they can be uncovered without input stimuli. Examples
of static errors include a bus without a driver, or a port whose width in a module instantiation
does not match the port in the module definition. Results from a linter can be just alerts to
potential errors. A potential error, for example, is a dangling input of a gate, which may or
may not be what the designer intended. A project has its own coding style guidelines
enforced to minimize design errors, improve simulation performance, or for other pur-
poses. A linter checks for violations of these guidelines.
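As an illustration, the following hypothetical fragment contains two static errors of the kind a linter flags without any input vectors:

module child(input [7:0] data, output [7:0] q);
    assign q = data;
endmodule

module parent;
    wire [3:0] narrow;   // 4 bits wide
    wire [7:0] bus;      // static error: bus has no driver anywhere
    wire [7:0] q;

    // static error: the 4-bit narrow does not match the 8-bit port data
    child u1 (.data(narrow), .q(q));
endmodule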
Next, input vectors of the items in the test plan are generated. Input vectors targeting spe-
cific functionality and features are called directed tests. Directed tests are biased toward the
areas of the input space of which the designers are aware, whereas bugs often lurk in areas
of which they are unaware; therefore, to steer away from these biased regions and
to explore other areas, pseudorandom tests are used in conjunction with directed tests. To
produce pseudorandom tests, a software program takes in seeds, and creates tests and
expected outputs. These pseudorandomly generated tests can be vectors in the neighbor-
hood of the directed tests. So, if directed tests are points in input space, random tests
expand around these points.
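A minimal sketch of seeded pseudorandom generation using Verilog's $random follows; the seed value, the directed test point, and the neighborhood mask are illustrative:

module random_stim;
    integer seed;
    reg [7:0] addr;

    initial begin
        seed = 42;       // rerunning with the same seed reproduces the test
        repeat (20) begin
            // pseudorandom vector in the neighborhood of directed point 8'h40
            addr = 8'h40 + ($random(seed) & 8'h0F);
            #10 $display("addr = %h", addr);
        end
    end
endmodule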
After the tests are created, simulators are chosen to carry out simulation. A simulator can
be an event-driven or cycle-based software or hardware simulator. An event simulator
evaluates a gate or a block of statements whenever an input of the gate or the variables to
which the block is sensitive change values. A change in value is called an event. A cycle-
based simulator partitions a circuit according to clock domains and evaluates the subcir-
cuit in a clock domain once at each triggering edge of the clock. Therefore, event count
affects the speed at which a simulator runs. A circuit with low event counts runs faster on event-
driven simulators, whereas a circuit with high event counts runs faster on cycle-based
simulators. In practice, most circuits have enough events that cycle-based simulators out-
perform their event-driven counterparts. However, cycle-based simulators have their own
shortcomings. For a circuit to be simulated in a cycle-based simulator, clock domains in
the circuit must be well defined. For example, an asynchronous circuit does not have a
clear clock domain definition because no clock is involved. Therefore, it cannot be simu-
lated in a cycle-based simulator.
A hardware simulator or emulator models the circuit using hardware components such as
processor arrays and field programmable gate arrays (FPGAs). First, the components in a
hardware simulator are configured to model the design. In a processor array hardware
simulator, the design is compiled into instructions of the processors so that executing the
processors is tantamount to simulating the design. In an FPGA-based hardware accelera-
tion, the FPGAs are programmed to mimic the gates in the design. In this way, the results
from running the hardware are simulation results of the design. A hardware simulator can
be either event driven or cycle based, just like a software simulator. Hardware simulators,
however, have their own coding style guidelines, which are stricter than
those of software simulators. A design can be run on a hardware simulator only if it meets all coding
requirements of the simulator. For instance, statements with delays are not permitted on a
hardware simulator. Again, checking coding style guidelines is done through a linter.
The quality of simulating a test on a design is measured by the coverage the test provides.
The coverage measures how much the design is stimulated and verified. A coverage tool
reports code or functional coverage. Code coverage is a percentage of code that has been
exercised by the test. It can be the percentage of statements executed or the percentage of
branches taken. Functional coverage is a percentage of the exercised functionality. Using a
coverage metric, the designer can see the parts of the design that have not been exercised,
and can create tests for those parts. On the other hand, the user can trim tests that merely
duplicate coverage of already-exercised parts of the circuit.
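As a sketch of the idea, a functional coverage item can be tracked manually in plain Verilog; the fifo_full condition and the simulation length are assumptions:

module cov_monitor(input clk, input fifo_full);
    integer full_hits;
    initial full_hits = 0;

    // functional coverage item: has the FIFO-full corner case been exercised?
    always @(posedge clk)
        if (fifo_full)
            full_hits = full_hits + 1;

    initial begin
        #10000;          // assumed end of simulation
        if (full_hits == 0)
            $display("coverage hole: fifo_full was never exercised");
        else
            $display("fifo_full exercised %0d times", full_hits);
    end
endmodule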
When an unexpected output is observed, the root cause has to be determined. To deter-
mine the root cause, waveforms of circuit nodes are dumped from a simulation and are
viewed through a waveform viewer. The waveform viewer displays node and variable val-
ues over time, and allows the user to trace the driver or loads of a node to determine the
root cause of the anomaly.
When a bug is found, it has to be communicated to the designer and fixed. This is usually
done by logging the bug into a bug tracking system, which sends a notification to the
owner of the design. When the bug is logged into the system, its progress can be tracked. In
a bug tracking system, the bug goes through several stages: from opened to verified, fixed,
and closed. It enters the opened stage when it is filed. When the designer confirms that it is
a bug, he moves the bug to the verified stage. After the bug is eradicated, it goes to the fixed
stage. Finally, if everything works with the fix, the bug is resolved during the closed stage. A
bug tracking system allows the project manager to prioritize bugs and estimate project
progress better.
Design code with newly added features and bug fixes must be made available to the team.
Furthermore, when multiple users are accessing the same data, data loss may result (for
example, two users trying to write to the same file). Therefore, design code is maintained
using revision control software that arbitrates file access among multiple users and
provides a mechanism for making the latest design code visible to all.
The typical flow of simulation-based verification is summarized in Figure 1.4. The compo-
nents inside the dashed enclosure represent the components specific to the simulation-
based methodology. With the formal verification method, these components are replaced
by those found in the formal verification counterparts.
[Figure 1.4: The typical flow of simulation-based verification, covering the design together with regression and revision control.]
Two basic approaches are behind equivalence checking. The first approach searches the
input space in a systematic way for an input vector or vectors that would distinguish the
two circuits. This approach, called SAT (satisfiability), is akin to automatic test pattern
generation algorithms. The other approach converts the two circuits into canonical
representations and compares them. A canonical representation has the characteristic property
that two logical functions are equivalent if and only if their respective representations are
isomorphic—in other words, the two representations are identical except for naming. A
reduced ordered binary decision diagram is a canonical representation. Binary decision
diagrams of two equivalent functions are graphically isomorphic.
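For instance, the following two hypothetical circuits compute the same function by De Morgan's law, so under the same variable ordering their reduced ordered binary decision diagrams are isomorphic, and an equivalence checker proves them equal:

module f1(input a, b, output y);
    assign y = ~(a & b);    // NAND form
endmodule

module f2(input a, b, output y);
    assign y = ~a | ~b;     // De Morgan equivalent of f1
endmodule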
Common applications of equivalence checking include the following:

1. Compare circuits before and after scan insertion to make sure that adding scan
chains does not alter the core functionality of the circuit.
2. Ensure the integrity of a layout versus its RTL version. This is accomplished by
first extracting from the layout a transistor netlist and comparing the transistor
netlist with the RTL version.
3. Prove that changes in an engineering change order (ECO) check-in are restricted to
the scope intended. This is done by identifying the changed regions in the circuits.
If an equivalence check fails, the checker generates an input sequence of vectors that, when
simulated, demonstrates the differences between the two circuits. From the waveforms, the
user debugs the differences. A failure can result from a true error or an unintended bound-
ary condition. An unintended boundary condition arises, for example, when the scan
circuitry is not disabled: the checker then generates a sequence of vectors that runs the
postscan circuit in test mode while running the prescan circuit in normal operating mode.
Surely these two
circuits are not equivalent under this input sequence. Therefore, when comparing a
prescan circuit and a postscan circuit, the postscan circuit must be configured in normal
operating mode.
The other type of formal verification is property checking. Property checking takes in a
design and a property which is a partial specification of the design, and proves or dis-
proves that the design has the property. A property is essentially a duplicate description of
the design, and it acts to confirm the design through redundancy. A program that checks a
property is also called a model checker, referring to the design as a computational model of
a real circuit. Model checking cast in the light of the basic verification principle is visual-
ized in Figure 1.3 (B).
The idea behind property checking is to search the entire state space for points that fail the
property. If such a point is found, the property fails and the point is a counterexample.
Next, a waveform derived from the counterexample is generated, and the user debugs the
failure. Otherwise, the property is satisfied. What makes property checking a success in
industry are symbolic traversal algorithms that enumerate the state space implicitly. That
is, it visits not one, but a group of points at a time, and thus is highly efficient.
Even though symbolic computation has taken a major stride in conquering design com-
plexity, at the time of this writing only the part of the design relevant to the property
should be given to a property verifier, because almost all tools are incapable of automati-
cally carving out the relevant part of the design and thus almost certainly will run into
memory and runtime problems if the entire design is processed.
Furthermore, the power and efficiency of a property verifier is highly sensitive to the prop-
erty being verified. Some properties can be decided readily whereas others will never fin-
ish or may not even be accepted by the tool. For example, some property verifiers will not
accept properties that are unbound in time (for example, an acknowledgment signal will
eventually be active). A bound property is specified within a fixed time interval (for exam-
ple, an acknowledgment signal will be active within ten cycles).
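As an illustration, such a bound property can even be checked with a plain Verilog monitor; this sketch assumes hypothetical signals req and ack:

// bound property: ack must be active within ten cycles of req
module ack_checker(input clk, input req, input ack);
    integer count;
    initial count = 0;

    always @(posedge clk) begin
        if (req && !ack)
            count = count + 1;   // cycles spent waiting for ack
        else
            count = 0;           // ack observed, or no request pending
        if (count > 10)
            $display("property violated: ack not active within ten cycles");
    end
endmodule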
A failure can result from a true design bug, a bug in the property, or unintended input or
state configurations. Because a property is an alternative way of expressing the design, it is
equally prone to errors as the design itself. Unintended input or state configurations arise
when the circuit under verification is carved out from the design: the
input waveforms to the circuit must be configured to be identical to those it sees when the circuit
is embedded in the design. Failure to do so sends unexpected inputs (for example, inputs
that would not occur when the circuit is a part of the design) to the circuit, and the circuit
may fail over these unexpected inputs because the circuit has not been designed to handle
them. To keep unintended configurations from interfering with property verification,
the inputs or states are constrained to produce only the permissible input waveforms.
Underconstraining means that input constraining has not eliminated all unexpected
inputs, and this can trigger false failures. Overconstraining, which eliminates some legal
inputs, is much more dangerous because it causes false successes and the verifier will not
alert the user.
Some practical issues with property verifiers include long iteration times in determining
the correct constraining parameters, and debugging failures in the properties and in the
design. Because only a portion of the design is given to the property verifier, the environ-
ment surrounding that portion has to be modeled correctly. Practical experience shows
that a large percentage of time (around 70 percent) in verification is spent getting the cor-
rect constraints. Second, debugging a property can be difficult when the property is writ-
ten in a language other than that of the design (for instance, the properties are in
SystemVerilog whereas the design is in Verilog). This is because many property verifiers
internally translate the properties into finite-state machines in Verilog, and verify them
against the design. Therefore, the properties are shown as the generated machine code,
which is extremely hard to relate to signals in the design and whose meaning is hard to interpret.
Finally, debugging failures in a property has the same difficulty as any other debugging.
Effective use of a theorem prover requires a solid understanding of the internal operations
of the tool and a familiarity with the mathematical proof process. Although less automatic,
efficient usage of a theorem prover can handle much larger designs than model checkers
and requires less memory. Furthermore, a theorem prover accepts more complex proper-
ties. For example, a theorem prover can allow the properties written in higher order logic
(HOL) whereas almost all model checkers only accept first-order logic and computation
tree logic or their variants. HOL is more expressive than first-order logic and enables a
concise description of complex properties.
A typical flow of formal verification is shown in Figure 1.5. Remember that the components
in this diagram replace the components inside the dashed box of Figure 1.4.

[Figure 1.5: A typical flow of formal verification. Properties are defined and a subcircuit is extracted from the design; property checking or equivalence checking then runs with user guidance and input constraining, and any bug found is debugged.]

Although
formal verification does not require test vectors, it does require a test bench that configures
the circuit in the correct operational mode, such as input constraining.
Another selling point for formal verification is completeness, in the sense that it does not
miss any point in the input space—a problem from which simulation-based verification
suffers. However, this strength of formal verification sometimes leads to the misconception
that once a design is verified formally, the design is 100% free of bugs. Let’s compare
simulation-based verification with formal verification and determine whether formal
verification is perceived correctly.
Simulating a vector can be conceptually viewed as verifying a point in the input space. With
this view, simulation-based verification can be seen as verification through input space
sampling. Unless all points are sampled, there exists a possibility that an error escapes veri-
fication. As opposed to working at the point level, formal verification works at the property
level. Given a property, formal verification exhaustively searches all possible input and
state conditions for failures. If viewed from the perspective of output, simulation-based
verification checks one output point at a time; formal verification checks a group of output
points at a time (a group of output points make up a property). Figure 1.6 illustrates this
comparative view of simulation-based verification and formal verification. With this per-
spective, the formal verification methodology differs from the simulation-based methodol-
ogy by verifying groups of points in the design space instead of points. Therefore, to verify
completely that a design meets its specifications using formal methods, it must be further
proved that the set of properties formally verified collectively constitutes the specifications.
The fact that formal verification checks a group of points at a time makes formal verifica-
tion software less intuitive and thus harder to use.
A major disadvantage of formal verification software is its extensive use of memory and
(sometimes) long runtime before a verification decision is reached. When memory capac-
ity is exceeded, tools often shed little light on what went wrong, or give little guidance to fix
the problem. As a result, formal verification software, as of this writing, is applicable only
to circuits of moderate size, such as blocks or modules.

[Figure 1.6: An output space perspective of simulation-based verification versus formal verification. Simulation verifies individual output points; each formally verified property (for example, property1 and property2) covers a group of output points.]
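The always block under discussion would look like the following sketch; the sensitivity list matches the description below, and the assignments to v and w are assumptions:

always @(posedge c or d) begin
    v = e + m;    // e, m, and n are read inside the block,
    w = n;        // but none of them is on the sensitivity list
end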
The signals on the sensitivity list are c and d. The polarity for c is positive edge, and no
polarity is specified for d, which by default means either positive or negative edge. This
block is executed whenever a positive edge occurs at c. A positive edge is one of the following:
a change from 0 to 1, X, or Z; or from X or Z to 1. The block is also triggered when an
event occurs at d, whether a positive or a negative edge. A negative edge is one of the following:
a change from 1 to 0, X, or Z; or from X or Z to 0. Changes on variables e, m, or n will not
cause the block to execute, because these three variables are not on the sensitivity list.
Any continuous assignment can be rewritten as a procedural block, but not vice versa.
The following continuous assignment and procedural block are equivalent:

// continuous assignment
assign v = x + y + z;

// equivalent procedural block
always @(x or y or z)
    v = x + y + z;
1.6.2 Nondeterminism
When multiple processes execute simultaneously, a natural question to ask is how these
processes are scheduled. When multiple processes are triggered at the same time, the order
in which the processes are executed is not specified by Institute of Electrical and Electron-
ics Engineers (IEEE) standards. Thus, it is arbitrary and it varies from simulator to simulator.
This phenomenon is called nondeterminism. Let’s look at two common cases of nondeter-
minism that are caused by the same root cause but that manifest in different ways.
The first case arises when multiple statements execute in zero time, and there are several
legitimate orders in which these statements can execute. Furthermore, the order in which
they execute affects the results. Therefore, a different order of execution gives different but
correct results. Execution in zero time means that the statements are evaluated without
advancing simulation time. Consider the following statements:
always @(d)
    q = d;

assign q = ~d;
These two processes, one procedural block and one continuous assignment, are sched-
uled to execute at the same time when variable d changes. Because the order in which they
execute is not specified by the IEEE standard, two orders are possible. If the
always block is evaluated first, variable q is assigned the new value of d by the always
block. Then the continuous assignment is executed. It assigns the complement of the new
value of d to variable q. If the continuous assignment is evaluated first, q gets the comple-
ment of the new value of d. Then the procedural assignment assigns the new value
(uncomplemented) to q. Therefore, the two orders produce opposite results. This is a
manifestation of nondeterminism.
Another well-known example of nondeterminism of the first type occurs often in RTL
code: a D flip-flop (DFF) module that uses a blocking assignment causes nondeterminism
when the DFFs are instantiated in a chain. A minimal sketch consistent with the discussion
below (the module bodies are assumptions; the instance and signal names follow the text) is:
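module DFF(input clk, input d, output reg q);
    always @(posedge clk)
        q = d;    // blocking assignment
endmodule

module DFF_chain;
    reg clk, d1;
    wire q1, q2;
    DFF dff1 (.clk(clk), .d(d1), .q(q1));
    DFF dff2 (.clk(clk), .d(q1), .q(q2));
endmodule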
When the positive edge of clk arrives, either DFF instance can be evaluated first. If dff1
executes first, q1 is updated with the value of d1. Then dff2 executes and makes q2 the
value of q1. Therefore, in this order, q2 gets the latest value of d1. On the other hand, if
dff2 is evaluated first, q2 gets the value of q1 before q1 is updated with the value of d1.
Either order of execution is correct, and thus both values of q2 are correct. Therefore, q2
may differ from simulator to simulator.
If the blocking assignment is changed to a nonblocking assignment, the output of dff1 always
gets the updated value of d1, whereas that of dff2 always gets the preupdated value of q1,
regardless of the order in which they are evaluated.
Nonblocking assignments do not eliminate all nondeterminism or race problems. The
following simple code contains a race problem even though both assignments to x are
nonblocking: both updates of variable x occur at the end of the current simulation time,
and the order in which x is assigned is unspecified by the IEEE standard. The two orders
produce different values of x, and both values are correct. A sketch consistent with this
description (the exact statements are assumptions) is:
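// two always blocks, both assigning to x with nonblocking assignments
always @(posedge clk) begin
    x <= 1'b0;
    y <= x;
end

always @(posedge clk)
    x <= 1'b1;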
Both always blocks are triggered when a positive edge of clk arrives. One interleaving
order is
x = 1’b0;
y = x;
x = 1’b1;
x = 1’b0;
x = 1’b1;
y = x;
There is no simple fix for this nondeterminism. The designer has to reexamine the func-
tionality the code is supposed to implement and, more often than not, nondeterminism is
not inherent in the functionality but is introduced during the implementation process.
Verilog stratifies the events occurring at the same simulation time into five categories,
which are processed in the following order:

1. Active
2. Inactive
3. Nonblocking assign update
4. Monitor
5. Future events
Active events at the same simulation time are processed in an arbitrary order. An example
of an active event is a blocking assignment. The processing of all the active events is called
a simulation cycle.
Inactive events are processed only after all active events have been processed. An example
of an inactive event is an explicit zero-delay assignment, such as #0 x = y, which occurs at
the current simulation time but is processed after all active events at the current simula-
tion time have been processed.
A nonblocking assignment executes in two steps. First, it samples the values of the right-
side variables. Then it updates the values to the left-side variables. The sampling step of a
nonblocking assignment is an active event and thus is executed at the moment the non-
blocking statement is encountered. A nonblocking assign update event is the updating
step of a nonblocking assignment and is executed only after both active and inactive
events at the current simulation time have been processed.
Monitor events are the executions of the system tasks $monitor and $strobe, which run as
the last events at the current simulation time to capture the steady values of variables at
that time. Finally, events that are to occur at a future simulation time are the future events.
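The following small experiment illustrates these regions; the printed values are what an IEEE-compliant simulator should produce:

module event_regions;
    reg a, b;
    initial begin
        a = 1'b0;        // blocking assignment: an active event
        #0 a = 1'b1;     // inactive event: processed after all active events
        b <= a;          // sampling is active; the update of b is deferred
        $display("display: a=%b b=%b", a, b);  // b not yet updated: prints x
        $strobe ("strobe:  a=%b b=%b", a, b);  // monitor event: prints b=1
    end
endmodule

Here $display executes before the nonblocking assign update of b and prints b as x, whereas $strobe executes as a monitor event after all updates and prints b as 1.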
1.7 Summary
In this chapter we defined design verification as the process of ensuring that a design
meets its specifications—the reverse of the design process. We discussed the general verifi-
cation principle through redundancy, and simulation-based and formal verification meth-
odologies, and then illustrated typical flows of these two methodologies. We then
contrasted simulation and formal verification, and emphasized their input- and output-
oriented nature. We viewed simulation-based verification as operating on a point basis
whereas formal verification operates on a subspace basis. Next, we listed some limitations
of formal verification—in particular, the inability to detect errors in the specification,
incomplete functional coverage caused by verifying subcircuits, user errors, and software
bugs in the verification tool. As a refresher of Verilog, we reviewed scheduling and execu-
tion semantics, during which we discussed the concurrent nature of the Verilog language
and its inherent nondeterminism.