
Sequential Equivalence Checking: A New Approach to Functional Verification of Datapath and Control Logic Changes

Abstract

This paper discusses the effect of sequential changes on functional verification. Sequential
transformations are typically made during system-level and RTL design to improve power,
timing and area. Several sequential transformations to datapath and control logic are shown.
Sequential changes are avoided late in the design process because of their impact on
functional verification. Consequently, designers tend to favor datapath and control logic
modifications that least disturb testbenches. However, this "verification-limited design"
ties the hands of engineers, and sometimes sequential changes cannot be avoided. This paper
introduces a novel solution, sequential equivalence checking, along with suggestions for
improving testbench robustness to minimize the impact of sequential changes on functional
verification.

Don’t touch that design!


Systems on Chip (SOC) and processor design teams are challenged to meet aggressive
power, performance and area requirements. As chip complexity grows, teams must
verify thousands of lines of code to achieve design confidence. To be successful, teams
must reach system goals and verification criteria within tight market windows.

Design confidence increases as a function of the number of unique test sequences and
completeness of system-level testing. As RTL modifications subside, functional
verification investments compound, yielding a “tested” module. At this point, simulation
test suites take days or weeks to complete and are most sensitive to design change.
Assurance in functional verification takes billions of simulation vectors to achieve. It is
no wonder that designers become resistant to change as deadlines near.

But what happens when initial place and route results indicate the design does not meet
system timing or power goals? Upon evaluation, the designer may conclude that an
overhaul of the datapath, and perhaps the control logic, is required! This may be a simple
change or a major rewrite. Fear sets in with the realization of how much work is required
to re-verify the modified design and that the changes will depreciate all previous
functional verification investment.

Under these circumstances most engineers opt for the least invasive change possible.
However, this can create problems: fixes that are simple in a global context become
complex in a local implementation, or fixes that minimize risk may not completely meet
the system criteria and require multiple iterations of change. Any change to the RTL, no
matter how small, has the potential to introduce subtle side effects.



Making Sequential Changes
In most cases designers meet their timing, power and area goals by making micro-
architectural transformations to RTL code. Micro-architectural transformations are
sequential modifications. As RTL models undergo optimization, designers perform
what-if analysis on key design features such as the size of caches, the number of pipeline
stages and power management. In SOC and processor design, power, timing and area
characteristics are difficult to estimate and trade off. Alternative micro-architectures
have vastly different datapath and control logic implementations that require significant
effort to develop and considerable additional effort to verify.

Simulation can be used to verify micro-architectural optimizations. However, simulation
test suites run for weeks, and sequential changes can invalidate existing testbenches.
Testbenches are compromised by differences in design latency, throughput or I/O signals.
When this happens, testbenches require manual inspection and adjustment. In many
cases the testbench is more complicated than the design, making testbench changes
susceptible to their own bugs. Depending on the situation, testbench failures can ripple
from block level regressions into sub-system or even system-level tests. Design teams
can localize the impact of sequential changes by writing testbenches with transaction or
handshake interfaces.
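
A handshake interface localizes timing changes because the testbench follows the design's
pace rather than a hard-coded cycle count. The fragment below is a minimal sketch of such
a driver; the valid/ready signal names and the task name are illustrative assumptions, not
from the original text:

// Illustrative valid/ready handshake driver: the testbench waits for
// the design to accept each transaction instead of assuming a fixed
// latency, so sequential changes in the DUT do not break the stimulus.
task send_sample (input [7:0] data);
    begin
        x_valid <= 1'b1;
        x_data  <= data;
        @(posedge clk);
        while (!x_ready) @(posedge clk);   // hold until the DUT accepts
        x_valid <= 1'b0;
    end
endtask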

Sequential equivalence checking gives designers another functional verification option.
By proving equivalence between designs with sequential differences, all previous
functional verification investment is leveraged across future RTL implementations.
Sequential equivalence checking ensures functional correctness of micro-architectural
transformations, giving designers the confidence to make RTL changes late in the design
process. This technology also offers quick detection of side effects and ensures RTL
optimizations remain consistent with the original functional intent.

An example of a sequential change


A logic design, module or full chip can be thought of as a machine consisting of
sequential state (or memory) and combinational logic. Combinational logic is responsible
for computing the next machine state as a function of inputs and current machine state at
any point in time. The machine outputs are a function of current state and/or inputs at that
point in time depending on whether the logic is constructed as a Moore or Mealy
machine.
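
The distinction is easy to see in RTL. The fragment below is a minimal sketch with
illustrative signal names: the same registered state drives a Moore output and a Mealy
output.

// Moore vs. Mealy on the same one-bit state (names are illustrative):
reg state_q;
always @(posedge clk)
    state_q <= d_in;                  // next state is a function of the input

wire moore_out = state_q;             // Moore: output depends on state only
wire mealy_out = state_q & d_in;      // Mealy: output depends on state and input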

The design described in figure 1 takes four input values. It outputs the average of the
four values and the absolute difference between the fourth input value (x3_in) and the
average. It is fully described by the RTL code, schematic and state timing diagram. The
Average4 design representation in figure 1 has one level of state, namely the registers on
the output.



Figure 1: Average4 design
// Module interface inferred from the figure's code: four 8-bit inputs,
// registered 8-bit average and absolute-difference outputs.
module Average4 (
    input            clk, rst,
    input      [7:0] x0_in, x1_in, x2_in, x3_in,
    output reg [7:0] xavg_out, xdif_out
);
    always @(posedge clk) begin : update
        if (rst) begin
            xavg_out <= 0;
            xdif_out <= 0;
        end else begin : normal
            reg [9:0] xsum_next;
            reg [7:0] xavg_next;
            xsum_next = x0_in + x1_in + x2_in + x3_in;   // 10-bit sum of four 8-bit inputs
            xavg_next = xsum_next >> 2;                  // divide by four
            xavg_out  <= xavg_next;
            xdif_out  <= (xavg_next >= x3_in) ? (xavg_next - x3_in)
                                              : (x3_in - xavg_next);
        end
    end
endmodule

Changing the design to read the input values serially modifies the state and timing. This
serial transformation is an example of a simple sequential modification. Figure 2 shows a
serial-input implementation of the Average4 design. Notice the added registers to store
the inputs, the finite state machine (FSM), and the corresponding latency in the output.

Figure 2: Serial implementation of Average4 circuit
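
Since the figure 2 schematic is not reproduced here, the following is a minimal RTL sketch
of one such serial implementation. The module name, FSM encoding and four-cycle input
schedule are illustrative assumptions, not the contents of the original figure:

// Illustrative serial Average4: one 8-bit input port, a 2-bit FSM and
// input-holding registers replace the four parallel input ports.
module Average4_serial (
    input            clk, rst,
    input      [7:0] x_in,            // samples arrive one per cycle
    output reg [7:0] xavg_out, xdif_out
);
    reg [1:0] phase;                  // which of the four samples is arriving
    reg [7:0] x0_q, x1_q, x2_q;       // hold the first three samples

    always @(posedge clk) begin : update
        if (rst) begin
            phase    <= 2'b00;
            xavg_out <= 0;
            xdif_out <= 0;
        end else begin : normal
            reg [9:0] xsum_next;
            reg [7:0] xavg_next;
            phase <= phase + 2'b01;
            case (phase)
                2'b00: x0_q <= x_in;
                2'b01: x1_q <= x_in;
                2'b10: x2_q <= x_in;
                default: begin        // fourth sample: compute both outputs
                    xsum_next = x0_q + x1_q + x2_q + x_in;
                    xavg_next = xsum_next >> 2;
                    xavg_out <= xavg_next;
                    xdif_out <= (xavg_next >= x_in) ? (xavg_next - x_in)
                                                    : (x_in - xavg_next);
                end
            endcase
        end
    end
endmodule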

In our example, the Average4 design in figure 2 is functionally equivalent to the
Average4 design in figure 1. A pair of designs is judged to be functionally equivalent if,
starting from a consistent state, both designs produce the same output over time given the
same input sequence. In this example, simple inspection shows that sample output in
cycle N of figure 1 will match output in cycle (N*4)+1 of figure 2. This example
demonstrates that functionally equivalent designs may be substantially different in terms
of combinational logic and the manner in which they store and access state.
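
The equivalence claim can also be spot-checked in simulation. The harness below drives the
figure 1 module and the serial sketch above with the same four samples and compares the
outputs once both are valid. It is an illustrative fragment that assumes the module names
and ports from the sketches above:

// Minimal comparison harness: same four samples into both designs.
module compare_avg4;
    reg        clk = 0, rst = 1;
    reg  [7:0] x0, x1, x2, x3, xs;
    wire [7:0] avg_p, dif_p, avg_s, dif_s;

    Average4        dut_p (.clk(clk), .rst(rst),
                           .x0_in(x0), .x1_in(x1), .x2_in(x2), .x3_in(x3),
                           .xavg_out(avg_p), .xdif_out(dif_p));
    Average4_serial dut_s (.clk(clk), .rst(rst), .x_in(xs),
                           .xavg_out(avg_s), .xdif_out(dif_s));

    always #5 clk = ~clk;

    initial begin
        x0 = 10; x1 = 20; x2 = 30; x3 = 40;
        @(negedge clk) rst = 0;
        xs = x0; @(negedge clk);      // serial DUT consumes the same
        xs = x1; @(negedge clk);      // samples one per cycle
        xs = x2; @(negedge clk);
        xs = x3; @(negedge clk);
        if (avg_p !== avg_s || dif_p !== dif_s)
            $display("MISMATCH: %0d/%0d vs %0d/%0d", avg_p, dif_p, avg_s, dif_s);
        else
            $display("outputs match: avg=%0d dif=%0d", avg_s, dif_s);
        $finish;
    end
endmodule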



Sequential transformations to improve power
As chips become more complex, the traditional trade-off between performance and area
is compounded by the addition of power. The ability to influence dynamic power decreases
as designs move from system level to gate level. Consequently, design changes that
improve dynamic power are typically nontrivial sequential changes, for example clock
gating and operator-sharing optimizations.
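
For instance, a clock-gating change might look like the fragment below. This is a generic
sketch of a latch-based clock gate with assumed signal names, not an example from the
original text:

// Illustrative latch-based clock gate: the register toggles only when
// "en" is high, cutting dynamic power on idle cycles.
reg        en_latch;
reg  [7:0] data_q;

always @(clk or en)
    if (!clk) en_latch = en;          // capture enable while the clock is low

wire gclk = clk & en_latch;           // glitch-free gated clock

always @(posedge gclk)
    data_q <= data_in;                // register updates only on enabled cycles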

As the scope of change increases, so does the likelihood of introducing subtle side effects.
Consequently, significant modifications like sequential changes are avoided late in the
design process even if they offer the greatest savings. This avoidance creates a serious
limitation in converging toward power-optimal designs, because accurate power
estimation requires capacitance information from place and route.

The next example demonstrates a sequential change in the Average4 control logic to
improve power. Power is reduced by employing resource sharing and modifying the state
machine encoding to iterate around a single adder and register. Figure 3 shows a power
optimized version of the Average4 design.

Figure 3: Power optimized Average4 design


// Excerpt: the non-reset branch of the clocked process. A 2-bit FSM
// ("state", whose updates are not shown in this excerpt) sequences the
// serial samples through a single shared adder and accumulator register.
begin : normal
    reg [7:0] xavg_next;

    if (state == 2'b00)               // first sample: load the accumulator
        xsum_in <= x3_in;
    else if (state == 2'b01)          // middle samples: accumulate
        xsum_in <= xsum_in + x3_in;
    else begin                        // state == 2'b10: final sample, write outputs
        xavg_next = (xsum_in + x3_in) >> 2;
        xavg_out <= xavg_next;
        if (xavg_next >= x3_in)
            xdif_out <= xavg_next - x3_in;
        else
            xdif_out <= x3_in - xavg_next;
    end
end
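
The excerpt omits the state-register updates. One way to sequence four serial samples
through the three states shown, holding the accumulate state for two samples with a
one-bit counter, is sketched below. This is an illustrative assumption, not the code from
the original figure:

// Hypothetical FSM sequencing for the excerpt above:
// 00 (load) -> 01 (accumulate, held two cycles) -> 10 (finalize) -> 00 ...
reg [1:0] state;
reg       hold;                       // counts the two samples taken in state 01

always @(posedge clk) begin
    if (rst) begin
        state <= 2'b00;
        hold  <= 1'b0;
    end else case (state)
        2'b00:   state <= 2'b01;      // first sample loaded
        2'b01:   begin                // accumulating samples two and three
                     hold <= ~hold;
                     if (hold) state <= 2'b10;
                 end
        default: begin                // state 10: outputs written, restart
                     state <= 2'b00;
                     hold  <= 1'b0;
                 end
    endcase
end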

Power management begins at the system level. System-level models run fast enough to
boot an operating system and run application traces, which gives accurate estimation of
switching activity. System-level models are then refined into RTL implementations. RTL
models have the same functionality and can take physical implementation details, like
capacitance, into account to further analyze dynamic power. The existence of a functionally
accurate system-level model allows sequential equivalence checking between system-level
descriptions and RTL implementations.
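
For Average4, the reference can be as simple as an untimed function. The paper notes such
models are typically written in C/C++ or SystemC; the sketch below uses behavioral Verilog
for consistency with the other examples and is an illustration, not code from the original:

// Untimed golden reference for Average4: sequential equivalence
// checking can compare a model like this against any of the timed
// RTL implementations shown above. Returns {avg, dif} packed together.
function [15:0] avg4_ref (input [7:0] x0, x1, x2, x3);
    reg [9:0] sum;
    reg [7:0] avg;
    begin
        sum      = x0 + x1 + x2 + x3;
        avg      = sum >> 2;
        avg4_ref = {avg, (avg >= x3) ? (avg - x3) : (x3 - avg)};
    end
endfunction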



Sequential transformations to improve timing
Timing problems and long paths are common RTL design issues. While changes that
improve timing may be simple, the cost of verification late in the design process is
sizeable.

Consider one of the least invasive RTL changes to improve timing: re-timing datapath
logic around pipeline flip-flops. This sequential change solves marginal timing issues
and is unlikely to affect simulation testbenches. However, traditional combinational
equivalence checkers cannot handle re-timed logic because the flip-flops are no longer
considered equivalent design points.

In figure 4, we have re-timed combinational logic around the state elements in the
Average4 design. For simplicity, the cycle length in the timing waveform is shortened,
and the detailed register placement between the adders, subtractors, comparators and
mux logic is omitted from the diagram. The net result of these changes is increased
overall system performance with the same functionality as the original Average4 design.

Figure 4: Re-timed Average4 design
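
To see why re-timed flip-flops defeat combinational checking, consider the generic sketch
below (not figure 4's actual logic; names are illustrative). Cycle for cycle the output is
unchanged, but the flip-flops no longer compute the same functions, so there are no
matching register points to compare:

// Before re-timing: the register sits after the adder.
wire [7:0] a, b;
reg  [8:0] sum_q;
always @(posedge clk)
    sum_q <= a + b;                   // long path ends at one register

// After re-timing: the registers move before the adder. Each cycle,
// sum_retimed equals what sum_q would have held.
reg  [7:0] a_q, b_q;
always @(posedge clk) begin
    a_q <= a;
    b_q <= b;
end
wire [8:0] sum_retimed = a_q + b_q;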

When simple re-timing will not achieve timing closure, more ambitious sequential
changes are required. In such cases, it may be necessary to add pipeline stages.
Re-timing RTL control logic poses some of the toughest verification problems;
corner-case bugs are often introduced. For example, it is anecdotally known that
sequential re-timing of control logic that manages instruction-level parallelism and
multithreading in CPU designs has resulted in live-lock and other errors that are not
only difficult to debug, but also hard to simulate. A simulation-based verification
methodology would have to account for latency and throughput differences, as well as be
augmented with directed and random tests, to account for sequential changes to control
logic.

To avoid re-validating test suites to account for extra output delay, testbenches should
be written with latency-programmable or latency-insensitive interfaces, as sketched below.
Designers must plan ahead for verification of sequential changes, or the cost of this
increasingly common design change will have negative effects on project schedule and cost.
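
One way to make a checker latency-programmable is to parameterize the expected pipeline
delay, so that adding stages means changing one constant rather than rewriting the tests.
The fragment below is a minimal sketch; the parameter, signal and task names are assumed:

// Illustrative latency-programmable checker: the comparison is delayed
// by a parameter instead of a hard-coded cycle count.
parameter LATENCY = 4;                // cycles from input to valid output

task check_result (input [7:0] expected);
    begin
        repeat (LATENCY) @(posedge clk);   // absorb the pipeline delay
        if (xavg_out !== expected)
            $display("FAIL: got %0d, expected %0d", xavg_out, expected);
    end
endtask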

Sequential Equivalence Checking to the rescue


Calypto Design Systems offers SLEC, a block-level, sequential equivalence checker.
SLEC is built on a hybrid formal engine that overcomes the limitations of combinational
equivalence checkers that require designs to have matching flip-flops. SLEC detects
design differences, if they exist, in minutes, giving designers immediate feedback on
RTL changes. Sequential equivalence checking augments system-level and full chip
simulations by verifying that new RTL implementations are functionally equivalent to a
previously verified reference design.

SLEC verifies that micro-architectural optimizations for power, timing and area do not
introduce functional side effects. Since formal methods yield high coverage, sequential
equivalence checking gives designers confidence to make changes late in the design
process. Instead of relying on testbenches or properties, sequential equivalence checking
uses a golden RTL model or system-level reference design written in Verilog, VHDL,
SystemC or C/C++. Unlike combinational equivalence checkers, SLEC proves
functional equivalence across levels of sequential and data abstraction. Sequential
changes like re-timing and resource sharing require only minor changes to setup
parameters to reflect new timing information.

As a result, sequential equivalence checking boosts verification productivity. SLEC
gives designers the flexibility and confidence to make micro-architectural changes to
achieve challenging timing, power and performance goals.

Conclusion
Micro-architectural optimizations for power, timing and area often result in sequential
changes to RTL datapath and control logic. Sequential changes have a sizable impact on
functional verification. Not only do they require careful re-verification of the entire
design but they can also invalidate simulation testbench results. Design teams find
themselves pulled between achieving system-level requirements, minimizing design
change and functional verification completeness.

Sequential changes late in the design process are becoming more common. Designers
should improve their functional verification by encapsulating testbench interfaces and
by utilizing sequential equivalence checking. Sequential equivalence checking verifies
designs without using testbenches, quickly identifies design side effects and natively
handles designs with sequential differences. This dramatically improves designer
productivity when making system optimizations, giving users confidence and eliminating
'verification-limited design'.

Copyright 2005 Calypto Design Systems, Inc.
