Coverage Cookbook
Online Methodology Documentation from the
Mentor Graphics Verification Methodology Team
Contact VMDOC@mentor.com
http://verificationacademy.com
Datestamp:
- This document is a snapshot of dynamic content from the Online Methodology Cookbook
Coverage
The Coverage Cookbook describes the different types of coverage that are available to keep track of the progress of the
verification process, explains how to create a functional coverage model from a specification, and provides examples of how to
implement functional coverage for different types of designs.
What is coverage?
As the saying goes, "What doesn't get measured might not get done." And that is certainly true when trying to determine
a design project's verification progress, or trying to answer the question "Are we done?" Whether your simulation
methodology is based on a directed testing approach or constrained-random verification, to understand your verification
progress you need to answer the following questions:
• Were all the design features and requirements identified in the testplan verified?
• Were there lines of code or structures in the design model that were never exercised?
Coverage is the metric we use during simulation to help us answer these questions. Yet, once coverage metrics become
an integral part of our verification process, it opens up the possibility for more accurate project schedule predictions, as
well as providing a means for optimizing our overall verification process. At this stage of maturity we can ask questions
such as:
• When we tested feature X, did we ever test feature Y at the exact same time?
• Has our verification progress stalled for some unexpected reason?
• Are there tests that we could eliminate to speed up our regression suite and still achieve our coverage goals?
Hence, coverage is a simulation metric we use to measure verification progress and completeness.
In general, coverage is a metric we use to measure the controllability quality of a testbench. For example, code coverage
can directly identify lines of code that were never activated due to poor controllability issues with the simulation input
stimulus. Similarly, functional coverage can identify expected behaviors that were never activated during a simulation
run due to poor controllability.
Although our discussion in this section is focused on coverage, it's important to note that we can address observability
concerns by embedding assertions in the design model to facilitate low-level observability, and creating monitors within
and on the output ports of our testbench to facilitate high-level observability.
Summary
So what is coverage? Simply put, coverage is a metric we use to measure verification progress and completeness.
Coverage metrics tell us what portion of the design has been activated during simulation (that is, the controllability
quality of a testbench). Or more importantly, coverage metrics identify portions of the design that were never activated
during simulation, which allows us to adjust our input stimulus to improve verification.
There are different kinds of coverage metrics available to you, and the process of how to use them is discussed in the
Coverage Cookbook examples.
Kinds of coverage
No single metric is sufficient to completely characterize the verification process. For example, we might achieve 100%
code coverage during our simulation regressions. However, this would not mean that 100% of the functionality was
verified. The reason for this is that code coverage does not measure the concurrent interaction of behavior within, or
between multiple design blocks, nor does it measure the temporal sequences of functional events that occur within a
design. Similarly, we might achieve 100% functional coverage, yet only achieve 90% code coverage. This might indicate
that there is either a problem with the fidelity in our functional coverage model (that is, an important behavior of the
design was missing from the coverage model), or possibly some functionality was implemented that was never initially
specified (for example, perhaps the specification and testplan need to be updated with some late-stage change in the
requirements). Hence, to get a complete picture of a project's verification progress we often need multiple metrics.
Coverage Classification
To begin our discussion on the kinds of coverage metrics, it is helpful to first identify various classifications of coverage.
In general, there are multiple ways in which we might classify coverage, but the two most common ways are to classify
them by either their method of creation (such as, explicit versus implicit), or by their origin of source (such as,
specification versus implementation).
For instance, functional coverage is one example of an explicit coverage metric, which has been manually defined and
then implemented by the engineer. In contrast, line coverage and expression coverage are two examples of implicit
coverage metrics, since their definition and implementation are automatically derived and extracted from the RTL
representation.
Coverage Metrics
There are two primary forms of coverage metrics in production use in industry today and these are:
• Code Coverage Metrics (Implicit coverage)
• Functional Coverage/Assertion Coverage Metrics (Explicit coverage)
Code Coverage
In this section, we introduce various coverage metrics associated with a design model's implicit implementation coverage
space. In general, these metrics are referred to as code coverage or structural coverage metrics.
Benefits:
Code coverage, whose origins can be traced back to the 1960's, is one of the first methods invented for systematic
software testing.[1] One of the advantages of code coverage is that it automatically describes the degree to which the
source code of a program has been activated during testing, thus identifying structures in the source code that have not
been activated during testing. One of the key benefits of code coverage, unlike functional coverage, is that creating the
structural coverage model is an automatic process. Hence, integrating code coverage into your existing simulation flow is
easy and does not require a change to either your current design or verification approach.
Limitations:
In our section titled What is coverage, we discussed three important conditions that must occur during simulation to
achieve successful testing. They were:
1. The testbench must generate proper input stimulus to activate a design error.
2. The testbench must generate proper input stimulus to propagate all effects resulting from the design error to an output
port.
3. The testbench must contain a monitor that can detect the design error that was first activated then propagated to a
point for detection.
Code coverage is a measurement of structures within the source code that have been activated during simulation. One
limitation with code coverage metrics is that you might achieve 100% code coverage during your regression run, which
means that your testbench provided stimulus that activated all structures within your RTL source code, yet there are still
bugs in your design. For example, the input stimulus might have activated a line of code that contained a bug, yet the
testbench did not generate the additional required stimulus that propagates the effects of the bug to some point in the
testbench where it could be detected. In fact, researchers have studied this problem and found cases where a testbench
achieved 90% code coverage, yet only 54% of the covered code would have been observable during a simulation run.[2]
That means that a bug could exist on a line of code that had been marked as covered—yet the bug was never detected
due to insufficient input stimulus to propagate the bug to an observability point.
Another limitation of code coverage is that it does not provide an indication of exactly what functionality defined in the
specification was actually tested. For example, you could run into a situation where you achieved 100% code coverage,
and then assume you are done. Yet, there could be functionality defined in the specification that was never tested—or
even functionality that had never been implemented! Code coverage metrics will not help you find these situations.
Even with these limitations, the automatic aspect of code coverage makes it a relatively simple way to identify input
stimulus deficiencies in your testbench. It is also a great first choice of coverage metric as you start to evolve your
advanced verification process capabilities.
Toggle Coverage
Toggle coverage is a code coverage metric used to measure the number of times each bit of a register or wire has toggled
its value. Although this is a relatively basic metric, many projects have a testing requirement that all ports and registers,
at a minimum, must have experienced a zero-to-one and one-to-zero transition.
In general, reviewing a toggle coverage analysis report can be overwhelming and of little value if not carefully focused.
For example, toggle coverage is often used for basic connectivity checks between IP blocks. In addition, it can be useful
to know that many control structures, such as a one-hot select bus, have been fully exercised.
Line Coverage
Line coverage is a code coverage metric we use to identify which lines of our source code have been executed during
simulation. A line coverage metric report will have a count associated with each line of source code indicating the total
number of times the line has executed. The line execution count value is not only useful for identifying lines of source
code that have never executed, but also useful when the engineer feels that a minimum line execution threshold is
required to achieve sufficient testing.
Line coverage analysis will often reveal that a rare condition required to activate a line of code has not occurred due to
missing input stimulus. Alternatively, line coverage analysis might reveal that the data and control flow of the source
code prevented it from executing, either because of a bug in the code or because it is dead code that is not needed under
certain IP configurations. For unused or dead code, you might choose to exclude or filter this code during the coverage
recording and reporting steps, which allows you to focus only on the relevant code.
Statement Coverage
Statement coverage is a code coverage metric we use to identify which statements within our source code have been
executed during simulation. In general, most engineers find that statement coverage analysis is more useful than line
coverage since a statement often spans multiple lines of source code, or multiple statements can occur on a single line of
source code.
A metrics report used for statement coverage analysis will have a count associated with each line of source code
indicating the total number of times the statement has executed. This statement execution count value is not only useful
for identifying lines of source code that have never executed, but also useful when the engineer feels that a minimum
statement execution threshold is required to achieve sufficient testing.
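To illustrate the difference between line and statement counting, consider the following fragment (purely illustrative; it is not taken from any of the designs used later in this book):

// Illustrative fragment only: how line coverage and statement coverage count differently
module stmt_vs_line (input logic clk, a, b, output logic y, z, w);
  always_ff @(posedge clk) begin
    // A single statement split over two lines:
    // statement coverage counts it once, line coverage tracks two lines
    y <= a &&
         b;
    // Two statements on one line:
    // line coverage tracks one line, statement coverage counts two statements
    z <= a ^ b; w <= a | b;
  end
endmodule: stmt_vs_line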
Block Coverage
Block coverage is a variant on the statement coverage metric which identifies whether a block of code has been executed
or not. A block is defined as a set of statements between conditional statements or within a procedural definition, the key
point being that if the block is reached, all the lines within the block will be executed. This metric is used to prevent
unscrupulous engineers from achieving a higher statement coverage figure by simply adding more statements to their code.
Branch Coverage
Branch coverage (also referred to as decision coverage) is a code coverage metric that reports whether Boolean
expressions tested in control structures (such as the if, case, while, repeat, forever, for and loop statements) evaluated to
both true and false. The entire Boolean expression is considered one true-or-false predicate regardless of whether it
contains logical-and or logical-or operators.
Expression Coverage
Expression coverage (sometimes referred to as condition coverage) is a code coverage metric used to determine if each
condition evaluated to both true and false. A condition is a Boolean operand that does not contain logical operators.
Hence, expression coverage measures the Boolean conditions independently of each other.
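The following fragment (again purely illustrative) shows the difference between the two metrics: branch coverage treats the whole if predicate as a single true/false decision, whereas expression coverage checks each operand of the predicate independently:

// Illustrative fragment only: branch coverage vs. expression (condition) coverage
module branch_vs_expr (input logic clk, mode, ready, output logic grant);
  always_ff @(posedge clk) begin
    // Branch coverage asks: has this decision evaluated to both true and false?
    // Expression coverage asks: have 'mode' and 'ready' each been observed
    // at both 0 and 1, independently of one another?
    if (mode && ready)
      grant <= 1'b1;
    else
      grant <= 1'b0;
  end
endmodule: branch_vs_expr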
For unused or unreachable code, you can direct the coverage tool to exclude it during the coverage recording and reporting
steps. Formal tools can be used to automate the identification of unreachable code, and then automatically generate the
exclusion files.
References
[1] J. Miller, C. Maloney, "Systematic mistake analysis of digital computer programs." Communications of the ACM 6
(2): 58-63, February 1963.
[2] F. Fallah, S. Devadas, K. Keutzer: "OCCOM: Efficient Computation of Observability-Based Code Coverage Metrics
for Functional Verification." Proceedings of the Design Automation Conference, 1998: 152-157
[3] DO-178B, "Software Considerations in Airborne Systems and Equipment Certification", RTCA, December 1992,
pp.31, 74.
[4] M. Stuart, D. Dempster: Verification Methodology Manual for Code Coverage in HDL Designs - TransEDA, August
2000
Functional Coverage
The objective of functional verification is to determine if the design requirements, as defined in our specification, are
functioning as intended. But how do you know if all the specified functionality was actually implemented? Furthermore,
how do we know if all the specified functionality was really tested? Code coverage metrics will not help us answer these
questions.
In this section, we introduce an explicit coverage metric referred to as functional coverage, which can be associated with
either the design's specification or implementation coverage space. The objective of measuring functional coverage is to
measure verification progress with respect to the functional requirements of the design. That is, functional coverage helps
us answer the question: Have all specified functional requirements been implemented, and then exercised during
simulation? The details on how to create a functional coverage model are discussed separately in the Specification to
functional coverage chapter.
Benefits:
The origin of functional coverage can be traced back to the 1990's with the emergence of constrained-random
simulation. Obviously, one of the value propositions of constrained-random stimulus generation is that the simulation
environment can automatically generate thousands of tests that would have normally required a significant amount of
manual effort to create as directed tests. However, one of the problems with constrained-random stimulus generation is
that you never know exactly what functionality has been tested without the tedious effort of examining waveforms after a
simulation run. Hence, functional coverage was invented as a measurement to help determine exactly what
functionality a simulation regression tested without the need for visual inspection of waveforms.
Today, the adoption of functional coverage is not limited to constrained-random simulation environments. In fact,
functional coverage provides an automatic means for performing requirements tracing during simulation, which is often
a critical step required for DO-254 compliance checking. For example, functional coverage can be implemented with a
mechanism that links to specific requirements defined in a specification. Then, after a simulation run, it is possible
to automatically measure which requirements were checked by a specific directed or constrained-random test—as well
as automatically determine which requirements were never tested.
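As a simple illustration of one possible mechanism (the requirement identifier and covergroup shown here are hypothetical, not part of any standard), the covergroup name and comment options can carry a requirement tag so that coverage results can be traced back to the specification:

// Hypothetical sketch: tagging a covergroup with a requirement identifier so that
// coverage reports can be traced back to the specification.
class response_coverage;
  covergroup req_tagged_cg with function sample(bit [1:0] resp);
    option.name    = "REQ_0042_response_codes";   // requirement tag is illustrative
    option.comment = "Spec section 4.2: all response codes observed";
    RESP: coverpoint resp;
  endgroup: req_tagged_cg

  function new();
    req_tagged_cg = new();
  endfunction
endclass: response_coverage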
Limitations:
Since functional coverage is not an implicit coverage metric, it cannot be automatically extracted. Hence, this requires
the user to manually create the coverage model. From a high-level, there are two different steps involved in creating a
functional coverage model that need to be considered:
1. Identify the functionality or design intent that you want to measure
2. Implement the machinery to measure the functionality or design intent
The first step is addressed through verification planning, and the details are addressed in the section on getting from a
specification to functional coverage.
The second step involves coding the machinery for each of the coverage items identified in the verification planning
step (for example, coding a set of SystemVerilog covergroups for each verification objective identified in the verification
plan). During the coverage model implementation phase, there are also many details that need to be considered, such as:
identifying the appropriate point to trigger a measurement and defining controllability (disable/enable) aspects for the
measurement. These and many other details are addressed in the detailed coverage examples.
Since the functional coverage must be manually created, there is always a risk that some functionality that was specified
is missing in the coverage model.
Assertion Coverage
The term assertion coverage has many meanings in the industry today. For example, some people define assertion
coverage as the ratio of the number of assertions to RTL lines of code. However, assertion density is a more accurate term
that is often used for this metric. For our discussion, we use the term assertion coverage to describe an implementation
of coverage properties using assertions.
Single write and read bus sequences for our non-pipelined bus protocol are illustrated in Figure 2.
Figure 2. Write and read cycles for a simple nonpipelined bus protocol
To verify our bus example, it's important to test the boundary conditions for the address bus for both the write sequence
and read sequence (that is, the bits within addr at some point contained all zeros and all ones). In addition, it's also
important that we have covered a sufficient number of non-boundary conditions on the address bus during our
regression. We are only interested in sampling the address bus when the slave is selected and the enable strobe is active
(that is, sel==1'b1 && en==1'b1). Finally, we will want to keep track of separate write and read events for these
coverage items to ensure that we have tested both these operations sufficiently.
This is one example of using cover groups to model functional coverage (e.g., the SystemVerilog covergroup
construct). In addition, we could apply the same data coverage approach to measuring the read and write data busses.
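A minimal sketch of such a covergroup is shown below; the 16-bit address width, the signal names and the helper class are assumptions made for illustration rather than part of a specific example:

// Sketch only: data coverage for the non-pipelined bus address, split by read/write.
// Signal names (addr, sel, en, wr) and the 16-bit width are illustrative assumptions.
class bus_addr_coverage;
  covergroup bus_addr_cg with function sample(bit [15:0] addr, bit wr);
    option.name = "bus_addr_cg";
    ADDR: coverpoint addr {
      bins all_zeros = {16'h0000};              // boundary condition
      bins all_ones  = {16'hFFFF};              // boundary condition
      bins others[8] = {[16'h0001:16'hFFFE]};   // a spread of non-boundary values
    }
    KIND: coverpoint wr { bins read = {0}; bins write = {1}; }
    ADDR_BY_KIND: cross ADDR, KIND;             // track reads and writes separately
  endgroup: bus_addr_cg

  function new();
    bus_addr_cg = new();
  endfunction

  // Call from the monitor only when the slave is selected and the enable strobe is active
  function void sample_if_active(bit [15:0] addr, bit sel, bit en, bit wr);
    if (sel && en) bus_addr_cg.sample(addr, wr);
  endfunction
endclass: bus_addr_coverage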
Now, let's look at cover properties with respect to this example. There is a standard sequence that is followed for both
the write and read cycle. For example, let's examine a write cycle. At clock one, since both the slave select (sel) and bus
enable (en) signals are de-asserted, our bus is in an INACTIVE state. The first clock of the write sequence is called the
bus START state, which the master initiates by asserting one of the slave select lines (sel==1'b1). During the START
state, the master places a valid address and valid data on the bus. The data transfer (referred to as the bus ACTIVE state)
actually occurs when the master asserts the bus enable strobe signal (en). In our case, it is detected on the rising edge of
clock three. The address, data, and control signals all remain valid throughout the ACTIVE state.
When the ACTIVE state completes, the bus enable strobe signal (en) is de-asserted by the bus master, and thus
completes the current single write operation. If the master has finished transferring all data to the slave (that is, there are
no more write operations), then the master de-asserts the slave select signal (sel). Otherwise, the slave select signal
remains asserted, and the bus returns to the bus START state to initiate a new write operation. Multiple back-to-back
write operations (without returning to the bus INACTIVE state) are known as a burst write.
From a temporal coverage perspective, a set of assertions could be written to ensure proper sequencing of states on the
bus. For example, the only legal bus state transitions are illustrated in Figure 3. Furthermore, it's important to test a single
write and read cycle, as well as burst read and write operations. In fact, we might want to measure the various burst
write and read cycles.
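As a sketch of the idea (the signal names clk, sel, en and wr are assumptions carried over from the description above), a cover directive on a sequence can confirm that a complete INACTIVE to START to ACTIVE write transfer has been observed:

// Sketch only: temporal coverage of a single write transfer
// (clk, sel, en and wr are assumed signal names).
module bus_temporal_cover (input logic clk, sel, en, wr);

  // INACTIVE (sel low) -> START (sel high, en low) -> ACTIVE (sel and en high) -> end
  sequence single_write_transfer;
    (!sel && !en) ##1 (sel && !en && wr) ##1 (sel && en && wr) ##1 !en;
  endsequence: single_write_transfer

  COVER_SINGLE_WRITE: cover property (@(posedge clk) single_write_transfer);

endmodule: bus_temporal_cover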
By combining cover groups and cover properties, we are able to achieve a higher fidelity coverage model that more
accurately allows us to measure key features of the design.
Details on how to code temporal coverage are covered in the APB3 Bus protocol monitor example.
It is a good idea to run a few simulations that capture coverage metrics early in the project cycle (that is, before you start
seriously gathering coverage metrics) to work out any potential issues in your coverage flow.
From a high-level perspective, there are generally four main steps involved in a functional coverage flow, which include:
1. Create a functional coverage model
2. If using assertions, instrument the RTL model to gather coverage
3. Run simulation to capture and record coverage metrics
4. Report and analyze the coverage results
Part of the analysis step is to identify coverage holes, and determine if the coverage hole is due to one of three
conditions:
1. Missing input stimulus required to activate the uncovered functionality
2. A bug in the design (or testbench) that is preventing the input stimulus from activating the uncovered functionality
3. Unused functionality for certain IP configurations, or functionality that is expected to be unreachable during normal
operating conditions
The first condition requires you to either write additional directed tests or adjust random constraints to generate the
required input stimulus that targets the uncovered functionality. The second condition obviously requires the engineer to
fix the bug that is preventing the uncovered functionality from being exercised. The third condition can be addressed by
directing the coverage tool to exclude the unused or unreachable functionality during the coverage recording and
reporting steps.
Specification to coverage
Arriving at functional coverage closure is a process that starts with the functional specification for the design, which is
analysed to determine:
• What features need to be tested
• Under what conditions the features need to be tested
• What testbench infrastructure is required to drive and monitor the design's interfaces
• How the testbench will check that the features work
Deriving a functional coverage model is not an automatic process; it requires interpretation of the available specifications,
and the implementation of the model requires careful thought.
The Process
The process that results in a functional coverage model is usually iterative and the model is built up over time as each
part of the testbench and stimulus is constructed. Each iteration starts with the relevant and available functional
specification documents which are analysed in order to identify features that need to be checked by some combination of
configuration and stimulus generation within the testbench.
In general terms, a testbench has two sides to it: a control path used to stimulate the design under test to get it into
different states to allow its features to be checked, and an analysis side which is used to observe what the design does in
response to the stimulus. A self-checking mechanism should be implemented in the testbench to ensure that the design is
behaving correctly; this is usually referred to as the scoreboard. The role of the functional coverage model is to ensure
that the tests that the DUT passes have checked the design features for all of the relevant conditions. The functional
coverage model should be based on observations of how the design behaves rather than how it has been asked to behave
and should therefore be implemented in the analysis path. The easiest way to think about this is that the testbench, the
stimulus that runs on it, and the scoreboard(s) have to be designed to test all the features of a design, and that the
functional coverage model is used to ensure that all the desired variations of those tests have been seen to complete
successfully.
Verification is an incomplete process; even for "simple" designs it can be difficult to verify everything in the time available.
For reasonably sized designs there is a trade-off between what could be verified and the time available to implement, run,
and debug test cases; this leads to prioritisation based on the technical and commercial background to the project. A wise
verification strategy is to start with the highest priority items and work down the priority order, whilst being prepared to
re-prioritise the list as the project progresses. The functional coverage model should evolve as each design feature is
tested, and each additional part of the functional coverage model should be put in place before the stimulus.
Process Guidelines
The functional coverage model is based on functional requirements
The testbench is designed to test the features of the design. The role of the functional coverage model is to check that the
different variants of those features have been observed to work correctly. Features may also be referred to as
requirements or in some situations as stories.
For instance, consider a DUT that generates a data packet with a CRC field. The CRC is based on the contents of the packet,
which has, say, 10 variants. The testbench generates stimulus that makes the DUT produce the data packets, and the scoreboard
checks the CRC field to make sure that the DUT has calculated it correctly. The role of the functional coverage monitor
in this case is to ensure that all 10 packet variants are checked out.
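A covergroup for this check might look like the following sketch; the packet_variant_e enumeration and the idea of sampling from the scoreboard once the CRC check has passed are illustrative assumptions:

// Sketch only: check that all 10 packet variants have been seen to pass the CRC check.
// The enumeration and the sampling point are illustrative assumptions.
typedef enum bit [3:0] { VARIANT_0, VARIANT_1, VARIANT_2, VARIANT_3, VARIANT_4,
                         VARIANT_5, VARIANT_6, VARIANT_7, VARIANT_8, VARIANT_9 } packet_variant_e;

class packet_crc_coverage;
  covergroup packet_variant_cg with function sample(packet_variant_e variant);
    option.name = "packet_variant_cg";
    VARIANT: coverpoint variant;   // one automatic bin per enumeration value
  endgroup: packet_variant_cg

  function new();
    packet_variant_cg = new();
  endfunction

  // Called by the scoreboard only after the CRC field has been checked and found correct
  function void sample_passing_packet(packet_variant_e variant);
    packet_variant_cg.sample(variant);
  endfunction
endclass: packet_crc_coverage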
There are two complementary ways of modeling functional coverage:
• Covergroup modeling, implemented with SystemVerilog covergroups, is used for checking permutations of condition and state when a known result is achieved.
• Cover property modeling, implemented with SystemVerilog Assertions (sequences and properties), is used for checking that a set of state transitions has been observed.
Covergroup functional coverage relies on sampling the value of one or more data fields to count how many times
different permutations of those values occur.
Cover property, or temporal based coverage, is based on counting how many times a particular sequence of states and/or
conditions occurred during a test. Temporal coverage is usually used to get coverage on control paths or protocols where
timing relationships may vary. Examples include:
• Whether a FIFO has been driven into an overflow or underflow condition
• Whether a particular type of bus cycle has been observed to complete
The first step in developing a functional coverage model is deciding which of these two approaches should be taken for
each of the areas of concern.
Are there times when the data coverage sample is not valid?
If there are, then guards will have to be coded into the functional coverage implementation code.
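For example, a covergroup can be guarded with an iff clause on the coverpoint, or by only calling sample() when the data is known to be valid. The following sketch assumes a data-valid qualifier signal:

// Sketch only: guarding data coverage so that it is not collected when the data is invalid.
// The 'valid' qualifier is an assumed signal.
class guarded_coverage;
  covergroup guarded_cg with function sample(bit [7:0] data, bit valid);
    option.name = "guarded_cg";
    DATA: coverpoint data iff (valid);   // bins only increment when valid is true
  endgroup: guarded_cg

  function new();
    guarded_cg = new();
  endfunction

  function void write(bit [7:0] data, bit valid);
    if (valid) guarded_cg.sample(data, valid);  // alternatively, guard at the sample call
  endfunction
endclass: guarded_coverage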
Summary
When considering how a design feature is to be tested, and what the covergroup based functional coverage model for that
feature should be, remember to answer these questions:
• What are the dependencies between the values? Identify the important cross products between data values.
• Are there illegal conditions? Identify values, or combinations of values, that should not occur.
• When is the data invalid? Identify conditions when the data should not be sampled.
Hybrid Coverage
There may be times when a hybrid of data coverage and temporal coverage techniques is required to collect specific
types of functional coverage. For example, checking that all modes of protocol transfer have occurred is best done by
writing a property or sequence that identifies when the transfer has completed successfully and then sampling a
covergroup based on the interesting signal fields of the protocol to check that all relevant conditions are seen to have
occurred.
The APB Bus protocol monitor contains an example implementation of using hybrid functional coverage.
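The idea can be sketched as follows; the signal names and the simple completion condition are assumptions, and the APB monitor example shows a complete implementation:

// Sketch only: hybrid coverage - a property identifies the end of a successful transfer
// and the same condition triggers sampling of a covergroup built from the protocol fields.
// Signal names (clk, sel, en, ready, wr, err) are illustrative assumptions.
module hybrid_coverage_sketch (input logic clk, sel, en, ready, wr, err);

  covergroup transfer_cg with function sample(bit wr_s, bit err_s);
    option.name = "transfer_cg";
    KIND:   coverpoint wr_s  { bins read = {0}; bins write = {1}; }
    STATUS: coverpoint err_s { bins ok   = {0}; bins error = {1}; }
    KIND_X_STATUS: cross KIND, STATUS;
  endgroup: transfer_cg

  transfer_cg cg = new();

  // Temporal part: a transfer completes when sel, en and ready are all sampled high
  COVER_TRANSFER_END: cover property (@(posedge clk) sel && en && ready);

  // Data part: sample the covergroup only when the same completion condition holds
  always @(posedge clk)
    if (sel && en && ready) cg.sample(wr, err);

endmodule: hybrid_coverage_sketch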
The following summarises, for each design type, whether covergroups and assertions are typically used, the functional coverage modeling strategy, and the associated example:
• Control based designs (covergroups: maybe; assertions: yes): In this style of design there are timing relationships between different signals which need to be checked and seen to work. Example: APB Bus Protocol Example.
• Peripheral style design, programmed via registers (covergroups: yes; assertions: maybe): Most of the functional coverage can be derived from the content of the registers which are used to control and monitor the behaviour of the device. The register interface may also serve the data path. There may be scope for using assertions on signal interfaces. Example: UART Coverage Example.
• DSP datapath style design (covergroups: yes; assertions: no): In this class of design, the stimulus pumps data through the design datapath and compares the output against a reference model. The functional coverage is primarily about ensuring that the algorithm 'knobs' have been tested sufficiently. Example: Biquad Filter Example.
• Aggregator/Controller style, e.g. Memory Controller (covergroups: yes; assertions: yes): Coverage of combinations of abstract stimulus on multiple ports, coverage of config registers, coverage of features of the target DDR specification. Example: to be released.
• SoC with vertical reuse of UVM analysis components (covergroups: yes; assertions: maybe): At the SoC level not all functional coverage is relevant and careful attention to the verification plan is required. Example: to be released.
Covergroup Labeling
The way in which you use labeling when coding a covergroup can have a huge impact on understanding the coverage
results. A covergroup can be assigned an option.name string which helps to identify which particular part of a
testbench the coverage is associated with. Inside a covergroup, coverpoints can be labelled and bins can be named.
Using all of these techniques makes it much easier to understand the coverage results during analysis.
Covergroup naming
If multiple instances of the same covergroup are used within a testbench, then the option.name parameter can be used to
assign an identity string to each instance. The name string can be passed in as an argument when the covergroup is
constructed. In a UVM environment, the name could be passed in using the get_full_name() method. See the following code
example.
// UVM component containing a covergroup
class my_cov_mon extends uvm_component;

  covergroup my_cg(string cg_name);
    option.name = cg_name;
    //...
  endgroup: my_cg

  function new(string name = "my_cov_mon", uvm_component parent = null);
    super.new(name, parent);
    my_cg = new(this.get_full_name()); // Gets the UVM hierarchy for the component
  endfunction

endclass: my_cov_mon
A covergroup can also be named programmatically using the covergroup set_inst_name() built-in method.
// UVM covergroup-based component
class my_cov_mon extends uvm_subscriber #(my_txn);
  covergroup my_cg;
    //...
  endgroup: my_cg
  function new(string name = "my_cov_mon", uvm_component parent = null);
    super.new(name, parent);
    my_cg = new();
    my_cg.set_inst_name({this.get_full_name(), ".my_cg"}); // Names the covergroup instance
  endfunction
  //... (write() method omitted)
endclass: my_cov_mon
// Word format covergroup relying on automatically created bins
covergroup tx_word_format_cg with function sample(bit[5:0] lcr);
  option.name = "tx_word_format";
  option.per_instance = 1;
  coverpoint lcr;  // no labels or named bins - one auto-bin is created for each of the 64 values
endgroup: tx_word_format_cg
In order to check that all possible word formats have been transmitted, we could implement a covergroup by creating a
coverpoint for LCR[5:0] and not specifying any bins. This would create a set of default bins, one for each possible value
of the register field, as shown in the code example above. If the functional coverage collected samples each of these bins
at least once, then there is no problem, but if not, it is reasonably difficult to figure out which bin corresponds to which
condition - see the 'before' screen shot from the Questa covergroup browser. Here, not using labels has caused the
simulator to use auto-bins, which means that the missing bin values need to be converted to binary and then mapped to
the register fields to identify the missing configurations.
A better way to implement the covergroup is to use a labeled coverpoint for each register field and then use the bins
syntax to name each of the values in the register truth table. When this is simulated, the cross products created reflect the
different bin labels, which makes it much easier to determine which functional coverage conditions have not been
sampled. It also makes it easier to see whether there are any gross coverage conditions that have been missed. See the
'after' screen shot from the Questa covergroup GUI for the refactored covergroup.
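A sketch of such a refactored covergroup is shown below. It assumes the standard 16550-style LCR field layout (word length in LCR[1:0], stop bits in LCR[2], parity enable in LCR[3], parity type in LCR[5:4]) and would be declared inside the coverage monitor component; check the datasheet for the actual field definitions before reusing it.

// Sketch only: labeled coverpoints and named bins for each LCR field
// (bit positions assume a standard 16550-style Line Control Register).
covergroup tx_word_format_cg with function sample(bit[5:0] lcr);
  option.name = "tx_word_format";
  option.per_instance = 1;

  WORD_LENGTH: coverpoint lcr[1:0] {
    bins bits_5 = {2'b00};
    bins bits_6 = {2'b01};
    bins bits_7 = {2'b10};
    bins bits_8 = {2'b11};
  }
  STOP_BITS: coverpoint lcr[2] {
    bins one_stop_bit  = {1'b0};
    bins two_stop_bits = {1'b1};
  }
  PARITY_EN: coverpoint lcr[3] {
    bins disabled = {1'b0};
    bins enabled  = {1'b1};
  }
  PARITY_TYPE: coverpoint lcr[5:4] {
    bins odd        = {2'b00};
    bins even       = {2'b01};
    bins stick_odd  = {2'b10};
    bins stick_even = {2'b11};
  }
  WORD_FORMAT: cross WORD_LENGTH, STOP_BITS, PARITY_EN, PARITY_TYPE;
endgroup: tx_word_format_cg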
Implementation Options
The analysis of functional coverage information is affected by the way in which the coverage results are reported. There
are three covergroup options which impact coverage reporting and can cause considerable confusion, and these are:
• option.per_instance
• option.get_inst_coverage
• type_option.merge_instances
If these options are not specified in the code that implements a covergroup, then they are not enabled by default. In other
words, they are set to 0.
These three options should be explicitly declared in the covergroup to ensure that the coverage computation and reporting
is consistent and behaves as required.
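For example, a covergroup might declare them explicitly as follows (a minimal sketch; the values shown are one common choice rather than a recommendation for every situation):

// Sketch only: declaring the reporting options explicitly rather than relying
// on their default values of 0.
class options_example;
  covergroup my_cg with function sample(bit [3:0] value);
    option.name                 = "my_cg";
    option.per_instance         = 1;  // break results out for each covergroup instance
    option.get_inst_coverage    = 1;  // allow $get_inst_coverage() to query each instance
    type_option.merge_instances = 1;  // report type coverage as a merge rather than a weighted average
    VALUE: coverpoint value;
  endgroup: my_cg

  function new();
    my_cg = new();
  endfunction
endclass: options_example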
per_instance option
If the covergroup option.per_instance is set to 1, then the covergroup reporting is broken out per instance, but the overall
coverage reported is still the weighted average. In a testbench where the same covergroup is instanced once per port, for
example, this would enable the coverage for each port to be examined, possibly leading to the detection of a design bug or
a shortcoming in the stimulus generation.
merge_instances option
If the covergroup type_option.merge_instances is set to 1, then the overall coverage reported for all the instances of the
covergroup is a merge, or logical OR, of all the coverage rather than a weighted average. This is potentially useful if you
have multiple instances of the same design IP and it is being exercised in different ways by different parts of the
testbench. One outcome of using the merge_instances option is that a covergroup instance that achieves 100% coverage
can mask another instance that achieves 0% coverage, since the overall coverage will be reported as 100%.
get_inst_coverage option
To help with the scenario where the merge_instances option has been enabled, the option.get_inst_coverage variable can
be set to 1 to enable the SystemVerilog $get_inst_coverage() system call to return the coverage for an instance of a
covergroup, therefore allowing the coverage for all individual instances to be checked. If the merge_instances option is
set to 0, then the get_inst_coverage variable has no effect.
Summary
Interaction between the per_instance and merge_instances settings:
• per_instance = 0, merge_instances = 0: Overall coverage is reported as a weighted average of the coverage for all
instances of the covergroup.
• per_instance = 1, merge_instances = 0: Overall coverage is reported as a weighted average of the coverage for all
instances of the covergroup, and broken out for each instance of the covergroup.
• per_instance = 0, merge_instances = 1: Overall coverage is reported as a merge of the coverage for the individual
instances of the covergroup.
With the APB3 protocol, a single master can interface to several slave peripheral devices. The master generates a set of
control fields for address, write, and write data which are common to all the slaves. Each slave is selected by an
individual peripheral select line (PSEL) and then enabled by a common PENABLE signal. Each slave generates response
signals, ready, read data and status which are multiplexed back to the master. The block diagram shows a typical APB3
peripheral block.
The timing relationship between the APB3 signals is shown in the timing diagram below.
A basic set of checks is that none of the bus control signals are at an unknown value when a PSEL line is active; see the
unknown signal properties section of the example for an implementation.
Timing Relationships
The timing relationships between the signals in the protocol can be described using sequences and properties. If a
covered sequence completes or an asserted and covered property passes then functional coverage can be assumed for the
function in question. For the APB3 protocol, the following temporal relationships can be defined:
• Once PREADY is sampled at logic 1, PENABLE shall go low by the next clock edge
• When a PSEL line goes to a logic 1, then the following signals shall be stable until the end of the cycle when
PREADY is sampled at a logic 1
• PSEL
• PWRITE
• PADDR
• PWDATA (iff PWRITE is at logic 1)
• There shall be at least one clock cycle where PENABLE is at logic 0, between bus transfers
• When a PSEL line goes to a logic 1, then PENABLE shall go to a logic 1 on the following clock edge
See the Timing Relationships section on the example page for an implementation of these properties.
Other Properties
There may be other protocol rules which are not strictly temporal in nature. For the APB3 protocol the following
property is true:
• Only one PSEL line shall be active at a logic 1 at any time
See the Other Properties section of the examples page for an implementation.
Functional Coverage
In addition to the functional coverage represented by the protocol assertions which check for valid transfers, we need to
check that all possible types of transfer have occurred. This is best done by using data coverage for the various bus fields
to check that we have seen transfers complete for each of the valid values. The fields that are relevant to bus protocol
functional coverage are:
• PSEL - That all PSEL lines on the bus have been seen to be active - i.e. transfers occurred to all peripherals on the bus
• PWRITE - That we have seen reads and writes take place
• PSLVERR - That we have seen normal and error responses occur
Creating a cross product between these fields checks that all types of transfer have occurred between the master and each
slave on the APB3 bus. See the Functional Coverage section on the examples page for an implementation.
Other types of functional coverage that could be collected would be:
• Peripheral delay - checking that a range of peripheral delays have been observed
• Peripheral address ranges - Checking that specific address ranges have been accessed
However, these are likely to be design specific and should be collected using a separate monitor.
• PENABLE de-assertion: PENABLE is de-asserted once PREADY becomes active (Assertion, Cover directive, 1)
• PSEL to PENABLE: There is only one clock delay between PSEL and PENABLE (Assertion, Cover directive, 1)
• Signal Stability: When PSEL becomes active, the PWRITE, PADDR, and PWDATA signals should be stable to the end
of the cycle (Assertion, Cover directive, 1)
Other Checks
Functional Coverage
• APB3 Protocol: All types of APB3 protocol transfers have taken place with all types of response for all active PSEL
lines (Covergroup, 2)
property CONTROL_SIGNAL_VALID(signal);
@(posedge PCLK)
$onehot(PSEL) |-> !$isunknown(signal);
endproperty: CONTROL_SIGNAL_VALID
Timing Relationships
The monitor implements the timing relationships described in English on the previous page. The functional coverage
strategy is to assume that if these assertions do not fail but are seen to complete with a cover directive then they add valid
functional coverage:
// PENABLE goes low once PREADY is sampled
property PENABLE_DEASSERTED;
@(posedge PCLK)
$rose(PENABLE && PREADY) |=> !PENABLE;
endproperty: PENABLE_DEASSERTED
// FROM PSEL being active, then signal must be stable until end of cycle
property PSEL_ASSERT_SIGNAL_STABLE(signal);
@(posedge PCLK)
(!$stable(PSEL) && $onehot(PSEL)) |-> $stable(signal)[*1:$] ##1 $fell(PENABLE);
endproperty: PSEL_ASSERT_SIGNAL_STABLE
Other Properties
The monitor checks that only one PSEL line is active at a logic 1 at any point in time. Since this property is checked on
every clock cycle, if there are no failures then it implies functional coverage.
// Check that only one PSEL line is valid at a time:
property PSEL_ONEHOT0;
@(posedge PCLK)
$onehot0(PSEL);
endproperty: PSEL_ONEHOT0
Functional Coverage
To check that we have seen transfers complete correctly for each of the possible protocol conditions for each of the
peripherals on the bus, we implement an array of covergroups, one for each peripheral, which collects the protocol
coverage specific to that peripheral. The covergroups are sampled when a simple sequence holds. Note that to improve
performance each covergroup is only sampled when the relevant PSEL line is true.
// Functional Coverage for the APB transfers:
//
// Have we seen all possible PSELS activated?
// Have we seen reads/writes to all slaves?
// Have we seen good and bad PSLVERR results from all slaves?
covergroup APB_accesses_cg(int i);
  //...
endgroup: APB_accesses_cg
sequence END_OF_APB_TRANSFER;
@(posedge PCLK)
$rose(PENABLE & PREADY);
endsequence: END_OF_APB_TRANSFER
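The body of the covergroup is elided above. A minimal sketch of what it might contain, and of one way of sampling it when the transfer-end condition holds, is shown below; the 16-slave bus width, the coverpoint details and the interface wrapper are assumptions for illustration:

// Sketch only: a possible body for the per-peripheral APB coverage, and one way of
// sampling it at the end of each transfer. Widths and coverpoint details are assumptions.
interface apb_coverage_sketch (input logic PCLK, input logic [15:0] PSEL,
                               input logic PENABLE, PREADY, PWRITE, PSLVERR);

  covergroup APB_accesses_cg(string slave_name) with function sample(bit pwrite, bit pslverr);
    option.name = slave_name;
    option.per_instance = 1;
    RW:  coverpoint pwrite  { bins read = {0}; bins write = {1}; }
    ERR: coverpoint pslverr { bins ok   = {0}; bins error = {1}; }
    APB_TRANSFERS: cross RW, ERR;  // all transfer types with all responses for this slave
  endgroup: APB_accesses_cg

  // One covergroup instance per peripheral select line
  APB_accesses_cg apb_cg[16];
  initial foreach (apb_cg[n]) apb_cg[n] = new($sformatf("APB_accesses_slave_%0d", n));

  // Sample only the covergroup whose PSEL line is active, at the end of each transfer
  always @(posedge PCLK)
    if ($rose(PENABLE & PREADY))
      foreach (apb_cg[n])
        if (PSEL[n]) apb_cg[n].sample(PWRITE, PSLVERR);

endinterface: apb_coverage_sketch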
UART Overview
The function of a Universal Asynchronous Receiver Transmitter (UART) is to transmit and receive characters of
differing formats over a pair of serial lines asynchronously. With an asynchronous serial link there is no shared sampling
clock; instead, the receive channel samples the incoming serial data stream with a clock that is 16x the data rate. When
there is no data to transmit the data lines are held high, and transmission of a data character commences by taking the
data line low for one bit period to transmit the start bit. The receiving end detects the start bit and then samples and
unpacks the serial data stream that can consist of between 5 and 8 bits of data, parity and then a stop bit which is always
a 1.
Register Map
The UART design in this example is based on the industry standard 16550a UART. It has 10 registers which control its
operation and in a system these are used by software to control the device and to send and receive characters. The
transmit and receive paths are buffered with 16 word deep FIFOs.
The register map is summarised here:
• Line Control (LCR), address 0xC, 8 bits, R/W: Sets the format of the UART data word
• Modem Control (MCR), address 0x10, 8 bits, R/W: Used to control the modem interface outputs
• Modem Status (MSR), address 0x18, 8 bits, R: Used to monitor the modem interface inputs
For the UVM testbench, a UVM register model will be written to abstract stimulus for configuring and controlling the
operation of the UART. One benefit of using this register model is that we can reference it for the functional coverage
model. For more details on the UART functionality and the detailed register map, please refer to the datasheet.
External Interfaces
The UART block has a number of discrete interfaces which need to be driven or monitored. The UART example
testbench is implemented using UVM, therefore the driving and monitoring of these interfaces will be done by Universal
Verification Components (UVCs) or agents. If the testbench was implemented using another methodology, then BFM or
BFM-like models would be used. However, the principles of how you model and collect coverage are essentially the
same.
The UART has the following external interfaces which will need to be driven and monitored in the testbench.
• APB Host interface – Requires an APB agent
• TX Serial line – Requires a passive UART agent
• RX Serial line – Requires an active UART agent
• Modem interface – Requires a simple parallel I/O agent
• Interrupt line – Requires a monitor
Testbench Architecture
The UVM testbench architecture used for this example is shown in the block diagram.
An outline functional test plan for the UART has been created as part of the process of mapping its features to test cases
and functional coverage.
In order to check that the transmit channel is working correctly we can compare the content of the analysis transaction
written by the passive UART monitor when a character is received with the character originally written to the transmit
buffer of the UART. This implies scoreboard analysis connections to the UART agent and the APB agent. The UART
transmit buffer writes will have to be buffered in a FIFO structure in the scoreboard so that they can be compared with
the characters received by the UART.
The transmit channel has two buffer status bits (TX empty and TX FIFO empty) which are read back from the Line
Status Register; these need to be tested by the stimulus generation path. There is also a TX FIFO empty status interrupt
which will be discussed in the section on interrupts.
We need to see all possible permutations of these configuration settings in order to say that we have achieved functional
coverage for the transmit channel. An example implementation of the SystemVerilog covergroup used to collect this
functional coverage is implemented in the example UART testbench.
Which values are important? LCR[5:0] - defining all permutations of UART serial word format
When is the right time to sample? When a character has been transmitted
The checking mechanism used by the receive scoreboard is to compare the data sent by the UART agent with the data
read from the receive buffer of the UART device. Any errors inserted by the UART agent need to be seen to be detected
by the design either as bits set in the Line Status Register (LSR) or by the generation of a line status interrupt. The checks
that need to be made by the testbench for the receive channel include:
• That a start bit is detected correctly
• That parity has been received correctly - if not a parity error is generated
• That at least one stop bit has been received - if not a framing error is generated
• That a data overrun condition is detected correctly
• That the data received flag works correctly
• That a break condition is detected correctly
There are a number of receive channel interrupt conditions that are considered in the section on interrupts.
Which values are important? LCR[5:0] - defining all permutations of the UART serial word format; LSR[4:0] - status bits for Break Interrupt (BI), Framing Error (FE), Parity Error (PE), Overrun Error (OE) and Data Received (DR)
What are the dependencies between the values? For error-free RX conditions, DR and all word formats; for injected-error RX, the cross product of the LCR & LSR bits
When is the right time to sample? When an RX character has been received and DR is valid
Which values are important? MCR[4:0] - controlling outputs and loopback mode; MSR[7:0] - input status and changes to input values
What are the dependencies between the values? Each of the modem signals is orthogonal, but the loopback mode creates a dependency between the MCR bits and the MSR bits. For coverage, all permutations are relevant.
When is the right time to sample? When a change occurs on the modem interface, or there is a write to the MCR, as determined by the modem scoreboard.
When is the data invalid? Immediately after a change in the loopback mode, handled by the scoreboard
UART Interrupts
Interrupt handling
Which values are important? IER[3:0] - enables for the four sources of interrupts; IIR[3:0] - identifying the interrupt source
What are the dependencies between the values? Interrupts should only occur if they are enabled. Need to see all valid permutations of interrupt enables and interrupt sources
Are there illegal conditions? Invalid conditions are interrupt sources reported when an interrupt type is not enabled
When is the right time to sample? For the interrupt enables, when an interrupt occurs. For interrupt IDs, when an interrupt occurs, followed by a read from the IIR register
Which values are important? LCR[5:0] - defining the different word formats; FCR[7:6] - defining the different FIFO threshold values
What are the dependencies between the values? Need a cross between the LCR and FCR bits to ensure that FIFO threshold interrupts have occurred for all possible permutations.
When is the right time to sample? When an RX FIFO threshold interrupt occurs
Which values are important? LSR[4:1] - Defining the different types of RX line status
What are the dependencies between the values? None, each status bit has a distinct source
Are there illegal conditions? When the break condition occurs, PE and FE are not valid
When is the right time to sample? When a line status interrupt occurs, followed by a read from the LSR
Which values are important? LCR[5:0] - Defining the UART serial format
What are the dependencies between the values? Cross product defining all permutations of the word format
When is the right time to sample? When a TX empty interrupt occurs, followed by a read from the LSR
Which values are important? MSR[3:0] - The modem i/p signal change flags
What are the dependencies between the values? None, each signal is orthogonal
When is the right time to sample? When a modem status interrupt occurs, followed by a read from the MSR
When is the data invalid? The MSR flags are reset on read, so a second read will return invalid status
Which values are important? DIV1 and DIV2 register contents, potentially all possible values
What are the dependencies between the values? DIV1 & DIV2 are concatenated, otherwise no dependencies
When is the right time to sample? On the rising edge of the BAUD_O signal
When is the data invalid? If the divider registers are being programmed, or have just been programmed, in which case the divide ratio will not match the register content (this is not an error)
Register Interface
Which values are important? Address bits [7:0] and the read/write bit; only valid register addresses are of interest
What are the dependencies between the values? Need to cross the valid addresses with the read/write bit to get the register access space
Are there illegal conditions? The MSR and LSR registers are read only, so writes to these registers are invalid
When is the right time to sample? When an APB bus transaction completes
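A covergroup answering these questions might look like the following sketch, which would sit inside the register coverage monitor. Only the three register addresses listed earlier are shown and the bin names are assumptions; a full model would enumerate all valid register addresses, including the read-only LSR:

// Sketch only: register access coverage - valid addresses crossed with the read/write bit,
// with writes to a read-only register marked as illegal. Addresses shown are illustrative.
covergroup reg_access_sketch_cg with function sample(bit [7:0] addr, bit we);
  option.name = "reg_access_sketch_cg";
  ADDR: coverpoint addr {
    bins lcr = {8'h0C};
    bins mcr = {8'h10};
    bins msr = {8'h18};
    //... bins for the remaining valid register addresses
  }
  RW: coverpoint we { bins read = {0}; bins write = {1}; }
  REG_ACCESS: cross ADDR, RW {
    illegal_bins write_to_read_only = binsof(ADDR.msr) && binsof(RW.write);
  }
endgroup: reg_access_sketch_cg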
Registers
• Reset Values: All registers return the specified reset values (Test result, priority 1)
• Register Accesses: All registers have been accessed in all possible access modes (Covergroup, cross, priority 1)
• Bit level register accesses: All read-write bits in the registers toggle correctly (Test result, priority 1)
• APB Protocol: The APB protocol has been tested in all modes (APB Monitor, priority 1)
Transmitter
• Character formats: All possible character formats are transmitted correctly (Covergroup, cross, priority 1)
• TX FIFO Empty flag: The FIFO empty flag is set when the FIFO is empty and is read back correctly (Design Assertion, Covergroup, priority 1)
• TX empty flag: The transmit empty flag is set correctly and is read back correctly (Design Assertion, Covergroup, priority 1)
Receiver
• Character formats: All possible character formats are received correctly (Covergroup, cross, priority 1)
• Data Received Flag: The data received flag is set when data is available and is read back correctly (Design Assertion, Covergroup, priority 1)
RX Line Status
• Framing Error: Framing errors are detected for one or two stop bits (Design Assertion, Covergroup, priority 2)
• Parity Error: Parity errors are detected for all types of parity (Design Assertion, Covergroup, priority 1)
• Break Indication: A break condition is detected correctly for all character formats (Covergroup, cross, priority 2)
• Overrun Error: RX overrun is detected for all character formats (Covergroup, cross, priority 2)
• FIFOE: The FIFO error condition is valid for all error/indication types (Covergroup, cross, priority 2)
• Status: Any valid combination of error/indicator has been observed (Covergroup, cross, priority 2)
Modem Interface
• Modem Outputs: All combinations of modem output values have been seen (Covergroup, cross, priority 3)
• Modem Inputs: All combinations of modem input values have been seen (Covergroup, cross, priority 3)
• The modem input status change signals work correctly (Design Assertion, Covergroup, priority 3)
• Loopback mode: Modem output bits are routed to the right modem status bits (Covergroup, priority 2)
Interrupts
• Interrupt Enable: All combinations of the interrupt enable bits have been used (Covergroup, cross, priority 1)
• Interrupt ID: All valid interrupt IDs have been detected (Covergroup, cross, priority 1)
• Receive FIFO Interrupt: Seen for all possible character formats (Covergroup, cross, priority 1)
• Receive Line Status Interrupt: Interrupts generated for all possible combinations of errors and indicators for all character formats (Covergroup, cross, priority 1)
• Transmit empty interrupt: Generated for all character formats (Covergroup, cross, priority 1)
• Modem Status interrupt: Generated for all combinations of the signal change bits (Covergroup, cross, priority 3)
• Receive timeout interrupt: Has been checked for the shortest and longest character format and 4 other formats (Covergroup, priority 4)
Baud Rate
• Divider values: Check UART operation for a range of baud rate divider values (Covergroup, priority 1)
• Check the baud rate divider ratio for a selection of values via the baud rate divider output (Covergroup, priority 2)
Code Coverage
• Statement coverage: Check each executable line of the RTL has been covered (priority 1)
• Branch coverage: Check each branch in the RTL has been taken (priority 1)
• FSM coverage: Each arc in the RTL FSMs has been taken (priority 1)
Notes:
1. The priority column indicates the relative importance of each feature. Items marked priority 1 will be verified first,
followed by priority 2 items, down to priority 4.
2. The APB interface behaviour is checked by inserting the APB protocol monitor in the testbench, connected to the
APB port on the UART; its functional coverage will be merged with the other UART functional coverage.
3. There are several checks that are performed using assertions which the designer has implemented in the design; these
are included in the table as Design Assertions.
4. Code coverage is included as a category in the test plan so that it can be tracked
// UART TX coverage monitor (class abridged in this snapshot; the transaction type is assumed)
class uart_tx_coverage_monitor extends uvm_subscriber #(uart_tx_item);
  `uvm_component_utils(uart_tx_coverage_monitor)
  covergroup tx_word_format_cg with function sample(bit [5:0] lcr); // header reconstructed; coverpoints elided
    option.name = "tx_word_format";
    option.per_instance = 1;
    //...
  endgroup: tx_word_format_cg
  //...
endclass: uart_tx_coverage_monitor
// UART modem interface coverage monitor (class abridged in this snapshot; the transaction type is assumed)
class uart_modem_coverage_monitor extends uvm_subscriber #(uart_reg_item);
  `uvm_component_utils(uart_modem_coverage_monitor)

  covergroup mcr_settings_cg; // header reconstructed; coverpoints elided
    option.name = "mcr_settings_cg";
    option.per_instance = 1;
    //...
  endgroup: mcr_settings_cg

  covergroup msr_inputs_cg; // header reconstructed; the crossed coverpoints are elided
    option.name = "msr_inputs_cg";
    option.per_instance = 1;
    //...
    MSR_INPUTS: cross DCTS, DDSR, TERI, DDCD, CTS, DSR, RI, DCD, LOOPBACK;
  endgroup: msr_inputs_cg

  uart_reg_block rm;

  function void write(uart_reg_item t); // body elided
    //...
  endfunction: write
endclass: uart_modem_coverage_monitor
option.name = "interrupt_enable";
option.per_instance = 1;
INT_SOURCE: coverpoint en {
bins rx_data_only = {4'b0001};
bins tx_data_only = {4'b0010};
bins rx_status_only = {4'b0100};
bins modem_status_only = {4'b1000};
bins rx_tx_data = {4'b0011};
bins rx_status_rx_data = {4'b0101};
bins rx_status_tx_data = {4'b0110};
bins rx_status_rx_tx_data = {4'b0111};
bins modem_status_rx_data = {4'b1001};
bins modem_status_tx_data = {4'b1010};
bins modem_status_rx_tx_data = {4'b1011};
bins modem_status_rx_status = {4'b1100};
bins modem_status_rx_status_rx_data = {4'b1101};
bins modem_status_rx_status_tx_data = {4'b1110};
bins modem_status_rx_status_rx_tx_data = {4'b1111};
illegal_bins no_enables = {0}; // If we get an interrupt with no enables it's an error
}
endgroup: int_enable_cg
option.name = "interrupt_enable_and_source";
option.per_instance = 1;
IEN: coverpoint en {
bins rx_data_only = {4'b0001};
bins tx_data_only = {4'b0010};
bins rx_status_only = {4'b0100};
bins modem_status_only = {4'b1000};
bins rx_tx_data = {4'b0011};
bins rx_status_rx_data = {4'b0101};
bins rx_status_tx_data = {4'b0110};
bins rx_status_rx_tx_data = {4'b0111};
bins modem_status_rx_data = {4'b1001};
bins modem_status_tx_data = {4'b1010};
bins modem_status_rx_tx_data = {4'b1011};
bins modem_status_rx_status = {4'b1100};
bins modem_status_rx_status_rx_data = {4'b1101};
bins modem_status_rx_status_tx_data = {4'b1110};
bins modem_status_rx_status_rx_tx_data = {4'b1111};
illegal_bins no_enables = {0}; // If we get an interrupt with no enables its an error
}
endgroup: int_enable_src_cg
option.name = "rx_word_format_interrupt";
option.per_instance = 1;
endgroup: rx_word_format_int_cg
option.name = "lsr_int_src_cg";
option.per_instance = 1;
endgroup: lsr_int_src_cg
There are a few things to note about the bins in this covergroup:
• If a Break occurs, then it is also likely to create framing and parity errors
• The receive line status interrupt enable also enables the RX timeout; this will not be detected by this covergroup,
which is why there is a no_ints bin
option.name = "modem_int_src_cg";
option.per_instance = 1;
endgroup: modem_int_src_cg
Note that the fidelity of this covergroup is reduced, since wildcard bins are used to check that each of the MSR interrupt
source bits is seen to be active, rather than checking all combinations. The reasoning behind this is that each bit is
orthogonal to the others, and therefore there is no functional relationship between them.
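As an illustration of the wildcard bin approach (a sketch; the MSR change-bit positions are assumptions based on the standard 16550 register layout and would sit inside the modem coverage monitor):

// Sketch only: wildcard bins checking that each modem status change bit (MSR[3:0])
// has been seen active at an interrupt, without crossing all combinations.
covergroup modem_int_src_sketch_cg with function sample(bit [3:0] msr_change);
  option.name = "modem_int_src_sketch_cg";
  MSR_SRC: coverpoint msr_change {
    wildcard bins dcts_seen = {4'b???1};  // delta CTS
    wildcard bins ddsr_seen = {4'b??1?};  // delta DSR
    wildcard bins teri_seen = {4'b?1??};  // trailing edge of RI
    wildcard bins ddcd_seen = {4'b1???};  // delta DCD
  }
endgroup: modem_int_src_sketch_cg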
covergroup baud_rate_cg with function sample(bit [15:0] div); // covergroup header reconstructed
  coverpoint div {
    bins div_ratio[] = {16'h1, 16'h2, 16'h4, 16'h8,
                        16'h10, 16'h20, 16'h40, 16'h80,
                        16'h100, 16'h200, 16'h400, 16'h800,
                        16'h1000, 16'h2000, 16'h4000, 16'h8000,
                        16'hfffe, 16'hfffd, 16'hfffb, 16'hfff7,
                        16'hffef, 16'hffdf, 16'hffbf, 16'hff7f,
                        16'hfeff, 16'hfdff, 16'hfbff, 16'hf7ff,
                        16'hefff, 16'hdfff, 16'hbfff, 16'h7fff,
                        16'h00ff, 16'hff00, 16'hffff};
  }
endgroup: baud_rate_cg
option.name = "reg_access_cg";
option.per_instance = 1;
RW: coverpoint we {
endgroup: reg_access_cg
Datapath Coverage
What is a datapath block?
A datapath block takes an input data stream and implements a transform function that generates the output data. The
transfer function may have settings which change its characteristics, or it may be a fixed implementation. In its path from
the input to the output, the data does not interact with other blocks, hence the term datapath. Examples of datapath blocks
include custom DSP functions, modems, encoders, decoders and error correction hardware.
A datapath block is generally tested with meaningful, rather than random, data, and the output is related to the input by
the transform function and is therefore meaningful as well. The input to a datapath block is most likely generated from a
software (C-based) model of the system in which the function was originally modelled, and the output of the block is
usually compared against the output from a golden reference model.
In some cases the output data may require subjective testing. For instance, a video encoding block would require a video
format signal as its input, and the encoded output would have to be inspected visually to check that the result of the
encoding is of acceptable quality.
Functional coverage for a datapath block is usually focussed on its settings (sometimes referred to as the "knobs"), or the
parameters which affect its transform function. The role of the functional coverage model is to check that the block has
been tested with all desired combinations of parameter settings. The value of the data that is fed into the datapath block
may also be relevant to the coverage since it could be used to prove that a combination of input values has been
processed against each valid set of parameters.
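As a generic illustration of this settings-plus-data style of coverage (all names and value ranges here are hypothetical, not taken from the example that follows), such a model often boils down to a covergroup that crosses the parameter settings with a categorisation of the input data:
covergroup datapath_settings_cg with function sample(bit[1:0] gain_setting,
                                                     bit bypass,
                                                     int input_level);
  option.per_instance = 1;
  SETTINGS: coverpoint gain_setting;
  BYPASS: coverpoint bypass;
  DATA: coverpoint input_level {
    bins low  = {[0:99]};
    bins mid  = {[100:9999]};
    bins high = {[10000:$]};
  }
  // Check that every combination of the "knob" settings has been exercised
  // against each category of input data
  KNOBS_X_DATA: cross SETTINGS, BYPASS, DATA;
endgroup: datapath_settings_cg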
In theory, the BiQuad filter design can handle a continuous, or infinite, range of possible input and co-efficient values, so
the verification problem needs to be constrained to something practical. In this case, the IIR filter is going to be used
as a programmable filter for audio data with frequencies between 50 Hz and 20 kHz, and it will be tested for correct
operation as a Low Pass, High Pass and Band Pass filter over the frequency range, varying the co-efficient values to set
the corner frequencies. The co-efficients are stored in registers which can be programmed using an APB interface. The
input data will be a frequency-swept sine wave, and the resultant output sine wave will be checked to make sure that the
right level of attenuation has been achieved according to the intended characteristics of the filter. The diagram below
illustrates the filter testbench architecture.
For each filter type, the filter parameters for corner frequencies will be tested at 200 Hz intervals in the 0 - 4 kHz range,
and then at 1 kHz intervals in the 4 - 20 kHz range. This equates to 36 possible sets of co-efficient values (20 settings at
200 Hz intervals plus 16 at 1 kHz intervals), each of which is valid for a particular corner frequency.
The input frequency sweep waveform will be sampled to ensure that it covers all the frequencies of interest, and this
information should be crossed with the set of co-efficient values to check that all possible combinations have been
observed. This strategy is summarised in the BiQuad IIR Filter Test Plan.
In terms of sampling, the covergroup for a particular filter type should only be sampled when the filter has been
configured in that mode, and then only when the input frequency crosses a frequency increment boundary.
• Which values are important? The calculated discrete values for the filter co-efficients, ordered by filter type, and several discrete input frequencies.
• What are the dependencies between the values? The co-efficients should be crossed with the input frequency to check that all options have been tested.
• Are there illegal conditions? No, since we are representing a sub-set of a continuous range of values, but some filter/frequency values are out of range.
• When is the right time to sample? When the frequency sweep waveform is sampled at one of the frequencies of interest.
• When is the data invalid? The right covergroup needs to be sampled for the right type of filter (LP, HP, BP).
See the example implementation of the BiQuad functional coverage model for more details.
Covergroup Design
Each filter configuration is represented by a set of co-efficient values. These are effectively unique and can be separated
out into groups of values that apply to each of the three filter types; these values then need to be crossed with the filter
input frequency to check that coverage has been obtained for all possible combinations.
One way to do this would be to create a single covergroup with separate coverpoints for each filter type, with bins for
each combination of filter co-efficient values. However, at any particular time the BiQuad filter can only be configured
to operate in one mode, so there is a covergroup for each of the filter types, and only one of the covergroups will be
sampled at each particular frequency change. The code example shown is for the Low Pass filter type, but the other
covergroups only differ in terms of the co-efficient values.
class LP_FILTER_cg_wrapper extends uvm_object;

`uvm_object_utils(LP_FILTER_cg_wrapper)

// Co-efficient values
bit[15:0] b10;
bit[15:0] b11;
bit[15:0] b12;
bit[15:0] a10;
bit[15:0] a11;

// LP_FILTER Covergroup:
covergroup LP_FILTER_cg() with function sample(int frequency);
  option.name = "LP_FILTER_cg";
  option.per_instance = 1;
  // Coverpoints for the input frequency and the co-efficient values, plus their
  // LP_X cross, are described in the text below
endgroup: LP_FILTER_cg

endclass: LP_FILTER_cg_wrapper
When a SystemVerilog covergroup is instantiated inside a class, it has to be constructed in the class constructor method
(new()). The Low Pass filter covergroup is instantiated inside a wrapper class, which allows it to be created when required
by constructing the wrapper object. The covergroup's sample() method is then chained into the wrapper object's sample()
method. This is the recommended way to implement covergroups in a class-based environment.
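As a sketch of how that construction and sample chaining looks inside the LP_FILTER_cg_wrapper class (the default value for the name argument is an assumption):
function new(string name = "LP_FILTER_cg_wrapper");
  super.new(name);
  LP_FILTER_cg = new(); // An embedded covergroup must be constructed in new()
endfunction

// Chain the covergroup's sample() method into the wrapper's sample() method:
function void sample(int frequency);
  LP_FILTER_cg.sample(frequency);
endfunction: sample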
Inside the covergroup itself there is a coverpoint for the frequency, which has a set of bins corresponding to each of
the input frequencies of interest. The coverpoint for the co-efficients is based on the concatenated value of all of the
co-efficients (an 80-bit value), and the bins correspond to the co-efficient values for different configurations from a 200
Hz knee frequency up to 20 kHz. The cross product of the two coverpoints is LP_X.
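A sketch of what the body of the LP_FILTER_cg covergroup might look like is shown below. The frequency bins are built with a "with" clause to pick out the frequencies of interest, and the explicit per-configuration bins for the 80-bit co-efficient coverpoint are not reproduced here:
covergroup LP_FILTER_cg() with function sample(int frequency);
  option.name = "LP_FILTER_cg";
  option.per_instance = 1;
  // Frequencies of interest: every 200 Hz up to 4 kHz, then every 1 kHz up to 20 kHz
  FREQ: coverpoint frequency {
    bins low_band[]  = {[200:4000]}   with (item % 200 == 0);
    bins high_band[] = {[5000:20000]} with (item % 1000 == 0);
  }
  // 80-bit concatenation of the five co-efficient values; one explicit bin per
  // corner frequency configuration would be listed here
  COEFFS: coverpoint {b10, b11, b12, a10, a11};
  // Cross of the co-efficient settings against the input frequency
  LP_X: cross FREQ, COEFFS;
endgroup: LP_FILTER_cg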
// The coverage collector is a UVM analysis component; the parameterisation with
// the sampled input frequency (an int) is an assumption made for this extract:
class biquad_functional_coverage extends uvm_subscriber #(int);

`uvm_component_utils(biquad_functional_coverage)

biquad_env_config cfg;

LP_FILTER_cg_wrapper lp_cg;
HP_FILTER_cg_wrapper hp_cg;
BP_FILTER_cg_wrapper bp_cg;

extern function new(string name, uvm_component parent);
extern function void build_phase(uvm_phase phase);
extern function void write(int t);

endclass: biquad_functional_coverage
function biquad_functional_coverage::new(string name, uvm_component parent);
  super.new(name, parent);
endfunction
function void biquad_functional_coverage::build_phase(uvm_phase phase);
  lp_cg = LP_FILTER_cg_wrapper::type_id::create("Low_Pass_cg");
  hp_cg = HP_FILTER_cg_wrapper::type_id::create("High_Pass_cg");
  bp_cg = BP_FILTER_cg_wrapper::type_id::create("Band_Pass_cg");
endfunction: build_phase
// write() is called via the analysis export; t is assumed to be the current swept
// input frequency. The covergroup wrapper for the active filter mode is loaded with
// the co-efficient values from the register model (cfg.RM) and then sampled:
function void biquad_functional_coverage::write(int t);
  case(cfg.mode)
    LP: begin
      lp_cg.b10 = cfg.RM.B10.f.value[15:0];
      lp_cg.b11 = cfg.RM.B11.f.value[15:0];
      lp_cg.b12 = cfg.RM.B12.f.value[15:0];
      lp_cg.a10 = cfg.RM.A10.f.value[15:0];
      lp_cg.a11 = cfg.RM.A11.f.value[15:0];
      lp_cg.sample(t);
    end
    HP: begin
      hp_cg.b10 = cfg.RM.B10.f.value[15:0];
      hp_cg.b11 = cfg.RM.B11.f.value[15:0];
      hp_cg.b12 = cfg.RM.B12.f.value[15:0];
      hp_cg.a10 = cfg.RM.A10.f.value[15:0];
      hp_cg.a11 = cfg.RM.A11.f.value[15:0];
      hp_cg.sample(t);
    end
    BP: begin
      bp_cg.b10 = cfg.RM.B10.f.value[15:0];
      bp_cg.b11 = cfg.RM.B11.f.value[15:0];
      bp_cg.b12 = cfg.RM.B12.f.value[15:0];
      bp_cg.a10 = cfg.RM.A10.f.value[15:0];
      bp_cg.a11 = cfg.RM.A11.f.value[15:0];
      bp_cg.sample(t);
    end
  endcase
endfunction: write
Although the functional coverage model has been implemented as a UVM class, the same principles could be applied to
a module or interface based implementation.
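For instance, a minimal sketch of a module based alternative is shown below, assuming that the co-efficient values and the swept input frequency are available as signals and that the testbench pulses sample_now at each frequency boundary of interest (all names here are hypothetical):
// Hypothetical module-based coverage collector: the covergroup is embedded in a
// module bound into the testbench and sampled procedurally.
module biquad_coverage_collector(input logic clk,
                                 input logic sample_now, // Pulse at each frequency boundary
                                 input int frequency,    // Current swept input frequency
                                 input logic [15:0] b10, b11, b12, a10, a11);

  covergroup LP_FILTER_cg;
    option.per_instance = 1;
    FREQ: coverpoint frequency {
      bins low_band[]  = {[200:4000]}   with (item % 200 == 0);
      bins high_band[] = {[5000:20000]} with (item % 1000 == 0);
    }
    COEFFS: coverpoint {b10, b11, b12, a10, a11};
    LP_X: cross FREQ, COEFFS;
  endgroup: LP_FILTER_cg

  LP_FILTER_cg lp_cg = new();

  // Sample only when the testbench indicates a frequency of interest has been reached
  always @(posedge clk) begin
    if (sample_now) begin
      lp_cg.sample();
    end
  end

endmodule: biquad_coverage_collector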