Verification Martial Arts: A Verification Methodology Blog

Archive for the 'Coverage, Metrics' Category

Coverage coding: simple tips and gotchas

Posted by paragg on 28th March 2013

Author - Bhushan Safi (E-Infochips)

Functional coverage has been the most widely accepted way by which we track the completeness of any constrained-random testbench. However, does achieving 100% functional coverage mean that the DUV is bug free? Certainly not, but it boosts the confidence of the verification engineer and the management team.

Based on my experience of defining functional covergroups for different projects, I realized that the coverage constructs and options in the SystemVerilog language have their own nuances that one needs to keep an eye out for. These “gotchas” have to be understood so that coverage can be used optimally and the results stay in correct alignment with the intent desired. Let me talk about some of these finer aspects of coverage so that you can use the constructs more productively.

Usage of ignore_bins

The ‘ignore_bins’ construct is meant to exclude a collection of bins from coverage. While using this particular construct, you might end up with multiple ‘shapes’ issues (by ‘shapes’ I mean “Guard_OFF” and “Guard_ON”, which appear in the report whenever ‘ignore_bins’ is used). Let’s look at a simple usage of ignore_bins, as shown in figure 1.

Looking at the code in figure 1, we would assume that since we have set “cfg.disable = 1”, the bin with value 1 would be excluded from the generated coverage report. Here we use the ‘iff’ condition to try to match our intent of not creating a bin for the variable under the said condition. However, in simulations where the sample_event is not triggered, we see that we end up with an instance of our covergroup which still expects both bins to be hit (see the generated report in figure 1). Why does this happen? If you dig deep into the semantics, you will understand that the “iff” condition comes into action only when the event sample_event is triggered. So if we are writing ‘ignore_bins’ for a covergroup which may or may not be sampled in a given run, we need to look for an alternative. Indeed there is a way to address this requirement, and that is through the usage of the ternary operator. Look at the code in figure 2 to see how the ternary operator is used to model the same intent.

Now the report is as you expect!

Using the above-mentioned coding style, we make sure that the bin which is not desired under specific conditions is ignored irrespective of whether or not the covergroup is being sampled. Also, we use the value “2’b11” to make sure that we don’t end up ignoring a valid value of the variable concerned.
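Since the original figures are not reproduced here, the following is a minimal sketch of the two styles, with all names hypothetical (a plain bit cfg_disable stands in for the article’s cfg.disable, since ‘disable’ is a SystemVerilog keyword):

```systemverilog
module tb;
  bit       sample_event;
  bit       cfg_disable;  // stands in for the article's cfg.disable
  bit [1:0] my_var;

  // Style from figure 1: the iff guard is evaluated only when the
  // covergroup is actually sampled, so in runs where sample_event never
  // fires, the report still lists the unwanted bin as expected/uncovered.
  covergroup cg_iff @(posedge sample_event);
    CP : coverpoint my_var {
      bins b[] = {[0:1]};
      ignore_bins ig = {1} iff (cfg_disable == 1);
    }
  endgroup

  // Style from figure 2: the ternary is evaluated independently of
  // sampling. 2'b11 is a value no bin covers, so when cfg_disable == 0
  // no valid value is accidentally ignored.
  covergroup cg_ternary @(posedge sample_event);
    CP : coverpoint my_var {
      bins b[] = {[0:1]};
      ignore_bins ig = {(cfg_disable == 1) ? 2'b01 : 2'b11};
    }
  endgroup

  cg_iff     cg1 = new();
  cg_ternary cg2 = new();
endmodule
```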

Using detect_overlap

The coverage option called “detect_overlap” helps by issuing a warning if there is an overlap between the range lists (or transition lists) of two bins of a coverpoint. Whenever we have plenty of ranges to be covered and there is a possibility of overlap, it is important to use this option.

Why is it important and how can you be impacted if you don’t use it? You might actually end up with incorrect and unwanted coverage results!

Let’s look at an example. In the scenario above, if a value of 25 is generated, the coverage score reported would be 50% when the desired outcome would ideally have been 25%. This is because the value 25 contributes to two bins out of four when that was probably not wanted. The usage of ‘detect_overlap’ would have warned you about this, and you could have fixed the bins to make sure that such a scenario doesn’t occur.
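As the example figure is not reproduced here, this is an illustrative sketch (bin ranges hypothetical) of a coverpoint where two bins share values, with detect_overlap switched on:

```systemverilog
module tb;
  bit       clk;
  bit [5:0] addr;

  covergroup cg_ranges @(posedge clk);
    CP : coverpoint addr {
      option.detect_overlap = 1; // warns that 25..30 falls into two bins
      bins low  = {[0:19]};
      bins mid1 = {[20:30]};
      bins mid2 = {[25:40]};     // overlaps mid1: a sample of 25 hits both
      bins high = {[41:63]};
    }
  endgroup

  cg_ranges cg = new();
endmodule
```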

Coverage coding for crosses and assigning weight

What does the LRM (Table 19-1—Instance-specific coverage options) say about the ‘weight’ attribute?

“If set at the covergroup syntactic level, it specifies the weight of this covergroup instance for computing the overall instance coverage of the simulation. If set at the coverpoint (or cross) syntactic level, it specifies the weight of a coverpoint (or cross) for computing the instance coverage of the enclosing covergroup. The specified weight shall be a non-negative integral value.”

What kinds of surprises can a combination of cross and option.weight create?

The SystemVerilog LRM shows a very simple way of writing a cross. Let’s look at the code below.

The expectation here is that for a single simulation (expecting one of the bins to be hit), we will end up with 25% coverage, as we have set the weight of the individual coverpoints to zero. However, what essentially happens is the following: two internal coverpoints, for check_4_a and check_4_b, are generated, and these are used to compute the coverage score of the ‘crossed’ coverpoint here. So you end up having a total of four coverpoints, two of which have option.weight set to 0 (i.e. CHECK_A and CHECK_B) and two of which are coverpoints with option.weight left at 1 (i.e. check_4_a and check_4_b). Thus, for a single simulation, you will not get the 25% coverage desired.

Now with this report we see the following issues:

  • We see four coverpoints while the expectation was only two coverpoints.
  • The weights of the internally generated coverpoints are not zero, even though option.weight is set to ‘0’ on the labeled ones.
  • The overall coverage numbers are not what we desired.

In order to avoid the above undesired results, we need to take care of the following aspects:

  • Use type_option.weight = 0 instead of option.weight = 0.
  • Use the coverpoint labels instead of the coverpoint names to specify the cross.
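Putting both fixes together, a minimal sketch (the variable names check_4_a/check_4_b and labels CHECK_A/CHECK_B are taken from the text; everything else is hypothetical):

```systemverilog
module tb;
  bit       clk;
  bit [1:0] check_4_a, check_4_b;

  covergroup cg_cross @(posedge clk);
    CHECK_A : coverpoint check_4_a { type_option.weight = 0; }
    CHECK_B : coverpoint check_4_b { type_option.weight = 0; }
    // Crossing the labels reuses the existing coverpoints instead of
    // generating hidden internal coverpoints for check_4_a/check_4_b.
    A_X_B   : cross CHECK_A, CHECK_B;
  endgroup

  cg_cross cg = new();
endmodule
```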

Hope my findings will be useful for you and that you will use these options/attributes appropriately to get the best value out of your coverage metrics (without losing any sleep or debug cycles figuring out why they didn’t behave as you expected them to)!

Posted in Coding Style, Coverage, Metrics, Customization, Reuse, SystemVerilog, Tutorial | 9 Comments »

SNUG-2012 Verification Round Up – Language & Methodologies – I

Posted by paragg on 25th February 2013

As in the previous couple of years, last year’s SNUG (Synopsys Users Group) showcased an amazing number of useful user papers leveraging the capabilities of the SystemVerilog language and the verification methodologies centered on it.

I am always excited when I see this plethora of useful papers, and I try to ensure that I set aside some time to go through all these user experiences. Now, as we wait for SNUG Silicon Valley to kick-start the SNUG events for this year, I want to look back at some of the very interesting and useful papers from the different SNUGs of 2012. Let me start by talking about a few papers in the area of the SystemVerilog language and SV methodologies.

Papers leveraging the SystemVerilog language and constructs

Hillel Miller of Freescale, in the paper “Using covergroups and covergroup filters for effective functional coverage”, uncovers the mechanisms available for carving out coverage goals. The P1800-2012 version of the SystemVerilog LRM provides new constructs just for doing this. The construct that is focused on is the “with” construct, which provides the ability to carve a sub-range of goals out of a multidimensional range of possibilities. This is very relevant in a “working” or under-development setup that requires frequent reprioritization to meet tape-out goals.
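To give a flavor of the construct the paper focuses on, here is a minimal sketch of the 1800-2012 “with” bin filter (signal names and ranges hypothetical):

```systemverilog
module tb;
  bit       clk;
  bit [7:0] data;

  covergroup cg_filter @(posedge clk);
    CP : coverpoint data {
      // Carve a sub-range of goals out of the full value space:
      // only even values are kept as bins.
      bins evens[] = {[0:255]} with (item % 2 == 0);
    }
  endgroup

  cg_filter cg = new();
endmodule
```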

The paper “Taming Testbench Timing: Time’s Up for Clocking Block Confusions” by Jonathan Bromley and Kevin Johnston of Verilab reviews the key features and purpose of clocking blocks and then examines why they continue to be a source of confusion and unexpected behavior for many verification engineers. Drawing from the authors’ project and mentoring experience, it highlights typical usage errors and how to avoid them. They clarify the internal behavior of clocking blocks to help engineers understand the reasons behind common problems, and show techniques that allow clocking blocks to be used productively and with confidence. Finally, they consider some areas that may cause portability problems across simulators and indicate how to avoid them.

Inference of latches and flops based on coding styles has always been a topic that creates multiple viewpoints. There are other such scenarios of synthesis and simulation mismatches that one typically comes across. To address all such ambiguity, language developers have provided different constructs that allow an explicit resolution based on the intent. To help us gain a deeper understanding of the topic, Don Mills of Microchip Technology Inc. presented the related concepts in the paper “Yet Another Latch and Gotchas Paper” @ SNUG Silicon Valley. This paper discusses and provides solutions to issues that designers using SystemVerilog for design come across, such as: case expression issues for casez and casex, latches generated when using unique case or priority case, SRFF coding style problems with synthesis, and the SystemVerilog 2009 definition of logic.

Gabi Glasser from Intel presented the paper “Utilizing SystemVerilog for Mixed-Signal Validation” @ SNUG Israel, where he proposed a mechanism for simplifying analysis and increasing coverage for mixed-signal simulations. The method proposed was to take advantage of SystemVerilog’s ability to define a hash (associative) array of unlimited size. During the simulation, vectors are created for the required analog signals, allowing them to be analyzed within the testbench during or at the end of the simulation, without requiring these signals to be saved to a file. The flow change makes it possible to launch a large-scale mixed-signal regression while allowing easier analysis of coverage data.

A design pattern is a general, reusable solution to a commonly recurring problem within a given context. The benefit of using design patterns is clear: they give designers a common language when approaching a problem, and a widely used set of tools to solve issues as they come up. The paper “Design Patterns In Verification” by Guy Levenbroun of Qualcomm explores several common problems which might arise during the development of a testbench, and how we can use design patterns to solve them. The patterns covered in the paper fall into three major areas: creational (e.g. factory), structural (e.g. composite) and behavioral (e.g. template).

Arik Shmayovitsh, Avishay Tvila and Guy Lidor of Sigma Designs, in their paper “Truly reusable Testbench-to-RTL connection for SystemVerilog”, present a novel approach of connecting the DUT and testbench using consistent semantics while reusing the testbench. This is achieved by abstracting the connection layer of each testbench using the SystemVerilog ‘bind’ construct. This ensures that the only thing required to reuse the testbench for a new DUT is to identify the instance of the corresponding DUT.

In the paper “A Mechanism for Hierarchical Reuse of Interface Bindings”, Thomas Zboril of Qualcomm (Canada) explores another method to instantiate SV interfaces, connect them to the DUT and wrap the virtual interfaces for use in the test environment. This method allows the reuse of all the code when the original block-level DUT becomes a lower-level instance in a larger subsystem or chip. The method involves three key mechanisms: hierarchical virtual interface wrappers, a novel approach of using hierarchical instantiation of SV interfaces, and a novel approach of automatic management of hierarchical references via SV macros.

Thinh Ngo & Sakar Jain of Freescale Semiconductor, in their paper “100% Functional Coverage-Driven Verification Flow”, propose a coverage-driven verification flow that can efficiently achieve 100% functional coverage during simulation. The flow targets varied functionality, focuses at the transaction level, measures coverage during simulation, and fails a test if 100% of the expected coverage is not achieved. This flow maps stimulus coverage to functional coverage, with every stimulus transaction being associated with an event in the coverage model and vice versa. This association is derived from the DUT specification and/or the DUT model. Expected events generated along with stimulus transactions are compared against actual events triggered in the DUT. The comparison results are used to pass or fail the test. 100% functional coverage is achieved via 100% stimulus coverage. The flow enables every test with its targeted functionality to meet 100% functional coverage provided that it passes.

Papers on Verification Methodology

In the paper “Top-down vs. bottom-up verification methodology for complex ASICs”, Paul Lungu & Zygmunt Pasturczyk of Ciena (Canada) cover the simulation methodology used for two large ASICs requiring block-level simulations. A top-down verification methodology was used for one of the ASICs, while a larger version needed an expanded bottom-up approach using extended simulation capabilities. Some techniques and verification methods, such as chaining of sub-environments from block to top level, are highlighted along with challenges and solutions found by the verification team. The paper presents a useful technique of passing a RAL (Register Abstraction Layer) mirror to the C models which are used as scoreboards in the environment. The paper also presents a method of generating stable clocks inside the “program” block.

The paper “Integration of Legacy Verilog BFMs and VMM VIP in UVM using Abstract Classes” by Santosh Sarma of Wipro Technologies (India) presents an alternative approach where legacy BFMs written in Verilog, and not implemented using classes, are hooked up to higher-level class-based components to create a standard UVM VIP structure. The paper also discusses an approach where existing VMM transactors that are tied to such legacy BFMs can be reused inside the UVM VIP with the help of the VCS-provided UVM-VMM Interoperability Library. The implementation makes use of abstract classes to define functions that invoke the BFM APIs. The abstract class is then concretized using derived classes which give the actual implementation of the functions in the abstract class. The concrete class is then bound to the Verilog instance of the BFM using the SystemVerilog “bind” concept. The concrete class handle is then used by the UVM VIP and the VMM transactor to interact with the underlying Verilog BFM. Using this approach, the UVM VIP can be made truly reusable by using run-time binding of the Verilog BFM instance to the VIP instead of using hardcoded macro names or procedural calls.
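The paper’s actual implementation is not reproduced here, but the general abstract-class/bind pattern it describes can be sketched roughly as follows, with all names (bfm_api, legacy_bfm, legacy_write, the config-db key) hypothetical:

```systemverilog
// Abstract API that the class-based UVM environment compiles against.
virtual class bfm_api;
  pure virtual task do_write(bit [31:0] addr, bit [31:0] data);
endclass

// Module bound into the legacy BFM. Because the bound instance lives
// inside the BFM, the concrete class can reach the BFM's tasks.
module bfm_probe;
  import uvm_pkg::*;

  class bfm_api_impl extends bfm_api;
    task do_write(bit [31:0] addr, bit [31:0] data);
      legacy_write(addr, data); // resolves upward to the BFM's task
    endtask
  endclass

  bfm_api_impl impl = new();
  // Publish the handle so the UVM VIP can retrieve it at run time.
  initial uvm_config_db#(bfm_api)::set(null, "*", "legacy_bfm_api", impl);
endmodule

// Run-time binding of the probe into every instance of the legacy BFM.
bind legacy_bfm bfm_probe probe_inst();
```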

“A Unified Self-Check Infrastructure - A Standardized Approach for Creating the Self-Check Block of Any Verification Environment” by John Sotiropoulos, Matt Muresa and Massi Corba of Draper Laboratories, Cambridge, MA, USA presents a structured approach for developing a centralized “Self-Check” block for a verification environment. The approach is flexible enough to work with various testbench architectures and is portable across different verification methodologies. Here, all of the design’s responses are encapsulated under a common base class, providing a single “Self-Check” interface for any checking that needs to be performed. This abstraction, combined with a single centralized scoreboard and a standardized set of components, provides the consistency needed for faster development and easier code maintenance. It expands the concept of ‘self-check’ to incorporate white-box monitors (tracking internal DUT state changes, etc.) and temporal models (reacting to wire changes) along with traditional methodologies for enabling self-checking.

For VMM users looking at migrating to UVM, there is another paper, from Courtney Schmitt of Analog Devices, Inc.: “Transitioning to UVM from VMM” discusses the process of transitioning to a UVM-based environment from VMM. Differences and parallels between the two verification methodologies are presented to show that updating to UVM is mostly a matter of getting acquainted with a new set of base classes. Topics include UVM phases, agents, TLM ports, configuration, sequences, and register models. Best practices and reference resources are highlighted to make the transition from VMM to UVM as painless as possible.

Posted in Announcements, Coverage, Metrics, Creating tests, Customization, Modeling, Optimization/Performance, Reuse, SystemVerilog, UVM, Uncategorized, VMM, VMM infrastructure | 3 Comments »

Webinar on Deploying UVM Effectively

Posted by Shankar Hemmady on 22nd October 2012

Over the past two years, several design and verification teams have begun using SystemVerilog testbenches with UVM. They are moving to SystemVerilog because coverage, assertions and object-oriented programming concepts like inheritance and polymorphism allow them to reuse code much more efficiently. This helps them find not only the bugs they expect, but also corner-case issues. Building testing frameworks that randomly exercise the stimulus state space of a design-under-test and analyzing completion through coverage metrics seems to be the most effective way to validate a large chip. UVM offers a standard method for abstraction, automation, encapsulation, and coding practice, allowing teams to quickly build effective, reusable testbenches that can be leveraged throughout their organizations.

However, for all of its value, UVM deployment has unique challenges, particularly in the realm of debugging. Some of these challenges are:

  • Phase management: objections and synchronization
  • Thread debugging
  • Tracing issues through automatically generated code, macro expansion, and parameterized classes
  • Default error messages that are verbose but often imprecise
  • Extended classes with methods that have implicit (and maybe unexpected) behavior
  • Object IDs that are distinct from object handles
  • Visualization of dynamic types and ephemeral classes

Debugging even simple issues can be an arduous task without UVM-aware tools. Here is a public webinar that reviews how to utilize VCS and DVE to most effectively deploy, debug and optimize UVM testbenches.

Link at

Posted in Coverage, Metrics, Debug, UVM | Comments Off

Build your own code generator!

Posted by Amit Sharma on 25th June 2012

One of the critical problems developers cope with during the SoC development process (architecture plan, RTL design, verification, software development) is the constant need to synchronize between different forms of the same data structure: the SoC registers database. The SoC registers database can be found with the SoC architecture team (who write the SoC registers description document), design engineers (who implement the registers structure in RTL code), verification engineers (who write the verification infrastructure, such as RAL code, and verification tests, such as exhaustive read/write tests of all registers), and software engineers (who use the registers information to write embedded software).

Since the same information is used in different forms, it is only natural to have a single, central database which holds all SoC registers data. You would ideally like to generate all the required output files (documentation, UVM or VMM Register Abstraction Model, RTL, C headers, …) from this central database. Different vendors and CAD teams provide different automation solutions for doing this.

The RALF specification contains all of the necessary information to generate RTL and testbench code that implements the specified registers. There are many ways to code and implement RTL, so it is not possible to provide a general-purpose solution. As far as the testbench abstraction model is concerned, there are multiple ways of customizing your model post-generation in both UVM and VMM. Callbacks, factories and configurable RAL model attributes are some of the ways through which the desired customization can be brought in. “The ‘user’ in RALF: get ralgen to generate ‘your’ code” highlights a very convenient way of bringing in SystemVerilog-compatible code which will be copied as-is into the RAL model in the desired scope. When it comes down to generating the RTL and the ‘C’ headers, we cannot leave the customization to such a late stage. Also, different organizations and project groups have their own RTL and C-code coding styles, which means a generated output of a very generic nature might not be very helpful. For RTL generation, engineers would want the generated code to be power- and gate-count-efficient. Similarly, generated C register headers often need to follow coding styles and match the CPU firmware API. How do we bring all these customizations to the end user?

Using the RALF C++ API, you have full access to the parsed RALF data (through C++ routines), which you can use to implement a customized RTL code generator, or any other feature that needs RALF data. So you can use it to generate your C header files or HTML documentation, to translate the input RALF files to another register description format, or to create custom covergroups and coverage convergence sequences (DAC 2012 User Track poster 6U.8 — Register Verification on a Fast Lane: Using Automation to Converge on UVM REG Coverage Models).

I have seen two instances of the need to generate a different register specification in the recent past, and that is one of the reasons I decided to put this down in a blog. Let me talk about the first instance.

One of the project groups was in the process of migrating from their own SV base classes to UVM. They had their own register description format from which they used to generate their register abstraction model. This was a proven flow.

So, when they migrated to UVM, they wanted a flow which would validate the changes they were making.

Given that they were moving to RALF and ‘ralgen’, they didn’t want to create register specifications in the legacy format anymore, so they wanted some automation for generating specifications in the earlier format. How did they go about doing this? They took the RALF C++ APIs and, using them, were able to create the necessary automation to generate the legacy format from RALF in no time (from what I remember, it was half a day’s work). Everyone was involved in doing what they were best at, and that helped in the overall scheme of things.

The other customer had their own format from which they were generating RTL, firmware code and HTML. They had the necessary automation to create RALF for generating the UVM register model. They also had a mechanism in place to generate IPXACT from this format, as well as vice versa. So, to complete the traceability matrix, they wanted a RALF-to-IPXACT conversion. Again, the most logical approach was to take the RALF C++ APIs and use them to iterate through the parsed RALF data and generate IPXACT. Though this effort is not complete, it took just a day or so to be able to generate a valid IPXACT 1.5 schema, and all that is required now is some additional work to smooth out the corners.

How do you start using these APIs and build your own code/HTML generators? You need to include “ralf.hpp” (which is in $VCS_HOME/include) in your ‘generator’ code. Then, to compile the code, you need to pick up the shared library from the VCS installation:

$CPP_COMPILER $CFLAGS -I${VCS_HOME}/include -L${VCS_HOME}/lib -lralf your-file.cpp $LDFLAGS

#include "ralf.hpp"

int main(int argc, char *argv[])
{
  // Check basic command line usage...
  if (argc < 3) {
    fprintf(stderr, "Error: Wrong Usage.\n");
    // Show correct usage ...
    return 1;
  }

  /* Parse command line arguments to get the essential
   * constructor arguments. See documentation
   * of class ralf::vmm_ralf's constructor parameters. */

  /* Create a ralf::vmm_ralf object by passing in proper
   * constructor arguments. */
  ralf::vmm_ralf ralf_data(ralf_file, top, rtl_dir, /* ... */);

  /* Get the top level object storing the parsed RALF
   * block/system data and traverse that, top-down, to get
   * access to complete RALF data. */
  const ralf::vmm_ralf_blk_or_sys_typ *top_lvl_blk_or_sys
    = ralf_data.getTopLevelBlockOrSys();

  /* TODO - Traverse the parsed RALF data structure top-down
   * using/starting-from 'top_lvl_blk_or_sys' for getting
   * complete access to the RALF data and then do whatever
   * you would want to do with the parsed RALF data. One
   * typical usage of parsed RALF data could be to generate
   * RTL code in your own style. */
  // TODO - Add your RTL generator code here.

  /* As part of this library, Synopsys also provides a
   * default RTL generator, which can be invoked by
   * invoking the 'generateRTL()' method of the
   * 'ralf::vmm_ralf' class, as demonstrated below. */
  ralf_data.generateRTL();
}

Essentially, you have a handle to the parsed database, and with the available APIs you can do whatever you want with it :) The documentation of the APIs is shipped with the VCS installation. Also, if you are like me and would rather hack away at existing code than start with something from scratch, you can just check with Synopsys support for existing templates that dump out code in a specific format, and start modifying those for your requirements.

Posted in Automation, Coverage, Metrics, Customization, Interoperability, Register Abstraction Model with RAL, Tools & 3rd Party interfaces, UVM, VMM infrastructure | Comments Off

Why do we need an integrated coverage database for simulation and formal analysis?

Posted by Shankar Hemmady on 23rd January 2012

Closing the coverage gap has been a long-standing challenge in simulation-based verification, resulting in unpredictable delays while achieving functional closure. Formal analysis is a big help here. However, most of the verification metrics that give confidence to a design team are still governed by directed and constrained random simulation. This article describes a methodology that embraces formal analysis along with dynamic verification approaches to automate functional convergence:

I would love to learn what you do to attain functional closure.

Posted in Coverage, Metrics, Formal Analysis, Verification Planning & Management | Comments Off

Auto-Generation of Performance Charts with the VMM Performance Analyzer

Posted by Amit Sharma on 30th September 2011

Amit Sharma, Synopsys

Quoting from one of Janick’s earlier blogs on the VMM Performance Analyzer, Analyzing results of the Performance Analyzer with Excel:
“The VMM Performance Analyzer stores its performance-related data in a SQL database. SQL was chosen because it is an ANSI/ISO standard with a multiplicity of implementations, from powerful enterprise systems like Oracle, to open source versions like MySQL, to simple file-based ones like SQLite. SQL offers the power of a rich query and analysis language to generate the reports that are relevant to your application.”

And given that not everyone understands SQL, he goes on to show how one can get VMM Performance Analyzer data from an SQLite database into an Excel spreadsheet and then analyze the data, doing any additional computation and creating the appropriate graphs. This involves a set of steps leveraging the SQLite ODBC (Open Database Connectivity) driver, and thus requires its installation.

This article presents a mechanism in which TCL scripts bring in the next level of automation, so that users can retrieve the required data from the SQL DB and even automate the results analysis by auto-generating the relevant performance charts for statistical analysis. Also, as users migrate to DVE as a single platform for their design debug, coverage analysis and verification planning, it is shown how these scripts can be integrated into DVE, so that chart generation is a matter of using the relevant menus and clicking the appropriate buttons in DVE.

For generating the SQL databases with the VMM Performance Analyzer, an SQLite installation is required, which can be obtained from the SQLite website. Once you have installed it, you need to set the SQLITE3_HOME environment variable to the path where it is installed. Once that is done, these are the steps to follow to generate the appropriate graphs out of the data generated by your batch regression runs.

First, you need to download the utility from the link provided in the article “DVE TCL Utility for Auto-Generation of Performance Charts”.

Once it is extracted, you can try it on the tl_bus example that ships with the utility. You would need to go to the directory vmm_perf_utility/tl_bus.

Use make to run the tl_bus example, which will generate sql_data.db and sql_data.sql. Now, go to the ‘chart_utility’ directory:

(cd vmm_perf_utility/chart_utility/)

The TCL scripts which are involved in the automation of the performance charts are in this directory.

The script vmm_perf_utility/chart_utility/chart.tcl can then be executed from inside DVE as shown below.


Once that is done, it will add a button “Generate Chart” to the View menu. BTW, adding a button is fairly simple; for example, this is how the button gets added:

eg:    gui_set_hotkey -menu "View->Generate Chart" -hot_key "G"

Now, click on “Generate Chart” to select the SQL database.


This will bring up the dialog box to select the SQL database.


Once the appropriate database is selected, the user can select which table to work with and then generate the appropriate charts. The options presented to the user are based on the data that is dumped into the SQL database. From the combinations of charts shown, select the graph that you want to generate, and the required graphs will be generated for you. This is what you see when you use the SQL DB generated for the tl_bus example.



Once you have made the selections, you will see the following chart generated.


Now, obviously, you as a user would not just want the graphs to be generated; you would also want the underlying values to be available to you.

Thus, once you use this chart generation mechanism, the relevant .csv files corresponding to the graphs you have generated are also dumped for you.

These are generated in a perfReports directory that is created as well, so you can do any additional custom computation in Excel or by running your own scripts. To generate the graphs for any other example, you just need to pick up the appropriate SQL DB generated from your simulation runs and then generate the reports and graphs of your interest.

So whether you use the Performance Analyzer in VMM (“Performance and statistical analysis from HDL simulations using the VMM Performance Analyzer”) or in UVM (“Using the VMM Performance Analyzer in a UVM Environment”), and even while you are doing your own PA customizations (“Performance appraisal time – Getting the analyzer to give more feedback”), you can easily generate whatever charts you require, which will help you analyze all the different performance aspects of the design you are verifying.

Posted in Automation, Coverage, Metrics, Customization, Performance Analyzer, Tools & 3rd Party interfaces | Comments Off

Closed Loop Register Verification using IDesignSpec and the Register Abstraction Layer

Posted by Amit Sharma on 26th September 2011

Nitin Ahuja, Agnisys Technology Pvt. Ltd

In the previous article, titled “Automatic generation of Register Model for VMM using IDesignSpecTM”, we discussed how it is advantageous to use a register model generator such as IDesignSpecTM to automate the process of RALF model generation. Taking it forward, in this article we will discuss how to close the loop on register verification.

Various forms of coverage are used to ensure that registers are functioning properly. There are three coverage models in VMM. They are:

1. reg_bits coverage: this model is used to make sure that all the bits in the register are covered. This model works by writing and reading both 1 and 0 on every register bit, hence the name. This is specified using “cover +b” in the RALF model.

2. field_vals coverage: the field value coverage model is implemented at the register level and supports value coverage of all fields and cross coverage between fields and other cross coverage points within the same register. This is specified using “cover +f” in the RALF model. Users can specify the cross coverage depending on the functionality.

3. Address map: this coverage model is implemented at block level and ensures that all registers and the memories in the block have been read from and written to. This is specified using “cover +a” in the RALF model.
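As a rough illustration, the three directives might appear together in a RALF description like the sketch below. The block, register and field names here are invented, and the exact RALF grammar should be checked against the ralgen documentation; only the placement of the “cover” directives follows the descriptions above.

```
block my_blk {
  bytes 4;
  register ctrl @'h0 {
    bytes 4;
    field mode @0 {
      bits 2;
      access rw;
      cover +f     # field value coverage on this field
    }
    cover +b       # reg_bits coverage on this register
  }
  cover +a         # address map coverage at the block level
}
```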

We will discuss how coverage can be switched on/off and how the type of coverage can be controlled for each field directly from the register specification.

Once the RALF model is generated, the next step in verification is to generate the RTL and the SystemVerilog RAL model using ‘ralgen’. The generated RAL model along with the RTL can be compiled and simulated in the VMM environment to generate the coverage database. This database is used for the report generation and analysis.

Reports can be generated using IDesignSpec™ (IDS). IDS-generated reports have an advantage over other reports in that they present all the coverage concisely, at a glance.

Turning Coverage ON or OFF

IDesignSpec™ enables users to turn all three types of coverage ON or OFF from within the MS Word specification itself.

Coverage can be specified and controlled using the “coverage” property in IDesignSpec™, which can take the following values:


The hierarchical “coverage” property enables users to control coverage at the whole-block or chip level.

Here is a sample of how coverage can be specified in IDesignSpec™:



This would be the corresponding RALF file:



The coverage bins for each CoverPoint along with the cross for the various CoverPoints can also be defined in the specification as shown below:


This would translate to the following RALF:


Now, the next step after RALF generation would be to generate the RAL Model from the IDS generated RALF.


The IDS generated RALF can be used with the Synopsys ‘ralgen’ to generate the RAL  (VMM or UVM) model as well as the RTL.

RAL model can be generated by using the following command:


If you specify -uvm in the first ralgen invocation above, a UVM register model is generated.


Once the RTL and the RAL model are generated using the ‘ralgen’, the complete model can be compiled and simulated in the VMM environment using VCS.

To compile the model use the following command on the command line:

vcs -R +plusarg_save -sverilog -o "simv1" -ntb_opts rvm+dtm +incdir+<directories to search `defines> <files to be compiled> +define+RAL_COVERAGE

The compilation and simulation generates the simulation database which is used for the generation of the coverage reports.

Coverage reports can be generated in various forms, but the most concise is a graphical view showing all the coverage at a glance. For this, the tcl script “ivs_simif.tcl” takes the simulation database and generates a text-based report on execution of the following command:

% ivs_simif.tcl -in simv.vdb -svg

Before running the above command, set the environment variable “IDS_SIM_DIR”; the text reports are generated at this location. This also tells IDS where to look for the simulation data file.
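For example, using csh-style syntax (the report directory path here is just a placeholder):

```
% setenv IDS_SIM_DIR /path/to/perf/reports
% ivs_simif.tcl -in simv.vdb -svg
```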

A detailed graphical view of the report can be generated from IDS with the help of this text report. To generate the graphical report in the form of “scalable vector graphics” (SVG) select the “SVG” output from the IDS config and regenerate.

Another way of generating the SVG is to use the IDS-XML or the Doc/Docx specification of the model as input to IDS in batch mode, using the following command:

% idsbatch <IDS_generated_XML or doc/docx specification> -out “svg” -dir output_directory

Coverage Reports

IDesignSpec generates two types of reports from the input database.

They are:

1. Field_vals report

2. Reg_bits report

Field_vals report:

Field_vals report gives the graphical view of the field_vals coverage and the address coverage of the various registers and their respective fields.

The amount of coverage for a field (CoverPoint) is depicted by the level of green color in the field, while that for the complete register (CoverGroup) is shown by the color of the register name.

The address coverage for the individual register (CoverPoint) is shown by the color of the address of the register (green if addressed; black if not addressed), while that of the entire block (CoverGroup) is shown by the color of the name of the block.

The coloring scheme for all the CoverGroups (i.e. the register name in the case of field_vals coverage and the block name in the case of address coverage) is:

1. If the overall coverage is greater than or equal to 80% then the name appears in GREEN color

2. If the coverage is greater than 70% but less than 80% then it appears in YELLOW

3. For coverage less than 70% name appears in RED color

Figure 1 shows the field_vals and address coverage.


Figure:  Closed loop register verification using RALF and IDS

The above sample gives the following coverage information:

a. 2 registers, T and resetvalue, out of a total of 9 registers are not addressed. Thus the overall coverage of the block falls in the range between 70% and 80%, which is depicted by the color of Stopwatch (the name of the block).

b. All the fields of the registers are filled with some amount of green, which shows the amount of coverage. For example, field T1 of register arr is covered 100% and is therefore completely filled, while FLD4 of register X is covered only about 10%. The exact coverage value can be obtained by hovering over the field to see a tooltip.

c. The color of a register name shows the overall coverage of the whole register; for example, X appears in red because its coverage is less than 70%.

Reg_bits report:

Reg_bits report gives the detailed graphical view of the reg_bits coverage and address coverage.

Address coverage for reg_bits is shown in the same way as the address coverage in field_vals. Reg_bits coverage has 4 components:

1. Written as 1

2. Read as 1

3. Written as 0

4. Read as 0

Each of the 4 components is allocated a specific region inside a bit. If that component of the coverage is hit, the corresponding region is shown in green; otherwise it is blank. The overall coverage of the entire register is shown by the color of the register name, as in the case of field_vals.


The above sample report shows that there is no issue in “Read as 1” for the ‘resetvalue’ register, while the other read/write components have not been hit completely.

Thus, in this article we described the various coverage models for registers and how to generate the RALF coverage model of the registers automatically with minimum effort. An intuitive visualization of the register coverage data eases the effort involved in deciphering coverage reports from lengthy simulation log files. This type of closed-loop register verification ensures better coverage and high-quality results in less time. Hope you found this useful. Do share your feedback with me, and let me know if you want any additional details to get the maximum benefit from this flow.

Posted in Automation, Coverage, Metrics, Organization, Register Abstraction Model with RAL, Tools & 3rd Party interfaces, Verification Planning & Management | 1 Comment »

Functional Coverage Driven VMM Verification scalable to 40G/100G Technology

Posted by Shankar Hemmady on 9th September 2011

Amit Baranwal, ASIC Design Verification Engineer, ViaSat

As data rate increases to 100 Gbps and beyond, optical links suffer severely from various impairments in the optical channel, such as chromatic dispersion, polarization-mode dispersion etc. Traditional optical compensation techniques are expensive and complex. ViaSat has developed DSP IP cores for coherent, differential, burst and continuous, high data rate networks. These cores can be customized according to system requirements.

One of the inherent problems with verifying communication applications is that there is a large amount of information arranged over space and time, generally dealt with using Fourier transforms, equalization and other DSP techniques. Thus, we needed to come up with interesting stimulus to match these complex equations and exercise the full design. With horizontal and vertical polarization (four I and Q streams running at 128 samples per cycle), there was a high level of parallelism to deal with. To address these challenges, we decided to go with a constrained-random, self-checking testbench environment using SystemVerilog and VMM. We extensively used the reusability, direct programming interface and scalability features, along with various interesting coverage techniques, to minimize our effort and meet the aggressive deadlines of the project. Our system model was bit- and cycle-accurate, developed in C. Class configurations were used to allow different behaviors, such as sampling the output at every cycle vs. valid cycles only. Parameterized VMM data classes were used for the control signals and feedback path, which required parameterized generators, drivers, monitors and scoreboards so that they could be scaled as required to match different filter designs and specifications.

Code and functional coverage were used as benchmarks to gauge the completeness of verification. We used many useful SystemVerilog constructs in our functional coverage model, like ‘ignore_bins’ to remove unwanted sets (helping us avoid overhead effort), ‘illegal_bins’ to catch error conditions, the ‘intersect’ keyword, etc. Here is an example:

data_valid_H_trans: coverpoint ifc_data_valid.data_valid_H {

bins valid_1_285 = (0 => 1[*1:285] => 0);

illegal_bins valid_286 = (0 => 1[*286]);

bins one_invalid = (1 => 0 => 1);

illegal_bins two_invalid = (1 => 0[*2:5] => 1);

}


The coverpoint “data_valid_H_trans” covers the signal ‘data_valid_H’, which should never be asserted for 286 or more consecutive cycles. Also, data_valid_H should never be low for two consecutive data cycles. These are interesting scenarios and can be found in many designs under test where two blocks depend on each other and a data input/output rate needs to be met to maintain data integrity between the blocks; otherwise the data might overflow/underflow or induce other errors. In such situations, an illegal bin can be used to continuously check the condition throughout the simulation. An illegal bin keeps an eye on this condition, and if it ever occurs, VCS flags a runtime error.

Another interesting feature we found was the capability to merge different coverage reports using flexible merging. As we move along a project, we might have to modify our covergroups for various reasons, such as system specification changes or signal renames. If we have a saved database of vdb files from previous simulations and run the urg command to create a coverage report, we will find multiple covergroups with the same name in the new report, which can make it very confusing to identify the covergroups of interest. This corrupts our previous efforts and coverage report, and leads to more engineering effort and resource usage. To counter this problem, flexible merging can be used.

To enable flexible merging, the -group flex_merge_drop option is passed to the urg command.

urg -dir simv1.vdb -dir simv2.vdb -group flex_merge_drop

Note: URG assumes the first specified coverage database as a reference for flexible merging.

This feature is available only for covergroup coverage and is very useful when the coverage model is still evolving and minor changes in the coverage model between test runs might be required. To merge two coverpoints, they need to be merge equivalent. The requirements for merge equivalence are as follows:

1. For User defined coverpoints:

Coverpoint C1 is merge equivalent to coverpoint C2 only if the coverpoint names and widths are the same.

2. For Autobin Coverpoints:

Coverpoint C1 is merge equivalent to coverpoint C2 only if the name, auto_bin_max and width are the same.

3. For Cross coverpoints:

Cross coverpoint C1 is merge equivalent to cross coverpoint C2 only if the crosses have the same number of coverpoints.

If the coverpoints are merge equivalent, the merged coverpoint will contain a union of all the bins from the different tests. If they are not merge equivalent, the merged coverpoint will contain only the bins from the most recent test run, and older test run data is not considered.
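To make the rules concrete, here is a sketch of a user-defined covergroup evolving between two runs (the names and the transaction handle ‘tr’ are invented for illustration). Because the coverpoint ‘blen’ keeps the same name and width, the two databases are merge equivalent:

```systemverilog
// Run 1: original coverage model
covergroup burst_cg;
  blen: coverpoint tr.burst_len { bins short_b = {[1:4]}; }
endgroup

// Run 2: model evolved between test runs -- same coverpoint
// name and width, so still merge equivalent
covergroup burst_cg;
  blen: coverpoint tr.burst_len { bins short_b = {[1:4]};
                                  bins long_b  = {[5:16]}; }
endgroup
```

Merging the two databases with “urg -dir run1.vdb -dir run2.vdb -group flex_merge_drop” would then report ‘blen’ with the union of the short_b and long_b bins.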

SystemVerilog and VMM methodology features were very helpful in achieving our verification goals, giving us a robust verification environment that was productive and reusable over the course of the project. Moreover, it gave us a head start on our next project’s verification efforts. For more details, please refer to the paper I presented at SNUG San Jose 2011, “Functional Coverage Driven VMM Verification scalable to 40G/100G Technology”.

Posted in Coverage, Metrics | 1 Comment »

Using the VMM Performance Analyzer in a UVM Environment

Posted by Amit Sharma on 23rd August 2011

As a generic VMM package, the Performance Analyzer (PAN) is not based on, nor does it require, specific shared resources, transactions or hardware structures. It can be used to collect statistical coverage metrics relating to the utilization of a specific shared resource. This package helps to measure and analyze many different performance aspects of a design. UVM doesn’t have a performance analyzer as part of the base class library as of now. Given that the collection/tracking and analysis of performance metrics of a design has become a key checkpoint in today’s verification, there is a lot of value in integrating the VMM Performance Analyzer in a UVM testbench. To demonstrate this, we will use both VMM and UVM base classes in the same simulation.

Performance is analyzed based on user-defined atomic resource utilization called ‘tenures’. A tenure refers to any activity on a shared resource with a well-defined starting and ending point. A tenure is uniquely identified by an automatically-assigned identifier. We take the XBUS example in  $VCS_HOME/doc/examples/uvm_1.0/simple/xbus as a demo vehicle for the UVM environment.

Step 1: Defining data collection

Data is collected for each resource in a separate instance of the “vmm_perf_analyzer” class. These instances should be allocated in the build phase of the top level environment.

For example, in
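A minimal sketch of this allocation is shown below. The testbench class, instance names and database name are illustrative, and the two-argument name/database form of the vmm_perf_analyzer constructor is an assumption to be checked against the VMM PA documentation:

```systemverilog
class xbus_demo_tb extends uvm_env;
  vmm_sql_db_ascii  perf_db;   // text-file SQL database backing the statistics
  vmm_perf_analyzer bus_perf;  // one analyzer instance per shared resource

  virtual function void build_phase(uvm_phase phase);
    super.build_phase(phase);
    // Allocate the database and the analyzer for the XBUS transfers
    perf_db  = new("xbus_perf_db");
    bus_perf = new("xbus_perf", perf_db);
  endfunction
endclass
```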


Step 2: Defining the tenure, and enable data collection

There must be one instance of the “vmm_perf_tenure” class for each operation that is performed on the shared resource. Tenures are associated with the instance of the “vmm_perf_analyzer” class that corresponds to the resource operated on. In the case of the XBUS example, let’s say we want to measure transaction throughput performance (i.e. for the XBUS transfers). This is how we associate a tenure with the XBUS transaction. To denote the starting and ending of the tenure, we define two additional events in the XBUS master driver: ‘started’ and ‘ended’. ‘started’ is triggered when the driver obtains a transaction from the sequencer, and ‘ended’ once the transaction has been driven on the bus and the driver is about to call seq_item_port.item_done(rsp). When ‘started’ is triggered, a callback is invoked to get the PAN to start collecting statistics. Here is the relevant code.
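The shape of this arrangement can be sketched as follows. This is a simplification, not the original example code: the task and method names around the two events are assumptions based on the description above.

```systemverilog
class xbus_master_driver extends uvm_driver #(xbus_transfer);
  event started, ended;  // tenure start/end markers for the PAN callback

  virtual task get_and_drive();
    forever begin
      seq_item_port.get_next_item(req);
      -> started;              // transaction obtained: tenure begins,
                               // callback asks the PAN to start collecting
      drive_transfer(req);     // drive the transfer on the bus
      -> ended;                // transfer complete: tenure ends
      seq_item_port.item_done();
    end
  endtask
endclass
```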


Now, the Performance Analyzer works on classes extended from vmm_data and uses the base class functionality for starting/stopping these tenures. Hence, the callback task triggered at the appropriate points has to convert the UVM transactions to corresponding VMM ones. This is how it is done.

Step 2.a: Creating the VMM counterpart of the XBUS Transfer Class


Step 2.b: Using the UVM Callback for starting/stopping data collection and calling the UVM -> VMM conversion routines appropriately.


The callback class needs to be associated with the driver as follows in the top testbench (xbus_demo_tb):


Step 3: Generating the Reports

In the report_ph of xbus_demo_tb, save and write out the appropriate databases:


Step 4: Run the simulation, and analyze the reports for possible inefficiencies etc.

Use -ntb_opts uvm-1.0+rvm +define+UVM_ON_TOP with VCS

Include the new files in the compile file list. The following table shows the text report at the end of the simulation.


You can generate the SQL databases as well, and typically you would do this across multiple simulations. Once you have done that, you can create custom queries to get the desired information out of the SQL database across your regression runs. You can also analyze the results and generate the required graphs in Excel. Please see the following post: Analyzing results of the Performance Analyzer with Excel.

So there you go: the VMM Performance Analyzer can fit into any verification environment you have. So make sure that you leverage this package to make the RTL-level performance measurements needed to validate micro-architectural and architectural assumptions, as well as to tune the RTL for optimal performance.

Posted in Coverage, Metrics, Interoperability, Optimization/Performance, Performance Analyzer, VMM infrastructure, Verification Planning & Management | 6 Comments »

Building & Configuring Coverage Model – VMM Style – Part-III

Posted by paragg on 25th June 2011

Parag Goel, Senior Corporate Application Engineer, Synopsys

In the final blog of this coverage modeling with VMM series, we focus on error coverage. Negative scenario testing is an integral part of verification. But again, we have this question: have I covered all negative scenarios?

So it is important to ensure that the generic coverage model tracks all the error scenarios.

Let’s see how a specific mechanism provided in VMM, in the form of vmm_log_catcher, helps to track error coverage efficiently and effectively. The VMM log catcher is able to identify/catch a specific string in any of the messages issued through the VMM reporting mechanism.

Typically, the verification environment issues messages to STDOUT when the DUT responds to an error scenario. These messages can be ‘caught’ by the log catcher to update the appropriate coverage groups. Let’s see how this is done in detail.

The verification environment responds to each negative scenario by issuing a message with a unique text, specific to that error.

In the context of the AXI framework, we can introduce a wide range of error scenarios and test whether the DUT responds correctly or not. A few possible error scenarios in AXI are listed below for your reference.


However, all the scenarios may not be applicable always and hence configurability is required to enable only the required set of coverpoints tied to the relevant negative scenarios. Thus, we should have similar configurability for error coverage as I talked about in the earlier blogs.

Let’s see how we can catch the relevant responses and sample the appropriate covergroups.

As mentioned earlier, in the example below, we make use of the unique message issued as a result of a negative scenario.

This is how we use the VMM Log catcher.

1. The error coverage class is extended from vmm_log_catcher – VMM base class.

2. The vmm_log_catcher::caught() API is utilized as the means to qualify the covergroup sampling.


In the code above, whenever a message with the text “AXI_WRITE_RESPONSE_SLVERR” is issued from anywhere in the verification environment, the ‘caught’ method is invoked, which in turn samples the appropriate covergroup. Additionally, you can specify more parameters in the catch API to restrict which ‘scenarios’ should be caught.


string name = "",

string inst = "",

bit recurse = 0,

int typs = ALL_TYPS,

int severity = ALL_SEVS,

string text = "");

The above API installs the specified message handler to catch any message of the specified type and severity, issued by the message service interface instances specified by the name and inst arguments, and containing the specified text. By default, this method catches all messages issued by this message service interface instance.
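Putting the pieces together, an error coverage class might look roughly like the sketch below. The covergroup contents are placeholders, and the exact caught()/throw() signatures should be checked against the VMM reference; the message text is the one used above.

```systemverilog
class axi_err_coverage extends vmm_log_catcher;

  covergroup wr_resp_slverr_cg;
    . . .
  endgroup

  function new();
    wr_resp_slverr_cg = new();
  endfunction

  // Invoked only for messages matched by the catch() installation,
  // e.g. those containing "AXI_WRITE_RESPONSE_SLVERR".
  virtual function void caught(vmm_log_msg msg);
    wr_resp_slverr_cg.sample();
    this.throw(msg);  // re-issue so the message still reaches the log
  endfunction

endclass
```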

Hope this set of articles has been relevant and useful to you. I have made an attempt to leverage some of the built-in capabilities of the SystemVerilog language and the VMM base classes to target some of the challenges in creating configurable coverage models. These techniques can be improved further to make them more efficient and scalable. I look forward to hearing whatever inputs you have in this area.

Posted in Automation, Configuration, Coverage, Metrics, Reuse, Structural Components, VMM, VMM infrastructure | Comments Off

Building & Configuring Coverage Model – VMM Style – Part-II

Posted by paragg on 25th June 2011

Parag Goel, Senior Corporate Application Engineer, Synopsys

In the previous post, we looked at how you can enable/disable different types of coverage encapsulated in the coverage model wrapper class. In this post, let’s look at how we can easily create an infrastructure to pass different inputs to the wrapper class so as to be able to configure the coverage collection per user requirements. The infrastructure ensures that these values percolate down to the sub-coverage model groups.

The following are some of the key inputs that need to be passed to the different coverage component classes:

1. SV Virtual Interfaces so that different signal activity can be accessed

2. The Transactions observed and collected by the physical level monitors

3. The ‘Configuration’ information


Let’s look at how we can easily pass the signal level information to the coverage model.

Step I: Encapsulation of the interface in the class wrapper.

class intf_wrapper extends vmm_object;

  virtual axi_if v_if;

  function new (string name, virtual axi_if mst_if);
    super.new(null, name);
    this.v_if = mst_if;
  endfunction

endclass: intf_wrapper

Step II: In the top class/environment- Set this object using vmm_opts API.

class axi_env extends vmm_env;

  intf_wrapper mc_intf;

  function void build_ph();
    mc_intf = new("Master_Port", tb_top.master_if_p0);
    // Set the master port interface
    vmm_opts::set_object("VIP_MSTR:vip_mstr_port", mc_intf, this);
  endfunction

endclass: axi_env

Step III: Connecting in the coverage class.

A. Get the object containing interface in the coverage model class using vmm_opts.

assert($cast(this.mst_port_obj, vmm_opts::get_object_obj(is_set, this, "vip_mstr_port")));

B. Connecting local virtual interface to one contained in the object.

this.cov_vif = mst_port_obj.v_if;

Now, the transaction objects collected by the monitor need to be passed to the coverage collector. This can be conveniently done in VMM using TLM communication, through the vmm_tlm_analysis_port, which establishes the communication between a producer and its subscribers.

class axi_transfer extends vmm_data;
  . . .
endclass

class axi_bus_monitor extends vmm_xactor;

  vmm_tlm_analysis_port#(axi_bus_monitor, axi_transfer) m_ap;

  task collect_trans();
    . . .
    // Write the collected transfer to the analysis port.
    m_ap.write(trans);
  endtask

endclass

class axi_coverage_model extends vmm_object;

  vmm_tlm_analysis_export #(axi_coverage_model, axi_transfer) m_export;

  function new (string inst, vmm_object parent = null);
    super.new(parent, inst);
    m_export = new(this, "m_export");
  endfunction

  // Sample the appropriate covergroup once the transaction
  // is received in the write function.
  function void write(int id, axi_transfer trans);
    . . .
  endfunction

endclass

To set up the TLM Connections in the agent/environment, we need to do the following:

class axi_subenv extends vmm_group;

  // Instantiate the monitor and coverage model classes.
  axi_bus_monitor mon;
  axi_coverage_model cov;

  . . .

  virtual function void build_ph();
    mon = new("mon", this);
    cov = new("cov", this);
  endfunction

  virtual function void connect_ph();
    // Bind the TLM ports via VMM tlm_bind
    mon.m_ap.tlm_bind(cov.m_export);
  endfunction

endclass

To make the Coverage Model truly configurable, we need to look at some of the other key requirements as well at different level of granularity. This can be summarized as the ability to do the following.

1. Enable/disable coverage collection for each covergroup defined. Every covergroup should be created only if the user wishes so; there should be a configuration parameter which restricts the creation of the covergroup altogether, and this should also be used to control the sampling of the covergroup.

2. The user must be able to configure limits on the individual values being covered within a legal set of values. For example, for the transaction field BurstLength, the user should be able to tell the model the limits of interest within the legal range of ‘1’ to ‘16’ per the AXI spec. Providing lower and upper limits for transaction parameters like burst size, burst length, address, etc. makes the model reusable. These limits should be modeled as variables which can be overwritten dynamically.

3. The user should be able to control the number of bins to be created, for example for fields like address. The auto_bin_max option can be exploited for this when the user hasn’t explicitly defined bins.

4. The user must be able to control the number of hits for which a bin is considered covered. option.at_least can be used for this purpose, and its input can be a user-defined parameter.

5. The user should also have control to specify the coverage goal, i.e. when the coverage collector should report the covergroup as “covered” even though the coverage is not 100%. This can be achieved using option.goal, where the goal is again a user-defined parameter.
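The five requirements above can be sketched in a single covergroup. The class, covergroup and coverage_cfg field names below are hypothetical placeholders for the user-defined parameters, not part of the original example:

```systemverilog
class axi_trans_coverage;
  axi_transfer tr;  // transaction being sampled

  covergroup burst_cg (int unsigned min_len, int unsigned max_len,
                       int unsigned n_addr_bins,
                       int unsigned min_hits, int unsigned pct_goal);
    blen: coverpoint tr.burst_len {
      bins legal[] = {[min_len:max_len]};  // 2. limits within the legal 1..16 range
    }
    addr: coverpoint tr.addr {
      option.auto_bin_max = n_addr_bins;   // 3. cap on auto-generated bins
    }
    option.at_least = min_hits;            // 4. hits before a bin counts as covered
    option.goal     = pct_goal;            // 5. when to report the group "covered"
  endgroup

  function new(coverage_cfg cfg);
    if (!cfg.disable_burst_cov)            // 1. create the group only if enabled
      burst_cg = new(cfg.min_len, cfg.max_len,
                     cfg.n_addr_bins, cfg.min_hits, cfg.pct_goal);
  endfunction
endclass
```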

All the parameters required to meet the above requirements can be encapsulated in a class (i.e. a coverage configuration class), which can be set and retrieved in the same fashion described for the interface wrapper class, using the vmm_opts APIs.

class coverage_cfg extends vmm_object;
  int disable_wr_burst_len;
  . . .
  function new(vmm_object parent=null, string name);
    super.new(parent, name);
  endfunction
endclass

Retrieving the configuration object inside the coverage model:

coverage_cfg cfg;

function new(vmm_object parent=null, string name);
  bit is_set;
  super.new(parent, name);
  $cast(cfg, vmm_opts::get_object_obj(is_set, this,

Wei Hua presents another cool mechanism for collecting these parameters using the vmm_notification mechanism in this earlier blog:

A Generic Functional Coverage Solution Based On vmm_notify

Hope you found this useful. I will talk about how to track error coverage in my next blog, so stay tuned!

Posted in Communication, Configuration, Coverage, Metrics, Reuse, Structural Components, VMM, VMM infrastructure | Comments Off

Building & Configuring Coverage Model – VMM Style – Part-I

Posted by paragg on 24th June 2011

Parag Goel, Senior Corporate Application Engineer, Synopsys

“To minimize wasted effort, coverage is used as a guide for directing verification resources by identifying tested and untested portions of the design.”

- IEEE Standard for SystemVerilog (IEEE Std. 1800-2009)

Configurability and reusability are the buzzwords in the verification of chips, and these are enabled to a big extent by present-day verification methodologies. Through a set of blogs, I plan to show how we can create configurable coverage models in VMM based environments. Given that AMBA AXI is one of the most commonly used protocols in the industry for communication amongst SOC peripherals, I chose an AXI based framework for my case study.

The idea here is to create a configurable coverage model leveraging some of the base classes provided in the methodology so that we can make it completely reusable as we move from block to system level or across projects. Once we enable that, we can move the coverage model inside the sub-environment modeled by vmm_group or vmm_subenv, which are the units of reuse.


Primary Requirements of Configuration Control:

Two important requirements that are needed to be met to ensure that the coverage model is made a part of reusable components are:

1. Ability to enable/disable the coverage model whenever required.

2. Ability to turn ON/OFF different subgroups at the desired granularity. For example, a user may not always want the error coverage to be enabled, unless under specific circumstances.

To meet the above requirements, we make use of the VMM Global and Hierarchical Configurations.

Through the vmm_opts base classes, VMM provides a mechanism to control the configuration parameters of a verification environment. This can be done in a hierarchical as well as in a global manner. These options are summarized below:


In the environment, the coverage_enable is by default set to 0, i.e. disabled.

coverage_enable = vmm_opts::get_int("coverage_enable", 0);

Now, the user can enable the coverage via either of the two mechanisms.

1. From user code using vmm_opts.

The basic rule is that you need to ‘set’ an option *before* the ‘get’ is invoked, during the time when the components are constructed. As a general recommendation, for structural configuration, the build phase is the most appropriate place.

function void axi_test::build_ph();
  // Enable Coverage.
  vmm_opts::set_int("@%*:axi_subenv:enable_coverage", 1);
endfunction

2. From the command line or an external option file. The option is specified using the command-line +vmm_name or +vmm_opts+name form.

An option set on the command line supersedes one set within the code as shown in 1.

Users can also specify options for specific instances or hierarchically using regular expressions.
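For example, using the two forms named above (the option name is taken from the earlier snippet, and the value-assignment spelling is an assumption to be checked against the VMM user guide):

```
% simv +vmm_enable_coverage            # boolean form: +vmm_<name>
% simv +vmm_opts+enable_coverage=1     # valued form: +vmm_opts+<name>
```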


Now let’s look at the typical classification of a coverage model.

From the perspective of AXI protocol, we can look at the 4 sub-sections.

Transaction coverage: coverage definition on the user-controlled parameters usually defined in the transaction class & controlled through sequences.

Error coverage: coverage definition on the pre-defined error injection scenarios

Protocol coverage: This is protocol specific (AXI handshake coverage). In the case of AXI, it is mainly coverage on the handshake signals, i.e. READY and VALID, on all the 5 channels.

Flow coverage: This is again protocol specific; for AXI it covers various features like outstanding transactions, interleaving, write data before write address, etc.


At this point, let’s look at how these different sub-groups, or the complete coverage model, can be enabled or disabled. Once the coverage configuration class is built and passed on to the main coverage model, we need fine-grain control to enable/disable individual coverage models. The code below shows how the user can control all the coverage models in the build phase of the main coverage class.

Here too, we can see how vmm_opts is used to meet the requirement of controlling individual parameters.

vmm_opts::set_int("@%*:disable_transaction_coverage", 0);
vmm_opts::set_int("@%*:disable_error_coverage", 0);
vmm_opts::set_int("@%*:disable_axi_handshake_coverage", 0);

vmm_opts::set_int("@%*:disable_flow_coverage", 0);

In my next blog, I show how hierarchical VMM configurations are used to dynamically pass signal-level and other configuration-related information to the coverage model. We shall also discuss the usage of the VMM TLM features towards fulfilling the goal of a configurable coverage model. Stay tuned!

Posted in Configuration, Coverage, Metrics, Organization, Reuse, SystemVerilog, VMM, VMM infrastructure | Comments Off

Managing coverage grading in complex multicore microprocessor environments

Posted by Shankar Hemmady on 26th January 2011

Something as simple as coverage grading, which we often take for granted, starts showing its exponential complexity when dealing with cutting-edge designs where quality and timeliness are essential. 

An article in EE Times by James Young and Michael Sanders of AMD along with Paul Graykowski and Vernon Lee of Synopsys describes how they created a coverage grading solution, Quickgrade, that scales to meet the complexity of multicore multiprocessor design environments.


Posted in Coverage, Metrics | Comments Off

Planning for Functional Coverage

Posted by JL Gray on 23rd November 2010

JL Gray, Vice President, Verilab, Austin, Texas, and Author of Cool Verification

Functional coverage cures cancer and the common cold. It helps kittens down from trees and old ladies up the stairs. Functional coverage is the greatest thing since sliced bread, right? So all we need to do is put some in our verification environments and our projects will be a success.

Well… perhaps not.

Many teams I’ve worked with have struggled with various aspects of functional coverage. The area I’d like to address today is the part most engineers spend the least amount of time on – planning (as opposed to implementing) your functional coverage model. One of the critically important uses for functional coverage is to help us (engineers and project managers alike) know when we are done with verification. We all know it’s impossible to 100% verify a complex chip. But it is possible to define what “done” means for our individual teams and projects. The functional coverage model gives us a way to correlate what we’ve accomplished in the testbench with some definition of “done” in our verification plan.

What that means is that the verification plan itself is the key to understanding where we are relative to where we need to be to finish a project. It is the document through which managers, designers, verification engineers and others can all interact and agree on what constitutes “good enough.” Verification plans often contain lists of design features and statements about what types of tests should be run to verify those features. Unfortunately, that is almost never sufficient to give stakeholders insight into what should be covered.

Verification plans need to contain a sufficient level of detail to meet the following goals:

  1. Allow key stakeholders who may not know anything about SystemVerilog to see what will be covered and then provide a platform for them to give feedback to the verification engineer.
  2. Allow this discussion to take place before any SystemVerilog code has been written.
  3. Allow an engineer other than the author of the document to understand and implement the coverage model.
  4. Allow the project manager to understand what coverage goals should be reached for the module to be considered verified.

Point 3 is important in ways some engineers may not realize. During my verification planning seminars I often ask students if they’ve ever heard of a project’s “truck factor”. Basically, a project’s truck factor is the number of people on the project who could get hit by a truck before the project is in trouble. A verification plan should specify coverage goals in enough detail that someone besides you (this could include the you 6 months from now who’s forgotten all relevant details about the block in question) can understand what the original intent of the document was. For example, some plans I’ve seen make comments like “test all valid packet lengths”. What are the valid lengths? They should be specified in the plan. Are there certain corner cases you think are important to cover? Make sure you call out this information with sufficient detail so that the project manager can tell if you’ve actually accomplished your goals!
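To make this concrete, a plan item like “test all valid packet lengths” can be pinned down as explicit bins that any engineer can implement and any manager can audit. A sketch in SystemVerilog, with invented length limits purely for illustration:

```systemverilog
// Hypothetical coverage for "all valid packet lengths";
// 64 and 1518 are assumed limits, not taken from any real plan.
covergroup pkt_len_cg with function sample (int unsigned len);
  LENGTH: coverpoint len {
    bins min_len    = {64};          // smallest legal packet: corner case
    bins typical[4] = {[65:1517]};   // interior lengths, split into 4 ranges
    bins max_len    = {1518};        // largest legal packet: corner case
    illegal_bins runt  = {[0:63]};   // must never be generated
    illegal_bins giant = {[1519:$]};
  }
endgroup
```

Written this way, the plan states exactly which lengths count as corner cases, so the engineer implementing it six months later can still recover the original intent.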

Adding this extra level of detail to your verification plan takes time. But without the detail, how can you easily share with your colleagues what you plan to do? And how can you know which features your testbench needs to support in order to meet your coverage goals?

As always, questions and comments welcome.

Posted in Coverage, Metrics, Verification Planning & Management | 2 Comments »

The Blind Leading the Deaf

Posted by Andrew Piziali on 28th September 2010

The Wall of Separation Between Design and Verification

by Andrew Piziali, Independent Consultant

I remember a time in my zealous past, leading a large microprocessor verification team, when one of my junior engineers related how they had forcefully resisted examining the floating point unit RTL, explaining to the designer that they did not want to become tainted by the design! The engineer insisted on a dialog with the designer rather than reviewing the RTL. Their position was mine: we must maintain some semblance of separation between the design and verification engineers.

There has been an age-old debate about whether there ought to be a “wall” between the design and verification teams. “Wall” in this context refers to an information barrier between the teams that minimizes the verification engineer’s familiarity with the details of the implementation (but not the specification!) of the design component being verified. Similarly, the wall minimizes the designer’s familiarity with the verification environment for their implementation.

Re-Convergence Model

The intent of the wall is to allow two pairs of eyes (and ears!)—those of the designer and those of the verification engineer—to independently interpret the specification for a common component and then compare their results. The hypothesis is that if they reach the same conclusion, they are likely to have correctly interpreted the specification. If they do not, one or both are in error. This process is an example of the re-convergence model[1], where a design transformation is verified by performing a second parallel transformation and then comparing the two results. What are the pros and cons of the wall?

The argument in favor of the wall depends upon what we might call original impressions, the fresh insight provided by a person unfamiliar with a concept upon initial exposure. In this context, the verification engineer reading the specification will acquire an understanding of the design intent, independent of the designer, but only if study of its implementation is postponed. Why? Because nearly any implementation will be a plausible interpretation of the specification. The objective is to acquire two independent interpretations for comparison. Hence, influencing a second understanding with an initial implementation would defeat the purpose. What is the opposite position?

The argument against the wall is that a verification engineer and designer, working closely together, are more likely to gain a more precise understanding of the specification than either one working alone. The interactive exploration of possible specification interpretations, each implementing their understanding—the designer the RTL and software, the verification engineer the properties, stimulus, checking and coverage aspects of their verification environment—is argued to lead to convergence more quickly than each party working alone. Well, what should it be? Should verification engineers and designers scrupulously avoid one another, should they collaborate or should they find some intermediate interaction?

Pondering the answer brings to mind the metaphor of the blind leading the deaf, where each of two parties is crippled in a different way such that neither is able to grasp the whole picture. Nevertheless, working together they are able to progress further than working alone. Are the verification engineer and designer the blind leading the deaf? Before I weigh in with my opinion, I’d like to read yours. Type in the “Leave a Reply” box below to respond. Thanks!

[1] Writing Testbenches Using SystemVerilog, Janick Bergeron, 2006, Springer Science+Business Media, Inc.

Posted in Coverage, Metrics, Verification Planning & Management | 4 Comments »

Fantasy Cache: The Role of the System Level Designer in Verification

Posted by Andrew Piziali on 12th July 2010

Andrew Piziali, Independent Consultant

As is usually the case, staying ahead of the appetite of a high performance processor with the memory technology of the day was a major challenge. This processor consumed instructions and data far faster than current PC memory systems could supply. Fortunately, spatial and temporal locality–the tendency for memory accesses to cluster near common addresses and around the same time–were on our side. These could be exploited by a cache that would present a sufficiently fast memory interface to the processor while dealing with the sluggish PC memory system behind the scenes. However, this cache would require a three level, on-chip memory hierarchy that had never been seen before in a microprocessor. Could it be done?


The system level designer responsible for the cache design–let’s call him “Ambrose”–managed to meet the performance requirements, yet with an exceedingly complex cache design. It performed flawlessly as a C model running application address traces, rarely stalling the processor on memory accesses. Yet, when its RTL incarnation was unleashed to the verification engineers, it stumbled … and stumbled badly. Each time a bug was found and fixed, performance took a hit while another bug was soon exposed. Before long we finally had a working cache, but it unfortunately starved the processor. Coupled with the processor not making its clock frequency target and schedule slips, this product never got beyond the prototype stage, after burning through $35 million and 200 man-years of labor. Ouch! What can we learn about the role of the system level designer from this experience?

The system level designer faces the challenge of evaluating all of the product requirements and choosing among the implementation trade-offs that necessarily arise. The architectural requirements of a processor cache include block size, hit time, miss penalty, access time, transfer time, miss rate and cache size. It shares physical requirements with other blocks, such as area, power and cycle time. However, of particular interest to us are its verification requirements, such as simplicity, limited state space and determinism.

The cache must be as simple as possible while meeting all of its other requirements. Simplicity translates into a shorter verification cycle because the specification is less likely to be misinterpreted (fewer bugs), there are fewer boundary conditions to be explored (a smaller search space and smaller coverage models), fewer simulation cycles are required for coverage closure and fewer properties need to be proven. A limited state space also leads to a smaller search space and coverage model. Determinism means that, from the same initial conditions and given the same cycle-by-cycle input, the response of the cache is identical from one simulation run to the next. Needless to say, this makes it far easier to isolate a bug than to chase an ephemeral glitch that cannot be reproduced on demand. These all add up to cost savings in functional verification.

Ambrose, while skilled in processor cache design, was wholly unfamiliar with the design-for-verification requirements we just discussed. The net result was a groundbreaking, novel, high-performance three level cache that could not be implemented.

Posted in Coverage, Metrics, Organization, Verification Planning & Management | 2 Comments »

Automating Coverage Closure

Posted by Janick Bergeron on 5th July 2010

“Coverage Closure” is the process used to reach 100% of your coverage goals. In a directed test methodology, it is simply the process of writing all of the testcases outlined in the verification plan. In a constrained-random methodology, it is the process of adding constraints, defining scenarios or writing directed tests to hit the uncovered areas in your functional and structural coverage model. In the latter case, it is a process that is time-consuming and challenging: you must reverse-engineer the design and verification environment to determine what specific stimulus must be generated to hit those uncovered areas.
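As a hypothetical illustration of that last step: if coverage analysis shows that the {WRAP burst, maximum length} combination is never hit, a closure test can layer one extra constraint onto the existing transaction descriptor rather than rewriting it. All names and encodings below are invented for the sketch:

```systemverilog
class axi_xact;                      // illustrative transaction descriptor
  rand bit [1:0] burst;              // 0: FIXED, 1: INCR, 2: WRAP (assumed)
  rand bit [3:0] len;
  constraint legal_c { burst inside {[0:2]}; }
endclass

class close_wrap_hole extends axi_xact;
  // Added after coverage analysis: steer generation into the uncovered
  // {WRAP, max length} area while inheriting all legality constraints.
  constraint target_hole_c { burst == 2; len == 4'hF; }
endclass
```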

Note that your first strategy should be to question the relevance of the uncovered coverage point. A coverage point describes an interesting and unique condition that must be verified. If that condition is already represented by another coverage point, or it is not that interesting (if no one on your team is curious about or looking forward to analyzing a particular coverage point, then it is probably not that interesting), then get rid of the coverage point rather than trying to cover it.

Something that is challenging and time-consuming is an ideal candidate for automation. In this case, the Holy Grail is the automation of the feedback loop between the coverage metrics and the constraint solver. The challenge in automating that loop is correlating those metrics with the constraints.

For input coverage, this correlation is obvious. For every random variable in a transaction descriptor, there is usually a corresponding coverage point, then cross coverage points for various combinations of random variables. VCS includes automatic input coverage convergence: it automatically generates the coverage model based on the constraints in a transaction descriptor, then automatically tightens the constraints at run time to reach 100% coverage in very few runs.

For internal or output coverage, the correlation is a lot more obscure. How can a tool determine how to tweak the constraints to reach a specific line in the RTL code, trigger a specific expression or produce a specific set of values in an output transaction? That is where newly acquired technology from Nusym will help. Their secret sauce traces the effect of random input values on expressions inside the design. From there, it is possible to correlate the constraints and the coverage points. Once this correlation is known, it is a relatively straightforward process to modify the constraints to target uncovered coverage points.

A complementary challenge to coverage closure is identifying coverage points that are unreachable. Formal tools, such as Magellan, can help identify structural coverage points that cannot be reached and thus further trim your coverage space. For functional coverage, that same secret sauce from Nusym can also be used to help identify why existing constraints are preventing certain coverage points from being reached.
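Once a point is proven unreachable, the usual practice is to exclude it from the model so it no longer dilutes the coverage score. A sketch, with an invented FSM state purely for illustration:

```systemverilog
typedef enum {S_IDLE, S_RUN, S_ERR1, S_ERR2} ctrl_state_e;

covergroup ctrl_state_cg with function sample (ctrl_state_e st);
  STATE: coverpoint st {
    // S_ERR2 was (hypothetically) proven unreachable from reset by the
    // formal tool, so it is excluded rather than chased forever.
    ignore_bins unreachable = {S_ERR2};
  }
endgroup
```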

Keep in mind that filling the coverage model is not the goal of a verification process: the goal is to find bugs! Ultimately, a coverage model is no different than a set of directed testcases: it will only measure the conditions you have thought of and consider interesting. The value of constrained-random stimulus is not just in filling those coverage models automatically, but also in creating conditions you did not think of. In a constrained-random methodology, the journey is just as interesting—and valuable—as the destination.

Posted in Coverage, Metrics, Creating tests, Verification Planning & Management | 4 Comments »

Learn about VMM adoption from customers – straight from SNUG India 2010

Posted by Srinivasan Venkataramanan on 1st July 2010

…Reflections from the core engineering team of CVC, fresh from SNUG India 2010

Jijo PS, Thirumalai Prabhu, Kumar Shivam, Avit Kori, Praveen & Nikhil – TeamCVC

Here is a quick collection of various VMM updates from SNUG India 2010 – as seen by TeamCVC. Expect to hear more on VMM 1.2 from us soon, as I now have a young team all charged up with VMM 1.2 (thanks to Amit @SNPS). All the papers and presentations can be accessed through:

TI’s usage of VMM 1.2 & RAL

In one of the well-received papers, TI Bangalore talked about “Pragmatic Approach for Reuse with VMM1.2 and RAL”. The design is a complex digital display subsystem involving numerous register configurations. Not only is handling the register configurations a challenge, but so is reusing block-level subenvs at the system level with ease, with minimal rework and reduced verification time. The author presented their success with VMM 1.2 & RAL in addressing these challenges.

The key advanced VMM elements touched upon are:

· TLM 2.0 communication mechanism

· VMM Multi-Stream Scenario gen (MSS)


Automated Coverage Closure with ECHO

It is real and live – automated coverage closure is slowly becoming a reality, at least in select designs and projects. Having been attempted by various vendors for a while, it has now been added to VCS as the ECHO technology. At SNUG, TI presented their experience with ECHO & VMM-RAL. In their paper titled “Automating Functional Coverage Convergence and Avoiding Coverage Triage with ECHO Technology,” TI described how an ECHO-based methodology in a VMM-RAL based environment can, in an automated manner, close the feedback loop in targeting coverage groups involving register configuration combinations, resulting in a significant reduction in verification time.

WRED Verification with VMM

In her paper on “WRED verification with VMM”, Puja shared her usage of advanced VMM capabilities for a challenging verification task. Specifically she touched upon:

· VMM Multi-Stream Scenario gen

· VMM Datastream Scoreboard with its powerful “with_loss” predictor engine

· VMM RAL to access direct & indirect RAMs & registers

What we really liked was seeing real applications of some of these advanced VMM features – we were taught all of these during our regular CVC trainings and have even tried many of them on our own designs. It feels great to hear from peers about similar usage, to appreciate the value we derive out of VMM @CVC, and to see the vibrant ecosystem that CVC creates around it.

System-Level verification with VMM

Ashok Chandran of Analog Devices presented their use of specialized VMM components in a system-level verification project. Specifically, he touched upon specialized VMM base classes like vmm_broadcast and vmm_scheduler.

In the end, the audience learnt about some of the unique challenges an SoC verification project can present. Even more interesting was the fact that the ever-growing VMM seems to have a solution for a wide variety of such problems, well thought out upfront – kudos to the VMM developers!

Ashok also briefed on his team’s usage of relatively new features in VMM such as vmm_record and vmm_playback and how it helps us to quickly replay streams.

On the tool side, a very useful feature for regressions is the separate compile option in VCS.

VMM 1.2 for VMM users

Amit from SNPS gave a very useful and to-the-point update on VMM 1.2 for long-time VMM users. It was rejuvenating to listen to the VMM 1.2 run_tests feature and the implicit phasing techniques. Though they look like a little “magic,” these features are bound to improve our productivity, as there are fewer things to code and debug before moving on.

Amit also touched upon the use of TLM 2.0 ports and how they can be nicely used for integrating functional coverage, instead of using vmm_callbacks.

The hierarchical component creation and configuration in VMM 1.2 puts us on track for the emerging UVM, and it is very pleasing to see how the industry keeps moving toward more and more automation.

A truly vibrant ecosystem enabled by CVC – VMM Catalyst member

A significant addition to this year’s SNUG India was the DCE – Designer Community Expo – a genuine initiative by Synopsys to bring in partners to serve the larger customer base better, all under one roof. CVC, being the most active VMM catalyst member in this region, was invited to set up a booth showcasing its offerings. We gave away several books, including our popular VMM adoption book and the new SVA Handbook, 2nd edition.

Here is a snapshot of CVC’s booth with our VMM and other offerings.


Posted in Coverage, Metrics, Register Abstraction Model with RAL, VMM | 1 Comment »

Proxy Coverage

Posted by Andrew Piziali on 22nd January 2010

Andy's Picture_1

Andrew Piziali, Independent Consultant

I was recently talking with a friend about a subject near and dear to my heart: functional coverage. Our topic was block level or subsystem coverage models that are not ultimately satisfied until paired with corresponding system level context. The question at hand was “Is it possible to use conditional block level coverage as a proxy for system level coverage by recording coverage points at the block level, conditioned upon subsequent observation of system level context?”

For example, let’s say I have an SoC DUV (design under verification) that contains an MPEG-4 AVC (H.264) video encoder with a variety of features enabled by configuration registers. I need to observe the encoder operating in a subset of modes—referred to as “profiles” in H.264 parlance—defined by permutations of the features. In addition, each profile is only stressed when the encoder is used by an application for which the profile was designed. For example, the “high profile” is intended for high definition television while the “scalable baseline profile” was designed for mobile and surveillance use. The profiles may be simulated at the block level but the applications are only simulated at the system level. How do I design, implement and fill this H.264 coverage model so that the model is populated using primarily block level simulations, with their inherent performance advantage, while depending upon system level application context? May I even use the block level coverage results as a proxy for the corresponding system level coverage?

I think this nut is tougher to crack than the black walnuts that recently fell from my tree. The last time I tried to crack those nuts I resorted to vise grips from my tool chest, leading to a number of unintended, but spectacular, results, but I digress. My friend and I were able to define a partial solution but it wasn’t at all clear whether or not a viable solution exists that leverages the speed of block level simulations. After you finish reading this post, I’d like to hear your thoughts on the matter. This is how far we got.

As with the top level design of any coverage model, we started with its semantic description (sometimes called a “story line”):

Record the encoder operating in each of its profiles while in use by the corresponding applications.

Next, we identified the attributes required by the model and their values:

Profile: BP, CBP, Hi10P, Hi422P, Hi444PP, HiP, HiSP, MP, XP
Application: broadcast, disc_storage, interlaced_video, low_cost, mobile, stereo_video, streaming_video, video_conferencing
Feature: 8×8_v_4×4_transform_adaptivity, Arbitrary_slice_ordering, B_slices, CABAC_entropy_coding, Chroma_formats, Data_partitioning, Flexible_macroblock_ordering, Interlaced_coding, Largest_sample_depth, Monochrome, Predictive_lossless_coding, Quantization_scaling_matrices, Redundant_slices, Separate_Cb_and_Cr_QP_control, Separate_color_plane_coding, SI_and_SP_slices

Then, we finished the top level design using a second order model (simplified somewhat for clarity):

The left column of the table serves as row headings: Attribute, Value, Sampling Time and Correlation Time. The remaining cells of each row contain values for the corresponding heading. For example, the “Attribute” names are “Profile,” “Application” and “Feature.” The values of attribute “Profile” are BP, CBP, Hi10P, etc. The time at which each attribute is to be sampled is recorded in the “Sampling Time” row. The time that the most recently sampled attribute values are to be recorded as a set is when the “application [is] started,” the correlation time. Finally, the magenta cells define the value tuples that compose the coverage points of the model.

The model is implemented in SystemVerilog, along with the rest of the verification environment and we begin running block level simulations. Since no actual H.264 application is run at the block level, we need to invent an expected application value whenever we record a coverage point defined by an attribute value set, lacking only an actual application. Why not substitute a proxy application value, to be replaced by an actual application value when a similar system level simulation is run? For example, if in a block level simulation we observe profile BP and feature ASO ({BP, *, ASO}), we could substitute the value “need_low_cost” for “low_cost” for the application value, recording the tuple {BP, need_low_cost, ASO}. This becomes a tentative proxy coverage point. Later on, when we are running a system level simulation with an H.264 application, whenever we observe a {BP, low_cost, ASO} event, we would substitute the much larger set of {BP, need_low_cost, ASO} block level events for this single system level event, replacing “need_low_cost” with “low_cost.” This would allow us to “observe” the larger set of system level {BP, low_cost, ASO} events from the higher performance block level simulations. How can we justify this substitution?
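One way to code the substitution sketched above is to fold the proxy values into the application enumeration itself, so that block level runs record `need_*` tuples which a matching system level observation later promotes. A rough sketch under those assumptions; all names and the abbreviated enumerations are illustrative:

```systemverilog
typedef enum {BP, CBP, HiP}           profile_e;  // subset, for brevity
typedef enum {low_cost, need_low_cost,
              mobile,   need_mobile}  app_e;      // proxy values included
typedef enum {ASO, CABAC}             feature_e;  // subset, for brevity

covergroup h264_cg with function sample
    (profile_e p, app_e a, feature_e f);
  PROFILE:     coverpoint p;
  APPLICATION: coverpoint a;
  FEATURE:     coverpoint f;
  // Each tuple {profile, application, feature} is a coverage point
  PxAxF:       cross PROFILE, APPLICATION, FEATURE;
endgroup
```

A block level run samples `(BP, need_low_cost, ASO)`; when a system level run later observes `(BP, low_cost, ASO)`, the accumulated `need_low_cost` bins can be promoted to their `low_cost` counterparts in coverage post-processing.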

We could argue that a particular system level coverage point is an element of a set of block level coverage points because (1) it shares the same subset of attribute values with the system level coverage point and (2) the block level simulation is, in a sense, a superset of the system level simulation because it implicitly abstracts away the additional detail available in the system level simulation. There is little argument that the profile value BP and feature value ASO are the same in the two environments; the second reason, however, is clearly open to discussion.

This brings us back to the opening question, can we use conditional block level coverage as a proxy for system level coverage by recording coverage points at the block level, conditioned upon subsequent observation of system level context? If so, is this design approach feasible and reasonable? If not, why not? Have at it!

Posted in Coverage, Metrics, Verification Planning & Management | 3 Comments »

Paved With Good Intentions: Examining Lost Design Intent

Posted by Adiel Khan on 7th December 2009


Adiel Khan, Synopsys CAE

Andrew Piziali, Independent Consultant

Remember the kick-off of your last new project, when the road to tape-out was paved with good intentions? Architects were brainstorming novel solutions to customer requirements? Designers were kicking around better implementation solutions? Software engineers were building fresh new SCM repositories? And you, the verification engineer, were excitedly studying the new design and planning its verification? Throughout all of this early excitement, all sorts of good intentions were revealed. Addressing the life story of each intention would make a good short story, or even a novel! Since we don’t have room for that, let’s just focus on the design intentions.

Design intent, how the architect intended the DUV (design under verification) to behave, originates in the mind’s eye of the architect. It is the planned behavior of the final implementation of the DUV. Between original intent and implementation the DUV progresses through a number of representations, typically referred to as models, wherein intent is unfortunately lost. However, intent’s first physical representation, following conception, is its natural language specification.

We may represent the space of design behaviors as the following Venn diagram [1]:

Each circle—Design Intent (AEGH), Specification (BEFH) and Implementation (CFGH)—represents a set of behaviors. AEGH represents the set of design requirements, as conveyed by the customer. BEFH represents the intent captured in the specification(s). CFGH represents intent implemented in the final design. The region outside the three sets (D) represents unintended, unspecified and unimplemented behavior. The design team’s objective is to bring the three circular sets into coincidence, leaving just two regions: H (intended, specified and implemented) and D. By following a single design intention from set Design Intent to Specification to Implementation, we learn a great deal about how design intent is lost.

An idea originally conceived appears in set AEGH (Design Intent) and, if successfully captured in the specification, is recorded in set EH. However, if the intent is miscommunicated, or not even recorded and lost, it remains in set A. There is the possibility that a designer learns of this intent, even though it is not recorded in the specification, and recaptures it in the design. In that case we find it in set G: intended, implemented, but unspecified.

Specified intent is recorded in set BEFH. Once the intent is captured in the specification it must be read, comprehended and implemented by the designer. If successful, it makes it to the center of our diagram, set H: intended, specified and implemented. Success! Unfortunately some specified requirements are missed or misinterpreted and remain unimplemented, absent from the design as represented by set E: intended, specified but unimplemented. Sometimes intent is introduced into the specification that was never desired by the customer and (fortunately) never implemented, such as set B: unintended, unimplemented, yet specified. Unfortunately, there is also unintended behavior that is specified and implemented as in set F. This is often the result of gratuitous embellishment or feature creep.

Finally, implemented intent is represented by set CFGH, all behaviors exhibited by the design. Those in set G arrived as originally intended but were never specified. Those in set H arrived as intended and specified. Those in set F were introduced into the specification, although unintended, and implemented. Behaviors in set C were implemented, although never intended nor specified! In order to illustrate the utility of this diagram, let’s consider a specific example of lost design intent.

We can think of each part of the development process as building a model. Teams write documentation as a specification model, such as a Microsoft Word document. System architects build an abstract algorithmic system model that captures the specification model requirements, using SystemC or C++. Designers build a synthesizable RTL model in Verilog or VHDL. Verification engineers build an abstract problem space functional model in SystemVerilog, SVA and/or e.

If any member of the team fails to implement an element of the upstream, more abstract model correctly (or at all), design intent is lost. The verification engineer can recover this lost design intent by working with all members of the team and giving the team observability into all models.

Consider an example where the system model (ex. C++) uses a finite state machine (FSM) to control the data path of the CPU whereas the specification model (ex. MS Word) implies how the data path should be controlled. This could be a specification ambiguity that the designer ignores, implementing the data path controller in an alternate manner, which he considers quite efficient.

Some time later the system architect may tell the software engineers that they do not need to implement exclusive locking because the data path FSM will handle concurrent writes to the same address (WRITE0, WRITE1). However, the designer’s implementation is not based on the system model FSM but rather on the specification model. Therefore, exclusive locking is required to prevent data corruption during concurrent writes. We need to ask: How can the verification engineer recover this lost design intent by observing all models?

Synopsys illustrates a complete solution to the problem in a free verification planning seminar that dives deep into this topic. However, for the purposes of this blog we offer a simplified example, using the design and implementation of a coverage model:

  1. Analyze the specification model along with the system model
  2. Identify the particular feature (ex. mutex FSM) and write its semantic description
  3. Determine what attributes contribute to the feature behavior
  4. Identify the attribute values required for the feature
  5. Determine when the feature is active and when the attributes need to be sampled and correlated

This top-level design leads to:

Feature: CPU_datapath_Ctrl
Description: Record the state transitions of the CPU data path controller
Attribute: Data path controller state variable
Attribute values: IDLE, START, WRITE0, WRITE1
Sample: Whenever the state variable is written

The verification engineer can now implement a very simple coverage model to explicitly observe the system model, ensuring entry to all states:

typedef enum logic [1:0] {IDLE, START, WRITE0, WRITE1} st;

covergroup cntlr_cov (string m_name) with function sample (st m_state);

  option.per_instance = 1;
  option.comment = m_name;

  model_state: coverpoint m_state {
    bins t0 = (IDLE   => IDLE);
    bins t1 = (IDLE   => START);
    bins t2 = (START  => IDLE);
    bins t3 = (START  => WRITE0);
    bins t4 = (WRITE0 => WRITE1);
    bins t5 = (WRITE1 => IDLE);
    bins bad_trans = default sequence;
  }

endgroup
The verification engineer can now link the feature “CPU_datapath_Ctrl” in his verification plan to the cntlr_cov covergroup. Running the system model with the verification environment and RTL implementation will reveal that bin “t4” is never visited; hence, state transition WRITE0 to WRITE1 is never observed. The team can then review the verification plan to determine whether the FSM controller implementation should be corrected to conform to the full design intent.
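As a hedged sketch of how that linkage might be exercised in practice, the covergroup can be wrapped in a simple monitor class that samples it on every write to the state variable. The class and method names below are illustrative assumptions, not part of the original example:

```systemverilog
// Hypothetical wrapper for the cntlr_cov covergroup shown above;
// "dp_state_mon" and "observe" are illustrative names.
class dp_state_mon;
  cntlr_cov cov;

  function new();
    // One covergroup instance per monitored controller
    cov = new("CPU_datapath_Ctrl");
  endfunction

  // Call this whenever the data path controller state variable is written
  function void observe(st m_state);
    cov.sample(m_state);  // matches against the transition bins t0..t5
  endfunction
endclass
```

Because the covergroup was declared `with function sample`, the monitor passes the observed state explicitly rather than relying on a sampling event.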

Although there are many other subsets of the design intent diagram we could examine, it is clear that a design intention may be lost through many recording and translation processes. By understanding this diagram and its application, we become aware of where intent may be lost or corrupted and ensure that our good intentions are ultimately realized.

1The design intent diagram is more fully examined in the context of a coverage-driven verification flow in chapter two of the book Functional Verification Coverage Measurement and Analysis (Piziali, 2004, Springer, ISBN 978-0-387-73992-2).

Posted in Coverage, Metrics, Organization, Verification Planning & Management | Comments Off

Say What? Another Look At Specification Analysis

Posted by Shankar Hemmady on 26th October 2009

Andrew Piziali, Independent Consultant

Have you ever been reviewing a specification and asked yourself “Say what?!” Then you’re not alone! One of the most challenging tasks we face as verification engineers is understanding design specifications. What does the architect mean when she writes “The conflabulator remains inoperative until triggered by a neural vortex?” Answering that question is part of specification analysis, the first step in planning the verification of a design, the subsequent steps being coverage model design, verification environment implementation, and verification process execution.

The specifications for a design—DUV, or “design-under-verification” for our purposes—typically include a functional specification and a design specification. The functional specification captures top-level, opaque box, implementation-independent requirements. Conversely, the design specification captures internal, clear box, implementation-dependent behaviors. Each is responsible for conveying the architect’s design intent at a particular abstraction level to the design and verification teams. Our job is to ultimately comprehend these specifications in order to understand and quantify the scope of the verification problem and specify its solution. This comprehension comes through analyzing the specifications.

In order to understand the scope of the verification problem, the features of the DUV and their relationships must be identified. Hence, specification analysis is sometimes referred to as feature extraction. The features are described in the specifications, ready to be mined through our analysis efforts. Once extracted and organized in the verification plan, we are able to proceed to quantifying the scope and complexity of each by designing its associated coverage model. How do we tackle the analysis of specifications ranging from tens to hundreds of pages? The answer depends upon the size of the specification and availability of machine-guided analysis tools. For relatively small specifications, less than a hundred pages or so, bottom-up analysis ought to be employed. Specifications ranging from a hundred pages and beyond require top-down analysis.

Bottom-up analysis is the process of walking through each page of a specification: section-by-section, paragraph-by-paragraph, and sentence-by-sentence. As we examine the text, tables and figures, we ask ourselves: What particular function of the DUV is addressed? What behavioral requirements are imposed? What verification requirements are implied? Is this feature amenable to formal verification, constrained random simulation, or a hybrid of the two? If formal is applicable, how might I formulate a declarative statement of the required behavior? What input, output and I/O coverage is needed? If this feature is more amenable to constrained random simulation, what are the stimulus, checking and coverage requirements?
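For a feature that is amenable to formal verification, the declarative statement of required behavior might take the shape of an SVA property. This is a generic sketch with assumed signal names (clk, rst_n, req, gnt) and an assumed four-cycle bound, not a fragment of any real specification:

```systemverilog
// Hypothetical requirement: every request is granted within four cycles.
property req_gets_gnt;
  @(posedge clk) disable iff (!rst_n)
    req |-> ##[1:4] gnt;
endproperty

// The same property can serve both the checking and coverage aspects
assert property (req_gets_gnt) else $error("gnt did not follow req");
cover  property (req_gets_gnt);
```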

Each behavioral requirement is a feature to be placed in the verification plan, in either the functional or design requirements sections, as illustrated below:

1 Introduction ………………………………… what does this document contain?

2 Functional Requirements …………….. opaque box design behaviors

2.1 Functional Interfaces ……………. external interface behaviors

2.2 Core Features ………………………. external design-independent behaviors

3 Design Requirements ………………….. clear box design behaviors

3.1 Design Interfaces …………………. internal interface behaviors

3.2 Design Cores ……………………….. internal block requirements

4 Verification Views ……………………….. time-based or functional feature groups

5 Verification Environment Design …. functional specification of the verification environment

5.1 Coverage …………………………….. coverage aspect functional specification

5.2 Checkers …………………………….. checking aspect functional specification

5.3 Stimuli ………………………………… stimulus aspect functional specification

5.4 Monitors ……………………………… data monitor functional specifications

5.5 Properties ……………………………. property functional specifications

Bottom-up analysis is amenable to machine-guided analysis, wherein an application presents a specification to the user. For each section of the spec, perhaps for each sentence, the tool asks whether it describes a feature and what its property, stimulus, checking and coverage requirements are, then records this information so that it may be linked to the corresponding section of the verification plan. This facilitates keeping the specifications and the verification plan synchronized. The verification plan is incrementally constructed within a verification plan integrated development environment (IDE).

The alternative to bottom-up analysis is analyzing a specification from the top down, required for large specifications. Your objective here is to bridge the intent abstraction gap between the detail of the specification and the more abstract, incrementally written verification plan. Behavioral requirements are distilled into concise feature descriptions, quantified in their associated coverage models. Top-down analysis is conducted in brainstorming sessions wherein representatives from all stakeholders in the DUV contribute. These include the systems engineer, verification manager, verification engineer, hardware designer and software engineer. After the verification planning methodology is explained to all participants, each engineer contributing design intent explains their part of the design. The design is explored through a question-and-answer process, using a whiteboard for illustration. In order to facilitate a fresh examination of the design component, no pre-written materials should be used.

Whether bottom-up or top-down analysis is used, each design feature should be a design behavioral requirement, stating the intended behavior of the DUV. Both the data and temporal behaviors of each feature should be recorded. In addition to recording the name of each feature, its behavior should be summarized in a semantic description of a sentence or two. Optionally, design and verification responsibilities, technical references, schedule information and verification labor estimates may be recorded. If the verification plan is written in Microsoft Word, Excel or in HVP1 plain text, it may drive the downstream verification flow, serving as a design-specific verification user interface.

The next time you ask “Say what?!,” make sure you are methodically analyzing the specification using either of the above approaches and don’t hesitate to contact the author directly. Many bugs discovered during these exchanges are the least expensive of all!

1Hierarchical Verification Planning language

Posted in Coverage, Metrics, Organization, Verification Planning & Management | Comments Off

Make Your Coverage Count!

Posted by Shankar Hemmady on 31st August 2009

Andrew Piziali, Independent Consultant

You are using coverage, along with other metrics, to measure verification progress as part of your verification methodology.1 2 Yet, lurking in the flow are the seeds of a bug escape that will blindside you. How so?

Imagine you are responsible for verifying an in-order, three-way superscalar x86 processor in the last millennium, before the advent of constrained random generation. Since your management wouldn’t spring for an instruction set architecture (ISA) test generator, you hired a team of new college grads to write thousands of assembly language tests. Within the allocated development time, the tests were written, they were functionally graded and achieved 100% coverage, and they all finally passed. Yeah! But, not so fast …

When first silicon was returned and Windows was booted on the processor, it crashed. The diagnosis revealed a variant of one of the branch instructions was misbehaving. (This sounds better than “It had a bug escape.”) How could this be? We reviewed our branch instruction coverage models and confirmed they were complete. Since all of the branch tests passed, how could this bug slip through?

Further analysis revealed this branch instruction was absent from the set of branch tests yet used in one of the floating point tests. Since the floating point test was aimed at verifying the floating point operation of the processor, we were not surprised to find it was insensitive to a failure of this branch instruction. In other words, as long as the floating point operations verified by the test behaved properly, the test passed, independent of the behavior of the branch instruction. From a coverage aspect, the complete ISA test suite was functionally graded rather than each sub-suite graded according to its functional requirements. Hence, we recorded full coverage.

The problem was now clear: the checking and coverage aspects of each test were not coupled; coverage recording was not conditioned on the checked behavior passing. If we had either (1) functionally graded each test suite only for the functionality it was verifying or (2) conditionally recorded each coverage point based upon a corresponding check passing, this bug would not have slipped through. Using either approach, we would have discovered this particular branch variant was absent from the branch test suite. In the first case, that coverage point would have remained empty for the branch test suite. Likewise, in the second case we would not have recorded the coverage point because no branch instruction check would have been activated and passed.

Returning to the 21st century, the lesson we can take away from this experience is that coverage—functional, code and assertion—is suspect unless, during analysis, you confirm that for each coverage point a corresponding checker was active and passed. From the perspective of implementing your constrained random verification environment, each checker should emit an event (or some other notification), synchronous with the coverage recording operation, indicating it was active and the functional behavior was correct. The coverage code should condition recording each coverage point on that event. If you are using a tool like VMM Planner to analyze coverage, you may use its “-feature” switch to restrict the annotation of feature-specific parts of your verification plan to the coverage database(s) of that feature’s test suite.
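A minimal sketch of this checker/coverage coupling, assuming a hypothetical branch_op transaction class with compare(), opcode and taken members (none of which come from the environment described above), might look like:

```systemverilog
// Illustrative only: coverage is recorded solely on a passing check.
class branch_checker;
  event check_passed;   // emitted only when observed behavior is correct
  branch_op tr;         // assumed transaction type

  covergroup branch_cov @(check_passed);  // sampled by the event
    coverpoint tr.opcode;
    coverpoint tr.taken;
  endgroup

  function new();
    branch_cov = new();
  endfunction

  function void check(branch_op observed, branch_op expected);
    tr = observed;
    if (observed.compare(expected))
      ->check_passed;  // no coverage is recorded for failing behavior
  endfunction
endclass
```

Triggering the covergroup from the checker’s event guarantees that a coverage point can only be hit when the corresponding check was active and passed.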

You might ask if functional qualification3 would address this problem. Functional qualification answers the question “Will my verification environment detect, propagate and report each functional bug?” As such, it provides insight into how well your environment detects bugs but says nothing about the quality of the coverage aspect of the environment. I will address this topic in a future post if there is sufficient interest.

Remember, make your coverage count by coupling checking with coverage!

1Metric Driven Verification, 2008, Hamilton Carter and Shankar Hemmady, Springer

2Functional Verification Coverage Measurement and Analysis, 2008, Andrew Piziali, Springer

3“Functional Qualification,” “EDA Design Line,” June 2007, Mark Hampton

Posted in Coverage, Metrics, Organization, Verification Planning & Management | Comments Off

Give Me Some Space, Man!

Posted by Shankar Hemmady on 11th August 2009

Andrew Piziali, Independent Consultant

A question I am often asked is “When and where should I use functional coverage and code coverage?” Since the purpose of coverage is to quantify verification progress, the answer lies in understanding the coverage spaces implemented by these two kinds of coverage.

A coverage space represents a subset of the behavior of your DUV (design under verification), usually of a particular feature. It is defined by a set of metrics, each a parameter or attribute of the feature quantified by the space. For example, the coverage space for the ADD instruction of a processor may be defined by the product of the absolute values of ranges of the operands (remember “addends?”) and their respective signs. In order to understand the four kinds of coverage metrics, we need to discuss the coverage spaces from which they are constructed.
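As a hedged illustration of such a space (the operand width and the magnitude boundary of 256 are assumptions, not values from any particular ISA), the ADD space might be modeled as:

```systemverilog
// Sketch of an ADD-instruction coverage space: operand magnitude
// ranges crossed with operand signs. Width and bins are illustrative.
covergroup add_cov with function sample (logic signed [31:0] a,
                                         logic signed [31:0] b);
  a_mag:  coverpoint a { bins small = {[-255:255]};
                         bins large = {[$:-256], [256:$]}; }
  b_mag:  coverpoint b { bins small = {[-255:255]};
                         bins large = {[$:-256], [256:$]}; }
  a_sign: coverpoint (a < 0);
  b_sign: coverpoint (b < 0);
  add_space: cross a_mag, b_mag, a_sign, b_sign;  // the product space
endgroup
```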

A coverage metric is determined by its source—implementation or specification—and its author—explicit or implicit. An implementation metric is derived from the implementation of the DUV or verification environment. Hence, the width of a data bus is an implementation metric, as is the module defining a class. Conversely, a specification metric is derived from the DUV functional or design specification. A good example is the registers and their characteristics defined in a specification.

The complementary coverage metric classification is determined by whether the metric is explicitly chosen by an engineer or implicit in the metric source. Hence, an explicit metric is chosen or invented by the verification engineer in order to quantify some aspect of a DUV feature. For example, processor execution mode might be chosen for a coverage metric. Alternatively, an implicit metric is inherent in the source from which the metric value is recorded. This means things like module name, line number and Boolean expression term are implicit metrics from a DUV or verification environment implementation. Likewise, chapter, paragraph, line, table and figure are implicit metrics from a natural language document, such as a specification.

Combining the two metric kinds—source and author—leads to four kinds of coverage metrics, each defining a corresponding kind of coverage space:

  1. Implicit implementation metric

  2. Implicit specification metric

  3. Explicit implementation metric

  4. Explicit specification metric

An example of an implicit implementation metric is a VHDL statement number. The register types and numbers defined by a functional specification are an implicit specification metric. Instruction decode interval is an explicit implementation metric. Finally, key pressed-to-character displayed latency is an example of an explicit specification coverage metric.
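To make the last kind concrete, a key-pressed-to-character-displayed latency could be modeled directly as a coverpoint. The variable name and bin boundaries below are assumptions chosen for illustration, not values from any specification:

```systemverilog
// Hypothetical explicit specification metric: end-to-end display
// latency, measured in clock cycles by the verification environment.
covergroup key_latency_cov with function sample (int latency_cycles);
  latency: coverpoint latency_cycles {
    bins fast    = {[0:99]};     // comfortably within the assumed budget
    bins typical = {[100:499]};
    bins slow    = {[500:999]};  // approaching the assumed limit
    illegal_bins over_budget = {[1000:$]};  // violates the assumed maximum
  }
endgroup
```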

Each metric kind may be used to define an associated kind of coverage space. The astute reader may also wonder about coverage spaces defined by a mix of the above metric kinds. If such a hybrid space more precisely quantifies the verification progress of a particular feature, use it! To the best of my knowledge, you’d have to design and implement this space in much the same way as any functional coverage space because no commercial tool I am aware of offers this kind of coverage.

With an understanding of the kinds of coverage spaces, we can now classify functional and code coverage and figure out where they ought to be used. Functional coverage, making use of explicit coverage metrics—independent of their source—defines either an explicit implementation space or an explicit specification space. Code coverage tools provide a plethora of built-in implicit metric choices. Hence, it defines implicit implementation spaces. Where you want to measure verification progress relative to the DUV functional specification, where features are categorized and defined, functional coverage is the appropriate tool. Where you want to make sure all implemented features of the DUV have been exercised, you should use code coverage. Lastly, when your code coverage tool does not provide sufficient insight, resolution or fidelity into the behavior of the DUV implementation, functional coverage is required to complement the implicit spaces it does offer.

Functional coverage can tell you the DUV is incomplete, missing logic required to implement a feature or a particular corner case, whereas code coverage cannot. On the other hand, code coverage can easily identify unexercised RTL, while functional coverage cannot. Functional coverage requires a substantial up-front investment for specification analysis, design and implementation yet relieves the engineer of much back-end analysis. Code coverage, on the other hand, may be enabled at the flip of a switch but usually requires a lot of back-end analysis to sift the false positives from the meaningful coverage holes. Both are required—and complementary—but their deployment must be aligned with the stage of the project and DUV stability.

Some smart alec will point out that you can’t measure verification progress using coverage alone, and you’re right! Throughout this discussion I assume each feature, with its associated metrics, has corresponding checkers that pipe up when the DUV behavior differs from the specified behavior. (I’ll leave the topic of concurrent behavior recording and checking for another day.)

If you’d like to learn much more about designing, implementing, using and analyzing coverage, the following books delve much more deeply into verification planning, management and coverage model design:

Posted in Coverage, Metrics, Organization, Verification Planning & Management | Comments Off

A generic functional coverage solution based on vmm_notify

Posted by Wei-Hua Han on 15th June 2009

Weihua Han, CAE, Synopsys

Functional coverage plays an essential role in Coverage Driven Verification. In this blog, I’ll explain a modular way of modeling and implementing functional coverage models.

SystemVerilog users can take advantage of the “covergroup” construct to implement functional coverage. However, the language construct alone is not enough. The VMM methodology provides some important design-independent guidelines on how to implement functional coverage in a reusable verification environment.

Here are a few guidelines  from the VMM book that I consider very important for implementing a functional coverage model:

“Stimulus coverage shall be sampled after submission to the design” because in some cases it is possible that not all generated transactions will be applied to DUT.

“Stimulus coverage should be sampled via a passive transactor stack” for the purpose of reuse so that the same implementation can be used in a different stimulus generation structure. For example in another verification environment, the stimulus may be generated by a design block instead of testbench component.

“Functional coverage should be associated with testbench components with a static lifetime” to avoid creating a large number of coverage group instances. So “Coverage groups should not be added to vmm_data class extensions”. These static testbench components include monitors, generators, self-checking structure, etc.

“The comment option in covergroup, coverpoint, and cross should be used to document the corresponding verification requirements”.
You can find some more details on these and other functional coverage related guidelines in the VMM book, pages 263-277.

In an earlier post “Did you notice vmm_notify?”, Janick showed how vmm_notify can be used to connect a transactor to a passive observer like a functional coverage model. Here, let me borrow his idea to implement the following VMM-based functional coverage example.

1.    Transaction class

1. class eth_frame extends vmm_data;
2.    typedef enum {UNTAGGED, TAGGED, CONTROL} frame_formats_e;
3.    rand frame_formats_e format;
4.    rand bit[3:0] src;
5.     …
6. endclass

There are some random properties defined in the transaction class. We would like to collect coverage information for these properties, for example “format” and “src”, in our coverage class.

2.    A generic subscribe class

1.  class subscribe #(type T = int) extends vmm_notify_callbacks;
2.     local T obs;
3.     function new(T obs, vmm_notify ntfy, int id);
4.        this.obs = obs;
5.        ntfy.append_callback(id, this);
6.     endfunction
7.     virtual function void indicated(vmm_data status);
8.          this.obs.observe(status);
9.       endfunction
10. endclass

The generic subscribe class specifies the behavior of the “indicated” method, which is called when the vmm_notify “indicate” method is called. Using this generic subscribe class, functional coverage and other observer models only need to implement the desired behavior in their “observe” method. Please note that in line 8 the vmm_data object is passed to the “observe” method through the vmm notification status information.

3.    Coverage class

1.   class eth_cov;
2.      eth_frame tr;

3.      covergroup cg_eth_frame(string cg_name, string cg_comment, int cg_at_least);
4.         type_option.comment = "eth frame coverage";
5.         option.at_least = cg_at_least;
6.         option.comment = cg_comment;
7.         option.per_instance = 1;
8.         cp_format: coverpoint tr.format {
9.            type_option.weight = 10;
10.        }
11.        cp_src: coverpoint tr.src {
12.           illegal_bins ilg = {4'b0000};
13.           wildcard bins src0[] = {4'b0???};
14.           wildcard bins src1[] = {4'b1???};
15.           wildcard bins src2[] = (4'b0??? => 4'b1???);
16.        }
17.        format_src_crs: cross cp_format, cp_src {
18.           bins c1 = !binsof(cp_src) intersect {4'b0000, 4'b1111};
19.        }
20.     endgroup

21.     function new(string name = "eth_cov", vmm_notify notify, int id);
22.        subscribe #(eth_cov) cb = new(this, notify, id);
23.        cg_eth_frame = new("cg_eth_frame", "eth frame coverage", 1);
24.     endfunction
25.     function void observe(vmm_data tr);
26.        $cast(this.tr, tr);
27.        cg_eth_frame.sample();
28.     endfunction
29. endclass

The covergroup is implemented in the coverage class eth_cov, and this coverage class is registered with a vmm_notify service through the subscribe class. The covergroup is sampled in the “observe” method, so when the notification is indicated the interesting properties are sampled.

4.    Monitor

1.   class eth_mon extends vmm_xactor;
2.      int OBSERVED;
3.      eth_frame tr; // Contains reassembled eth_frame transaction

4.      function new(…)
5.         OBSERVED = this.notify.configure();
6.         …
7.      endfunction
8.      protected virtual task main();
9.         forever begin
10.          tr = new;
11.          //catch transaction from interface
12.         …
13.         this.notify.indicate(OBSERVED,tr);
14.         …
15.       end
16.    endtask
17. endclass

The monitor extracts the transaction from the interface then it indicates the notification with the transaction as the status information (line 13).

5.    Connect monitor and coverage object

1.   class eth_env extends vmm_env;
2.      eth_mon mon;
3.      eth_cov cov;
4.      …
5.      virtual function void build();
6.         …
7.         mon = new(…);
8.         cov = new(“eth_cov”,mon.notify,mon.OBSERVED);
9.         …
10.    endfunction
11.     …
12. endclass

In the verification environment, the coverage object is created with the monitor notification service interface and the proper notification ID.

There are other ways to implement functional coverage in VMM-based environments. For example, a callback-based implementation is used in the example located under sv/examples/std_lib/vmm_test in the VMM package.

I haven’t discussed assertion coverage in this post, which is another important type of “functional coverage”. If you are interested in using assertions for functional coverage, please check out chapters 3 and 6 in the VMM book for suggestions.

Posted in Communication, Coverage, Metrics, Register Abstraction Model with RAL, Reuse, SystemVerilog, VMM | 4 Comments »