Verification Martial Arts: A Verification Methodology Blog

Archive for the 'Organization' Category

Avoiding Redundant Simulation Cycles in Your UVM VIP-Based Simulations with a Simple Save-Restore Strategy

Posted by paragg on 6th March 2014

In many verification environments, the same configuration cycles are reused across different testcases. These cycles might involve writing and reading configuration and status registers, loading program memories, and similar tasks that set up the DUT for the targeted stimulus. In many environments this configuration phase takes a long time, and there is a lot of redundancy: verification engineers re-run the same, already verified, configuration cycles for every testcase, which costs productivity. This is especially true for complex verification environments with multiple interfaces, where several components have to be configured.

The Verilog language provides a way to save the state of the design and the testbench at a particular point in time and to later restore the simulation to that state and continue from there. This can be done with the appropriate built-in system tasks in the Verilog code; VCS provides the same capability from the Unified Command-line Interpreter (UCLI).

However, simply restoring the simulation from the saved state is not enough. In different simulations you may want to apply different random stimulus to the DUT; in the context of UVM, you would want to run different sequences from the same saved state, as shown below.

In the above flow, only the last step varies significantly; the remaining steps, once established, do not need to be repeated.

Here we explain how to implement this strategy with the UBUS example that ships with the standard UVM installation. Only simple changes to the environment are needed to add this capability. Within the existing set of tests, “test_read_modify_write” and “test_r8_w8_r4_w4” differ only in the master sequence being executed, i.e. “read_modify_write_seq” and “r8_w8_r4_w4_seq” respectively.

Let’s say we want to save the simulation once the reset_phase is done and then execute different sequences after the reset_phase in each restored simulation. To demonstrate such a scenario with the UBUS tests, we introduced a delay in the reset_phase of the base test (in a real test, this might correspond to PLL lock, DDR initialization, or basic DUT configuration).

The following snippet shows how the existing tests are modified to bring in the capability of running different tests in different ‘restored’ simulations.

As evident in the code, we made two major modifications (a sketch follows the list below):

  • Shifted the setting of the phase default_sequence from the build phase to the start of the main phase.
  • Get the name of the sequence as an argument from the command-line and process the string appropriately in the code to execute the sequence on the relevant sequencer.
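Since the snippet in the original post was an image, the following is only a sketch of how such a modification could look on top of the UBUS base test; the +SEQ= plusarg, the test name, and the exact sequencer path are assumptions patterned on the standard UBUS example:

class ubus_restore_test extends ubus_example_base_test;
   `uvm_component_utils(ubus_restore_test)

   function new(string name = "ubus_restore_test", uvm_component parent = null);
      super.new(name, parent);
   endfunction

   virtual task main_phase(uvm_phase phase);
      string seq_name = "read_modify_write_seq";   // default if no +SEQ= is given
      void'($value$plusargs("SEQ=%s", seq_name));

      // The default_sequence is now selected at the start of main_phase instead of
      // in the build phase, so a restored snapshot can still pick a different sequence.
      case (seq_name)
         "read_modify_write_seq":
            uvm_config_db#(uvm_object_wrapper)::set(this,
               "ubus_example_tb0.ubus0.masters[0].sequencer.main_phase",
               "default_sequence", read_modify_write_seq::type_id::get());
         "r8_w8_r4_w4_seq":
            uvm_config_db#(uvm_object_wrapper)::set(this,
               "ubus_example_tb0.ubus0.masters[0].sequencer.main_phase",
               "default_sequence", r8_w8_r4_w4_seq::type_id::get());
         default:
            `uvm_fatal("SEQSEL", {"Unknown sequence name: ", seq_name})
      endcase
      super.main_phase(phase);
   endtask
endclass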

As you can see, the changes are kept to a minimum. With this, the generic framework is ready to be simulated. In VCS, one way the save/restore flow can be enabled is shown below.
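The exact commands were shown as an image in the original post. As a hedged sketch only (the snapshot name, run time, +SEQ= plusarg, and test name are illustrative, and the UCLI command spellings should be confirmed against your VCS documentation), the flow looks roughly like this:

# Compile once
% vcs -sverilog -ntb_opts uvm <your compile options and files>

# First run: execute the common reset/configuration cycles, then save a snapshot
% ./simv +UVM_TESTNAME=ubus_restore_test -ucli
ucli% run 1000ns              # run far enough to complete the reset/configuration phase
ucli% save cfg_done           # save the simulation snapshot
ucli% quit

# Later runs: restore the snapshot and continue with a different sequence each time
% ./simv -ucli +SEQ=r8_w8_r4_w4_seq
ucli% restore cfg_done
ucli% run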

Thus, the above strategy helps you make better use of compute resources with only simple changes to your verification flow. Hope this was useful, and that you can easily adapt your verification environment to adopt this flow and avoid redundant simulation cycles.

Posted in Automation, Coding Style, Configuration, Creating tests, Customization, Optimization/Performance, Organization, Reuse, Stimulus Generation, SystemVerilog, Tutorial, UVM, Uncategorized, Verification Planning & Management | 1 Comment »

Namespaces, Build Order, and Chickens

Posted by Brian Hunter on 14th May 2012

As described in the video, vkits are our convenient method of lumping together reusable UVM packages with the interface(s) that they operate on. Because code within packages can only peek or poke wires that are contained by a virtual interface, it is often useful to wrap these together somehow, and vkits are our technique at Cavium for doing that.

What goes in a vkit? Anything that is reusable. From simple agents and the interfaces they work on to complete UVM environments that connect these agents together, including scoreboards, sequence libraries, types, and utility functions.

What does not go in a vkit are items that are bound to a specific testbench, including the tests themselves.

The video describes the wildcard import syntax as an “egregiously bad idea.” First and foremost, doing so can lead to namespace pollution, which comes about when one engineer independently adds types or classes to their package and only later finds out that they conflict with those of another package. Secondly, wildcard imports defeat our convention of using the same short class names (agent_c, drv_c, env_c, etc.) within each package.

Not described in the video are CSR packages that are auto-generated by RAL, IP-XACT, or your script of choice. These packages should be independent of your vkits, such that your vkits refer to them with their explicit scopes (i.e., chx_csr_pkg::PLUCKING_CFG_C).
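As a small illustration only (the package and class names below are invented, not actual Cavium code), explicit scoping is what lets every vkit keep the same short class names without collisions:

package pluck_pkg;                  // one hypothetical vkit package
   class agent_c; endclass
   class env_c;   endclass
endpackage

package roost_pkg;                  // another vkit reusing the same short names
   class agent_c; endclass
   class env_c;   endclass
endpackage

module tb_top;
   // No wildcard imports: each reference carries its explicit package scope,
   // so identical short names in different vkits never collide.
   pluck_pkg::env_c   pluck_env   = new();
   roost_pkg::agent_c roost_agent = new();
endmodule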

Future posts will go into more detail about how we architect UVM testbenches and some of our other conventions that work within this framework. Until then, I’ve got a lot of pies to eat.

PS. I’ll be at DAC this year! Come see me on Tuesday, June 5, during the “Industry Leaders Verify with Synopsys” lunch. Hopefully they’ll be serving some of my favorite foods!

Posted in Organization, Structural Components, SystemVerilog, Tutorial, UVM | 6 Comments »

Closed Loop Register Verification using IDesignSpec and the Register Abstraction Layer

Posted by Amit Sharma on 26th September 2011

Nitin Ahuja, Agnisys Technology Pvt. Ltd

In the previous article titled “Automatic generation of Register Model for VMM using IDesignSpecTM ” we discussed how it is advantageous to use a register model generator such as IDesignSpecTM, to automate the process of RALF model generation. Taking it forward, in this article we will discuss how to close the loop on register verification.

Various forms of coverage are used to ensure that registers are functioning properly. There are three coverage models in VMM. They are:

1. reg_bits coverage: this model ensures that all the bits in the register are covered. It works by writing and reading both 1 and 0 on every register bit, hence the name. This is specified using “cover +b” in the RALF model.

2. field_vals coverage: the field value coverage model is implemented at the register level and supports value coverage of all fields as well as cross coverage between fields and other coverage points within the same register. This is specified using “cover +f” in the RALF model. The user can specify the cross coverage depending on the functionality.

3. Address map coverage: this model is implemented at the block level and ensures that all registers and memories in the block have been read from and written to. This is specified using “cover +a” in the RALF model.
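As a rough, hand-written illustration only (the block, register, and field names are invented, and the exact RALF syntax should be checked against the ralgen documentation), these switches show up in the RALF source along these lines:

block stopwatch {
   bytes 4;
   register ctrl @'h00 {
      bytes 4;
      field mode  { bits 2; access rw; reset 'h0; }
      field start { bits 1; access rw; reset 'h0; }
      cover +b +f;
   }
   cover +a;
}

Here “cover +b +f” requests reg_bits and field_vals coverage on that register, while “cover +a” requests address map coverage on the enclosing block.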

We will discuss how coverage can be switched on/off and how the type of coverage can be controlled for each field directly from the register specification.

Once the RALF model is generated, the next step in verification is to generate the RTL and the SystemVerilog RAL model using ‘ralgen’. The generated RAL model along with the RTL can be compiled and simulated in the VMM environment to generate the coverage database. This database is used for the report generation and analysis.

Reports can be generated using IDesignSpecTM (IDS). IDS-generated reports have an advantage over other reports in that they present the coverage much more concisely, showing all of it at a glance.

Turning Coverage ON or OFF

IDesignSpecTM enables the users to turn ON/OFF all the three types of coverage from within the MS Word specification itself.

Coverage can be specified and controlled using the “coverage” property in IDesignSpecTM which has the following possible values:

image

The hierarchical “coverage” property enables users to control the coverage of the whole block or at the chip level.

Here is a sample of how coverage can be specified in IDesignSpecTM:

image

image

This would be the corresponding RALF file:

agnisys_ralf

image

The coverage bins for each CoverPoint along with the cross for the various CoverPoints can also be defined in the specification as shown below:

image

This would translate to the following RALF:

image

Now, the next step after RALF generation would be to generate the RAL Model from the IDS generated RALF.

RAL MODEL AND RTL GENERATION FROM RALF:

The IDS generated RALF can be used with the Synopsys ‘ralgen’ to generate the RAL  (VMM or UVM) model as well as the RTL.

The RAL model can be generated using the following command:

image
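The actual command appeared as an image in the original post; a hedged sketch of a typical invocation (using the Stopwatch block from the running example, with the RALF file name assumed) would be:

% ralgen -l sv -t stopwatch stopwatch.ralf
% ralgen -uvm -l sv -t stopwatch stopwatch.ralf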

If you specify -uvm in the first ralgen invocation above, a UVM register model is generated instead.

COMPILATION AND REPORT GENERATION:

Once the RTL and the RAL model are generated using the ‘ralgen’, the complete model can be compiled and simulated in the VMM environment using VCS.

To compile the model use the following command on the command line:

vcs -R +plusarg_save -sverilog -o "simv1" -ntb_opts rvm+dtm +incdir+<directories to search for `defines> <files to be compiled> +define+RAL_COVERAGE

The compilation and simulation generates the simulation database which is used for the generation of the coverage reports.

Coverage reports can be generated in various forms, but the most concise is a graphical view showing all the coverage at a glance. For this, a Tcl script, “ivs_simif.tcl”, reads the simulation database and generates a text-based report when the following command is executed:

% ivs_simif.tcl -in simv.vdb -svg

Before running the above command, set the environment variable “IDS_SIM_DIR”; the text reports are generated at this location, and it also tells IDS where to look for the simulation data file.

A detailed graphical view of the report can be generated from IDS with the help of this text report. To generate the graphical report in the form of “scalable vector graphics” (SVG) select the “SVG” output from the IDS config and regenerate.

Another way to generate the SVG is to run IDS in batch mode, using the IDS-XML or the Doc/Docx specification of the model as the input, with the following command:

% idsbatch <IDS_generated_XML or doc/docx specification> -out “svg” -dir output_directory

Coverage Reports

IDesignSpec generates two types of reports from the input database.

They are:

1. Field_vals report

2. Reg_bits report

Field_vals report:

Field_vals report gives the graphical view of the field_vals coverage and the address coverage of the various registers and their respective fields.

The amount of coverage for a field (CoverPoint) is depicted by the level of green color in the field, while that for the complete register (CoverGroup) is shown by the color of the name of the register.

The address coverage for the individual register (CoverPoint) is shown by the color of the address of the register (green if addressed; black if not addressed), while that of the entire block (CoverGroup) is shown by the color of the name of the block.

The coloring scheme for all the CoverGroups (i.e., the register name in the case of field_vals coverage and the block name in the case of address coverage) is:

1. If the overall coverage is greater than or equal to 80% then the name appears in GREEN color

2. If the coverage is greater than 70% but less than 80% then it appears in YELLOW

3. For coverage less than 70%, the name appears in RED

Figure 1 shows the field_vals and address coverage.

image

Figure: Closed loop register verification using RALF and IDS

The above sample gives the following coverage information:

a. Two registers, T and resetvalue, are not addressed out of a total of 9 registers. Thus the overall coverage of the block falls in the range >70% and <80%, which is depicted by the color of Stopwatch (the name of the block).

b. All the fields of the registers are filled with some amount of green, which indicates the amount of coverage. For example, field T1 of register arr is covered 100% and is therefore completely filled, while FLD4 of register X is covered only about 10%. The exact coverage value can be obtained by hovering over the field; a tooltip shows the exact number.

c. The color of the name of the register shows the overall coverage of the whole register; for example, X appears in red because its coverage is less than 70%.

Reg_bits report:

Reg_bits report gives the detailed graphical view of the reg_bits coverage and address coverage.

Address coverage in the reg_bits report is shown in the same way as in the field_vals report. Reg_bits coverage has 4 components:

1. Written as 1

2. Read as 1

3. Written as 0

4. Read as 0

Each of the 4 components is allocated a specific region inside a bit. If that component of the coverage is hit, the corresponding region is shown in green; otherwise it is blank. The overall coverage of the entire register is shown by the color of the register name, as in the field_vals report.

image

The above sample report shows that “Read as 1” is fully covered for the ‘resetvalue’ register, while the other read/write components have not been hit completely.

Thus, in this article we described the various coverage models for registers and how to generate the RALF coverage model automatically with minimum effort. An intuitive visualization of the register coverage data eases the effort of deciphering coverage reports from lengthy simulation log files. This type of closed-loop register verification ensures better coverage and high-quality results in less time. Hope you found this useful. Do share your feedback, and let me know if you want any additional details to get the maximum benefit from this flow.

Posted in Automation, Coverage, Metrics, Organization, Register Abstraction Model with RAL, Tools & 3rd Party interfaces, Verification Planning & Management | 11 Comments »

Extending Hierarchical Options in VMM to work with all data types

Posted by Amit Sharma on 2nd September 2011

Abhisek Verma, CAE, Synopsys

Tyler Bennet, Senior Application Consultant, Synopsys

Traditionally, to pass a custom data type like a struct or a virtual interface using vmm_opts, it is recommended to wrap it in a class and then use the set/get_obj/get_object_obj on the same. This approach has been explained in another blog here.  But wouldn’t you prefer to have the same usage for these data types as the simple use model you have for integers, strings and objects?  This blog describes how to create a simple helper package around vmm_opts that uses parameterization to pass user-defined types. It will work with any user-defined type that can be assigned with a simple “=”, including virtual interfaces.

Such a package can be created as follows:

STEP1:: Create the parameterized wrapper class inside the package

image

The above vmm_opts_p class is used to encapsulate any custom data type which it takes as a parameter “t”.

STEP2:: Define the ‘get’ methods inside the package.

Analogous to vmm_opts::get_obj()/get_object_obj(), we define get_type and get_object_type. These static functions allow the user to get an option of a non-standard type. The only restriction is that the datatype must work with the assignment operator. Also note that since this uses vmm_opts::get_obj, these options cannot be set via the command-line or options file.

image

STEP3:: Define the ‘set’ methods inside the package.

Similarly, analogous to vmm_opts::set_object(), the custom package needs to declare set_type. This static function allows the user to set an option of a non-standard type.

image
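Since the code in the original post is embedded as images, here is a minimal sketch of what such a helper package could look like. It is patterned on the use model below; the class internals, the vmm_object constructor call, and the exact vmm_opts::set_object()/get_object_obj() signatures are assumptions that may need adjusting for your VMM version:

// Assumes the VMM library is compiled so that vmm_opts and vmm_object are visible here.
package vmm_opts_p_pkg;

   class vmm_opts_p #(type t = int) extends vmm_object;
      t val;   // the wrapped user-defined value

      function new(string name = "vmm_opts_p");
         super.new(null, name);   // assumption: vmm_object::new(parent, name)
      endfunction

      // 'set' side: wrap the value and hand it to vmm_opts::set_object()
      static function void set_type(string name, t val, vmm_object root = null);
         vmm_opts_p #(t) w = new();
         w.val = val;
         vmm_opts::set_object(name, w, root);
      endfunction

      // 'get' side: retrieve the wrapper via vmm_opts::get_object_obj() and unwrap it
      static function t get_object_type(output bit is_set, input vmm_object caller,
                                        input string name, input t dflt,
                                        input string doc = "", input bit verbose = 0);
         vmm_object obj;
         vmm_opts_p #(t) w;
         obj = vmm_opts::get_object_obj(is_set, caller, name, null, doc, verbose);
         if (obj != null && $cast(w, obj))
            return w.val;
         return dflt;
      endfunction
   endclass

endpackage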

USE-MODEL

The above package can be imported and used to set/get virtual interfaces as follows :-

vmm_opts_p#(virtual dut_if)::set_type("@BAR", top.intf, null); // set the virtual interface of type dut_if

tb_intf = vmm_opts_p#(virtual dut_if)::get_object_type(is_set, this, "BAR", null, "SET testbench interface", 0); // get the virtual interface of type dut_if, set by the above operation

The following template example shows the usage of the package in complete detail in the context of passing virtual interfaces:

1. Define the interface, Your DUT

image

2. Instantiate the DUT, Interface and make the connections

image

3.  Leverage the Hierarchical options and the package in your Testbench

image

So, there you go. Whether you are using your own user-defined types, structs, or queues, you can go ahead and use this package and have your TB components communicate and pass data structures elegantly and efficiently.

Posted in Communication, Configuration, Customization, Organization | No Comments »

Automatic generation of Register Model for VMM using IDesignSpec

Posted by Amit Sharma on 5th August 2011

Nitin Ahuja, Verification Engineer, Agnisys Technology Pvt Ltd

Generating a register model by hand can take up a lot of time in the design process and may introduce serious bugs. On the other hand, generating the register model with a register model generator such as IDesignSpecTM reduces the coding effort and produces better code by avoiding those bugs in the first place, making the process more efficient and significantly reducing time to market.

A register model generator helps in the following ways:

1. Correct code from the start: being automatically generated, the register model code avoids the manual coding errors that creep in when it is written by hand.

2. When the register specification changes, it is easy to modify the spec and regenerate the code in no time.

3. Hardware, software, industry-standard specifications, and verification code are all generated from a single source specification.

IDesignSpecTM (IDS) is capable of generating the RTL as well as verification code such as VMM (RALF) from a register specification defined in Word, Excel, OpenOffice, or IDS-XML.

Getting Started

A simple register can be defined inside a block in IDesignSpecTM as:

The above specification is translated into the following RALF code by IDS.

image

By convention, ralgen generates backdoor access for every register whose hdl_path is mentioned in the RALF file. Thus, special properties of a register such as hdl_path and coverage can be specified inside the IDS specification itself and are appropriately translated into the RALF file.

The properties can be defined as below:

For Block:

image

As for the block, hdl_path, coverage, or any other such property can also be specified on other IDS elements, such as a register or a field.

For register/field:

clip_image002

OR

clip_image004

Note: Coverage property can take up the following three possible values:

1. ON/on: This enables all the coverage types, i.e., address coverage for blocks and memories, and REG_BITS and FIELD_VALS coverage for registers and fields.

2. OFF/off: By default all coverage is off. This option is useful only when coverage has been turned ON at the top level of the hierarchy or on the parent; to turn off coverage for a particular register or field, specify ‘coverage=off’ on that element. The coverage setting of that specific child is then the inverse of its parent’s.

3. abf: Any combination of these three characters can be used to turn ON particular types of coverage. The characters stand for:

· a : address coverage

· b : reg_bits

· f : field_vals

For example, to turn on the reg_bits and field_vals coverage, specify:

“coverage=bf”.

In addition, there are a few more properties that can be specified in a similar way. Some of them are:

1. bins: various bins for the coverpoints can be specified using this property.

Syntax: {bins=”bin_name = {bin} ; <bin_name>…”}

2. constraint : constraints can also be specified for the register or field or for any element.

Syntax : {constraint=”constraint_name {constraint};<constraint_name> …”}

3. [vmm]<code>[/vmm]: This tag gives users the ability to specify their own piece of SystemVerilog code in any element.

4. cross: crosses between the coverpoints of a register can be specified using the cross property with the syntax:

Syntax: {cross = “coverpoint_1 <{label label_name}>;< coverpoint_2>…”}

Different types of registers in IDS:

1. Register Arrays:

Register arrays in RALF can be defined in IDS using register groups. A register array of size ‘n’ is defined by placing a register inside a register group whose repeat count equals the size of the array (n).

For example, a register array named “reg_array” with a size of 10 can be defined in IDS as follows:

clip_image006

The above specification is translated into the following VMM code by IDS:

clip_image008

2. Memory:

Similar to register arrays, memories can also be defined in IDS using register groups. The only difference between a memory and a register array definition is that for a memory the ‘external’ property is set to “true”. The size of the memory is calculated as (End_Address – Start_Address) * Repeat_Count.

As an example, a memory named “RAM” can be defined in IDS as follows:

clip_image010

The above memory specification is translated into the following VMM code:

clip_image012

3. Regfile:

A regfile in RALF is specified in IDS using a register group containing multiple registers (> 1).

One such regfile with 3 registers, repeated 16 times, is shown below:

clip_image014

The following is the IDS-generated VMM code for the above regfile:

clip_image016

RAL MODEL AND RTL GENERATION FROM RALF:

The IDS generated RALF can be used with the Synopsys Ralgen to generate the RAL model as well as the RTL.

To generate the RAL model use the following command:

clip_image018

And for the RTL generation use the following command:

clip_image020

SUMMARY:

It is beneficial to generate the RALF using a register model generator such as IDesignSpecTM, as it produces correct code and reduces time and effort. When the register specification changes, it lets users regenerate the code in no time.

Note:

We will extend this automation further in the next article, where we will cover details about how you can “close the loop” on register verification. The “Closed Loop Register Verification” article will be available on VMM Central soon. Meanwhile, if you have any questions or comments, you can reach me at nitin[at]agnisys[dot]com.

Posted in Automation, Organization, Register Abstraction Model with RAL, Tools & 3rd Party interfaces, VMM infrastructure | 20 Comments »

Building & Configuring Coverage Model – VMM Style – Part-I

Posted by paragg on 24th June 2011

Parag Goel, Senior Corporate Application Engineer, Synopsys

“To minimize wasted effort, coverage is used as a guide for directing verification resources by identifying tested and untested portions of the design.”

- IEEE Standard for SystemVerilog (IEEE Std. 1800-2009)

Configurability and reusability are the buzzwords in chip verification, and they are enabled to a large extent by present-day verification methodologies. Through a set of blogs, I plan to show how we can create configurable coverage models in VMM-based environments. Given that AMBA AXI is one of the most commonly used protocols in the industry for communication among SoC peripherals, I chose an AXI-based framework for my case study.

The idea here is to create a configurable coverage model leveraging some of the base classes provided in the methodology, so that it is completely reusable as we move from block to system level or across projects. Once we enable that, we can move the coverage model inside the sub-environment modeled by vmm_group or vmm_subenv, which are the units of reuse.

Picture1

Primary Requirements of Configuration Control:

Two important requirements must be met to make the coverage model part of the reusable components:

1. Ability to enable/disable the coverage model whenever required.

2. Ability to turn ON/OFF different subgroups at the desired granularity. For example, a user may not want Error Coverage to be enabled except under specific circumstances.

To meet the above requirements, we make use of the VMM Global and Hierarchical Configurations.

Through the vmm_opts base classes, VMM provides a mechanism to control the configuration parameters of a verification environment. This can be done in a hierarchical as well as in a global manner. These options are summarized below:

Picture6

In the environment, coverage_enable is set to 0 by default, i.e., coverage is disabled.

coverage_enable = vmm_opts::get_int("coverage_enable", 0);

Now, the user can enable the coverage via either of the two mechanisms.

1. From user code using vmm_opts.

The basic rule is that you need to ‘set’ the option *before* the ‘get’ is invoked, during the time when the components are constructed. As a general recommendation, for structural configuration the build phase is the most appropriate place.
function void axi_test::build_ph();
   // Enable coverage.
   vmm_opts::set_int("@%*:axi_subenv:enable_coverage", 1);
endfunction

2. From command line or external option file. The option is specified using the command-line +vmm_name or +vmm_opts+name.
./simv +vmm_opts+enable_coverage=1@axi_env.axi_subenv

The command line supersedes the option set within code as shown in 1.

Users can also specify options for specific instances or hierarchically using regular expressions.

Picture3

Now let’s look at the typical classification of a coverage model.

From the perspective of the AXI protocol, we can look at the following 4 sub-sections:

Transaction coverage: coverage definition on the user-controlled parameters usually defined in the transaction class & controlled through sequences.

Error coverage: coverage definition on the pre-defined error injection scenarios

Protocol coverage (AXI handshake coverage): this is protocol specific. For AXI, it mainly covers the handshake signals, i.e., READY and VALID, on all 5 channels.

Flow coverage: this is again protocol specific; for AXI it covers features like outstanding transactions, interleaving, write data before write address, etc.

clip_image001[11]

At this point, let’s look at how these different sub-groups within the complete coverage model can be enabled or disabled. Once the coverage configuration class is built and passed on to the main coverage model, we need fine-grained control to enable/disable individual coverage models. The code below shows how the user can control all the coverage models in the build phase of the main coverage class.

Here too, we can see how vmm_opts is used to meet the requirement of controlling individual parameters.

vmm_opts::set_int("@%*:disable_transaction_coverage", 0);
vmm_opts::set_int("@%*:disable_error_coverage", 0);
vmm_opts::set_int("@%*:disable_axi_handshake_coverage", 0);
vmm_opts::set_int("@%*:disable_flow_coverage", 0);
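For illustration, here is a minimal sketch of what the consuming side could look like; the class, covergroup, and attribute names are invented, and only the vmm_opts calls mirror the usage shown above:

class axi_trans_coverage;

   bit disable_transaction_coverage;

   // Illustrative transaction covergroup
   covergroup trans_cg with function sample (bit [1:0] burst_type, bit [3:0] burst_len);
      cp_burst_type : coverpoint burst_type;
      cp_burst_len  : coverpoint burst_len;
      x_type_len    : cross cp_burst_type, cp_burst_len;
   endgroup

   function new();
      // Read the option set via vmm_opts::set_int() above (or overridden from the
      // command line); a non-zero value disables this sub-group.
      disable_transaction_coverage = vmm_opts::get_int("disable_transaction_coverage", 0);
      if (!disable_transaction_coverage)
         trans_cg = new();   // construct the covergroup only when enabled
   endfunction

   function void sample(bit [1:0] burst_type, bit [3:0] burst_len);
      if (!disable_transaction_coverage)
         trans_cg.sample(burst_type, burst_len);
   endfunction

endclass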

In my next blog, I will show how hierarchical VMM configurations are used to dynamically pass signal-level and other configuration-related information to the coverage model. We shall also discuss the use of the VMM TLM features towards the goal of a configurable coverage model. Stay tuned!

Posted in Configuration, Coverage, Metrics, Organization, Reuse, SystemVerilog, VMM, VMM infrastructure | No Comments »

Using VMM template Generator to ramp up your testbench development

Posted by Amit Sharma on 25th October 2010

Amit Sharma, Synopsys
‘vmmgen’, the template generator for creating robust, extensible VMM-compliant environments, has been available with VMM for a long time and was upgraded significantly with VMM 1.2. Though the primary purpose of ‘vmmgen’ is to help minimize the VIP and environment development cycle by providing detailed templates for VMM-compliant verification environments, a lot of folks also use it to quickly understand how different VMM base classes can be used in different contexts. This works because the templates use a rich set of the latest VMM features, ensuring the appropriate base classes and their features are picked up optimally.

Its interactive interface presents the available features and options, so users can pick the modes most relevant to their requirements. It also lets them supply their own templates, providing a rich layer of customization. Based on the need, one can generate individual templates for different verification components, or a complete verification environment that comes with a ‘Makefile’ and an intuitive directory structure, propelling them on their way to catching the first set of bugs in their DUTs. I am sure all of you know where to pick up ‘vmmgen’ from: it is available in the <VMM_HOME>/Shared/bin area or in $VCS_HOME/bin.

Some of the rich set of features available now includes:

• Template Options:

– Complete environment generation

– Individual templates generation

• Options to create Explicitly phased environments or Implicitly phased environment or to mix Implicitly phased components and Explicitly phased components

• Usage of VMM Shorthand macros

• Creating RAL based environment, and providing Multiplexed domain support if required

• Hooking up VMM Performance Analyzer at the appropriate interfaces

• Hooking up the DS Scoreboard at the relevant interfaces (with options to chose from a range of integration options, e.g. : through callbacks, through TLM2.0 analysis ports, connect directly through to transactors, channels or notifications)

• Ability to hook up different generators (atomic, scenario, Multistream generators) at different interfaces

• Creating a scenario library and Multistream scenario creation

• Multi-driver generator support for different kinds of transactions in the same environment

• Factory support for transactions, scenarios and multi stream scenarios. Sample factory testcase which can explain the usage of transaction override from a testcase.

• ‘RTL config’ support for drivers and receivers.

• Various types of unidirectional and bi-directional TLM connections between generator and driver.

• Analysis ports/exports OR parameterized notify observers to broadcast the information from monitor to scoreboard and coverage collectors.

• Multi test concatenation support and management to run the tests

• Creating portable Interface wrapper object, and setting up interface connections to the testbench components using vmm_opts::set_object/get_object_obj

• Creating a Generic slave component

• Option to use default names or user provided names for different components

As you can see, the above list is quite comprehensive, and yet it is not exhaustive; there are many more features in vmmgen.

There are multiple usage flavors as well. In the default mode, the user is taken through detailed choices/options while creating and connecting the different components of the verification environment. However, some folks might want to use ‘vmmgen’ within their own wrapper script or environment, and for them there are options to generate the environments by providing all required options on the command line or through a configuration file. Some of these switches include:

-SE [y/n] : Generates a complete environment with sub-environments

-RAL [y/n] : Create RAL based verification environments

-RTL [y/n] : Generates RTL configuration for the environment

-ENV <name>, -TR <name> : Provide the names for the environment class and transaction classes. Multiple transaction class names can be provided as well:

vmmgen -l sv -TR tr1+tr2

-cfg_file <file_name> : Option to provide a configuration file for the options

There is an option to generate an environment quickly by taking the user through the minimum number of questions (-q).

Additionally, the user can provide his or her own templates through the -L <template directory> option.
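For example, a few batch-mode invocations might look like the lines below; the switch spellings come from the list above, while the environment name, transaction name, configuration file, and template directory are made-up placeholders:

% vmmgen -l sv -q
% vmmgen -l sv -SE y -RAL y -ENV my_env -TR my_tr
% vmmgen -l sv -cfg_file vmmgen_options.cfg
% vmmgen -l sv -L ./my_templates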

As far as individual template generation goes, here is the complete list for reference:

image

I am sure a lot of you have already been using ‘vmmgen’. For those who haven’t, I encourage you to try out its different options. You will find it immensely useful: it not only helps you create verification components and environments quickly, but also ensures they are optimal and appropriate for your requirements.

Posted in Automation, Coding Style, Customization, Modeling Transactions, Organization, Performance Analyzer, Scoreboarding, Tools & 3rd Party interfaces, VMM infrastructure | No Comments »

Vaporware, Slideware or Software?

Posted by Andrew Piziali on 27th August 2010

The Role of the Technical Marketing Engineer in Verification

by Andrew Piziali, Independent Consultant

In our previous blog posts on the subject of verification for designers we addressed the role of the architect, software engineer and system level designer. We now turn our attention to perhaps the least understood—and oftentimes most vilified—member of the design team, the technical marketing engineer. But, before we explain why, what is the role of the technical marketing engineer? After all, not all companies have such a position.

The technical marketing engineer is responsible for determining customer product requirements and ensuring that these requirements are satisfied in the delivered product. They typically build product prototypes—initially slideware and later rudimentary functional code—that are evaluated by the customer while refining requirements. With each iteration the customer and engineer come closer to understanding precisely what the product must do and the constraints under which it must operate. This role differs from other traditional marketing functions such as inbound and outbound communication. Given that such a position exists, how does the technical marketing engineer contribute to the functional verification of a design?

Functional Verification Process

Although there are many definitions of functional verification, my favorite is one I recorded at Convex Computer Corporation some twenty years ago: “Demonstrate that the intent of a design is preserved in its implementation.” It is short, colloquial and simple:

  • Demonstrate — Illustrate but not necessarily prove
  • Intent — What behaviors are desired?
  • Design — The various representations of the product before it is ultimately realized and shipped
  • Preserve — Prevent corruption, scrambling and omission
  • Implementation — Final realization of the product for the customer

The technical marketing engineer plays a key role in this process as demonstrated by one personal experience of mine in this role.

One company I worked for offered a product that assisted in producing clean, efficient, bug-free code through pre-compilation analysis. In this environment one could navigate through your code within its module (file) structure, as well as within its object structure. However, the programming language supported not only objects—language elements that are defined, inherited and instantiated–but also aspects—language elements that group object extensions into common concerns. I proposed adding an aspect browser to the product as a natural extension to its existing navigation facilities. The challenge was inferring the aspect structure of a program from its files and object structure because an aspect is not explicitly identified as such in the program.

Slide Presentation Model Use

I put together a slide presentation that illustrated the two existing navigation paradigms, as well as the proposed aspect navigation. The feature looked great from a user interface perspective, but could it be implemented? Since heuristics could be employed in identifying each aspect, I also illustrated each heuristic and its application by way of sample code and its structure. This animated slide presentation served as the first prototype demonstration of the aspect browser for the product. When reviewed with existing users, they were able to provide valuable feedback about the new feature and its utility and limitations. When subsequently referenced by the programmer implementing the feature, it served as a rough executable specification.

Returning to the vilified technical marketing engineer, why are some poor souls subject to this criticism? More often than not, the marketing engineer promised more capability, performance or features than could be delivered by the developers. It is easy to “Powerpoint” a feature that cannot be implemented so the marketing engineer must walk far enough down the implementation path to understand what is feasible. If they do, they will likely avoid this charge and remain a perceived asset by the design team. Moreover, their employer will retain a reputation for delivering quality “product-ware,” not vaporware or slideware!

Posted in Organization, Reuse, Verification Planning & Management | No Comments »

Fantasy Cache: The Role of the System Level Designer in Verification

Posted by Andrew Piziali on 12th July 2010

Andrew Piziali, Independent Consultant

As is usually the case, staying ahead of the appetite of a high performance processor with the memory technology of the day was a major challenge. This processor consumed instructions and data far faster than current PC memory systems could supply. Fortunately, spatial and temporal locality–the tendency for memory accesses to cluster near common addresses and around the same time–were on our side. These could be exploited by a cache that would present a sufficiently fast memory interface to the processor while dealing with the sluggish PC memory system behind the scenes. However, this cache would require a three level, on-chip memory hierarchy that had never been seen before in a microprocessor. Could it be done?

clip_image002

The system level designer responsible for the cache design–let’s call him “Ambrose”–managed to meet the performance requirements, yet with an exceedingly complex cache design. It performed flawlessly as a C model running application address traces, rarely stalling the processor on memory accesses. Yet, when its RTL incarnation was unleashed to the verification engineers, it stumbled … and stumbled badly. Each time a bug was found and fixed, performance took a hit while another bug was soon exposed. Before long we finally had a working cache but it unfortunately starved the processor. Coupled with the processor not making its clock frequency target and schedule slips, this product never got beyond the prototype stage, after burning through $35 million and 200 man-years of labor. Ouch! What can we learn about the role of the system level designer from this experience?

The system level designer faces the challenge of evaluating all of the product requirements, choosing implementation trade-offs that necessarily arise. The architectural requirements of a processor cache include block size, hit time, miss penalty, access time, transfer time, miss rate and cache size. It shares physical requirements with other blocks such as area, power and cycle time. However, of particular interest to us are its verification requirements, such as simplicity, limited state space and determinism.

The cache must be as simple as possible while meeting all of its other requirements. Simplicity translates into a shorter verification cycle because the specification is less likely to be misinterpreted (fewer bugs), fewer boundary conditions to be explored (smaller search space and smaller coverage models), fewer simulation cycles required for coverage closure and fewer properties to be proven. A limited state space also leads to a smaller search space and coverage model. Determinism means that from the same initial conditions and given the same cycle-by-cycle input the response of the cache is always identical from one simulation run to the next. Needless to say, this makes it far easier to isolate a bug than an ephemeral glitch that cannot be produced on demand. These all add up to a cost savings in functional verification.

Ambrose, while skilled in processor cache design, was wholly unfamiliar with the design-for-verification requirements we just discussed. The net result was a groundbreaking, novel, high-performance three level cache that could not be implemented.

Posted in Coverage, Metrics, Organization, Verification Planning & Management | 2 Comments »

Verification For the Rest of Us

Posted by Andrew Piziali on 29th March 2010

Andrew Piziali, independent consultant
Jim Bondi, DMTS, Texas Instruments

Functional verification engineers—also known as DV engineers—often think quite highly of themselves. Having mastered both hardware and software design, and having learned each new design from top to bottom with an understanding exceeded by none but the architects, we can see why they might end up with an inflated ego. Yet, responsibility for verification of the design is not theirs alone and sometimes not theirs at all!

In this next series of blog posts I am going to direct your attention to the role various members of a design team play in the verification process. Each will be co-authored by someone contributing to their design in the role under discussion. It is not uncommon these days for a small design team to lack any dedicated verification engineers.  Hence, the designers become responsible for the functional verification process embedded in, yet operating in parallel to, the design process.  What does that overall process look like?[1]

  1. Specification and Modeling
  2. Hardware/Software Partitioning
  3. Pre-Partitioning Analysis
  4. Partitioning
  5. Post-Partitioning Analysis and Debug
  6. Post-Partitioning Verification
  7. Hardware and Software Implementation
  8. Implementation Verification

Specification and modeling is responsible for exploring nascent design spaces and capturing original intent. The difficult choices of how to partition the design implementation between hardware and software components come next, followed by analysis of each partitioning choice and debugging of these high-level models. Our first opportunity for functional verification follows post-partitioning analysis and debug, where abstract algorithm errors are discovered and eliminated. Hardware and software implementation is self-explanatory, lastly leading to implementation verification, answering the question “Has the design intent been preserved in the implementation?”

This kick-off post in this series addresses the role of the architect in verification. My co-author, Jim Bondi, has been a key architect on numerous design projects at Texas Instruments ranging from embedded military systems to Pentium-class x86 processors to ultra low power DSP platforms for medical applications. The architect, whether a single individual or several, is responsible for specifying a solution to customer product requirements that captures the initial design intent of the solution. The resultant specification is iteratively refined during the first three stages of design.

In addition to authoring the original design intent, the second role of the architect in the verification process is preserving that intent and contributing to its precise conveyance throughout the remainder of the design process.[2] This begins during verification planning, where the scope of the verification problem is quantified and its solution specified. Verification planning itself begins with specification analysis, where the features of the design are identified and quantified. The complexity of most designs requires a top down analysis of the specification—first, because of its size (>20 pages) and second, because behavioral requirements must be distilled. This analysis is performed in a series of brainstorming meetings wherein each of the stakeholders of the design contribute: architect, system engineer, verification engineer, software engineer, hardware designer and project manager.

A brainstorming session is guided by someone familiar with the planning process. The architect describes each design feature and—through Q&A—its attributes are illuminated. These attributes and their associated values—registers, data paths, control logic, opcodes—are initially recorded in an Ishikawa diagram (also known as a “fish bone diagram”) for organizational purposes and then transferred to a coverage model design table as they are refined. Ultimately, each coverage model is implemented using a high level verification language (HVL), as part of the verification environment, and used to measure verification progress.

The seasoned architect knows that, even though modeling is mentioned only in the first design step above, it is most effective not only when started early but also continued iteratively throughout most of the design process. It is quite true that system modeling should be started early—as soon as possible and ideally before any RTL is written—when modeling can have its biggest impact on the design and offer its biggest return on model investment. In this early stage, modeling can best help tune the nascent architecture to the application, with the biggest resultant possible improvements in system performance and power. When used right, models are developed first and then actually drive the development of RTL in later design steps. This is contrary to the all-too-common tendency to jump prematurely to RTL representations of the design, and then perhaps use modeling mostly thereafter in attempts to help check and improve the RTL. Used in this fashion, the ability of modeling to improve the design is limited. More experienced architects have learned that modeling is best applied “up front” because it is here, before the design is cast in RTL, that up to 75% of the overall possible improvements in system performance and power can be realized. The architect knows that a design process that jumps prematurely to RTL leaves much of this potential performance and power improvement on the table.

The seasoned architect also knows that, even though started early, modeling should be continued iteratively throughout most of the remainder of the design process. They know that, in fact, a set of models is needed to best support the design process. The first is typically an untimed functional model that becomes the design’s “golden” reference model, effectively an executable specification. As the design process continues, other models are derived from it, with, for example, timing added to derive performance models and power estimates added to derive power-aware models. In later stages, after modeling has been used “up front” to tune the architecture, optimal RTL can actually be derived from the models. Wherever verification is applied in the design process, whether before or after RTL appears, the models, as a natural form of executable golden reference, can support, or even drive, the verification process. Thus, in design flows that use modeling best, system modeling begins up front and is continued iteratively throughout most of the overall design process.

Indeed, the architect plays a crucial role in the overall design process and in the functional verification of the design derived from that process. They are heavily involved in all design phases affecting and involving verification, from authoring the initial design intent to ensuring its preservation throughout the rest of the design process.  The seasoned architect leverages a special set of system models to help perform this crucial role. Despite the verification engineer’s well-deserved reputation as a jack-of-all-trades, they cannot verify the design alone and may not even be represented in a small design team.  The architect is the “intent glue” that holds the design together until it is complete!

——————-
[1] ESL Design and Verification, Bailey, Martin and Piziali, Elsevier, 2007
[2] Functional Verification Coverage Measurement and Analysis, Piziali, Springer, 2004

Posted in Modeling, Organization, Verification Planning & Management | No Comments »

Paved With Good Intentions: Examining Lost Design Intent

Posted by Adiel Khan on 7th December 2009


Adiel Khan, Synopsys CAE

Andrew Piziali, Independent Consultant

Remember the kick-off of your last new project, when the road to tape-out was paved with good intentions? Architects were brainstorming novel solutions to customer requirements? Designers were kicking around better implementation solutions? Software engineers were building fresh new SCM repositories? And you, the verification engineer, were excitedly studying the new design and planning its verification? Throughout all of this early excitement, all sorts of good intentions were revealed. Addressing the life story of each intention would make a good short story, or even a novel! Since we don’t have room for that, let’s just focus on the design intentions.

Design intent, how the architect intended the DUV (design under verification) to behave, originates in the mind’s eye of the architect. It is the planned behavior of the final implementation of the DUV. Between original intent and implementation the DUV progresses through a number of representations, typically referred to as models, wherein intent is unfortunately lost. However, intent’s first physical representation, following conception, is its natural language specification.

We may represent the space of design behaviors as the following Venn diagram:[1]

Each circle—Design Intent (AEGH), Specification (BEFH) and Implementation (CFGH)—represents a set of behaviors. AEGH represents the set of design requirements, as conveyed by the customer. BEFH represents the intent captured in the specification(s). CFGH represents intent implemented in the final design. The region outside the three sets (D) represents unintended, unspecified and unimplemented behavior. The design team’s objective is to bring the three circular sets into coincidence, leaving just two regions: H (intended, specified and implemented) and D. By following a single design intention from set Design Intent to Specification to Implementation, we learn a great deal about how design intent is lost.

An idea originally conceived appears in set AEGH (Design Intent) and, if successfully captured in the specification, is recorded in set EH. However, if the intent is miscommunicated or not even recorded and lost, it remains in set A. There is the possibility that a designer learns of this intent, even though it is not recorded in the specification, and recaptures it in the design. In that case we find it in set G: intended, implemented, but unspecified.

Specified intent is recorded in set BEFH. Once the intent is captured in the specification it must be read, comprehended and implemented by the designer. If successful, it makes it to the center of our diagram, set H: intended, specified and implemented. Success! Unfortunately some specified requirements are missed or misinterpreted and remain unimplemented, absent from the design as represented by set E: intended, specified but unimplemented. Sometimes intent is introduced into the specification that was never desired by the customer and (fortunately) never implemented, such as set B: unintended, unimplemented, yet specified. Unfortunately, there is also unintended behavior that is specified and implemented as in set F. This is often the result of gratuitous embellishment or feature creep.

Finally, implemented intent is represented by set CFGH, all behaviors exhibited by the design. Those in set G arrived as originally intended but were never specified. Those in set H arrived as intended and specified. Those in set F were introduced into the specification, although unintended, and implemented. Behaviors in set C were implemented, although never intended nor specified! In order to illustrate the utility of this diagram, let’s consider a specific example of lost design intent.

We can think of each part of the development process as building a model. Teams write documentation as a specification model, such as a Microsoft Word document. System architects build an abstract algorithmic system model that captures the specification model requirements, using SystemC or C++. Designers build a synthesizable RTL model in Verilog or VHDL. Verification engineers build an abstract problem space functional model in SystemVerilog, SVA and/or e.

If any member of the team fails to implement an element of the upstream, more abstract model correctly (or at all), design intent is lost. The verification engineer can recover this lost design intent by working with all members of the team and giving the team observability into all models.

Consider an example where the system model (ex. C++) uses a finite state machine (FSM) to control the data path of the CPU whereas the specification model (ex. MS Word) implies how the data path should be controlled. This could be a specification ambiguity that the designer ignores, implementing the data path controller in an alternate manner, which he considers quite efficient.

Some time later the system architect may tell the software engineers that they do not need to implement exclusive locking because the data path FSM will handle concurrent writes to the same address (WRITE0, WRITE1). However, the designer’s implementation is not based on the system model FSM but rather on the specification model. Therefore, exclusive locking is required to prevent data corruption during concurrent writes. We need to ask: How can the verification engineer recover this lost design intent by observing all models?

Synopsys illustrates a complete solution to the problem in a free verification planning seminar that dives deep into this topic. However, for the purposes of this blog we offer a simplified example, using the design and implementation of a coverage model:

  1. Analyze the specification model along with the system model
  2. Identify the particular feature (ex. mutex FSM) and write its semantic description
  3. Determine what attributes contribute to the feature behavior
  4. Identify the attribute values required for the feature
  5. Determine when the feature is active and when the attributes need to be sampled and correlated

This top-level design leads to:

Feature: CPU_datapath_Ctrl
Description: Record the state transitions of the CPU data path controller
Attribute: Data path controller state variable
Attribute values: IDLE, START, WRITE0, WRITE1
Sample: Whenever the state variable is written

The verification engineer can now implement a very simple coverage model to explicitly observe the system model, ensuring entry to all states:

typedef enum logic [1:0] {IDLE, START, WRITE0, WRITE1} st;

covergroup cntlr_cov (string m_name) with function sample (st m_state);
  option.per_instance = 1;
  option.name = m_name;

  model_state: coverpoint m_state {
    bins t0 = (IDLE   => IDLE);
    bins t1 = (IDLE   => START);
    bins t2 = (START  => IDLE);
    bins t3 = (START  => WRITE0);
    bins t4 = (WRITE0 => WRITE1);
    bins t5 = (WRITE1 => IDLE);
    bins bad_trans = default sequence;  // catch any transition not listed above
  }
endgroup

planner

The verification engineer can link the feature “CPU_datapath_Ctrl” in his verification plan to the cntlr_cov covergroup. Running the system model with the verification environment and RTL implementation will reveal that bin “t4” is never visited; hence state transition WRITE0 to WRITE1 is never observed. The team can review the verification plan to determine if the intended FSM controller should be improved in the design to conform to all design intent.

Although there are many other subsets of the design intent diagram we could examine, it is clear that a design intention may be lost through many recording and translation processes. By understanding this diagram and its application, we become aware of where intent may be lost or corrupted and ensure that our good intentions are ultimately realized.


[1] The design intent diagram is more fully examined in the context of a coverage-driven verification flow in chapter two of the book Functional Verification Coverage Measurement and Analysis (Piziali, 2004, Springer, ISBN 978-0-387-73992-2).

Posted in Coverage, Metrics, Organization, Verification Planning & Management | No Comments »

Say What? Another Look At Specification Analysis

Posted by Shankar Hemmady on 26th October 2009

Andrew Piziali, Independent Consultant

Have you ever been reviewing a specification and asked yourself “Say what?!” Then you’re not alone! One of the most challenging tasks we face as verification engineers is understanding design specifications. What does the architect mean when she writes “The conflabulator remains inoperative until triggered by a neural vortex?” Answering that question is part of specification analysis, the first step in planning the verification of a design, the subsequent steps being coverage model design, verification environment implementation, and verification process execution.

The specifications for a design—DUV, or “design-under-verification” for our purposes—typically include a functional specification and a design specification. The functional specification captures top-level, opaque-box, implementation-independent requirements. Conversely, the design specification captures internal, clear-box, implementation-dependent behaviors. Each is responsible for conveying the architect’s design intent at a particular abstraction level to the design and verification teams. Our job is to ultimately comprehend these specifications in order to understand and quantify the scope of the verification problem and specify its solution. This comprehension comes through analyzing the specifications.

In order to understand the scope of the verification problem, the features of the DUV and their relationships must be identified. Hence, specification analysis is sometimes referred to as feature extraction. The features are described in the specifications, ready to be mined through our analysis efforts. Once extracted and organized in the verification plan, we are able to proceed to quantifying the scope and complexity of each by designing its associated coverage model. How do we tackle the analysis of specifications ranging from tens to hundreds of pages? The answer depends upon the size of the specification and availability of machine-guided analysis tools. For relatively small specifications, less than a hundred pages or so, bottom-up analysis ought to be employed. Specifications ranging from a hundred pages and beyond require top-down analysis.

Bottom-up analysis is the process of walking through each page of a specification: section-by-section, paragraph-by-paragraph, and sentence-by-sentence. As we examine the text, tables, and figures, we ask ourselves: What particular function of the DUV is addressed? What behavioral requirements are imposed? What verification requirements are implied? Is this feature amenable to formal verification, constrained random simulation, or a hybrid of the two? If formal is applicable, how might I formulate a declarative statement of the required behavior? What input, output, and I/O coverage is needed? If this feature is more amenable to constrained random simulation, what are the stimulus, checking, and coverage requirements?

Each behavioral requirement is a feature to be placed in the verification plan, in either the functional or design requirements sections, as illustrated below:

1 Introduction ………………………………… what does this document contain?

2 Functional Requirements …………….. opaque box design behaviors

2.1 Functional Interfaces ……………. external interface behaviors

2.2 Core Features ………………………. external design-independent behaviors

3 Design Requirements ………………….. clear box design behaviors

3.1 Design Interfaces …………………. internal interface behaviors

3.2 Design Cores ……………………….. internal block requirements

4 Verification Views ……………………….. time-based or functional feature groups

5 Verification Environment Design …. functional specification of the verification environment

5.1 Coverage …………………………….. coverage aspect functional specification

5.2 Checkers …………………………….. checking aspect functional specification

5.3 Stimuli ………………………………… stimulus aspect functional specification

5.4 Monitors ……………………………… data monitor functional specifications

5.5 Properties ……………………………. property functional specifications

Bottom-up analysis is amenable to machine-guided analysis, wherein an application presents the specification to the user. For each section of the spec, perhaps for each sentence, the tool asks whether it describes a feature and what its property, stimulus, checking, and coverage requirements are, then records this information so that it may be linked to the corresponding section of the verification plan. This facilitates keeping the specifications and the verification plan synchronized. The verification plan is incrementally constructed within a verification plan integrated development environment (IDE).

The alternative to bottom-up analysis is analyzing a specification from the top down, required for large specifications. Your objective here is to bridge the intent abstraction gap between the detail of the specification and the more abstract, incrementally written verification plan. Behavioral requirements are distilled into concise feature descriptions, quantified in their associated coverage models. Top-down analysis is conducted in brainstorming sessions wherein representatives from all stakeholders in the DUV contribute. These include the systems engineer, verification manager, verification engineer, hardware designer and software engineer. After the verification planning methodology is explained to all participants, each engineer contributing design intent explains their part of the design. The design is explored through a question-and-answer process, using a whiteboard for illustration. In order to facilitate a fresh examination of the design component, no pre-written materials should be used.

Whether bottom-up or top-down analysis is used, each design feature should be a design behavioral requirement, stating the intended behavior of the DUV. Both the data and temporal behaviors of each feature should be recorded. In addition to recording the name of each feature, the behavior should be summarized in a one- or two-sentence semantic description. Optionally, design and verification responsibilities, technical references, schedule information, and verification labor estimates may be recorded. If the verification plan is written in Microsoft Word, Excel, or HVP1 plain text, it may drive the downstream verification flow, serving as a design-specific verification user interface.

The next time you ask “Say what?!,” make sure you are methodically analyzing the specification using one of the above approaches, and don’t hesitate to contact the specification’s author directly. Many bugs discovered during these exchanges are the least expensive of all!

1Hierarchical Verification Planning language

Posted in Coverage, Metrics, Organization, Verification Planning & Management | No Comments »

Make Your Coverage Count!

Posted by Shankar Hemmady on 31st August 2009

Andrew Piziali, Independent Consultant

You are using coverage, along with other metrics, to measure verification progress as part of your verification methodology.1 2 Yet, lurking in the flow are the seeds of a bug escape that will blindside you. How so?

Imagine you are responsible for verifying an in-order, three-way x86 superscalar processor in the last millennium, before the advent of constrained random generation. Since your management wouldn’t spring for an instruction set architecture (ISA) test generator, you hired a team of new college grads to write thousands of assembly language tests. Within the allocated development time, the tests were written, they were functionally graded and achieved 100% coverage, and they all finally passed. Yeah! But, not so fast …

When first silicon was returned and Windows was booted on the processor, it crashed. The diagnosis revealed a variant of one of the branch instructions was misbehaving. (This sounds better than “It had a bug escape.”) How could this be? We reviewed our branch instruction coverage models and confirmed they were complete. Since all of the branch tests passed, how could this bug slip through?

Further analysis revealed this branch instruction was absent from the set of branch tests, yet it was used in one of the floating point tests. Since the floating point test was aimed at verifying the floating point operation of the processor, we were not surprised to find it was insensitive to a failure of this branch instruction. In other words, as long as the floating point operations verified by the test behaved properly, the test passed, independent of the behavior of the branch instruction. From a coverage perspective, the complete ISA test suite was functionally graded as a whole rather than each sub-suite being graded according to its own functional requirements. Hence, we recorded full coverage.

The problem was now clear: the checking and coverage aspects of each test were not coupled, so coverage recording was not conditioned on passing checked behavior. If we had either (1) functionally graded each test suite only for the functionality it was verifying or (2) conditionally recorded each coverage point based upon a corresponding check passing, this bug would not have slipped through. Using either approach, we would have discovered this particular branch variant was absent from the branch test suite. In the first case, that coverage point would have remained empty for the branch test suite. Likewise, in the second case we would not have recorded the coverage point because no branch instruction check would have been activated and passed.

Returning to the 21st century, the lesson we can take away from this experience is that coverage—functional, code and assertion—is suspect unless, during analysis, you confirm that for each coverage point a corresponding checker was active and passed. From the perspective of implementing your constrained random verification environment, each checker should emit an event (or some other notification), synchronous with the coverage recording operation, indicating it was active and the functional behavior was correct. The coverage code should condition recording each coverage point on that event. If you are using a tool like VMM Planner to analyze coverage, you may use its “-feature” switch to restrict the annotation of feature-specific parts of your verification plan to the coverage database(s) of that feature’s test suite.
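
As a minimal sketch of this coupling (the transaction, checker, and coverage classes below are hypothetical, and the checker notifies the coverage object with a direct method call rather than an event), coverage is recorded only after the corresponding check passes:

class branch_txn;
  rand logic [7:0]  opcode;          // branch opcode variant
  logic      [31:0] actual_target;   // target produced by the DUT
endclass

class branch_coverage;
  branch_txn m_txn;

  covergroup branch_cg;
    cp_opcode: coverpoint m_txn.opcode;  // only opcodes whose check passed land here
  endgroup

  function new();
    branch_cg = new();
  endfunction

  // Called by the checker only when the check passes
  function void record(branch_txn txn);
    m_txn = txn;
    branch_cg.sample();                  // coverage recording conditioned on correct behavior
  endfunction
endclass

class branch_checker;
  branch_coverage cov;                   // coverage subscriber

  function void check(branch_txn txn, logic [31:0] expected_target);
    if (txn.actual_target === expected_target)
      cov.record(txn);                   // notify coverage: behavior was correct
    else
      $error("branch target mismatch for opcode %0h", txn.opcode);
  endfunction
endclass

With this arrangement, a branch variant that is never checked also never appears in cp_opcode, so the hole in the branch test suite described above would have been visible in the coverage report.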

You might ask if functional qualification3 would address this problem. Functional qualification answers the question “Will my verification environment detect, propagate and report each functional bug?” As such, it provides insight into how well your environment detects bugs but says nothing about the quality of the coverage aspect of the environment. I will address this topic in a future post if there is sufficient interest.

Remember, make your coverage count by coupling checking with coverage!

1Metric Driven Verification, 2008, Hamilton Carter and Shankar Hemmady, Springer

2Functional Verification Coverage Measurement and Analysis, 2008, Andrew Piziali, Springer

3“Functional Qualification,” “EDA Design Line,” June 2007, Mark Hampton

Posted in Coverage, Metrics, Organization, Verification Planning & Management | No Comments »

Give Me Some Space, Man!

Posted by Shankar Hemmady on 11th August 2009

Andrew Piziali, Independent Consultant

A question I am often asked is “When and where should I use functional coverage and code coverage?” Since the purpose of coverage is to quantify verification progress, the answer lies in understanding the coverage spaces implemented by these two kinds of coverage.

A coverage space represents a subset of the behavior of your DUV (design under verification), usually of a particular feature. It is defined by a set of metrics, each a parameter or attribute of the feature quantified by the space. For example, the coverage space for the ADD instruction of a processor may be defined by the product of ranges of the operands’ absolute values (remember “addends”?) and their respective signs. In order to understand the four kinds of coverage metrics, we need to discuss the coverage spaces from which they are constructed.
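
As a minimal sketch of such an ADD coverage space (the operand width and the magnitude ranges here are illustrative assumptions, not taken from any particular instruction set specification):

covergroup add_space_cg with function sample (logic signed [31:0] a,
                                              logic signed [31:0] b);
  cp_a_mag: coverpoint ((a < 0) ? -a : a) {   // |a| bucketed into magnitude ranges
    bins zero  = {0};
    bins small = {[1:255]};
    bins large = {[256:$]};
  }
  cp_b_mag: coverpoint ((b < 0) ? -b : b) {   // |b| bucketed into magnitude ranges
    bins zero  = {0};
    bins small = {[1:255]};
    bins large = {[256:$]};
  }
  cp_signs: coverpoint {a < 0, b < 0};        // sign combinations of the addends
  add_space: cross cp_a_mag, cp_b_mag, cp_signs;  // the coverage space itself
endgroup

An instance of this covergroup would be sampled with the two operands of every ADD instruction; the bins of the cross enumerate the space.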

A coverage metric is determined by its source—implementation or specification—and its author—explicit or implicit. An implementation metric is derived from the implementation of the DUV or verification environment. Hence, the width of a data bus is an implementation metric, as is the module defining a class. Conversely, a specification metric is derived from the DUV functional or design specification. A good example is the registers and their characteristics defined in a specification.

The complementary coverage metric classification is determined by whether the metric is explicitly chosen by an engineer or implicit in the metric source. Hence, an explicit metric is chosen or invented by the verification engineer in order to quantify some aspect of a DUV feature. For example, processor execution mode might be chosen for a coverage metric. Alternatively, an implicit metric is inherent in the source from which the metric value is recorded. This means things like module name, line number and Boolean expression term are implicit metrics from a DUV or verification environment implementation. Likewise, chapter, paragraph, line, table and figure are implicit metrics from a natural language document, such as a specification.

Combining the two metric kinds—source and author—leads to four kinds of coverage metrics, each defining a corresponding kind of coverage space:

  1. Implicit implementation metric

  2. Implicit specification metric

  3. Explicit implementation metric

  4. Explicit specification metric

An example of an implicit implementation metric is a VHDL statement number. The register types and numbers defined by a functional specification are an implicit specification metric. Instruction decode interval is an explicit implementation metric. Finally, key pressed-to-character displayed latency is an example of an explicit specification coverage metric.
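
To make the explicit specification case concrete, here is a minimal sketch built around the processor execution mode mentioned earlier; the mode names are assumptions chosen for illustration:

typedef enum logic [1:0] {USER, SUPERVISOR, HYPERVISOR, DEBUG} exec_mode_t;

covergroup exec_mode_cg with function sample (exec_mode_t mode);
  option.per_instance = 1;
  cp_mode: coverpoint mode;   // one automatic bin per specified execution mode
endgroup

An implicit implementation metric, by contrast, requires no such declaration: the code coverage tool derives statement, branch, and toggle metrics directly from the RTL without an engineer-authored model.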

Each metric kind may be used to define an associated kind of coverage space. The astute reader may also wonder about coverage spaces defined by a mix of the above metric kinds. If such a hybrid space more precisely quantifies the verification progress of a particular feature, use it! To the best of my knowledge, you’d have to design and implement this space in much the same way as any functional coverage space because no commercial tool I am aware of offers this kind of coverage.

With an understanding of the kinds of coverage spaces, we can now classify functional and code coverage and figure out where they ought to be used. Functional coverage, making use of explicit coverage metrics—independent of their source—defines either an explicit implementation space or an explicit specification space. Code coverage tools provide a plethora of built-in implicit metric choices; hence, code coverage defines implicit implementation spaces. Where you want to measure verification progress relative to the DUV functional specification, where features are categorized and defined, functional coverage is the appropriate tool. Where you want to make sure all implemented features of the DUV have been exercised, you should use code coverage. Lastly, when your code coverage tool does not provide sufficient insight, resolution, or fidelity into the behavior of the DUV implementation, functional coverage is required to complement the implicit spaces it does offer.

Functional coverage can tell you the DUV is incomplete, missing logic required to implement a feature or a particular corner case, whereas code coverage cannot. On the other hand, code coverage can easily identify unexercised RTL, while functional coverage cannot. Functional coverage requires a substantial up-front investment for specification analysis, design and implementation yet relieves the engineer of much back-end analysis. Code coverage, on the other hand, may be enabled at the flip of a switch but usually requires a lot of back-end analysis to sift the false positives from the meaningful coverage holes. Both are required—and complementary—but their deployment must be aligned with the stage of the project and DUV stability.

Some smart alec will point out that you can’t measure verification progress using coverage alone, and they’re right! Throughout this discussion I assume each feature, with its associated metrics, has corresponding checkers that pipe up when the DUV behavior differs from the specified behavior. (I’ll leave the topic of concurrent behavior recording and checking for another day.)

If you’d like to learn much more about designing, implementing, using and analyzing coverage, the following books delve much more deeply into verification planning, management and coverage model design:

Posted in Coverage, Metrics, Organization, Verification Planning & Management | No Comments »