Verification Martial Arts: A Verification Methodology Blog

The right name at the right space: using ‘namespace’ in VMM to set virtual interfaces

Posted by Amit Sharma on September 7th, 2011

Abhisek Verma, CAE, Synopsys

A ‘namespace’ is an abstract container or environment created to hold a logical grouping of unique identifiers or names. The same identifier can thus be defined independently in multiple namespaces, and the meaning associated with an identifier in one namespace may or may not match that of the same identifier in another. In VMM, ‘namespaces’ are used to group or tag different VMM objects, resources and transactions with meaningful names for the different components across the testbench environment. This allows the user to identify them and access them efficiently. For example, one benefit of this approach is that it relieves the user from making cross-module references to access the various resources. This can be seen in the context of accessing the interfaces associated with a driver or a monitor in the environment, and goes a long way in making the code more scalable.

Accessing and assigning interface handles to a particular transactor can be done in various ways in VMM, as discussed in the following blogs: Transactors and Virtual Interface and Extending Hierarchical Options in VMM to work with all data types. In addition to these, one can leverage ‘namespaces’ in VMM to achieve this fairly elegantly. The idea here is to put the Virtual Interface instances in the appropriate namespace in the object hierarchy to be retrieved by the verification environment wherever required through simple APIs as shown in the following steps:

STEP 1: Define a parameterized class extending from vmm_object to act as a wrapper for the interface handle.

STEP 2: Instantiate the interface wrapper in the top-level module and put it in the “VIF” namespace.

STEP 3: In the environment, retrieve the interface wrapper by querying the “VIF” namespace, and use the retrieved handle to set the interface in the transactor.
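In sketch form, the three steps above could look like the following. This is a hypothetical rendering, not the article's lost listings: the names (vif_wrapper, dut_if, intf0, drv) and the exact vmm_object namespace calls (set_object_name() with a namespace argument, and a matching find-by-name lookup) are assumptions based on the description.

```systemverilog
// STEP 1: parameterized wrapper around an interface handle
class vif_wrapper #(type IF = int) extends vmm_object;
   IF v_if;  // the virtual interface being wrapped
   function new(string name, IF v_if, vmm_object parent = null);
      super.new(parent, name);
      this.v_if = v_if;
   endfunction
endclass

// STEP 2 (in the top-level module): wrap the interface instance and
// register the wrapper under the "VIF" namespace
vif_wrapper #(virtual dut_if) w = new("dut_vif", intf0);
// assumed namespace API: tag the object name in the "VIF" space
// w.set_object_name("dut_vif", "VIF");

// STEP 3 (in the environment): look the wrapper up by name in the
// "VIF" namespace and assign the handle into the transactor
// vmm_object o = <find-by-name lookup in the "VIF" namespace>;
// if (o != null && $cast(w, o)) drv.intf_inst = w.v_if;
```

The pattern is the point here: the top module and the environment never reference each other directly; they only agree on the namespace and the object name.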

The example below demonstrates the implementation of the above.

The Interface and DUT templates:


Step 1: Parameterized wrapper class for the interface:


The Testbench Top:


The Program Block:


Posted in Configuration, Structural Components, VMM infrastructure | Comments Off

Extending Hierarchical Options in VMM to work with all data types

Posted by Amit Sharma on September 2nd, 2011

Abhisek Verma, CAE, Synopsys

Tyler Bennet, Senior Application Consultant, Synopsys

Traditionally, to pass a custom data type like a struct or a virtual interface using vmm_opts, it is recommended to wrap it in a class and then use the set/get_obj/get_object_obj on the same. This approach has been explained in another blog here.  But wouldn’t you prefer to have the same usage for these data types as the simple use model you have for integers, strings and objects?  This blog describes how to create a simple helper package around vmm_opts that uses parameterization to pass user-defined types. It will work with any user-defined type that can be assigned with a simple “=”, including virtual interfaces.

Such a package can be created as follows:

STEP 1: Create the parameterized wrapper class inside the package.


The above vmm_opts_p class is used to encapsulate any custom data type which it takes as a parameter “t”.

STEP 2: Define the ‘get’ methods inside the package.

Analogous to vmm_opts::get_obj()/get_object_obj(), we define get_type and get_object_type. These static functions allow the user to get an option of a non-standard type. The only restriction is that the datatype must work with the assignment operator. Also note that since this uses vmm_opts::get_obj, these options cannot be set via the command-line or options file.


STEP 3: Define the ‘set’ methods inside the package.

Similarly, analogous to vmm_opts::set_object(), the custom package needs to declare set_type. This static function allows the user to set an option of a non-standard type.
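Putting the three steps together, the package body might look something like this. It is a minimal sketch: the vmm_opts::set_object/get_object_obj argument lists are inferred from their usage in this post and may differ slightly from the shipping VMM signatures.

```systemverilog
// STEP 1: wrapper encapsulating any '='-assignable type 't'
class vmm_opts_p #(type t = int) extends vmm_object;

   t opt;  // the payload, e.g. a virtual interface or a struct

   function new(string name = "vmm_opts_p", vmm_object parent = null);
      super.new(parent, name);
   endfunction

   // STEP 2: analogous to vmm_opts::get_object_obj()
   static function t get_object_type(output bit is_set,
                                     input vmm_object parent,
                                     input string name,
                                     input t dflt,
                                     input string doc = "",
                                     input bit verbose = 0);
      vmm_object      o;
      vmm_opts_p #(t) w;
      o = vmm_opts::get_object_obj(is_set, parent, name, doc);
      if (is_set && $cast(w, o)) return w.opt;  // unwrap the payload
      return dflt;                              // option was never set
   endfunction

   // STEP 3: analogous to vmm_opts::set_object()
   static function void set_type(input string name,
                                 input t value,
                                 input vmm_object root = null);
      vmm_opts_p #(t) w = new(name);
      w.opt = value;              // plain '=' works for any such type
      vmm_opts::set_object(name, w, root);
   endfunction

endclass
```

Because the wrapper is created and unwrapped entirely inside the static methods, the caller only ever sees the bare type, which is what gives set_type/get_object_type the same feel as the built-in integer/string accessors.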



The above package can be imported and used to set/get virtual interfaces as follows:

vmm_opts_p#(virtual dut_if)::set_type("@BAR", top.intf, null); // to set the virtual interface of type dut_if

tb_intf = vmm_opts_p#(virtual dut_if)::get_object_type(is_set, this, "BAR", null, "SET testbench interface", 0); // to get the virtual interface of type dut_if, set by the above operation

The following template example shows the usage of the package in complete detail, in the context of passing virtual interfaces.

1. Define the interface and your DUT


2. Instantiate the DUT and interface, and make the connections


3. Leverage the hierarchical options and the package in your testbench


So, there you go! Now, whether you are using your own user-defined types, structs or queues, you can go ahead and use this package, and thus have your TB components communicate and pass data structures elegantly and efficiently.

Posted in Communication, Configuration, Customization, Organization | Comments Off

Using the VMM Performance Analyzer in a UVM Environment

Posted by Amit Sharma on August 23rd, 2011

As a generic VMM package, the Performance Analyzer (PAN) is not based on, nor requires, specific shared resources, transactions or hardware structures. It can be used to collect statistical coverage metrics relating to the utilization of a specific shared resource, and helps to measure and analyze many different performance aspects of a design. UVM does not have a performance analyzer as part of its base class library as of now. Given that the collection, tracking and analysis of performance metrics has become a key checkpoint in today’s verification, there is a lot of value in integrating the VMM Performance Analyzer in a UVM testbench. To demonstrate this, we will use both VMM and UVM base classes in the same simulation.

Performance is analyzed based on user-defined atomic resource utilization called ‘tenures’. A tenure refers to any activity on a shared resource with a well-defined starting and ending point. A tenure is uniquely identified by an automatically-assigned identifier. We take the XBUS example in  $VCS_HOME/doc/examples/uvm_1.0/simple/xbus as a demo vehicle for the UVM environment.

Step 1: Defining data collection

Data is collected for each resource in a separate instance of the “vmm_perf_analyzer” class. These instances should be allocated in the build phase of the top level environment.

For example:


Step 2: Defining the tenure and enabling data collection

There must be one instance of the “vmm_perf_tenure” class for each operation performed on the shared resource. Tenures are associated with the instance of the “vmm_perf_analyzer” class that corresponds to the resource being operated on. In the case of the XBUS example, let’s say we want to measure transaction throughput (i.e., for the XBUS transfers). This is how we associate a tenure with the XBUS transaction. To denote the starting and ending of the tenure, we define two additional events in the XBUS Master Driver (‘started’, ‘ended’). ‘started’ is triggered when the driver obtains a transaction from the sequencer, and ‘ended’ once the transaction has been driven on the bus and the driver is about to call seq_item_port.item_done(rsp). When ‘started’ is triggered, a callback is invoked to get the PAN to start collecting statistics. Here is the relevant code.


Now, the Performance Analyzer works on classes extended from vmm_data and uses the base-class functionality for starting/stopping these tenures. Hence, the callback task triggered at the appropriate points has to convert the UVM transactions to corresponding VMM ones. This is how it is done.
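As a hedged sketch of Steps 2.a/2.b together, the callback could look like the following. The class, event-hook and helper names (xbus_perf_cb, started/ended, xbus_vmm_transfer, copy_from_uvm) are assumptions, and the vmm_perf_analyzer start/end-tenure calls are paraphrased from the PAN description above rather than copied from the article's lost listing.

```systemverilog
// Callback hooked into the XBUS master driver: starts a PAN tenure when the
// driver pulls an item from the sequencer, ends it just before item_done(),
// converting the UVM transfer to its VMM counterpart along the way.
class xbus_perf_cb extends uvm_callback;
   vmm_perf_analyzer perf_an;  // PAN instance allocated in the env build phase
   vmm_perf_tenure   tenure;   // tenure for the in-flight XBUS transfer

   function new(vmm_perf_analyzer perf_an);
      super.new("xbus_perf_cb");
      this.perf_an = perf_an;
   endfunction

   // invoked from the driver when its 'started' event fires
   virtual function void started(xbus_transfer tr);
      xbus_vmm_transfer vtr = new();   // Step 2.a: vmm_data counterpart
      vtr.copy_from_uvm(tr);           // assumed UVM -> VMM conversion helper
      tenure = new();                  // fresh tenure for this transfer
      perf_an.start_tenure(tenure);    // PAN begins collecting statistics
   endfunction

   // invoked from the driver when its 'ended' event fires
   virtual function void ended();
      perf_an.end_tenure(tenure);      // PAN closes out this tenure
   endfunction
endclass
```

The driver only has to fire the two events; everything PAN-specific stays inside the callback, which keeps the UVM driver code free of VMM dependencies.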

Step 2.a: Creating the VMM counterpart of the XBUS Transfer Class


Step 2.b: Using the UVM Callback for starting/stopping data collection and calling the UVM -> VMM conversion routines appropriately.


The callback class needs to be associated with the driver as follows in the top testbench (xbus_demo_tb):


Step 3: Generating the reports

In the report_ph of xbus_demo_tb, save and write out the appropriate databases.


Step 4: Run the simulation and analyze the reports for possible inefficiencies.

Use -ntb_opts uvm-1.0+rvm +define+UVM_ON_TOP with VCS

Include it in the file list along with the new files. The following table shows the text report at the end of the simulation.


You can generate the SQL databases as well, and typically you would be doing this across multiple simulations. Once you have done that, you can create custom queries to get the desired information out of the SQL database across your regression runs. You can also analyze the results and generate the required graphs in Excel. Please see the following post: Analyzing results of the Performance Analyzer with Excel

So there you go: the VMM Performance Analyzer can fit in any verification environment you have. So make sure that you leverage this package to make the RTL-level performance measurements that are needed to validate micro-architectural and architectural assumptions, as well as to tune the RTL for optimal performance.

Posted in Coverage, Metrics, Interoperability, Optimization/Performance, Performance Analyzer, VMM infrastructure, Verification Planning & Management | 6 Comments »

The ‘user’ in RALF : get ralgen to generate ‘your’ code

Posted by S. Varun on August 11th, 2011

A lot of times, registers in a device are associated with configuration fields that may not exist physically inside the DUT. For example, there could be a register field meant for enabling the scrambler, a field that would need to be set to “1” only when the protocol is PCIE. As this protocol mode is not a physical field, one cannot write it as a memory-mapped register. For such cases ralgen reserves a “user area” wherein users can write SystemVerilog-compatible code which is copied as-is into the RAL model. This gives users the flexibility to add any variables/constraints that are not necessarily physical registers/fields while maintaining the automated flow. It ensures that the additional parameters are part of the ‘spec’, in this case the RALF from which the model generation happens, and thus creates a more seamless sharing of variables across the register model and the testbench.

Let’s look at how it works. If I have a requirement to randomize the register values based on additional testbench parameters, this is what can be done:

block mdio {
   bytes 2;
   register mdio_reg_1_0 @'h0000000 {
      field bit_31_0 {
         bits 32;
         access rw;
         reset 'h00000000;
         constraint c_bit_31_0 {
            value inside {[0:15]};
         }
      }
   }
   user_code lang=SV {
      rand enum {PCIE,XAUI} protocol;
      constraint protocol_reg1 {
         if (protocol == PCIE) mdio_reg_1_0.bit_31_0.value == 16'hFF;
      }
   }
}
As shown above, the “user_code” RALF construct enables users to add their own code inside the generated RAL model. Note that this construct lets you weave in custom code without having to modify the generated code. It can also be used to generate custom coverage. In the context of the above example, the “protocol mode” will not be a coverpoint in the coverage generated by ralgen, as it is not a physical field in the DUT; the user can instead cover it in a separate covergroup added through “user_code”. The new RALF spec and the generated RAL model with the added coverage are shown below:

block mdio {
   bytes 2;
   register mdio_reg_1_0 @'h0000000 {
      field bit_15_0 {
         bits 16;
         access rw;
         reset 'h00000000;
         constraint c_bit_15_0 {
            value inside {[0:15]};
         }
      }
   }
   user_code lang=SV {
      rand enum {PCIE,XAUI} protocol;
      constraint protocol_reg1 {
         if (protocol == XAUI)
            mdio_reg_1_0.bit_15_0.value == 16'hff;
      }
   }
   user_code lang=sv {
      covergroup protocol_mode;
         mode : coverpoint protocol {
            bins pcie = {PCIE};
         }
         mdio_reg : coverpoint mdio_reg_1_0.bit_15_0.value {
            bins set = {'hff};
         }
         cross mode, mdio_reg;
      endgroup
      protocol_mode = new();
   }
}

Figure: Generated model snippet

User code gets embedded in the generated RAL classes, but there is no way to embed user code in the “sample” method that exists inside each block. So for any user-embedded covergroups, the sampling will need to be done manually within the user testbench using <covergroup>.sample(), perhaps inside the post_write callback of registers/fields. The construct could also be used to embed additional data members and user-defined methods, for example a sampling method that samples all the newly defined covergroups. Thus “user_code” as a RALF construct is a very handy solution for embedding user code in the automated model generation flow.

Posted in Register Abstraction Model with RAL, Stimulus Generation | 1 Comment »

Automatic generation of Register Model for VMM using IDesignSpec

Posted by Amit Sharma on August 5th, 2011

Nitin Ahuja, Verification Engineer, Agnisys Technology Pvt Ltd

Generating a register model by hand can take up a lot of time in the design process and may result in serious bugs and inefficient code. On the other hand, generating the register model using a register model generator such as IDesignSpecTM reduces the coding effort and produces better code by avoiding those bugs in the first place, making the process more efficient and significantly reducing time to market.

A register model generator proves efficient in the following ways:

1. Error-free code from the start: being automatically generated, the register model code is free from human as well as logical errors.

2. When the register model specification changes, it is easy to modify the spec and regenerate the code in no time.

3. All kinds of hardware, software and industry-standard specifications, as well as verification code, can be generated from a single source specification.

IDesignSpecTM (IDS) is capable of generating all the RTL as well as verification code such as VMM (RALF) from a register specification defined in Word, Excel, OpenOffice or IDS-XML.

Getting Started

A simple register can be defined inside a block in IDesignSpecTM as:

The above specification is translated into the following RALF code by IDS.


By convention, ralgen generates backdoor access for every register whose hdl_path is mentioned in the RALF file. Special properties on a register, such as hdl_path and coverage, can thus be specified inside the IDS specification itself and will be appropriately translated into the RALF file.

The properties can be defined as below:

For Block:


As for the block, hdl_path, coverage or any other such property can be specified for other IDS elements, such as a register or field.

For register/field:



Note: The coverage property can take the following three possible values:

1. ON/on: This enables all the coverage types, i.e., address coverage for a block or memory, and REG_BITS and FIELD_VALS coverage for registers and fields.

2. OFF/off: By default, all coverage is off. This option is meaningful only when coverage has been turned ON at the top of the hierarchy or at a parent; to turn off coverage for a particular register or field, specify ‘coverage=off’ for it. The coverage for that specific child will then be the inverse of its parent’s.

3. abf: Any combination of these three characters can be used to turn ON particular types of coverage. The characters stand for:

· a : address coverage

· b : reg_bits

· f : field_vals

For example, to turn on the reg_bits and field_vals coverage, it can be specified as:


In addition to these properties, a few more can be specified in a similar way. Some of them are:

1. bins: various bins for the coverpoints can be specified using this property.

Syntax: {bins=”bin_name = {bin} ; <bin_name>…”}

2. constraint: constraints can also be specified for a register, a field or any other element.

Syntax : {constraint=”constraint_name {constraint};<constraint_name> …”}

3. [vmm]<code>[/vmm]: This tag gives users the ability to specify their own piece of SystemVerilog code in any element.

4. cross: a cross of the coverpoints of a register can be specified using the cross property, in the following syntax:

Syntax: {cross = “coverpoint_1 <{label label_name}>;< coverpoint_2>…”}

Different types of registers in IDS:

1. Register Arrays:

Register arrays in RALF can be defined in IDS using register groups. To define a register array of size ‘n’, place a register inside a regroup with a repeat count equal to the size of the array (n).

For example, a register array named “reg_array” with size equal to 10 can be defined in IDS as follows:


The above specification is translated into the following VMM code by IDS:


2. Memory:

Similar to register arrays, memories can also be defined in IDS using register groups. The only difference between the memory and register array definitions is that for a memory, external is set to “true”. The size of the memory is calculated as ((End_Address - Start_Address) * Repeat_Count).

As an example, a memory named “RAM” can be defined in IDS as follows:


The above memory specification is translated into the following VMM code:



A regfile in RALF can be specified in IDS using a register group containing multiple registers (> 1).

One such regfile with 3 registers, repeated 16 times is shown below:


Following is the IDS-generated VMM code for the above regfile:



The IDS-generated RALF can be used with Synopsys ralgen to generate the RAL model as well as the RTL.

To generate the RAL model use the following command:


And for the RTL generation use the following command:



It is beneficial to generate the RALF using the register model generator IDesignSpecTM, as it guarantees bug-free code and reduces time and effort. When the register model specification is modified, it enables users to regenerate the code in no time.


We will extend this automation further in the next article where we will cover details about how you can “close the loop” on register verification. The “Closed Loop Register Verification” article will be available on VMM Central soon. Meanwhile if you have any questions/comments you can reach me at nitin[at]agnisys[dot]com .

Posted in Automation, Organization, Register Abstraction Model with RAL, Tools & 3rd Party interfaces, VMM infrastructure | 1 Comment »

The One stop shop: get done with everything you need to do with your registers

Posted by Amit Sharma on July 14th, 2011

Ballori Bannerjee, Design Engineer, LSI India

Processes are created, refined and improved upon. The change in productivity, which starts with a big leap, subsequently slows down, and as the complexity of tasks increases, the existing processes can no longer scale up. This drives the next paradigm shift towards new processes and automation. As in all realms of technology, this is true in the context of the register development and validation flow as well. So, let’s look at how we changed our process to get the desired boost in productivity.

The following flowchart represents our legacy register design and validation process. This was a closed process and served us well initially, when the number of registers, their properties etc. were limited. However, with the complex chips that we are designing and validating today, does this scale up?


As an example, in a module that we are implementing, there are four thousand registers. Translating into number of fields, for 4000 32-bit registers we have 128,000 fields, with different hardware and software properties!

Coding the RTL with address decoding for 4000 registers, with fields having different properties is a week’s effort by a designer. Developing a re-usable randomized verification environment with tests like reset value check, read-write is another 2 weeks, at the least. Closure on bugs requires several feedbacks from verification to update design or document. So overall, there is at least a month’s effort plus maintenance overhead anytime the address mapping is modified or a register updated/added.

This flow is susceptible to errors where there could be disconnect between document, design, verification and software.

So, what do we do? We redefine the process! And this is what I will be talking about, our automated register design and verification (DV) flow which streamlines this process.


The flow starts with the designer modeling the registers using a high-level register description language. In our case, we use SystemRDL, and then leverage third-party tools to generate the various downstream components from the RDL file:

· RTL in Verilog/VHDL

· C/C++ code for firmware

· Documentation ( different formats)

· High level verification environment code (HVL) in VMM

This is shown below. The RDL file serves as a one-stop point for any register update required following a requirement change.


Automated Register DV Flow

Given that it is critical to create an efficient object-oriented abstraction layer to model registers and memories in a design under test, we exploit VMM RAL for the same. How do we generate the VMM RAL model? It is generated from RALF. Many third-party tools are available to generate RALF from various input formats, and we use one of them to generate RALF from SystemRDL.

Thus, a complete VMM compliant randomized, coverage driven register verification environment can be created by extending the flow such that:

i. Using a 3rd party tool, the verification component generated from SystemRDL is RALF, Synopsys’ Register Abstraction Layer File.

ii. RALF is passed through RALGEN, a Synopsys utility which converts the RALF information to a complete VMM based register verification environment. This includes automatic generation of pre-defined tests like reset value check, bit bash tests etc of registers and complete functional coverage model, which would have taken considerable staff-days of effort to write.

The flowchart below elucidates the process.


Adopting the automated flow, it took 2 days to write the RDL. The rest of the components were generated from this source. A small amount of manual effort may be required for items like back-door path definition, but it is minimal and a one-time effort. The overall benefits are much more than the number of staff days saved, and we see this as something which gives us perpetual returns. I am sure a lot of you are already bringing some amount of automation into your register design and verification setup, and if you aren’t, it’s time you do it :-)

While we are talking about abstraction and automation, let’s look at another aspect of register verification.

Multiple Interfaces/Views for a register

It is possible to have registers in today’s complex SoC designs which need to be connected to two or more different buses and accessed differently. The register address will be different for the different physical interfaces it is shared between. So, how do we model this?

This can be defined in SystemRDL by using a parent addrmap with the bridge property, which contains sub-addrmaps representing the different views.

For example:

addrmap dma_blk_bridge {
   bridge;                      // top-level address map
   reg commoncontrol_reg {
      shared;                   // register will be shared by multiple address maps
      field {
      } f1[32];
   };

   addrmap {                    // define the map for the AHB side of the bridge
      commoncontrol_reg cmn_ctl_ahb @0x0;   // at address=0
   } ahb;

   addrmap {                    // define the map for the AXI side of the bridge
      commoncontrol_reg cmn_ctl_axi @0x40;  // at address=0x40
   } axi;
};

The equivalent of the multiple-view addrmap in RALF is the domain.

This allows one definition of the shared register while allowing access to it from each domain, where the register address associated with each domain may be different. The following is the RALF with a domain implementation for the above RDL.

register commoncontrol_reg {
   field f1 {
      bits 32;
      access rw;
      reset 'h0;
   }
}

block dma_blk_bridge {
   domain ahb {
      bytes 4;
      register commoncontrol_reg = cmn_ctl_ahb @'h00;
   }
   domain axi {
      bytes 4;
      register commoncontrol_reg = cmn_ctl_axi @'h40;
   }
}

Each physical interface is a domain in RALF. Only blocks and systems have domains; registers live in the block. For access to a register from one interface/domain, RAL provides read/write methods which can be called with the domain name as an argument. This is shown below.

ral_model.STATUS.write(status, data, "pci");
ral_model.STATUS.write(status, data, "ahb");

This considerably simplifies the verification environment code for shared register accesses. For more on the same, you can look at: Shared Register Access in RAL through multiple physical interfaces

However, unfortunately, in our case the tools we used did not support multiple interfaces, and the automated flow created a RALF having effectively two or more top-level systems re-defining the registers. This can blow up the RALF file size as well as the verification environment code.

system dma_blk_bridge {
   bytes 4;
   block ahb (ahb) @0x0 {
      bytes 4;
      register cmn_ctl_ahb @0x0 {
         bytes 4;
         field cmn_ctl_ahb_f1 (cmn_ctl_ahb_f1) @0 {
            bits 32;
            access rw;
            reset 0x0;
         }
      }
   }
   block axi (axi) @0x0 {
      bytes 4;
      register cmn_ctl_axi @0x40 {
         bytes 4;
         field cmn_ctl_axi_f1 (cmn_ctl_axi_f1) @0 {
            bits 32;
            access rw;
            reset 0x0;
         }
      }
   }
}

Thus, as seen above, the tool generates two blocks, ‘ahb’ and ‘axi’, and re-defines the register in each block. For multiple shared registers, the resulting verification code will be much bigger than if domains had been used.

Also, without the domain-associated read/write methods shown above, accessing a shared register takes at least a few lines of code per register per domain/interface. This makes writing the test scenarios complicated and wordy.

Using ‘domain’ in RALF and VMM RAL makes shared register implementation and access in the verification environment easy. We hope to soon have our automated flow leverage this effectively.

If you are interested in going through more details about our automation setup and register verification experiences, you might want to look at:

Posted in Automation, Modeling, Register Abstraction Model with RAL, Tools & 3rd Party interfaces | 4 Comments »

Building & Configuring Coverage Model – VMM Style – Part-III

Posted by paragg on June 25th, 2011

Parag Goel, Senior Corporate Application Engineer, Synopsys

In the final blog of this coverage modeling with VMM series, we focus on error coverage. Negative scenario testing is an integral part of verification. But again, we have this question: have I covered all negative scenarios?

So it is important to ensure that the generic coverage model tracks all the error scenarios.

Let’s see how a specific mechanism provided in VMM, in the form of vmm_log_catcher, helps to track error coverage efficiently and effectively. The VMM log catcher is able to identify/catch a specific string in any of the messages issued through the VMM reporting mechanism.

Typically, the verification environment issues messages to STDOUT when the DUT responds to an error scenario. These messages can be ‘caught’ by the log catcher to update the appropriate coverage groups. Let’s see how this is done in detail.

The verification environment responds to each negative scenario by issuing a message with a unique text, specific to that error.

In the context of the AXI framework, we can introduce a wide range of error scenarios and test whether the DUT responds correctly or not. A few possible AXI error scenarios are listed below for your reference.


However, all the scenarios may not always be applicable, and hence configurability is required to enable only the required set of coverpoints tied to the relevant negative scenarios. Thus, we should have the same configurability for error coverage as I talked about in the earlier blogs.

Let’s see how we can catch the relevant responses and sample the appropriate covergroups.

As mentioned earlier, in the example below, we make use of the unique message issued as a result of a negative scenario.

This is how we use the VMM Log catcher.

1. The error coverage class is extended from the vmm_log_catcher VMM base class.

2. The vmm_log_catcher::caught() API is utilized to qualify the covergroup sampling.
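A minimal sketch of such a catcher follows. This is an assumption-laden illustration, not the article's lost listing: the class name, the covergroup contents and the exact caught() virtual-method signature are all hypothetical.

```systemverilog
// Error coverage class built on the VMM log catcher: every caught message
// that matches the installed filter bumps the error covergroup.
class axi_err_cov extends vmm_log_catcher;

   bit slverr_seen;  // sampled flag for the SLVERR scenario

   // tracks that the SLVERR write-response scenario was exercised
   covergroup slverr_cg;
      coverpoint slverr_seen { bins hit = {1}; }
   endgroup

   function new();
      slverr_cg = new();
   endfunction

   // invoked for every message this catcher is installed on
   virtual function void caught(vmm_log_msg msg);
      slverr_seen = 1;
      slverr_cg.sample();   // qualify the covergroup sampling here
   endfunction
endclass
```

The catcher would then be installed through the catch API shown below, passing the text "AXI_WRITE_RESPONSE_SLVERR" so that only the matching messages reach caught().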


In the code above, whenever a message with the text “AXI_WRITE_RESPONSE_SLVERR” is issued from anywhere in the verification environment, the ‘caught’ method is invoked, which in turn samples the appropriate covergroup. Additionally, you can specify more parameters in the catch API to restrict which ‘scenarios’ should be caught.


string name = "",
string inst = "",
bit recurse = 0,
int typs = ALL_TYPS,
int severity = ALL_SEVS,
string text = "");

The above API installs the specified message handler to catch any message of the specified type and severity, issued by the message service interface instances specified by the name and instance arguments, which contains the specified text. By default, this method catches all messages issued by this message service interface instance.

I hope this set of articles has been relevant and useful to you. I have made an attempt to leverage some of the built-in capabilities of the SystemVerilog language and the VMM base classes to target some of the challenges in creating configurable coverage models. These techniques can be improved further to make them more efficient and scalable. I look forward to hearing any inputs you have in this area.

Posted in Automation, Configuration, Coverage, Metrics, Reuse, Structural Components, VMM, VMM infrastructure | Comments Off

Building & Configuring Coverage Model – VMM Style – Part-II

Posted by paragg on June 25th, 2011

Parag Goel, Senior Corporate Application Engineer, Synopsys

In the previous post, we looked at how you can enable/disable the different types of coverage encapsulated in the coverage model wrapper class. In this post, let’s look at how we can easily create an infrastructure to pass different inputs to the wrapper class so as to configure the coverage collection per the user’s needs. The infrastructure ensures that these values percolate down to the sub-coverage model groups.

The following are some of the key inputs that need to be passed to the different coverage component classes:

1. SV Virtual Interfaces so that different signal activity can be accessed

2. The Transactions observed and collected by the physical level monitors

3. The ‘Configuration’ information


Let’s look at how we can easily pass the signal-level information to the coverage model.

Step I: Encapsulation of the interface in the class wrapper.

class intf_wrapper extends vmm_object;

   virtual axi_if v_if;

   function new (string name, virtual axi_if mst_if);
      super.new(null, name);
      this.v_if = mst_if;
   endfunction

endclass: intf_wrapper

Step II: In the top class/environment- Set this object using vmm_opts API.

class axi_env extends vmm_env;
intf_wrapper mc_intf;

function void build_ph();
mc_intf = new(“Master_Port”, tb_top.master_if_p0);
// Set the master port interface
vmm_opts::set_object(“VIP_MSTR:vip_mstr_port“, mc_intf, env);
endclass: axi_env

Step III: Connecting in the coverage class.

A. Get the object containing the interface in the coverage model class using vmm_opts.

assert($cast(this.mst_port_obj, vmm_opts::get_object_obj(is_set, this, "vip_mstr_port")));

B. Connect the local virtual interface to the one contained in the object.

this.cov_vif = mst_port_obj.v_if;

Now, the transaction objects collected by the monitor need to be passed to the coverage collector. This can be conveniently done in VMM using TLM communication, through the vmm_tlm_analysis_port, which establishes the communication between a subscriber and an observer.

class axi_transfer extends vmm_data;
   . . .
endclass

class axi_bus_monitor  extends  vmm_xactor;
   vmm_tlm_analysis_port#(axi_bus_monitor, axi_transfer)  m_ap;

   task collect_trans();
      . . .
      m_ap.write(trans);  // Writing to the analysis port
   endtask
endclass

class axi_coverage_model extends vmm_object;
   vmm_tlm_analysis_export #(axi_coverage_model, axi_transfer) m_export;

   function new (string inst, vmm_object parent = null);
      super.new(parent, inst);
      m_export = new(this, "m_export");
   endfunction

   function void write(int id, axi_transfer trans);
      // Sample the appropriate covergroups once the
      // transaction is received in the write() function
   endfunction
endclass



To set up the TLM Connections in the agent/environment, we need to do the following:

class axi_subenv extends vmm_group;

   // Instantiate and create the monitor and the coverage model
   axi_bus_monitor    mon;
   axi_coverage_model cov;
   . . .
   virtual function void build_ph();
      mon = new("mon", this);
      cov = new("cov", this);
   endfunction

   virtual function void connect_ph();
      // Bind the TLM ports via the VMM tlm_bind API
      mon.m_ap.tlm_bind(cov.m_export);
   endfunction
endclass


To make the Coverage Model truly configurable, we need to look at some of the other key requirements as well, at different levels of granularity. These can be summarized as the ability to do the following:

1. Enable/disable coverage collection for each covergroup defined. Every covergroup should be created only if the user wishes to do so, so there should be a configuration parameter that restricts the creation of the covergroup altogether. The same parameter should also be used to control the sampling of the covergroup.

2. The user must be able to configure the limits on the individual values being covered in the coverage model within a legal set of values. For example, for the transaction field BurstLength, the user should be able to tell the model which limits on this field to collect coverage for, within the legal range of ‘1’ to ‘16’ as per the AXI spec. Providing lower and upper limits for transaction parameters like burst size, burst length, address, etc. makes the model reusable. These limits should be modeled as variables which can be overridden dynamically.

3. The user should be able to control the number of bins to be created, for example in fields like address. The auto_bin_max option can be exploited to achieve this in case the user hasn’t explicitly defined bins.

4. The user must be able to control the number of hits for which a bin is considered covered. option.at_least can be used for this purpose, and its input can be a user-defined parameter.

5. The user should also have control over the coverage goal, i.e. when the coverage collector should report the covergroup as “covered” even though the coverage is not 100%. This can be achieved by using option.goal, where the goal is again a user-defined parameter.
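A covergroup incorporating requirements 1–5 might look like the following sketch. The configuration field names (`enable_trans_cg`, `bin_at_least`, etc.) are assumptions for illustration, not fields from the original environment:

```systemverilog
class axi_transfer_cov;
   coverage_cfg cfg;  // configuration object (fields here are assumed)

   covergroup trans_cg (int unsigned min_len, int unsigned max_len)
                       with function sample (axi_transfer tr);
      option.per_instance = 1;
      option.at_least     = cfg.bin_at_least;      // req. 4: hits per bin
      option.goal         = cfg.cov_goal;          // req. 5: coverage goal
      cp_burst_len: coverpoint tr.burst_length {
         bins legal[] = {[min_len:max_len]};       // req. 2: user-set limits
      }
      cp_addr: coverpoint tr.addr {
         option.auto_bin_max = cfg.addr_auto_bins; // req. 3: bin count
      }
   endgroup

   function new (coverage_cfg cfg);
      this.cfg = cfg;
      if (cfg.enable_trans_cg)                     // req. 1: create on demand
         trans_cg = new(cfg.min_burst_len, cfg.max_burst_len);
   endfunction

   function void sample_tr (axi_transfer tr);
      if (cfg.enable_trans_cg)                     // req. 1: guard sampling too
         trans_cg.sample(tr);
   endfunction
endclass
```

Note that the embedded covergroup is constructed inside the class constructor only when enabled, so a disabled covergroup costs nothing and never appears in the coverage report.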

All the parameters required to meet the above requirements can be encapsulated in a class (i.e. a coverage configuration class), and this can be set and retrieved in a fashion similar to the one described for setting & getting the interface wrapper class using the vmm_opts APIs.

class coverage_cfg extends vmm_object;
   int disable_wr_burst_len;
   . . .
   function new(vmm_object parent=null, string name);
      super.new(parent, name);
   endfunction
endclass

// Retrieving the configuration object in the coverage model:
coverage_cfg cfg;
bit is_set;
$cast(cfg, vmm_opts::get_object_obj(is_set, this, ...));

Wei Hua presents another cool mechanism for collecting these parameters using the vmm_notify mechanism in this earlier blog:

A Generic Functional Coverage Solution Based On vmm_notify

Hope you found this useful. I will be talking about how to track Error Coverage in my next blog, so stay tuned!

Posted in Communication, Configuration, Coverage, Metrics, Reuse, Structural Components, VMM, VMM infrastructure | Comments Off

Building & Configuring Coverage Model – VMM Style – Part-I

Posted by paragg on June 24th, 2011

Parag Goel, Senior Corporate Application Engineer, Synopsys

“To minimize wasted effort, coverage is used as a guide for directing verification resources by identifying tested and untested portions of the design.”

- IEEE Standard for SystemVerilog (IEEE Std. 1800-2009)

Configurability & reusability are the buzzwords in the verification of chips, and these are enabled to a big extent by the present-day verification methodologies. Through a set of blogs, I plan to show how we can create configurable coverage models in VMM-based environments. Given that AMBA AXI is one of the most commonly used protocols in the industry for communication amongst SoC peripherals, I chose an AXI-based framework for my case study.

The idea here is to create a configurable coverage model leveraging some of the base classes provided in the methodology so that we can make it completely reusable as we move from block level to system level, or as we move across projects. Once we enable that, we can move the coverage model inside the sub-environment modeled by vmm_group or vmm_subenv, which are the units of reuse.


Primary Requirements of Configuration Control:

Two important requirements that need to be met to ensure that the coverage model becomes part of the reusable components are:

1. Ability to enable/disable the coverage model whenever required.

2. Ability to turn ON/OFF different subgroups at the desired granularity. For example, a user may not always want the Error Coverage to be enabled, unless under specific circumstances.

To meet the above requirements, we make use of the VMM Global and Hierarchical Configurations.

Through the vmm_opts base classes, VMM provides a mechanism to control the configuration parameters of a verification environment. This can be done in a hierarchical as well as in a global manner. These options are summarized below:


In the environment, the coverage_enable is by default set to 0, i.e. disabled.

coverage_enable = vmm_opts::get_int("coverage_enable", 0);

Now, the user can enable the coverage via either of the two mechanisms.

1. From user code using vmm_opts.

The basic rule is that you need to ‘set’ an option *before* the ‘get’ is invoked, and at the time when the construction of the components takes place. As a general recommendation, for structural configuration, the build phase is the most appropriate place.
function void axi_test::build_ph();
   // Enable coverage
   vmm_opts::set_int("@%*:axi_subenv:enable_coverage", 1);
endfunction

2. From the command line or an external option file. The option is specified using the command-line +vmm_name or +vmm_opts+name.

An option set on the command line supersedes one set within the code as shown in 1.

Users can also specify options for specific instances, or hierarchically using regular expressions.


Now let’s look at the typical classification of a coverage model.

From the perspective of the AXI protocol, we can look at four sub-sections.

Transaction coverage: coverage definition on the user-controlled parameters usually defined in the transaction class & controlled through sequences.

Error coverage: coverage definition on the pre-defined error injection scenarios

Protocol coverage: This is protocol specific (AXI handshake coverage). In case of AXI, it is mainly coverage on the handshake signals, i.e. READY & VALID, on all five channels.

Flow coverage: This is again protocol specific, and for AXI it covers various features like outstanding accesses, interleaving, write data before write address, etc.


At this point, let’s look at how these different sub-groups within the complete coverage model can be enabled or disabled. Once the coverage configuration class is built and passed to the main coverage model, we need fine-grained control to enable/disable individual coverage models. The code below shows how the user can control all the coverage models in the build phase of the main coverage class.

Here too, we can see how vmm_opts is used to meet the requirement of controlling the individual parameters.

vmm_opts::set_int("@%*:disable_transaction_coverage", 0);
vmm_opts::set_int("@%*:disable_error_coverage", 0);
vmm_opts::set_int("@%*:disable_axi_handshake_coverage", 0);
vmm_opts::set_int("@%*:disable_flow_coverage", 0);
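On the consuming side, the main coverage class can read these options back in its build phase and construct only the requested sub-models. This is a sketch; the sub-model handle names (`trans_cov`, `err_cov`, etc.) are assumptions:

```systemverilog
// Sketch: fine-grained control inside the main coverage class
// (trans_cov, err_cov, hshake_cov, flow_cov are assumed class members)
virtual function void build_ph();
   if (!vmm_opts::get_int("disable_transaction_coverage", 0))
      trans_cov  = new("trans_cov", this);
   if (!vmm_opts::get_int("disable_error_coverage", 0))
      err_cov    = new("err_cov", this);
   if (!vmm_opts::get_int("disable_axi_handshake_coverage", 0))
      hshake_cov = new("hshake_cov", this);
   if (!vmm_opts::get_int("disable_flow_coverage", 0))
      flow_cov   = new("flow_cov", this);
endfunction
```

Because the sub-models are simply never constructed when disabled, the same guard also prevents their covergroups from being sampled or reported.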

In my next blog, I will show how hierarchical VMM configurations are used to dynamically pass signal-level and other configuration-related information to the coverage model. We shall also discuss the usage of the VMM TLM features towards fulfilling the goal of a configurable coverage model. Stay tuned!

Posted in Configuration, Coverage, Metrics, Organization, Reuse, SystemVerilog, VMM, VMM infrastructure | Comments Off

Pipelined RAL Access

Posted by Amit Sharma on May 12th, 2011

Ashok Chandran, Analog Devices

Many times, we come across scenarios where a register can be accessed from multiple physical interfaces in a system. An example would be a homogenous multi-core system. Here, each core may be able to access registers within the design through its own interfaces. In such scenarios, defining a “domain” (a testbench abstraction for physical interfaces) for each interface may be an overhead.

· From a system verification point of view, it does not make any difference as to which core accesses the registers since they are identical. The flexibility to bring in ‘random selection of interfaces’ can provide additional value.

· Defining a ‘domain’ for each interface in such a scenario requires duplication of registers or their instantiation.

· Also, the usage of multiple “domains” for homogenous multi-core systems would prevent us from seamlessly reusing our code from block level to system level. This is because we would have to incorporate the domain definition within the testbench RAL access code when we migrate to system level, while no such definition was needed in the register abstraction code at block level.

Another related scenario is where we need to support multiple outstanding transactions at a time. Different threads could initiate distinct transactions which can return data out of order (as in AXI protocol). The default implementation of RAL allows only one transaction at a time for each domain in consideration.

VMM pipelined RAL comes to our rescue in such cases. This mechanism allows multiple RAL accesses to be processed simultaneously by the RAL access layer. This feature of VMM can be enabled with `define VMM_RAL_PIPELINED_ACCESS. This define adds a new state to vmm_rw::status_e – vmm_rw::PENDING. When vmm_rw::PENDING is returned as the status from execute_single()/execute_burst(), the transaction-initiating thread is kept blocked till the vmm_data::ENDED notification is received for the vmm_rw_access. New transactions can now be initiated from other testbench threads, and pending transactions are cleared in parallel when the response is received from the system.


As shown in the figure above, transactions initiated by thread A (A0 and A1) can be processed / queued even while transactions from thread B (B0 and B1) are in progress. Here A can be processed by one interface and B by the other. Alternately, A and B can be driven together from same interface in case the protocol supports multiple outstanding accesses.

The code below shows how the user can create his execute_single() functionality to use pipelined RAL for a simple protocol like APB. For protocols like AXI which allow multiple outstanding transactions from same interface, the physical layer transactor can control the sequence further using the vmm_data::ENDED notification of the physical layer transaction.

virtual task execute_single(vmm_rw_access tr);

   apb_trans apb = new; // Physical layer transaction

   apb.randomize() with {
      addr == tr.addr;
      if (tr.kind == vmm_rw::READ) {
         dir == READ;
      } else {
         dir == WRITE;
      }
      resp == OKAY;
      // interface_id maps to the different physical interface instances
      interface_id inside {0, 1};
   };

   if (tr.kind == vmm_rw::WRITE) = tr.data;

   // Fork out the access in parallel
   fork begin
      // Get copies for this thread
      automatic apb_trans pend = apb;
      automatic vmm_rw_access rw = tr;

      // Push into the physical layer BFM

      // Wait for transaction completion from the physical layer BFM

      // Get the response and read data
      if (pend.resp == apb_trans::OKAY) begin
         rw.status = vmm_rw::IS_OK;
      end else begin
         rw.status = vmm_rw::ERROR;
      end

      if (rw.kind == vmm_rw::READ) begin =;
      end

      // End of this transaction – indicate to RAL
      rw.notify.indicate(vmm_data::ENDED);
   end join_none

   // Return pending status to the RAL access layer
   tr.status = vmm_rw::PENDING;
endtask


For more details on creating “Pipelined Accesses”, you might want to go through the section “Concurrently Executing Generic Transactions” in the VMM RAL User Guide.

Posted in Register Abstraction Model with RAL | 3 Comments »

Cool Things You Can Do With DVE – Part 4

Posted by Yaron Ilani on April 26th, 2011

Yaron Ilani, Apps. Consultant, Synopsys

If you liked part 2 where I explained how Interactive Rewind could save you precious time during debug, then here’s another one for you. Obviously one of the most powerful methods of debugging interactively is by adding breakpoints at interesting points. You could have a single breakpoint as a starting point and then go on step by step. But in most cases it would be wiser to put multiple breakpoints in your code so that you could have more control over your simulation or even jump from one interesting point to another (remember you can always go backwards in time).

So the process of adding breakpoints and refining them might take some time and ideally you wouldn’t want to repeat that process all over again when you start a new interactive debug session. Wouldn’t it be nice to be able to save your breakpoints so that you or someone else from your team could reuse them in a different simulation? Well, DVE lets you do that! Simply launch the Breakpoints window from the Simulator menu:

In the example above I’ve added 3 breakpoints. In the source code window they are marked in red, but they are also listed in the Breakpoints window where each breakpoint can be enabled or disabled individually. In the bottom left you can see the “Save” button. Clicking on it will save all your breakpoints to a TCL file. You may use this file later on in any other DVE session by clicking on the “Load” button.

Once your test bench code is more or less stable, with this new feature you can actually create a number of useful breakpoints files (a breakpoints library if you will…). Each breakpoints file could be designed to help debugging a different part of your test bench. Or if you’re debugging some unfamiliar verification IP, you can create a breakpoints file and send it to its owner for help.

Happy debugging!

Check out the previous parts of this series to learn more about more cool features available today in DVE.

Posted in Automation, Debug | 4 Comments »

Cool Things You Can Do With DVE – Part 3

Posted by Yaron Ilani on April 13th, 2011

Yaron Ilani, Apps. Consultant, Synopsys

If you missed part 1 or part 2 of this series don’t worry, you can go on reading and catch up with the previous parts later on. Today I’m going to show you a small, yet very powerful feature in DVE that you may not be aware of.

Remember the last time you had to count clock cycles in the waveform window? Sometimes this is a quick way to verify that an internal counter behaves correctly or that a signal goes up just at the right clock edge. Remember how frustrating it is when you lose count for some reason and have to start over? Remember how you’re never 100% sure about the result even if you calculated the time difference between the left and right cursors and divided by the clock period? If your answer was yes to any of those questions then you’re going to love the Grid feature. What Grid simply does, as its name suggests, is draw a grid on top of the waveform. Here’s what it looks like:

If you click on the Grid button (in the red circle) the Grid will show up as dotted lines. As you can see, the Grid can count clock cycles for you. You can set it up to count falling edges, rising edges or any edge. You can also set the range either by entering the start time and end time, or simply by placing the cursors at the desired points and clicking on the “Get Time From C1/C2” button in the Grid Properties window:

The Grid Properties window lets you have even more control over the grid. For example – you can set its cycle time to a custom value. This could be very useful if you want to be able to visually inspect drifting clocks or duty-cycle issues, etc.

In short, the Grid is one of those little things that make a big difference when it comes to efficient debugging and you should definitely become familiar with it. If you ever have to count clock cycles again, remember that you no longer have to do this manually. You don’t even have to make any calculations. Simply launch the Grid and voila!

If you’d like to learn more about DVE’s advanced debug features check out part 1 and part 2 of this series.

Posted in Automation, Debug | 3 Comments »

Cool Things You Can Do With DVE – Part 2

Posted by Yaron Ilani on April 7th, 2011

Yaron Ilani, Apps. Consultant, Synopsys

In Part 1 of this series we discussed how SystemVerilog macros might add complexity when it comes to debugging your test bench and how DVE can make your life much easier in that area. Today we’re going to show you another cool feature in DVE that if used wisely, could save you a significant amount of time when debugging. Let’s recall for a moment the two main use models of DVE – Post Processing and Interactive. The former is where you’re debugging your simulation results after it has completed. The latter is where you’re running and debugging your simulation simultaneously, trading off performance for enhanced debug capabilities. Today we shall focus on the interactive mode. We’re about to see how the Interactive Rewind feature will help you minimize your debug turnaround time.

So during a typical interactive session you put a breakpoint somewhere in your code and let the simulation run until it reaches your breakpoint. From that point on you take the controls and advance the simulation step by step, which allows you to inspect your signals or variables very closely. You may assign different values to signals along the way to try out potential workarounds or fixes. Now here’s the tricky part: simulation can only advance forward! So if your breakpoint occurs late in the simulation, every time you want to restart your debug trail you’re bound to wait for the simulation to rerun from the beginning. Why would you want to start over? Well, you might want to change a signal value (remember you can force signals interactively in DVE). But more often than not, stepping through your code gets very complicated and you might miss the interesting point where the bug occurs, or you lose track of the debug trail and need to start fresh. In certain cases your breakpoints occur periodically (e.g. every time a packet is transmitted) and you just wish you could go back in time to the previous occurrence to step through the code again.

The good news is that DVE allows you to navigate backwards in simulation! We call it Interactive Rewind. All you have to do is set up one or more checkpoints along the simulation. At each checkpoint a snapshot of the simulation is saved, and you may use it later on to literally go back in time. To give you a sense of how easy it is to work with checkpoints, here’s what it looks like – the left button is used to add a new checkpoint. The right button is used later to rewind to any of the previously added checkpoints.

Selecting a checkpoint to rewind to is easy: simply select one from the drop-down menu:

You can also control your checkpoints via UCLI, where you’ll find many advanced features such as the ability to add periodic checkpoints automatically.

To sum up, interactive debugging with DVE becomes much more efficient with Interactive Rewind. Simply add checkpoints at strategic points along the debug trail. Then use Step/Next to advance simulation time and Rewind to go back. This will keep your debug turnaround time to a minimum, thus enabling you to focus on debugging and not waiting.

Posted in Automation, Debug | 5 Comments »

Cool Things You Can Do With DVE – Part 1

Posted by Yaron Ilani on April 3rd, 2011

Yaron Ilani, Apps. Consultant, Synopsys

A SystemVerilog test bench can get quite complex. Typical projects today have thousands of lines of code, and the number is constantly on the rise. However, standard base class libraries such as VMM and UVM can help you minimize the amount of code that needs to be written by providing a rich set of macros that substitute long stretches of code with a single line. For example, the simple line `vmm_channel(atm_cell) defines a standard VMM channel for an ATM cell with all the necessary fields and methods, all under the hood. All you have to do is instantiate the newly defined channel wherever you need it. A close cousin of the SV macro is the `include directive, which basically substitutes an entire file with a single line. This is a neat way to reuse files and enhance code clarity.
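For illustration, here is the macro in context. This is a sketch: the atm_cell fields are hypothetical, and the point is that `vmm_channel generates the atm_cell_channel class under the hood:

```systemverilog
class atm_cell extends vmm_data;
   rand bit [7:0] payload [48];  // hypothetical field
   // ... constructor, vmm_data methods, etc.
endclass

`vmm_channel(atm_cell)  // generates the atm_cell_channel class

// Instantiate the newly defined channel wherever it is needed:
atm_cell_channel chan = new("ATM Cell Channel", "chan0");
```

One macro line replaces the entire hand-written channel class, which is exactly the kind of hidden code the debug features below help you see.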

But what happens when you need to debug your source code? Indeed, macros and `includes allow for less clutter and enhanced readability, but at the same time hide from you pieces of code you might actually need access to during debug. Fortunately enough, DVE ships with some new cool features that give you quick and easy access to any underlying code and thus taking the pain out of debugging a SystemVerilog test bench. Let’s see some of them:

Macro Tooltips

Hover your mouse over a macro statement and a tooltip window will pop up displaying the underlying code – very useful for short macros. The tooltip’s height can be customized to your liking!

Macro Expand/Collapse

Macros can be expanded interactively so that the underlying source code is presented in the source file you are viewing. Very powerful!

Hyperlinks / Back / Forward

Clicking on a macro/include statement will take you to the original source code or file.

Don’t worry, you can always go back and forth using the browser-like Back/Forward buttons.

To sum up, DVE offers a really comfortable way to debug your SystemVerilog source code – be it plain code, macros or `included files. While keeping you focused on the important part of your code, DVE provides quick and easy access to any underlying code. And thanks to the Back & Forward buttons you can skip back and forth between macros, `included files and your main source file as smoothly as you would in your internet browser. This really takes the pain out of debugging a modern SystemVerilog test bench.

Posted in Debug, SystemVerilog | 3 Comments »

Blocking and Non-blocking Communication Using the TLI

Posted by John Aynsley on March 31st, 2011

John Aynsley, CTO, Doulos

In the previous blog post I introduced the VCS TLI Adapters for transaction-level communication between SystemVerilog and SystemC. Now let’s look at the various coding styles supported by the TLI Adapters, and at the same time review the various communication options available in VMM 1.2.

We will start with the options for sending transactions from SystemVerilog to SystemC. VMM 1.2 allows transactions to be sent through the classic VMM channel or through the new-style TLM ports, which come in blocking and non-blocking flavors. Blocking means that the entire transaction completes in one function call, whereas non-blocking interfaces may require multiple function calls in both directions to complete a single transaction:


On the SystemVerilog side, transactions can be sent out through blocking or non-blocking TLM ports, through VMM channels or through TLM analysis ports. On the SystemC side, transactions can be received by b_transport or nb_transport, representing the loosely-timed (LT) and approximately-timed (AT) coding styles, respectively, or through analysis exports. In the TLM-2.0 standard any socket supports both the LT and AT coding styles, although SystemVerilog does not offer quite this level of flexibility, and hence neither does the TLI.
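To make the blocking/non-blocking distinction concrete, the two call styles on the SystemVerilog side look roughly like this. This is a sketch: `tx`, `delay`, `ph`, `status`, and the port handles are assumed to be declared elsewhere, and the nb_transport_fw signature is indicative of the vmm_tlm port classes rather than authoritative:

```systemverilog
// Blocking style: the whole transaction completes within the single call
// (port declared as a vmm_tlm_b_transport_port)
m_b_port.b_transport(tx, delay);

// Non-blocking style: each call only advances the transaction by one
// phase; the target may respond through calls on the backward path
status = m_nb_port.nb_transport_fw(tx, ph, delay);
```

This is why the blocking flavor maps naturally to the loosely-timed (LT) style and the non-blocking flavor to the approximately-timed (AT) style mentioned above.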

Now we will look at the options for sending transactions from SystemC back to SystemVerilog. Not surprisingly, they mirror the previous case:


On the SystemC side, transactions can be sent out from LT or from AT initiators or through analysis ports. On the SystemVerilog side, transactions can be received by exports for blocking- or non-blocking transport, by vmm_channels, or by analysis subscribers.

Note the separation of the transport interfaces from the analysis interfaces in either direction. The transport interfaces are used for modeling transactions in the target application domain, whereas the analysis interfaces are typically used internally within the verification environment for coverage collection or checking.

In the SystemVerilog and SystemC source code, the choice of which TLI interface to use is made when binding ports, exports, or sockets to the TLI Adapter, for example:

// SystemVerilog
`include ""
import vmm_tlm_binds::*;           // For port/export
import vmm_channel_binds::*;       // For channel

tli_tlm_bind(m_xactor.m_b_port,    vmm_tlm::TLM_BLOCKING_EXPORT,    "sv_tlm_lt");
tli_tlm_bind(m_xactor.m_nb_port,   vmm_tlm::TLM_NONBLOCKING_EXPORT, "sv_tlm_at");
tli_tlm_bind(m_xactor.m_b_export,  vmm_tlm::TLM_BLOCKING_PORT,      "sc_tlm_lt");
tli_tlm_bind(m_xactor.m_nb_export, vmm_tlm::TLM_NONBLOCKING_PORT,   "sc_tlm_at");
tli_channel_bind(m_xactor.m_out_at_chan, "sv_chan_at", SV_2_SC_NB);

// SystemC
#include "tli_sc_bindings.h"
tli_tlm_bind_initiator(m_scmod->init_socket_lt, LT, "sc_tlm_lt", true);
tli_tlm_bind_initiator(m_scmod->init_socket_at, AT, "sc_tlm_at", true);
tli_tlm_bind_target   (m_scmod->targ_socket_lt, LT, "sv_tlm_lt", true);
tli_tlm_bind_target   (m_scmod->targ_socket_at, AT, "sv_tlm_at", true);
tli_tlm_bind_target   (m_scmod->targ_socket_chan_at, AT, "sv_chan_at", true);

Note how the tli_tlm_bind calls require you to specify in each case whether the LT or AT coding style is being used. The root cause of this inflexibility is certain language restrictions in SystemVerilog, in particular the lack of multiple inheritance, which makes it harder to create sockets that support multiple interfaces. Hence, in SystemVerilog, the blocking- and non-blocking interfaces get partitioned across multiple ports and exports. In the SystemC TLM-2.0 standard there is only a single kind of initiator socket and a single kind of target socket, each able to forward method calls of any of the core interfaces, namely, the blocking transport, non-blocking transport, direct memory, and debug interfaces.

In summary, the VCS TLI provides a simple and straightforward mechanism for passing transactions in both directions between SystemVerilog and SystemC by exploiting the TLM-2.0 standard.

Posted in SystemC/C/C++, SystemVerilog, Transaction Level Modeling (TLM) | 1 Comment »

VMM-to-SystemC Communication Using the TLI

Posted by John Aynsley on March 22nd, 2011

John Aynsley, CTO, Doulos

I have said several times on this blog that the presence of TLM-2.0 features in VMM 1.2 should ease the task of communicating between a SystemVerilog test bench and a SystemC reference model. Now, at last, let’s see how to do this – using the VCS TLI or Transaction Level Interface from Synopsys.

The parts of the TLI in question are the VCS TLI Adapters between SystemVerilog and SystemC. These adapters exploit the TLM-2.0-inspired features introduced into VMM 1.2 on the SystemVerilog side and the OSCI TLM-2.0 standard itself on the SystemC side in order to pass transactions between the two language domains within a VCS simulation run. The TLI Adapters do not provide a completely general solution out-of-the-box for passing transactions between languages in that they are restricted to passing TLM-2.0 generic payload transactions (as discussed in a previous blog post). However, the Adapters can be extended by the user with a little work.

Clearly, the VCS TLI solution will only be of interest to VCS users. As an alternative to the VCS TLI, it is possible to pass transactions between SystemVerilog and SystemC using the SystemVerilog DPI as described in SystemVerilog Meets C++: Re-use of Existing C/C++ Models Just Got Easier, and the Accellera VIP Technical Subcommittee are discussing a proposal to add a similar capability to UVM.

If you want to pass user-defined transactions between SystemVerilog and SystemC you are going to have to jump through some hoops, whether you choose to use the VCS TLI or the DPI. However, for the TLM-2.0 generic payload, the VCS TLI provides a simple ready-to-use solution. Let’s see how it works.


The TLI Adapters are provided as part of VCS. All you have to do is to include the appropriate file headers in your source code on both the SystemVerilog and SystemC sides, as shown on the diagram. The adapters themselves get compiled and instantiated automatically. The SystemVerilog side needs to use the VMM TLM ports, exports, or channels (as described in previous blog posts). The SystemC side needs to use the standard TLM-2.0 sockets. You then need to add a few extra lines of code on each side to bind the two sets of sockets together, and the TLI Adapters take care of the rest.
From the point of view of the source code, the adapter is invisible apart from the presence of the header files. Each binding needs to be identified by giving it a name, with identical names being used on the SystemVerilog and SystemC sides to tie the two sets of ports or sockets together. Here is a trivial example:

// SystemVerilog
`include ""

vmm_tlm_b_transport_port #(my_xactor, vmm_tlm_generic_payload) m_b_port;

tli_tlm_bind(m_xactor.m_b_port, vmm_tlm::TLM_BLOCKING_EXPORT, "abc");

// SystemC
#include "tli_sc_bindings.h"

tlm_utils::simple_target_socket<scmod>  targ_socket_lt;

tli_tlm_bind_target (m_scmod->targ_socket_lt, LT, "abc", true);

Note that the same name “abc” has been used on both the SystemVerilog and SystemC sides to tie the two ports/sockets together. On the SystemVerilog side we can now construct transactions and send them out through the TLM port:

// SystemVerilog
tx = new;
assert( tx.randomize() with { m_command != 2; } );
m_b_port.b_transport(tx, delay);

On the SystemC side, we receive the incoming transaction:

// SystemC
void scmod::b_transport(tlm::tlm_generic_payload& tx, sc_time& delay) {


The TLI Adapter takes care of converting the generic payload transaction from SystemVerilog to SystemC, and also takes care of the synchronization between the two languages. The VCS TLI provides a great ready-made solution for this particular use case. In the next blog post I will look at the VCS TLI support for various TLM modeling styles.

Posted in SystemC/C/C++, Transaction Level Modeling (TLM) | 2 Comments »

A RAL example with Designware VIP

Posted by S. Varun on March 17th, 2011

I often get asked how best RAL ought to be used with Designware VIP. Since several of these VIPs provide a mechanism to program registers across different DUTs, I felt it would be useful to create an example with the Designware AMBA AHB VIP and RAL.
The example has a structure as shown in the block diagram below,

    |                                             |
    |         Register Abstraction Layer          |
    |                                             |
 |                |
 |    RAL2AHB     |
 |  ------------  |     ------------------       -----------      -----------
 | | AHB MASTER | |----| HDL Interconnect |-----| AHB SLAVE |----| Resp Gen  |
 |  ------------  |     ------------------       -----------      -----------
 |                |

It uses a dummy HDL interconnect that has two AHB interfaces to connect a VIP AHB master component with a VIP AHB slave
component. I have created a dummy register specification in "ahb_advanced_ral_slave.ralf". 

This example lacks a real AHB-based DUT with real registers, hence the AHB slave VIP component's internal memory is
used to model the register space of the system. The RALF specification is then used to generate the SystemVerilog RAL
model using the "ralgen" utility as shown by the command line below.

% ralgen -l sv -t ahb_advanced_slave ahb_advanced_ral_slave.ralf -c b -c a -c f

The above command generates the SystemVerilog RAL model. This model is instantiated
within the top-level environment class, which is an extension of the "vmm_ral_env" class. You may already be
aware that a "vmm_ral_env"-based environment has to be used for RAL verification. Once instantiated, the model is registered using the
vmm_ral_access::set_model() method. This completes the RAL model instantiation and registration, leaving only the
translation logic, which is the crux of the task.

Note: "vmm_ral_env" is extended from "vmm_env". Check the RAL userguide to see the additional members.
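
A minimal sketch of that instantiation and registration is shown below. The generated model class name is illustrative; ralgen derives it from the RALF block/system name:

```systemverilog
class ahb_advanced_ral_env extends vmm_ral_env;
  ral_sys_ahb_advanced_slave ral_model; // class name generated by ralgen (illustrative)

  virtual function void build();;
    ral_model = new();
    // Register the generated model with the built-in RAL access layer
    this.ral.set_model(ral_model);
  endfunction: build
endclass: ahb_advanced_ral_env
```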

In the block diagram above, RAL2AHB is the block that translates a generic RAL access into a command, in this
case an AHB transfer. The functional logic that translates a generic READ/WRITE into an AHB master
transfer lives in "vmm_rw_xactor::execute_single()". The code snippet below shows how the translation is done.

// --------------------------------------------------------------------
task ral2ahb_xlate::execute_single(vmm_rw_access tr);

  // The generic read/write being translated into an AHB transfer.
  ahb_xact_fact.data_id        = tr.data_id;
  ahb_xact_fact.scenario_id    = tr.scenario_id;
  ahb_xact_fact.stream_id      = tr.stream_id;

  // Copying over the data width from the system configuration
  ahb_xact_fact.m_enHdataWidth = vip_cfg.m_oSystemCfg.m_enHdataWidth;

  // Setting the burst type to SINGLE & number of beats to 1
  ahb_xact_fact.m_enBurstType  = dw_vip_ahb_transaction::SINGLE;
  ahb_xact_fact.m_nNumBeats    = 1;

  // Copying the address generated by RAL into the AHB address of the AHB transfer
  ahb_xact_fact.m_bvAddress    = tr.addr;

  // Copying the size information over to the AHB transaction. RAL provides
  // the size in terms of bits and the dw_vip_ahb_master_transaction class takes
  // it in terms of bytes.
  ahb_xact_fact.m_nNumBytes    = tr.n_bits/8;
  ahb_xact_fact.m_enXferSize   = dw_vip_ahb_transaction::xfer_size_enum'(func_log(tr.n_bits) - 3);
  ahb_xact_fact.m_bvvData      = new[ahb_xact_fact.m_nNumBytes];

  // Setting the transfer type WRITE/READ based on the kind value generated by RAL
  if (tr.kind == vmm_rw::WRITE) begin
    ahb_xact_fact.m_enXactType = dw_vip_ahb_transaction::WRITE;
  end
  else begin
    ahb_xact_fact.m_enXactType = dw_vip_ahb_transaction::READ;
  end

  // Unpacking the RAL write data element into the byte-sized data queue available
  // within the dw_vip_ahb_master_transaction class
  if (tr.kind == vmm_rw::WRITE) begin
    for (int i = 0; i < tr.n_bits/8; i++) begin
      ahb_xact_fact.m_bvvData[i] = >> 8*i;
    end
  end

  // Put the AHB transaction object into the AHB master's input channel
  // (code elided in the original post)

  // Wait for the transfer to end; 'xact' below is the completed AHB transaction

  // Pack the READ data from the AHB transfer back into the RAL transaction's
  // data member
  if (tr.kind == vmm_rw::READ) begin = 0;
    for (int i = 0; i < xact.m_nNumBytes; i++) begin += xact.m_bvvData[i] << 8*i;
    end
  end

  // Collecting the status of the transfer and returning it to RAL
  if (xact.m_nvRespLast[0] == 0)
     tr.status = vmm_rw::IS_OK;
     tr.status = vmm_rw::ERROR;

endtask: execute_single
// --------------------------------------------------------------------

The translator class also has to be instantiated within the environment class and registered using
"vmm_rw_access::add_xactor()", as shown below:

// --------------------------------------------------------------------
class ahb_advanced_ral_env extends vmm_ral_env;

  ral2ahb_xlate    ral_to_ahb;

  virtual function void build();;
    ral_to_ahb = new("AHB RAL MASTER XACTOR", master_mp, cfg.cfg_master);
    // Register the translator with the RAL access layer
    this.ral.add_xactor(ral_to_ahb);
  endfunction: build

endclass: ahb_advanced_ral_env

// --------------------------------------------------------------------

At this juncture, we are set to run the RAL tests and easily program the registers using the abstracted model
with simple APIs, as shown below:

     env.ral_model.slave_block.REGA.set( ... );
     env.ral_model.slave_block.REGB.set( ... );
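
For example, a typical set-then-update flow through the abstracted model might look like the following sketch. The register names follow the dummy specification, and the vmm_ral_block::update()/vmm_ral_reg::read() calls are illustrative of the generic RAL mirror API:

```systemverilog
vmm_rw::status_e status;
bit [63:0]       value;

// Update the desired value in the mirror...
env.ral_model.slave_block.REGA.set(32'hA5A5_A5A5);
// ...then push all modified registers to the DUT through the RAL2AHB translator
env.ral_model.slave_block.update(status);

// Read a register back through the frontdoor
env.ral_model.slave_block.REGB.read(status, value);
```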

The above section shows how to set up single transfers. For burst transfers there is a slight variation:
you provide the burst translation within the vmm_rw_xactor::execute_burst() task. The code snippet
below shows this.

// --------------------------------------------------------------------
task ral2ahb_xlate::execute_burst(vmm_rw_burst tr);

  ahb_xact_fact.data_id        = tr.data_id;
  ahb_xact_fact.scenario_id    = tr.scenario_id;
  ahb_xact_fact.stream_id      = tr.stream_id;
  ahb_xact_fact.m_enHdataWidth = vip_cfg.m_oSystemCfg.m_enHdataWidth;
  ahb_xact_fact.m_nNumBeats    = tr.n_beats;
  ahb_xact_fact.m_bvAddress    = tr.addr;
  ahb_xact_fact.m_nNumBytes    = tr.n_bits/8*tr.n_beats;
  ahb_xact_fact.m_enXferSize   = dw_vip_ahb_transaction::xfer_size_enum'(func_log(tr.n_bits) - 3);

  if (tr.kind == vmm_rw::WRITE) begin
    ahb_xact_fact.m_bvvData    = new[ahb_xact_fact.m_nNumBytes];
    ahb_xact_fact.m_enXactType = dw_vip_ahb_transaction::WRITE;
  end
  else begin
    ahb_xact_fact.m_enXactType = dw_vip_ahb_transaction::READ;
  end

  // Selecting the AHB burst type based on the number of beats
  case (tr.n_beats)
    4       : ahb_xact_fact.m_enBurstType = dw_vip_ahb_transaction::INCR4;
    8       : ahb_xact_fact.m_enBurstType = dw_vip_ahb_transaction::INCR8;
    16      : ahb_xact_fact.m_enBurstType = dw_vip_ahb_transaction::INCR16;
    default : ahb_xact_fact.m_enBurstType = dw_vip_ahb_transaction::INCR;
  endcase

  // Write cycle
  if (tr.kind == vmm_rw::WRITE) begin
    for (int i = 0; i < tr.n_beats; i++) begin
      for (int j = 0; j < tr.n_bits/8; j++) begin
        ahb_xact_fact.m_bvvData[i*(tr.n_bits/8) + j] =[i] >> 8*j;
      end
    end
  end

  // Put the AHB burst transaction into the AHB master's input channel
  // (code elided in the original post; 'xact' below is the completed transaction)

  // Read cycle
  if (tr.kind == vmm_rw::READ) begin = new[tr.n_beats];
    for (int i = 0; i < tr.n_beats; i++) begin[i] = 0;
      for (int j = 0; j < tr.n_bits/8; j++) begin[i] += xact.m_bvvData[i*tr.n_bits/8 + j] << 8*j;
      end
    end
  end

  // Collecting the status of the transfer and returning it to RAL
  if (xact.m_nvRespLast[0] == 0)
     tr.status = vmm_rw::IS_OK;
     tr.status = vmm_rw::ERROR;

endtask: execute_burst
// --------------------------------------------------------------------

For invoking burst transfers in RAL, the vmm_ral_access::burst_write() & vmm_ral_access::burst_read() tasks
have to be used, as shown below:

     env.ral.burst_write(status, 8'h00, 4, 8'h0f, exp_data, , 32);
     env.ral.burst_read(status, 8'h00, 4, 8'h0f, 4, act_data, , 32);

The complete example is downloadable from SolvNet as "tb_ahb_vmm_10_advanced_ral_sys.tar.gz". You will need DesignWare
licenses to compile and run the example. Please follow the instructions in the README to compile and run this example.
The example can also be used as a reference for creating a RAL environment with other DesignWare VIP titles.
Do write to me if you have any questions about the example.

Posted in Register Abstraction Model with RAL, VMM | 1 Comment »

Transaction Debugging with Discovery Visualization Environment (DVE) Part-2

Posted by JL Gray on March 8th, 2011

Asif Jafri, Verilab Inc.

In my previous blog post, I introduced how to dump waves and how to use $tblog for dynamic data and message recording. If you need more control over scope-sensitive transaction debugging, the $msglog task is very useful. This blog post is divided into two sections: in the first section, I talk about how to use $msglog; in the second section, I discuss how VMM performs transaction recording by calling $msglog from within the VMM library. The call is protected so as not to confuse other simulators or tools. You can use $msglog in any of your own code as well.

•    The advantage of using $msglog is that we have more control over the debug messaging. If a transaction can be divided into start and finish, it is possible to identify cause and effect.
•    Parent and child relationships can be shown.
•    Execution streams can be identified with start and end times.

The following steps are needed to invoke $msglog:

Include msglog.svh in the testbench code.
Add +incdir+${VCS_HOME}/include to the compile line.

1) The example below shows how to call the $msglog task in the testbench. The first msglog statement creates a transaction (read) on a stream (stream1) which has an attribute addr. It also sets the header text (RD) and body text (text 1). This statement can be placed in a read task of your transactor.  The second msglog statement once again can be placed in the read task and it shows when the read completes. Streams are global and do not need to be created explicitly. They are created implicitly as they are needed.

$msglog("stream1", XACTION, "read", NORMAL, "RD", "text 1", START, addr);


$msglog("stream1", XACTION, "read", NORMAL, "", FINISH);
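
Putting the two calls together, a transactor's read task might look like the sketch below. The bus-driving details are omitted and the task signature is illustrative:

```systemverilog
task read(input bit [31:0] addr, output bit [31:0] data);
  // Mark the start of the transaction on the stream, recording the address
  $msglog("stream1", XACTION, "read", NORMAL, "RD", "text 1", START, addr);
  // ... drive the read on the bus and sample the returned data ...
  // Mark the end of the same transaction on the same stream
  $msglog("stream1", XACTION, "read", NORMAL, "", FINISH);
endtask: read
```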

The table below shows the various possible parameters for the type, severity and relation fields in the $msglog task:

As shown above, you can also place $msglog tasks in the response task of the responding transactor if the transaction needs to be followed into it.

$msglog("stream1", XACTION, "resp", NORMAL, "RESP", START, data);

2) VMM provides built-in transaction recording. To enable it, use "+define+VMM_TR_RECORD" when compiling your code. At simulation runtime, recording of transactions is controlled by setting "+vmm_tr_verbosity=debug" on the command line.
The following VMM base classes have built-in recording support:
vmm_channel, vmm_voter, vmm_env, vmm_subenv, vmm_timeline

The figure below shows an example of the recorded transactions as viewed in the waveform viewer:


You can also do your own transaction recording by using the following VMM functions:

mystream = vmm_tr_record::open_stream(get_instance(), "MyChannel");

vmm_tr_record::start_tr(mystream, "Read", "Text line 1\nText line 2");

// ... and when the transaction completes:
vmm_tr_record::end_tr(mystream);

As shown in this two-part blog on transaction debugging, $tblog and $msglog can be very useful transaction debugging constructs. You can choose to dump transactions and follow them through the environment, dump channel data, notification IDs, phase names and so on. Being able to see all this information in the waveform viewer has been a blessing for me. I hope it is helpful to you.

Posted in Debug, VMM infrastructure | Comments Off

Transaction Debugging with Discovery Visualization Environment (DVE) Part-1

Posted by JL Gray on February 25th, 2011

Asif Jafri, Verilab Inc., Austin, TX

The art of verification has evolved dramatically over the last decade. What used to be a very simple Verilog testbench, which could not possibly cover the vast solution space, has evolved into today's constrained-random testbenches: a very powerful tool, but one whose debug complexity has gone up exponentially.

VMM has introduced various debug constructs to aid in the debug of the design as well as the test environment such as:
•    Messaging: Report regular, debug, or error information.
•    Recording: Transaction and components have built-in recording facilities that enable transaction and environment debugging.

Today I want to spend some time looking at DVE as a powerful debug tool in our tool box.

To start things off, let's look at some simple calls used to invoke dumping waves.

1) $vcdpluson() : This call is used to start dumping design signals into a .vpd (VCD Plus) file. VPD is a proprietary Synopsys format (binary, highly compressed) generated by VCS, which avoids the excessively large files produced by the IEEE-standard .vcd format.
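
A minimal invocation might look like this (the dump file name and scope are illustrative):

```systemverilog
module top;
  // ... DUT and testbench instances ...
  initial begin
    $vcdplusfile("waves.vpd"); // optional: choose the dump file name
    $vcdpluson(0, top);        // depth 0 = dump all levels below the given scope
  end
endmodule
```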


When compiling, specify -debug_pp (for post process debug), -debug (for interactive debug), -debug_all (for interactive debug with line stepping and breakpoints) to enable VCS Dumping.

The code snippets shown above will generate waves of all design signals for viewing in the DVE waveform viewer. You can also use the UCLI (unified command line interface) command ‘dump’ for dumping design signals interactively or in scripts.

Wouldn't it be great if we could also view dynamic variables as waveforms?

2) The $tblog() system task is used for recording dynamic (or static) data and simple messages. No additional environment setup is required. $tblog() is called in the testbench wherever you want to record a message or a variable. The next example shows how to record a message in the send_packet task of a transactor.

// Foo transactor
task send_packet();
    int id; // local variable
    $tblog(-1, "Sending packet"); // record all local and class variables
    cnt = cnt + 1; // cnt is a class variable
    if (cnt < 50)
       $tblog(0, "Count is less than 50", cnt, id); // record variables cnt and id

endtask: send_packet

Along with the message and variable values, $tblog automatically records the time and the call stack. To view these messages and variables in the waveform viewer select a recording from transaction browser and add it as a waveform.
The figure below shows how a message is displayed in a DVE waveform window.


Another useful tool for transaction debugging is the $msglog task, which will be discussed in the next article.

Posted in Debug, VMM infrastructure | Comments Off

Register Programming using RAL package

Posted by Vidyashankar Ramaswamy on February 15th, 2011

There are many methods available to program (read/write) registers in a design using RAL.

1.    ral_model::read()/write(): This is the old-fashioned method where you specify the address and the data. There is no need to know the register by name.

              EX:, addr, data, . . .);

 2.    ral_model::read_by_name()/write_by_name(): You have to specify the register name and data to execute this method. Here the register name is hard-coded.

              EX: ral_model.read_by_name(status, “reg_name”, data, . . .);

 3.    ral_reg::read()/write(): Here you specify the hierarchical path to the register to execute the task; only the value needs to be supplied.

 Option 1 works fine at all levels of the test bench (block, core or system) if a portable addressing scheme is used for register programming. The only issue is that it is not self-documenting. For instance, a sequence of writes/reads to program a DDR memory controller is hard to understand and debug when the controller does not behave as expected.

Option 2 uses the register name to perform the read/write. This code is self-documenting and works great in a block-level test bench. However, this option breaks when used at the core or system level. Why? Let us analyze the following situation:

A unit-level test bench using read_by_name()/write_by_name() works because the register names are unique. However, when multiple instances of the same block exist at the system level, multiple registers exist with the same name, creating a name conflict. Because read_by_name()/write_by_name() uses a flat name space, it cannot ensure that the name is properly scoped and that the same scope is used from block to top, so it should not be used. To reuse the same code across different levels of the test bench, use option 3, which is scalable. The following code segment demonstrates this concept:

task init_ddr_controller (vmm_ral_block ddr_block);

     vmm_ral_reg  reg_in_use;

     reg_in_use = ddr_block.mode_reg_0;

      . . .

endtask: init_ddr_controller

Note: You can use reg_in_use = ddr_block.get_reg_by_name("reg_name"); if the task is written to take in the register name as well.

Then in core/integration or in SoC  level:

   init_ddr_controller (system.blk2);
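
Fleshed out, the task body might perform the access through the generic vmm_ral_reg handle like the sketch below. The register name and the programmed value are illustrative:

```systemverilog
task init_ddr_controller (vmm_ral_block ddr_block);
  vmm_rw::status_e status;
  vmm_ral_reg      reg_in_use;

  reg_in_use = ddr_block.get_reg_by_name("mode_reg_0");
  // The generic vmm_ral_reg::write() works the same at block, core or SoC level
  reg_in_use.write(status, 32'h0000_1234);
  if (status != vmm_rw::IS_OK)
    `vmm_error(log, "mode_reg_0 programming failed");
endtask: init_ddr_controller
```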

Please do share your comments/ideas on this.

Posted in Register Abstraction Model with RAL | 1 Comment »

Migrating Legacy File-Based Testbenches to VMM

Posted by JL Gray on February 9th, 2011

Scott Roland, Verilab Inc, Austin, TX

As a verification engineer, it is common to be given a design to test that is based on an earlier design. Presumably, that existing design also comes with a proven verification environment and suite of tests. Unfortunately, the legacy verification environment might be rather rudimentary. If we are going to create a modern, VMM-based verification environment for the new design, then what can and should be done with the existing "simple" tests? As Joel Spolsky once said, "the single worst strategic mistake [is deciding] to rewrite the code from scratch."

Verification reuse is just as valuable as design reuse. This can be true even if the legacy tests are a set of text command files that are read in at runtime and perform "dumb" directed testing. If the original tests are of good quality, then they will still cover important functionality of the design. They might also test critical corner cases or problems that were seen during the initial development of the design being reused. In addition, reusing existing directed tests could help you achieve some early testing of your device faster than if you had started your verification effort from scratch.

The first way you can leverage a legacy testbench is to reuse some of the code responsible for stimulating and monitoring the DUT. As you build the new VMM environment you can properly encapsulate the existing code into the relevant VMM transactors.

Next, it would be nice to reuse the original tests themselves. The tests could be a set of task calls or a text file containing commands, as mentioned before. You could translate each test individually into a proper VMM test that generates transactions directly, but it would be better to create an adaptation layer between the original tests and the new environment. That would obviate the need to modify the tests and allow the VMM environment to also handle new tests written for the old testbench.

To illustrate an example of using a file-based test in a VMM environment, I chose to extend the memsys_cntrlr example that is distributed with VMM release 1.2.1 in the directory sv/examples/std_lib. The example contains a number of scenarios that are implemented as extensions of the VMM Multi-stream Scenario class. I want to create an additional scenario that reads in a command file and generates transactions based on the file. First, assume that my command file contains lines that specify the command, address and data:

WRITE 8888_8888 2A
READ  2222_2222 24
WRITE 3333_3333 1A
READ  5555_5555 81
READ  7777_7777 42

Using the existing cpu_directed_scenario as a template, I created a new cpu_filebased_scenario. The execute() task, which is responsible for defining what the scenario does, takes care of reading in the test file and calling the proper write/read tasks based on the individual commands. Since the original command file specified the expected return value of every read command, the read task checks the actual return value against the given expected value. Eventually, you might create a reference model that would enable the VMM environment to predict the expected read values. Implementing the directed check in the scenario enables you to run the legacy tests before a reference model is completed, and later validate the initial tests and reference model against each other. Here is the implementation of the scenario:

/// Scenario that executes commands read from a directed test file.
class cpu_filebased_scenario extends cpu_rand_scenario;
  /// Overloaded version of vmm_ms_scenario::execute(). Body of our scenario.
  /// Reads each line in the file and performs the specified action.
  task execute(ref int n);
    integer      fileID;
    bit [8*5:1]  cmd_str;
    bit   [7:0]  data;
    bit  [31:0]  addr;
    if (chan == null) chan = get_channel("cpu_chan");
    fileID = $fopen("test.file", "r");
    // Lines look like: "CMD ADDRESS DATA"
    while ($fscanf(fileID, "%s %h %h", cmd_str, addr, data) != -1) begin
      unique case (cmd_str)
        "WRITE": this.write(addr, data);
        "READ" :, data);
        default: `vmm_error(this.log, $psprintf("Unknown command %s", cmd_str));
      endcase
      n += 1;
    end
    $fclose(fileID);
  endtask: execute

  /// Send a write transaction for the given address and data.
  task write(input bit [31:0]  addr,
             input bit [ 7:0]  data);
    cpu_trans  tr = new();
    tr.randomize() with {tr.address == addr; tr.kind == WRITE; == data;};
    chan.put(tr); // send through the scenario's channel (elided in the original post)
  endtask: write

  /// Send a read transaction for the given address and check for the given expected data.
  task read(input bit [31:0]  addr,
            input bit [ 7:0]  exp_data);
    cpu_trans  tr = new();
    tr.randomize() with {tr.address == addr; tr.kind == READ;};
    chan.put(tr); // send through the scenario's channel (elided in the original post)
    if ( !== exp_data) begin
      `vmm_error(this.log, $psprintf("READ(A:%X, D:%X) did not match expected:%X",
                                     addr,, exp_data));
    end
  endtask: read
endclass: cpu_filebased_scenario
After defining the scenario class, I created an extension of vmm_test that tells the VMM factory to use the file-based scenario for this test and run it once. The VMM architecture and factory make it possible to run the legacy tests just like any randomized VMM tests. Plus, it does not require modifying any other component in the environment. Here is the implementation of the test class:

class test_filebased extends vmm_test;
  function void configure_test_ph();
    // Tell the factory which scenario class to use for this test.
    // (the override instance pattern below is illustrative)
    cpu_rand_scenario::override_with_new("@%*:scenario",
          cpu_filebased_scenario::this_type(), log, `__FILE__, `__LINE__);
  endfunction: configure_test_ph

  function void build_ph();
    // Run the scenario only once.
    vmm_opts::set_int("%*:num_scenarios", 1);
  endfunction: build_ph
endclass: test_filebased
Since the directed test file is read into a new environment, it stands to reason that the driver in the new environment could act differently than the one in the legacy environment. For example, the driver might try to combine multiple transactions into bursts or reorder them. While this should probably be done in a specific scenario in the VMM environment, not the driver, you should compare the final stimulus that is performed on the DUT between the two environments. Only after you have done such an assessment can you say that the testcase is being reused for its original intent.

Once you have the legacy tests running in a modern VMM environment, you can enhance the environment to have randomization, self-checking and functional coverage. You can analyze the existing tests with a coverage model to determine what new tests you need to write to verify functionality that was initially missed or has been added or modified. The coverage model can also tell you which legacy tests duplicate functionality in other tests, providing justification for getting rid of legacy tests and giving confidence in the quality of your VMM environment.

Posted in VMM | 3 Comments »

Performance appraisal time – Getting the analyzer to give more feedback

Posted by Amit Sharma on January 28th, 2011

S. Prashanth, Verification & Design Engineer, LSI Logic

We wanted to use the VMM performance analyzer to analyze the performance of the bus matrix we are verifying. To begin with, we wanted the following information while accessing a shared resource (slave memory):

· Throughput/Effective Bandwidth for each master in terms of Mbytes/sec

· Worst case latency for each master

· Initiator and Target information associated with every transaction

By default, the performance analyzer records the initiator id, target id, start time and end time of each tenure (associated with a corresponding transaction) in the SQL database. In addition to the useful information provided by the Performance Analyzer, we needed the number of bytes transferred for each transaction to be dumped into the SQL database. This was required for calculating throughput, which in our case was the number of bytes transferred from the start time of the first tenure until the end time of the last tenure of a master. Given that we had a complex interconnect with 17 initiators, it was difficult for us to correlate an initiator id with its name, so we wanted to add initiator names to the SQL database as well. Let's see how this information can be added from the environment.
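
Once the per-tenure byte counts are in the database, the throughput computation per master reduces to the sketch below (a post-processing helper; the function name and argument types are illustrative):

```systemverilog
// Effective bandwidth of one master: total bytes moved across all of its
// tenures, divided by the window from its first start time to its last end time.
function real throughput_mbytes_per_sec(longint unsigned total_bytes,
                                        realtime first_start,
                                        realtime last_end);
  realtime window = last_end - first_start;
  // (window / 1s) converts the simulation-time window into seconds
  return (real'(total_bytes) / (window / 1s)) / 1.0e6;
endfunction
```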

An earlier blog on the performance analyzer, "Performance and Statistical analysis from HDL simulations using the VMM Performance Analyzer", provides useful information on how to use the VMM performance analyzer in a verification environment. Starting with that, let me outline the additional steps we took to get the statistical analysis we desired.

Step 1: Define the fields and their data types required to be added to the database in a string (user_fields), i.e. "MasterName VARCHAR(255)" for the initiator name and "NumBytes SMALLINT" for the number of bytes. Provide this string to the performance analyzer instance during initialization.

class tb_env extends vmm_env;
  vmm_sql_db_sqlite db;       // SQLite database
  vmm_perf_analyzer bus_perf;
  string user_fields;

  virtual function void build();;
    db = new("perf_data");    // Initializing the database
    user_fields = "MasterName VARCHAR(255), NumBytes SMALLINT";
    bus_perf = new("BusPerfAnalyzer", db, , , , user_fields);
  endfunction: build
endclass

Step 2: When each transaction ends, collect the initiator name and the number of bytes transferred in a string variable (user_values). Then provide the variable to the performance analyzer through the end_tenure() method.

fork begin
  vmm_perf_tenure perf_tenure = new(initiator_id, target_id, txn);
  string user_values;

  bus_perf.start_tenure(perf_tenure);
  // ... wait for the transaction to complete ...
  user_values = $psprintf("%s, %0d", initiator.get_object_name(), txn.get_num_bytes());
  bus_perf.end_tenure(perf_tenure, user_values);
end join_none


With this, the performance analyzer dumps the additional user information into the SQL database. The blog "Analyzing results of Performance Analyzer with Excel" explains how to extract information from the generated SQL database. Using the spreadsheet, we could create our own plots and ensure that management has all the analysis it needs to provide the perfect appraisal.

Posted in Optimization/Performance, Performance Analyzer, Verification Planning & Management | 1 Comment »

Managing coverage grading in complex multicore microprocessor environments

Posted by Shankar Hemmady on January 26th, 2011

Something as simple as coverage grading, which we often take for granted, starts showing its exponential complexity when dealing with cutting-edge designs where quality and timeliness are essential. 

An article in EE Times by James Young and Michael Sanders of AMD, along with Paul Graykowski and Vernon Lee of Synopsys, describes how they created a coverage grading solution, Quickgrade, that scales to meet the complexity of multicore multiprocessor design environments.

Posted in Coverage, Metrics | Comments Off

Cool Things You Can Do With DVE – The Videos

Posted by Yaron Ilani on January 19th, 2011

DVE is now on YouTube! Here's a collection of short videos demonstrating some of the coolest features in DVE that you need to know about. If you wish to learn more, check out the recent blog articles. Enjoy!

Debugging UVM Sequences

Searching & Cool GUI tips

Interactive Rewind

Debugging SystemVerilog Macros

Debugging FSMs & The Grid

Debugging Your Source Code

Debugging SystemVerilog Assertions (SVA)

Tracing Drivers and Active Driver

More videos coming up soon…

Posted in Debug | 1 Comment »

Verification in the trenches: Transform your sc_module into a vmm_xactor

Posted by Ambar Sarkar on January 19th, 2011

Dr. Ambar Sarkar, Chief Verification Technologist, Paradigm Works Inc.

Say you have SystemC VIP lying around, tried and true. More likely than not, they are BFMs that connect at the signal level to the DUT and have a procedural API supporting transaction level abstraction.

What would be the best way to hook these components up with a VMM environment? With VMM now being available in SystemC as well, you really want to make these models look and behave as vmm_xactor derived objects that interact seamlessly across the SystemC/SystemVerilog language boundary. Your VMM environment can thus take full advantage of your existing SystemC components. And your sc_module can still be used, just as before, in other non VMM environments!

Enough motivation. Can this be done? Since SystemC is really C++, and it supports multiple inheritance, is there a way to just create a class that inherits from both your SystemC component as well as vmm_xactor?

Here is an example.

Originally, suppose you had a consumer BFM defined (keeping the example simple for illustration purposes).

  SC_MODULE(consumer) {
    sc_out<sc_logic>   reset;
    sc_out<sc_lv<32> > sample;
    sc_in_clk          clk;
    SC_CTOR(consumer) : clk("clk"), reset("reset"), sample("sample") {
    }
    . . .
  };

Solution Attempt 1) The first thing to try would be to simply create a new class called consumer_vmm as follows and define the required vmm_xactor methods.

  class consumer_vmm : public consumer, public vmm_xactor
  {
    consumer_vmm(vmm_object* parent, sc_module_name _nm)
           : vmm_xactor(_nm, "consumer", 0, parent)
            , reset("reset")
            , sample("sample")
            , clk("clk")
       {
           SC_METHOD(entry);
           sensitive << clk.pos();
           . . .
       }
      . . . define the remaining vmm_xactor methods as needed . . .
  };
Unfortunately, this does not work. Why? As it turns out, vmm_xactor also inherits from sc_module, so consumer_vmm ends up inheriting sc_module twice: once through consumer and once through vmm_xactor. This is the classic Diamond Problem.
Okay, so what can be done? Luckily, we can get all of this to work reasonably well with some additional tweaks/steps. Yes, you will need to very slightly modify the original source code, but in a backward-compatible way.

Solution Attempt 2) Make the original consumer class  derive from vmm_xactor instead of sc_module. This is the only change to existing code, and this will be backward compatible since vmm_xactor inherits from sc_module as well. Of course, add any further vmm_xactor:: derived methods using the old api as needed.

  class consumer : public vmm_xactor
  {
   public:
    sc_out<sc_logic>   reset;
    sc_out<sc_lv<32> > sample;
    sc_in_clk          clk;
    . . .
  };
Solution) Here are all the steps. It looks like quite a few, but other than creating the wrappers and hooking them up, the steps remain the same regardless of whether you use sc_module or vmm_xactor.

Step 1. Make the original consumer class  derive from vmm_xactor instead of sc_module. This is the only change to existing code, and this will be backward compatible since vmm_xactor inherits from sc_module as well. Of course, add any further vmm_xactor:: derived methods using the old api as needed.

  class consumer : public vmm_xactor
  {
   public:
    sc_out<sc_logic>   reset;
    sc_out<sc_lv<32> > sample;
    sc_in_clk          clk;
    . . .
  };

Step 2. Define SC_MODULE(consumer_wrapper), a module with the same set of pins as needed by the consumer.

  SC_MODULE(consumer_wrapper) {
    sc_out<sc_logic>   reset;
    sc_out<sc_lv<32> > sample;
    sc_in_clk          clk;
    SC_CTOR(consumer_wrapper) : clk("clk"), reset("reset"), sample("sample") {
    }
  };
Step 3. Declare pointers (not instances) to the consumers and their wrappers in the env class.
  class env : public vmm_group
  {
  public:
     consumer *consumer_inst0;
     consumer *consumer_inst1;
     consumer_wrapper *wrapper0, *wrapper1;
     . . .
  };

Step 4. In the connect_ph() phase, connect the pins of the consumer instances to the corresponding wrapper instances.

  void env::connect_ph() {
      consumer_inst0->reset(wrapper0->reset);
      consumer_inst0->clk(wrapper0->clk);
      consumer_inst0->sample(wrapper0->sample);

      consumer_inst1->reset(wrapper1->reset);
      consumer_inst1->clk(wrapper1->clk);
      consumer_inst1->sample(wrapper1->sample);
  }

Step 5. In the constructor for sc_top, after the env instance is created, make sure the pointers in the env point to these wrappers.

   class sc_top : public sc_module
   {
   public:
     vmm_timeline*  t1;
     env*           e1;

     sc_out<sc_logic>   reset0;
     sc_out<sc_lv<32> > sample0;
     sc_in_clk    clk;

     sc_out<sc_logic>   reset1;
     sc_out<sc_lv<32> > sample1;

     consumer_wrapper wrapper0;
     consumer_wrapper wrapper1;

     SC_CTOR(sc_top):
       wrapper0("wrapper0")
       ,wrapper1("wrapper1")
       ,reset0("reset0")
       ,sample0("sample0")
       ,reset1("reset1")
       ,sample1("sample1")
       ,clk("clk")
      {
         t1 = new vmm_timeline("timeline","t1");
         e1 = new env("env","e1",t1);

         e1->wrapper0 = &wrapper0;
         e1->wrapper1 = &wrapper1;

         vmm_simulation::run_tests();

         wrapper0.clk(clk);
         wrapper0.reset(reset0);
         wrapper0.sample(sample0);

         wrapper1.clk(clk);
         wrapper1.reset(reset1);
         wrapper1.sample(sample1);
      }
   };
So while it takes a few more steps than we had hoped, you do it only once, and mechanically. A small price to pay for reuse. Maybe someone can create a simple script.


Also, contact me if you want the complete example, which additionally shows how you can add TLM ports.

This article is the 10th in the Verification in the Trenches series. I hope you found it useful. If you would like to hear about any other related topic, please comment or drop me a line. Also, if you are starting out fresh, please check out the free VMM 1.2 environment generator.


Posted in Interoperability, SystemC/C/C++, VMM | 1 Comment »