Verification Martial Arts: A Verification Methodology Blog

Archive for the 'VMM' Category

Verification Methodology Manual

Accessing Virtual Registers in RAL

Posted by Amit Sharma on 5th August 2010

Amit Sharma, Synopsys

In one of my previous posts on Virtual Registers, I talked about how you use RAL to model virtual registers and fields, which are an efficient means of implementing a large number of registers in a memory or RAM instead of in individual flip-flops. I also mentioned that they are implemented as arrays associated with a memory.

In this post, I will talk about how you access these registers through RAL. Normal registers can be accessed using their hierarchical name in RAL; RAL generates the offsets and addresses required. For virtual registers, however, you need to provide the index along with the virtual register name for read and write operations. RAL then generates the offset address based on the index of the virtual register.

Consider the following example:
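The original figure is not reproduced here; a RALF sketch of such a specification might look as follows (the sizes, offsets and field names are illustrative assumptions — only the DMA_BFRS and ram names come from the access examples later in the post):

```
memory ram {
  size 1k; bits 16; access rw;
}

block blk {
  bytes 2;
  memory ram @0x1000;
  virtual register DMA_BFRS[8] ram @0x0 +4 {
    bytes 2;
    field SRC { bits 8; };
    field DST { bits 8; };
  }
}
```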


The above RALF specification would translate into the following SV classes in the RAL model



‘DMA_BFRS’, the instance of the virtual register class ‘ral_vreg_dut_DMA_BFRS’, is not an array in the RAL model and is thus different from a typical register array modeled in RAL.

Now, how do you access the individual registers mapped to the memory? You have to specify the index as an argument to the read/write methods. For example, to access index 4 of this array, the access would look like:

blk.DMA_BFRS.write(4, status, 'hFF);

RAL would ultimately access the RAM to enable this access. Hence, the following are functionally equivalent:

blk.DMA_BFRS.write(4, status, 'hFF);
blk.ram.write(4 * 0x0004 + 0x1000, status, 'hFF);

The following illustration explains this point:


You can see that you now have the option of accessing these registers in different modes, but both paths eventually go through the RAM. You can leverage the virtual register callbacks or the RAL memory callbacks for additional customization and extensibility, in addition to the other capabilities you get when using VMM RAL.

Hope this was useful. Do comment on any other specific functionality that you might look for when modeling these kinds of registers.

Posted in Register Abstraction Model with RAL | Comments Off

Low Power Verification for telecom designs with VMM-LP

Posted by Paul Kaunds on 2nd August 2010

Paul Kaunds, founder, Kacper Technologies

Today power has become a dominant factor for most applications including hand-held portable devices, consumer electronics, communications and computing devices. Market requirements are shifting focus from traditional constraints like area, cost, performance and reliability to power consumption.

For efficient power management, power intent has to be reviewed at each phase of the design flow — starting from RTL, during synthesis, and in place and route in the physical design. The power specification of the design should be common throughout the flow. To implement a uniform power flow, we need a common format that is easily understood by all the tools used along the flow, supports reusability, and can be declared independently of the RTL. The creation of UPF (Unified Power Format) as an IEEE standard has been the best solution for specifying power intent for design implementation as well as verification. UPF sits beside design and verification, and establishes the relationship between the design specification and the low power specification.

Understanding these driving factors, designers have opted for advanced low power design techniques like power gating, Multi Supply Multi Voltage (MSMV), retention, isolation, etc. Such low power constructs increase design complexity, which has a direct impact on verification. This makes the verification engineer’s job more challenging, as they have to verify the power intent along with functionality.

Some of the low-power design verification challenges are:

  • Power switch off/on duration
  • Transitions between different power modes and states
  • Interfaces between power domains
  • Missing level shifters, isolation and retention
  • Legal and illegal transitions
  • Clock enabling/toggling
  • Verifying retention register, isolation cell and level shifter strategies

To tackle these challenges, we need a structured and reusable verification environment which is power aware and encapsulates the best practices for verifying such complex low power designs. To address the low power requirements of one of our complex telecom designs, we made use of the Verification Methodology Manual for Low Power (VMM-LP) for SONET/SDH verification.

We developed a framework for accelerating the verification of low power designs. Our verification environment included VMM along with UPF, RAL, and a power state manager controlled by a power management unit.

Generation of Power control signals

Using the extensive features provided by VMM, we developed a VMM-based power generator that enables the user to generate power control signals as random, constrained random or directed stimulus to verify power management blocks, and to implement low power strategies like retention registers and isolation cells. This power generator was designed so that it can be hooked into any VMM-based environment. It gives the user the ability to generate power signals without the hassle of hand-coding the complex power control sequencing of such signals.


Power Scenario generation

Power scenario generation eased the verification of power aware designs. To generate power signals, a user can simply enter the ranges in a scenario file depending on power specifications. The same generator can be used along with any other VMM environment without any modifications for conventional verification. Some of the capabilities that can be made configurable in this scenario include: varying retention, isolation and power widths.  Additionally, other key capabilities were enabled:

  • Adheres to the standard power sequence specified in VMM-LP
  • All power signals and power domains are parameterizable
  • The power generator can be hooked into any VMM environment
  • Gives a well-defined structure for defining power sequences
  • Avoids overlapping of control signals
  • Allows multiple save/restore operations in each sequence
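As a rough sketch of what such a generator’s randomized transaction might look like (the class and field names here are our illustrative assumptions, not Kacper’s actual code):

```systemverilog
// Hypothetical power-control transaction randomized by the generator
class power_ctrl_txn extends vmm_data;
  typedef enum {POWER_ON, POWER_OFF, SAVE, RESTORE, ISOLATE} kind_e;
  rand kind_e       kind;
  rand int unsigned iso_width;  // isolation assertion width, in cycles
  rand int unsigned ret_width;  // retention save-to-restore window

  // The ranges would come from the user's scenario file:
  constraint c_ranges {
    iso_width inside {[2:16]};
    ret_width inside {[4:64]};
  }
endclass
```

A test can then override or layer constraints on these fields to hit directed corner cases while keeping the generator itself untouched.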

The Power state manager takes care of state transition operations by defining power domains, states, modes and their dependencies. It is the key holder for all power transitions that can monitor all transitions dynamically.

Automated Power Aware Assertion Library

Another mechanism for addressing the challenges of low power verification is the use of powerful LP assertions. Low power assertions are used in conjunction with the design’s logic data checkers. VMM-LP assertion rules and coverage techniques helped us achieve a comprehensive low power verification solution, and offered handy recommendations to ensure a consistent and portable implementation of the methodology.

To increase the efficiency of our low power telecom design verification, we also developed an automated power-aware assertion library which is generic to any domain and can be hooked to any design. The assertion library is built on top of the VMM-LP rules and guidelines with standard power sequencing. Key features of our assertion library include:

  • Assertions to verify access to software-addressable registers in on/off conditions
  • Clock toggling during power on/off
  • Reset during power off
  • Power control signal sequencing, etc.
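For illustration, one such power-aware check (the signal names are assumptions) could verify that the domain clock stays quiescent while the domain is powered down:

```systemverilog
// Hypothetical assertion: the domain clock must not toggle while the
// domain's power-on control is deasserted
property p_no_clk_when_off;
  @(posedge ref_clk) disable iff (rst)
    !pwr_on |-> $stable(dom_clk);
endproperty

a_no_clk_when_off : assert property (p_no_clk_when_off)
  else $error("dom_clk toggled while power domain was off");
```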

More details on our flow and its usage can be found online in the paper we recently presented at the Synopsys Users Group (SNUG) meeting in Bangalore.

Posted in Low Power, Register Abstraction Model with RAL | 3 Comments »

Modeling Virtual Registers and Fields in RAL

Posted by Amit Sharma on 27th July 2010

Amit Sharma, Synopsys

Typically, fields and registers are assumed to be implemented in individual, dedicated hardware structures with a constant, permanent physical location, such as a set of D flip-flops. However, for some registers, which are typically large in number, implementing them in a memory or RAM instead of in individual flip-flops is more efficient. These are virtual fields and virtual registers; because they are implemented in a RAM, their physical location and layout are defined by an agreement between the hardware and the software, not by their physical implementation.

Virtual fields and registers can be modeled using RAL by creating a logical overlay on a RAL memory model that can be accessed as if they were real physical fields and registers. Virtual fields are contained in virtual registers.



Virtual registers are defined over contiguous locations within a memory and can span multiple memory locations. They are always composed of entire memory locations, never fractions, and are modeled as arrays associated with a memory.

The association of a virtual register array with a memory can be static or dynamic, but the structure of these registers should be specified in the RALF file.


Static virtual registers are associated with a specific memory and are located at specific offsets within this memory. This association should be specified in the RALF file as shown in the following example. This association is permanent and cannot be broken at runtime.

Static Virtual Register Array:

Example 1 :

            memory ram1 {
              size 32; bits 8; access rw; initial 0;
            }

            block dut {
               bytes 2;
               memory ram1 @0x0000;
               virtual register vreg[3] ram1 @0x5 +2 {
                   bytes 2;
                   field f1 { bits 4; };
                   field f2 { bits 4; };
                   field f3 { bits 8; };
               }
            }

        An array of 3 virtual registers is associated with the memory 'ram1', starting at offset 5. The increment value is specified as 2 because a minimum of 2 memory locations is required to implement 1 virtual register. The 3 virtual registers are associated with ram1 as shown:

    vreg[0] is associated at offset 0x5 of ram1.
    vreg[1] is associated at offset 0x7 (0x5 + 2) of ram1.
    vreg[2] is associated at offset 0x9 (0x7 + 2) of ram1.


Dynamic virtual registers are associated with a user-specified memory and are located at user-specified offsets within that memory. The association is done at runtime. The dynamic allocation of virtual register arrays can also be performed randomly by a Memory Allocation Manager instance.

The number of virtual registers in the array and its association with the memory are specified in the SystemVerilog code and must be correctly implemented by the user. Dynamic virtual register arrays can be relocated or resized at runtime.

Dynamic Virtual Register Array:

Example 1:

            memory ram1 {
              size 64; bits 8; access rw; initial 0;
            }

            virtual register vreg {
             bytes 2;
             field f1 { bits 4; };
             field f2 { bits 4; };
             field f3 { bits 8; };
            }

            block dut {
             bytes 2;
             memory ram1 @0x0000;
             virtual register vreg=vreg1;
             virtual register vreg=vreg2;
            }

Here, the virtual register structure is specified in the RALF file, but the association is made at runtime (in the env or testcase) in one of the following ways:

a. Dynamic specification:
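The snippet itself is not shown here; in VMM RAL, this runtime association is typically made with the virtual register array’s implement() method, roughly as sketched below (the hierarchical names and exact argument order are assumptions — check them against the RAL User Guide):

```systemverilog
// Associate vreg1 with ram1 at runtime: 4 registers starting at offset
// 'h20, incrementing by 2 memory locations per register (sketch only)
initial begin
  if (!env.ral_model.dut.vreg1.implement(4, env.ral_model.dut.ram1, 'h20, 2))
    `vmm_error(log, "Failed to implement vreg1 in ram1");
end
```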


   A virtual register array of size 4 is implemented starting at offset 'h20 of the memory ram1, with an increment value of 2. The virtual registers will be associated as:

    vreg1[0] is associated at offset 0x20 of ram1.
    vreg1[1] is associated at offset 0x22 (0x20 + 2) of ram1.
    vreg1[2] is associated at offset 0x24 (0x22 + 2) of ram1.
    vreg1[3] is associated at offset 0x26 (0x24 + 2) of ram1.

b. Randomly implemented dynamic specification:  
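Again, the snippet is not reproduced; random allocation through the Memory Allocation Manager might be sketched as follows (the names and signatures are assumptions to be verified against the RAL User Guide):

```systemverilog
// Let the memory allocation manager pick a region for 5 virtual registers
initial begin
  vmm_mam mam = env.ral_model.dut.ram1_mam;  // MAM managing ram1's space
  if (env.ral_model.dut.vreg2.allocate(5, mam) == null)
    `vmm_error(log, "MAM could not allocate a region for vreg2");
end
```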


   A virtual register array of size 5 is allocated randomly by the memory allocation manager (MAM). The allocated region is randomly selected in the address space managed by the specified MAM.


I hope this post helps you model and verify the registers and fields implemented in your DUT more efficiently. Please look out for the next blog post, where I will talk about the different modes through which you can access these registers and fields.

Posted in Register Abstraction Model with RAL | Comments Off

Learn about VMM adoption from customers – straight from SNUG India 2010

Posted by Srinivasan Venkataramanan on 1st July 2010

…Reflections from the core engineering team of CVC, fresh from SNUG India 2010

Jijo PS, Thirumalai Prabhu, Kumar Shivam, Avit Kori, Praveen & Nikhil – TeamCVC

Here is a quick collection of various VMM updates from SNUG India 2010, as seen by TeamCVC. Expect to hear more on VMM 1.2 soon from us, as I now have a young team all charged up about VMM 1.2 (thanks to Amit @SNPS). All the papers and presentations can be accessed through:

TI’s usage of VMM 1.2 & RAL

In one of the well-received papers, TI Bangalore talked about “Pragmatic Approach for Reuse with VMM 1.2 and RAL”. The design is a complex digital display subsystem involving numerous register configurations. Not only is handling the register configurations a challenge, but so is reusing block-level subenvs at the system level with ease, minimal rework and reduced verification time. The authors presented their success in addressing these challenges with VMM 1.2 & RAL.

Key advanced VMM elements touched upon were:

· TLM 2.0 communication mechanism

· VMM Multi-Stream Scenario gen (MSS)


Automated Coverage Closure with ECHO

It is real and live – automated coverage closure is slowly becoming reality, at least in select designs and projects, having been attempted by various vendors for a while. VCS has added this under its ECHO technology. At SNUG, TI presented their experience with ECHO & VMM-RAL. In their paper titled “Automating Functional Coverage Convergence and Avoiding Coverage Triage with ECHO Technology”, TI described how an ECHO-based methodology in a VMM RAL environment can, in an automated manner, close the feedback loop in targeting coverage groups involving register configuration combinations, resulting in a significant reduction in verification time.

WRED Verification with VMM

In her paper on “WRED verification with VMM”, Puja shared her usage of advanced VMM capabilities for a challenging verification task. Specifically she touched upon:

· VMM Multi-Stream Scenario gen

· VMM Datastream Scoreboard with its powerful “with_loss” predictor engine

· VMM RAL to access direct & indirect RAMs & registers

What we really liked was seeing real application of some of these advanced VMM features – we were taught all of these during our regular CVC trainings and have even tried many of them on our own designs. It feels great to hear from peers about similar usage and to appreciate the value we derive from VMM @CVC and the vibrant ecosystem that CVC creates around it.

System-Level verification with VMM

Ashok Chandran of Analog Devices presented their use of specialized VMM components in a system-level verification project. Specifically, he touched upon specialized VMM base classes like vmm_broadcast and vmm_scheduler.

In the end, the audience learnt some of the unique challenges an SoC verification project can present. Even more interesting was the fact that the ever-growing VMM seems to have a solution for a wide variety of such problems, well thought out upfront – kudos to the VMM developers!

Ashok also briefed us on his team’s usage of relatively new features in VMM, such as vmm_record and vmm_playback, and how they help to quickly replay streams.

On the tool side, a very useful feature for regressions is the separate-compile option in VCS.

VMM 1.2 for VMM users

Amit from SNPS gave a very useful and to-the-point update on VMM 1.2 for long-time VMM users. It was rejuvenating to listen to the VMM 1.2 run_tests feature and the implicit phasing techniques. Though they look like a little “magic”, these features are bound to improve our productivity, as there are fewer things to code and debug before moving on.

Amit also touched upon the use of TLM 2.0 ports and how they can be nicely used for integrating functional coverage, instead of using the vmm_callbacks.

The hierarchical component creation and configuration in VMM 1.2 puts us on track for the emerging UVM, and it is very pleasing to see how the industry keeps moving toward more and more automation.

A truly vibrant ecosystem enabled by CVC -VMM Catalyst member

A significant addition to this year’s SNUG India was the DCE – Designer Community Expo – a genuine initiative by Synopsys to bring in partners to serve the larger customer base better, all under one roof. CVC, being the most active VMM Catalyst member in this region, was invited to set up a booth showcasing its offerings. We gave away several books, including our popular VMM adoption book and the new SVA Handbook, 2nd edition.

Here is a snapshot of CVC’s booth with our VMM and other offerings.


Posted in Coverage, Metrics, Register Abstraction Model with RAL, VMM | 1 Comment »

VMM 1.2.1 is now available

Posted by Janick Bergeron on 15th June 2010

We, in the VMM team, have been so busy working on improving VMM that we only recently noticed it has been almost a full year since we released an Open Source distribution of VMM. With the release of VCS 2010.06, we took the opportunity to release an updated Open Source distribution that contains all of the new features and capabilities of VMM now available in the VCS distribution.

I am not going to repeat in detail what changed (you can refer to the RELEASE.txt file for that), but I will point out two of the most important highlights…

First, this version supports the VMM/UVM interoperability package (also available for download). This interoperability package allows you to use UVM verification assets in your VMM verification environment (and vice versa). Note that the VMM/UVM interoperability package is also included in the VCS distribution (along with the UVM-1.0EA release) in VCS 2010.06.

Second, many new features were added to the VMM Register Abstraction Layer (RAL). For example, RAL now supports automatic mirroring of registers, either by passively observing read/write transactions or by actively monitoring changes in the RTL itself. Another important addition is the ability to perform sub-register accesses when fields are located in individual byte lanes.

The Open Source distribution is the exact same source code as the VMM distribution included in VCS. Therefore, you can trust its robustness acquired in the field through many hundreds of successful verification projects.

Posted in Announcements, Interoperability, Register Abstraction Model with RAL | 1 Comment »

Simplifying test writing with MSSG and the constraint switching feature of SystemVerilog

Posted by Tushar Mattu on 25th May 2010

Pratish Kumar KT  & Sachin Sohale, Texas Instruments
Tushar Mattu,  Synopsys

In an ideal world, one would want maximum automation and minimal effort. The same holds true when you want to uncover bugs in your DUV. You would want to provide your team members with an environment where they can focus their efforts on debugging issues in the design rather than spending time writing hundreds of lines of testcase code. Here we want to share some simple techniques that make the test writer’s life easier by enabling them to achieve the required objectives with minimal test code, more automation and more reuse. Along with automation, it is important to ensure that the test logic remains easy to understand. In one recent project, where the DUV was an image processing block, the following were some of the requirements:

- The relevant configuration for each block had to be driven first on a configuration port before driving the frame data on another port

- The configuration had to be written to the registers in individual blocks and each block had its own unique configuration.

- Several such blocks together constituted the overall sub-system which required an integrated testbench

- Because all these blocks had to be verified in parallel, the requirement was to have a generic register model which could work not only with the blocks in parallel but also be reused at the sub-system level

- Also, another aspect was to provide as much reuse in the testcases as possible from block to system level

Given the requirements, we decided to proceed as described below, using a clever mix of MSS scenarios, named constraint blocks and RAL:


Here is a brief description of the different components and the flow represented in the illustration above:

Custom RAL model

As the RAL register model forms the configuration stimulus space, we decided to put all test-specific constraints for register configuration in the top-level RAL block itself and switch them off by default. As seen below, the custom RAL block extended from the generated model contains all the test-specific constraints, disabled by default.
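A minimal sketch of this idea (the block, register and constraint names are our assumptions, not the project’s actual code):

```systemverilog
// Generated RAL model extended with named, test-specific constraints that
// are switched off by default; each test re-enables only the one it needs
class custom_ral_block extends ral_block_dut;  // ral_block_dut: generated

  constraint small_frames_c { FRAME_CFG.SIZE.value inside {[16:64]}; }
  constraint max_burst_c    { DMA_CFG.BURST.value == 8; }

  function new();
    super.new();
    // Off by default; a test turns one on with constraint_mode(1)
    small_frames_c.constraint_mode(0);
    max_burst_c.constraint_mode(0);
  endfunction

endclass
```

In a test, enabling a single named constraint before randomization (e.g. `ral.small_frames_c.constraint_mode(1);`) is then all that is needed to steer the configuration.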


Basic Multistream Scenario

For each block, one basic scenario (“BASIC SCENARIO”) was always registered with the MSSG. This procedural scenario governs the test flow, which is:

- Randomize the RAL configuration with default constraints,

- Drive the configuration to the configuration port

- Put the desired number of frame data into the driver BFM input channel

The following snippet shows the actual “BASIC SCENARIO”:
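In outline, such a scenario (the class and member names here are assumptions, not the actual snippet) follows the three steps above:

```systemverilog
class basic_scenario extends vmm_ms_scenario;
  custom_ral_block   ral_model;   // configuration stimulus space
  frame_data_channel frame_chan;  // driver BFM input channel
  int unsigned       num_frames = 10;

  virtual task execute(ref int n);
    vmm_rw::status_e status;
    // 1. Randomize the RAL configuration with the default constraints
    void'(ral_model.randomize());
    // 2. Drive the configuration to the configuration port
    ral_model.update(status);
    // 3. Put the desired number of frames into the driver's channel
    repeat (num_frames) begin
      frame_data fr = new();
      void'(fr.randomize());
      frame_chan.put(fr);
    end
    n++;
  endtask
endclass
```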


Scenario Library

With the basic infrastructure in place, the strategy for creating a scenario library for the test-specific requirements was simple. Each extended scenario class was meant to change the configuration generation through the RAL model of the different blocks. Thus, in the scenario library for each block, each independent scenario extended the basic scenario and overrode only the required virtual method for configuration change, as shown below:



Each test would then select the scenario corresponding to the test and register it with the MSSG.

At the block level, we were able to reuse this approach across all block level tests consistently and effectively.

Later, we were able to reuse all block-level tests in the sub-system-level testbench, as tests were written in terms of constraints in the RAL model itself. At the sub-system level, we would then turn on the specific constraint per block using the corresponding block-level scenarios. This ensures maximum reuse and a consistent test flow across levels.


At higher levels, the scenarios extend the “basic_system_scenario”. The test flow managed by the system-level scenarios has a slightly different execution flow than the block-level tests, but the configuration generation is reused consistently and efficiently from block to system. That means modifications of block-level test constraints do not require any modification of subsystem-level tests using that configuration.

And voila!! Scenario and test generation steps were automated at the block and sub-system level using simple Perl scripting. The test-specific constraints per block were written in a spreadsheet, which a script takes as input to generate the scenario and test at the block/sub-system level. The scenario execution flow is predetermined and defined in the block/sub-system basic scenario. Furthermore, within this automation flow, the user has the flexibility to create multiple such scenarios and reuse existing test constraints.

Posted in Creating tests, Register Abstraction Model with RAL, Tutorial, VMM | 3 Comments »

Using RAL callbacks to model latencies within complex systems

Posted by Amit Sharma on 20th May 2010

Varun S, CAE, Synopsys

In very complex systems there may be cases where a write to a register is delayed by blocks internal to the system, while the block sitting on the periphery signals a write complete. The RAL BFM is generally connected to the peripheral block, and as soon as it sees the write complete from the peripheral block it updates RAL’s shadow registers. But due to latencies within the internal blocks, the state of the physical register may not have changed yet, so the values of the real register and the shadow register will not match for some duration. This could cause problems if another component happens to read the register: there would be a mismatch, as the shadow register reflects the new value, which hasn’t reached the register yet, whereas the DUT register still holds the old value.

To prevent such mismatches, one can use the callback methods within the vmm_rw_xactor_callbacks class and model the delay within the relevant callback task. The following methods are present in the callback class:

1. pre_single()  – This task is invoked before execution of a single cycle i.e., before execute_single().
2. pre_burst()   – This task is invoked before execution of a burst cycle i.e., before execute_burst().
3. post_single() – This task is invoked after execution of a single cycle i.e., after execute_single().
4. post_burst()  – This task is invoked after execution of a burst cycle i.e., after execute_burst().

In the scenario described above, it would be ideal to introduce a delay such that the update of RAL’s shadow register is simultaneous with the state change of the real register. For this, we need a monitor sitting very close to the register, which correctly indicates its state changes. Within the post_single() method of the callback, we can wait until the monitor senses a change and then exit the task. This delays the shadow register from being updated until the real register changes state. The sample code below illustrates how this is done.


class my_rw_xactor_callbacks extends vmm_rw_xactor_callbacks;

   my_monitor_channel mon_activity_chan;

   function new(my_monitor_channel mon_activity_chan);
      this.mon_activity_chan = mon_activity_chan;
   endfunction

   virtual task post_single(vmm_rw_xactor xact, vmm_rw_access tr);
      my_monitor_transaction mon_tr;

      if (tr.kind == vmm_rw::WRITE) begin
         // Block until the monitor reports the state change of the
         // register being written, matching on its address
         do begin
            mon_activity_chan.get(mon_tr);
         end while (mon_tr.addr != tr.addr);
      end
   endtask

endclass : my_rw_xactor_callbacks


In the example code above, the callback class receives a handle to the monitor’s activity channel, which is then used within the post_single() task to wait until the monitor observes a state change on the register. A check is made using the address of the register to confirm the state change was for the register in question. Real scenarios can get much more complex, with multiple domains, drivers, etc.; such additional complexities can be handled within the post_single() callback task using the monitor(s) observing the interfaces through which the register is written.

RAL also has a built-in mechanism known as auto mirroring to automatically update the mirror whenever the register changes inside the DUT through a different interface. One can refer to the latest RAL User Guide to get more details on its usage.

Posted in Register Abstraction Model with RAL, Reuse | Comments Off

VMM Smart Log in DVT

Posted by Cristian Amitroaie on 1st April 2010

In this blog post, I’d like to cover two cool features coming up in the DVT Integrated Development Environment (IDE). The first is the ability to color VMM log messages; the second is the ability to hyperlink those messages to the VMM source code.

Here is a snapshot showing VMM logs being colored and hyperlinked to source code:


Besides providing hyperlinks from the VMM log to source code, DVT also provides hyperlinks to VCS compilation/simulation errors and warnings. To enable these features, all you have to do is define the right parameters in the DVT Run Configuration window. For instance, you can specify the compilation/simulation commands and turn on VMM log filtering from the corresponding tab.


This is just one among many other VMM features that are available in DVT.

For more details see

Posted in Debug, Tools & 3rd Party interfaces, VMM | Comments Off

Managing VMM Log verbosity in a smart way

Posted by Srinivasan Venkataramanan on 1st March 2010

Srinivasan Venkataramanan, CVC Pvt. Ltd.

Vishal Namshiker, Brocade Communications

Any complex system requires debugging at some point or another. To ease the debug process, a good, proven coding practice is to add enough messages to aid the end user in debug. However, as systems mature, the messages tend to become too many, and users quickly feel the need to control them. VMM provides a comprehensive log scheme with enough flexibility to let users control what, how and when to see certain messages (see:

As we know, the usage of the `vmm_verbose/`vmm_debug macros requires the +vmm_log_default=VERBOSE runtime argument. However, when using this, there are tons of messages coming from the VMM base classes too, as they are under the VERBOSE/DEBUG severity. Users at Brocade did not want these messages when debugging problems in user code. Parsing through them and staying focused on the problem at hand was tedious unless post-processing of the log file was implemented. Sure, the messages from the VMM base classes are useful for one class of problems, but if the current problem is in user code, the user would like to be able to exclude them easily. An interesting problem of contradictory requirements, perhaps? Not really – the VMM base classes are well architected to handle this situation.

In VMM, there are two dimensions for controlling which messages the user sees. The verbosity level specifies the minimum severity to display: you’ll see every message with a severity greater than or equal to it. The other dimension is classification based on TYPE. There are several values for the TYPE, such as NOTE_TYP, DEBUG_TYP, etc. Most relevant here is INTERNAL_TYP – a special type intended to be used exclusively by VMM base class code. All debug-related VMM library messages are classified under INTERNAL_TYP, and they can be suppressed with the vmm_log::disable_types() method.

A quick example to do this inside the user_env is below:

virtual function void my_env::build();
   super.build();
   this.log.disable_types(vmm_log::INTERNAL_TYP,
                          .name("/./"), .inst("/./"));
endfunction : build

This is typical usage if everyone in the team agrees to such a change. However, if a localized change is needed for a few runs alone, one can use the power of VCS’s Aspect Oriented Extensions (AOE) to SystemVerilog. In this case, the user supplies a separate file, as shown below:

extends disable_log(vmm_log);
   after function new(string name = "/./",
                      string instance = "/./",
                      vmm_log under = null);
      this.disable_types(vmm_log::INTERNAL_TYP, "/./", "/./");
   endfunction
endextends

Add this file to the compile list and voila! BTW, during the recent SystemVerilog extensions discussion at DVCon 2010, AOP extensions were requested by more users for addition to the LRM standard. With due process, a version of AOP is likely to be added to the LRM in the future (let’s hope in the “near future” :) ).

Posted in Debug, Messaging, SystemVerilog, VMM | Comments Off

Using a RAL test to go from Block to Top quickly and effectively….

Posted by Srivatsa Vasudevan on 9th February 2010


Often, I’ve been in situations where the chip lead came to me asking for a test to make sure that a block quickly integrates at the top, and I had to pull some tricks out of a hat in fairly short order. I’m sure this has happened to many of you more times than you want to count.

In order for the top-level integration to run, I recommend a simple sanity register read/write test against all the blocks, to make sure the integration went OK and that the available RTL can be debugged further.


The point being: if the CPU cannot read/write the configuration registers of any one block on the SoC, the core will fail the integration test anyway.

Many folks typically wind up writing such a test using Verilog tasks. One of the challenges is that the test has to be continually kept up to date as the memory map changes, and the environment has to be updated as well.

However, using RAL allows both the block owner and the chip-top owner to tag off the same RALF file and get their work done. The core-level engineer can continue on his path of verifying with his tests, while the chip-top owner continues on a separate path of writing top-level tests.


All the core owner has to do is share that one RALF file with each release of the core while he works on other integration tests.

In the coming posts, we’ll see how to cut the amount of work you do using these generated models.

My mantra has always been “work less / verify more.” We’ll explore sequences and user-generated code, and how to kill the documentation issues.

Till then, Stay tuned.

Posted in Register Abstraction Model with RAL, Reuse | Comments Off

Connecting Multiple Analysis Ports to a Single Analysis Export

Posted by JL Gray on 9th February 2010

Today’s post was written by my colleague Asif Jafri. Enjoy! JL

by Asif Jafri

Asif Jafri is a verification engineer at Verilab.

This post introduces the VMM implementation of the Transaction Level Modeling (TLM) 2.0 specification, showing how you can connect multiple broadcasting ports to the same receiving export using peer IDs. Figure 1 shows multiple initiators communicating with the same target. The initiators can be monitors on either side of your DUT, passing transactions to a single scoreboard that keeps track of the transactions and performs various checks. In TLM 2.0, message broadcast is accomplished through write() function calls from the initiator, which are then implemented in the target.

Figure 1: Connecting using ID
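A minimal sketch of this peer-ID scheme (the transaction class my_txn, scoreboard my_sb and monitor handles are hypothetical, and the exact tlm_bind() argument list should be checked against the VMM 1.2 documentation):

```systemverilog
// single export, many ports: the peer ID given at bind time comes
// back as the 'id' argument of write(), identifying the sender
class my_sb extends vmm_object;
   vmm_tlm_analysis_export #(my_sb, my_txn) exp;
   vmm_log log;

   function new(string name);
      super.new(null, name);
      log = new("my_sb", name);
      exp = new(this, "exp");
   endfunction

   virtual function void write(int id = -1, my_txn trans);
      if (id == 0)
         `vmm_note(log, "transaction from expected-side monitor");
      else
         `vmm_note(log, "transaction from actual-side monitor");
   endfunction
endclass

// in the env build phase: both monitors broadcast into one export
// mon_in.ap.tlm_bind (sb.exp, 0);   // peer ID 0
// mon_out.ap.tlm_bind(sb.exp, 1);   // peer ID 1
```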

Read the rest of this entry »

Posted in Communication, Reuse, Transaction Level Modeling (TLM), VMM, VMM infrastructure | Comments Off

Leverage on the built-in callback inside vmm_atomic_gen and be productive with DVE features for VMM debug

Posted by Srinivasan Venkataramanan on 7th February 2010

Srinivasan Venkataramanan, CVC Pvt. Ltd.

Rashmi Talanki, Sasken

John Paul Hirudayasamy, Synopsys

During a recent verification-environment creation for a customer, we had to tap an additional copy/reference of the generated transaction to another component in the environment without affecting the flow. So one producer gets more than one consumer (here, two consumers). As a first-time VMM coder, the customer tried using vmm_channel::peek() on the channel connecting the GEN to the BFM. Initially it seemed to work, but as more complex code was added across the two consumers of the channel, things started getting funny – one of the consumers received some transactions more than once, for instance.

The log file looked like:

@ (N-1) ns the transaction was peeked by Master_BFM  0.0.0

@ (N-1) ns the transaction was peeked by Slave_BFM 0.0.0


.(perform the task)


@N ns the Master_BFM got the transaction 0.0.0

@N ns the transaction was peeked by Slave_BFM 0.0.0

@N ns the transaction was peeked by Master_BFM 0.0.1

@N ns the transaction was peeked by Slave_BFM 0.0.1

With a little reasoning from the CVC team, the customer quickly understood the issue to be a classical race condition: two consumers waiting for the same transaction. What are the options? Well, several indeed:

1. Use vmm_channel::tee() (See our VMM Adoption book for an example)

2. Use callbacks – a flexible, robust means to provide extensions for any such future requirements

3. Use vmm_broadcaster

4. Use the new VMM 1.2 Analysis Ports

The customer liked the callbacks route but was hesitant to go down the seemingly lengthy path of callbacks – for a few reasons (valid for first-timers):

1. Coding callbacks takes more time than simple chan.peek(), especially the facade class & inserting at the right place

2. She was using the built-in `vmm_atomic_gen macro to create the generator and didn’t know exactly how to add callbacks there, as it is pre-coded!

During the review we discussed the pros and cons of the approaches, and when I mentioned the built-in post_inst_gen callback inside vmm_atomic_gen she got a pleasant surprise – it takes care of 2 of the 4 steps in the typical callback-addition flow recommended by CVC’s popular DR-VMM course:

Step-1: Declaring a facade class with needed tasks/methods

Step-2: Inserting the callback at “strategic” location inside the component (in this case generator)

This leaves only Steps 3 & 4 for the end user – not bad for a robust solution (especially given that Step 4 is more a formality of registration). Now that the customer was convinced, it was time to move to the coding desk to get it working. She opened up the VMM source and got trapped in the multitude of `define vmm_atomic_gen_* macros with all those nice-looking “\” at the end – thanks to SV’s style of creating macros with arguments. Though powerful, it is not the easiest to read and decipher – again, for a first-time SV/VMM user.
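For reference, Steps 3 & 4 might look like this sketch, assuming the macro was invoked as `vmm_atomic_gen(icu_xfer, "ICU Xfer Gen") so the facade, generator and channel class names follow the VMM macro naming convention; the tap channel and env handles are hypothetical:

```systemverilog
// Step 3: extend the macro-generated facade class
class icu_xfer_tap extends icu_xfer_atomic_gen_callbacks;
   icu_xfer_channel tap_chan;   // the second consumer's channel

   function new(icu_xfer_channel c);
      this.tap_chan = c;
   endfunction

   // invoked by the generator after each transaction is randomized
   virtual task post_inst_gen(icu_xfer_atomic_gen gen,
                              icu_xfer obj,
                              ref bit drop);
      icu_xfer cpy;
      $cast(cpy, obj.copy());   // hand the tap its own copy, not a shared ref
      tap_chan.sneak(cpy);      // non-blocking put into the tap channel
   endtask
endclass

// Step 4: register the callback with the generator instance
icu_xfer_tap tap = new(tap_chan);
env.gen.append_callback(tap);
```

Because each consumer now gets its own copy, there is no race on the original channel.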

Now comes the rescue in the form of the well-proven DVE – VCS’s robust GUI front end. Its macro-expansion feature, which works as cleanly as it can get, is at times hard to locate. But with our toolsmiths ready for assistance at CVC, it took hardly a few clicks to reveal the magic behind `vmm_atomic_gen(icu_xfer). Here is a first look at the atomic-gen code inside DVE.


Once the desired text macro is selected, DVE offers a context-sensitive menu (CSM) to expand the macro with its arguments. It is “Show → Macro”, as seen below in the screenshot.


With a quick bang on DVE, the macro expander popped up, revealing the nicely expanded source code – with every class-name argument substituted – for the actual atomic generator created by the one-liner macro. Clearly visible were the facade class name and the actual callback task with its full argument list (something that’s not obvious from looking at the raw macro source).


Now, what’s more – in DVE you can bind such a nice feature to a convenient hot-key (say, if you intend to use this feature often). Here is the trick:

Add the following to your $HOME/.synopsys_dve_usersetup.tcl

gui_set_hotkey -menu "Scope->Show->Macro" -hot_key "F6"

Now when you select a macro and press “F6”, the macro expands – no rocket science, but a cool, convenient feature indeed!

Voila – we learnt two things today: the built-in callback inside vmm_atomic_gen can save more than 50% of the coding and can match the effort (or the lack thereof) of a simple chan.peek(); and DVE’s macro-expansion feature makes debugging real fun!

Kudos to VMM and the ever improving DVE!

Posted in Callbacks, Debug, Reuse, Stimulus Generation, VMM, VMM infrastructure | Comments Off

VMM 1.2 – The Movie

Posted by John Aynsley on 14th January 2010


John Aynsley, CTO, Doulos

To celebrate the release of VMM 1.2 on VMM Central, I thought I would do something a little different and share with you a video giving a brief overview of the new features, including the implicit phasing and TLM-2 communication. So grab some popcorn, sit back, and enjoy…

Posted in Phasing, Structural Components, Transaction Level Modeling (TLM), VMM | Comments Off

Using Explicitly-Phased Components in an Implicitly-Phased Testbench

Posted by JL Gray on 11th December 2009

In my last post, I described the new VMM 1.2 implicit phasing capabilities.  I also recommended developing any new code based off of implicit phasing.  Obviously, though, companies that have been using the VMM for quite some time will have developed all of their existing testbench components using explicit phasing.  It is relatively straightforward (and in some sense almost trivial) to use an explicitly phased component in an implicitly phased testbench.

Remember that the whole point of explicit phasing is that users cycle components through the desired phases by manually calling functions and tasks within the component itself. vmm_env contains the following methods:

  • gen_cfg
  • build
  • reset
  • config_dut
  • start
  • wait_for_end
  • stop
  • cleanup
  • report

vmm_subenv contains the following relevant methods:

  • new
  • configure
  • start
  • stop
  • cleanup
  • report

In an explicitly-phased environment, subenv methods are called manually by integrators, usually from the equivalent method in vmm_env. There are two approaches for instantiating a vmm_subenv-based component in an implicitly-phased testbench. The default approach is to simply allow the implicit phasing mechanism to call these explicit phases for you. Explicitly phased components are identified by the implicit phasing mechanism, and methods are called using a standard (and not entirely unexpected) mapping:

Implicit Phase   Explicit Phase Called
build_ph         vmm_subenv::new [1]
configure_ph     vmm_subenv::configure
start_ph         vmm_subenv::start
stop_ph          vmm_subenv::stop
cleanup_ph       vmm_subenv::cleanup
report_ph        vmm_subenv::report

[1] Users must call vmm_subenv::new manually.

Now, you might want to phase your vmm_subenv in a non-standard way. If that’s the case, the first thing you’ll need to do is disable the automatic phasing. Here’s how. First, instantiate a null phase:

vmm_null_phase_def null_ph = new();

Next, override the phases you don’t want to start automatically. For example:

my_group.override_phase("start", null_ph);
my_group.override_phase("stop", null_ph);

Finally, call the explicit phases from the parent object’s implicit phases.  A complete example is shown below.

class testbench_top extends vmm_group;
   bus_master_subenv bus_master;
   vmm_null_phase_def null_ph = new();

   function void build_ph();
      bus_master = bus_master_subenv::create_instance(this, "bus_master");
      bus_master.override_phase("start", null_ph);
      bus_master.override_phase("stop", null_ph);
   endfunction: build_ph

   task reset_ph();
      // wait 1000 clocks...
   endtask: reset_ph

endclass: testbench_top

Posted in Communication, Modeling, Phasing, Reuse, VMM | Comments Off

VMM scenario generators and dependent scenarios

Posted by Avinash Agrawal on 4th December 2009

Avinash Agrawal

Avinash Agrawal, Corporate Applications, Synopsys

Often folks wonder if it is possible to have a VMM scenario generator where one scenario depends on another scenario.

The answer is “Yes.”

Consider the testcase below. You can define two scenarios, scn_a and scn_b, each with its own set of constraints. The variables generated in scn_b are a multiple – here by the variable “ratio” – of the values that were set when scn_a was generated previously. For more details on how VMM scenario generators work, refer to the VMM user guide.

SystemVerilog testcase:


class packet extends vmm_data;
   rand int sa;
   rand int da;

   `vmm_data_member_begin(packet)
      `vmm_data_member_scalar(sa, DO_ALL)
      `vmm_data_member_scalar(da, DO_ALL)
   `vmm_data_member_end(packet)
endclass

`vmm_channel(packet)            // needed for packet_channel below
`vmm_scenario_gen(packet, "packet")

class a_scenario extends packet_scenario;
   int unsigned scn_a;
   rand int ratio;

   function new();
      scn_a = define_scenario("scn_a", 5);
   endfunction

   constraint cst_a {
      $void(scenario_kind) == scn_a -> {
         foreach(items[i]) {
            this.items[i].sa inside {[0:100]};
            this.items[i].da inside {[0:100]};
         }
         ratio inside {[1:5]};
      }
   }
endclass

class b_scenario extends packet_scenario;
   int unsigned scn_b;

   function new();
      scn_b = define_scenario("scn_b", 10);
   endfunction

   constraint cst_b {
      $void(scenario_kind) == scn_b -> {
         foreach(items[i]) {
            this.items[i].sa inside {[100:300]};
            this.items[i].da inside {[100:300]};
         }
      }
   }
endclass

class hier_scenario extends packet_scenario;
   rand a_scenario scn_a;
   rand b_scenario scn_b;

   function new();
      this.scn_a = new();
      this.scn_b = new();
   endfunction

   virtual task apply(packet_channel channel, ref int unsigned n_insts);
      this.scn_a.apply(channel, n_insts);
      this.scn_b.apply(channel, n_insts);
   endtask

   constraint cst_hier {
      foreach(scn_b.items[i]) {
         // create scenario 'scn_b' depending upon 'scn_a'
         scn_b.items[i].sa inside {[scn_a.ratio*100:scn_a.ratio*800]};
         scn_b.items[i].da inside {[scn_a.ratio*100:scn_a.ratio*800]};
      }
   }
endclass

program automatic test;

   packet_scenario_gen scn_gen;
   packet_channel      pkt_chan;
   packet              pkt;
   hier_scenario       scn_hier;

   initial begin
      pkt_chan = new("pkt_chan", "chan");   // assumed: channel construction
      scn_hier = new();
      scn_gen  = new("scn_gen", -1, pkt_chan);
      scn_gen.register_scenario("scn_hier", scn_hier);

      $display("Size of the scn is %0d", scn_gen.scenario_set.size());
      scn_gen.stop_after_n_insts = 15;
      scn_gen.start_xactor();              // assumed: generator must be started

      while(1) begin
         #10 scn_gen.out_chan.get(pkt);
         $display("id is %0d, sa %0d da %0d", pkt.data_id, pkt.sa, pkt.da);
      end
   end

endprogram

Posted in Reuse, Stimulus Generation, VMM | Comments Off

Life After Verification

Posted by Janick Bergeron on 8th November 2009

Janick Bergeron, Synopsys

Two years ago, Francoise left her job as a VMM/verification CAE with Synopsys. Her husband Pierre similarly quit his job at a large semiconductor company. They embarked on a career change, the magnitude of which few of us undertake voluntarily: they traded high-tech careers and their Porsches for a tractor and ATV and took over Francoise’s father’s vineyard in south-western France.

I had the pleasure of working with Francoise for several years before and, upon hearing the news, the wine-enthusiast in me quickly promised to help her with her harvest. This year, I made good on my promise. On October 27th, at 12:30pm, my girlfriend and I landed at the Pau airport, full of energy and anticipation at the prospect of experiencing a tiny sliver of a winemaking tradition that goes back 400 years.


North Americans have a very distorted sense of time. One hundred years is “old” to us. Very few can trace their ancestry on this continent beyond a few generations.

Francoise’s winery, Domaine Guirardel, is housed in the original home and barn. They were built at the same time Samuel de Champlain was busy establishing Quebec City. The “new” house is 300-years old. Both are nestled on a south-facing hill with an incredible view of the Pyrenees. The domaine has never been sold: it has always been passed down through generations.

The postcard atmosphere of the place immediately reminded me of the movie “A Good Year”. A hammock is hung between two palm trees. Jean, Francoise’s father, is at work in an immense garden that is the source of many of the vegetables we will be eating throughout our stay. Lunch is often served on the front patio, under the protective shade of a huge oleander tree.

The vineyard is small: 5 hectares planted with Petit Manseng and Gros Manseng, two of the five varietals that make up the Jurancon appellation. The harvest is done entirely by hand by uncles, aunts, in-laws, cousins and friends, most of whom have been helping with the harvest for many years. Of the 20 or so harvesters, it is a first experience for only four of us.

The work is simple: armed with shears, one cuts the bunches of golden grapes from the vines into plastic buckets. The content of full buckets is then transferred to a large bin behind the tractor.


The day starts at 9am. We break for lunch at 1pm, then resume at 3pm until 5pm. It is intense but not hard work. Despite the physical nature of the work, shuffling along the vines shifting buckets up the hill and untangling twisted bunches of grapes, I was surprised to discover that it is accomplished in a very social and jovial atmosphere. Everyone is chatting and updating one another on common acquaintances.

Lunch is reminiscent of a Thanksgiving dinner in the U.S. Everyone gathers in the dining room, in front of a fireplace large enough to roast an entire cow, and shares in a feast of delicacies from various French regions: home-made peach wine, raw oysters from Brittany, sausages from Corsica, wine from Bordeaux, Bearn pork bellies and many others.

Going back to the fields filled with so much great food is not easy! But the grapes cannot wait and we must make the best of the ideal weather conditions. Francoise and her father have been working in the chais since 7am, busily cleaning the press to receive the freshly-picked grapes. And they will continue to work after dinner to complete the last pressing of the day.

At the end of the day, under Francoise’s watchful eyes, everyone gathers again in the Tasting Room for a glass of the proprietor’s wine. It gives us a glimpse of what our (and her) labor will yield in two years.

Trading a keyboard, mouse and a well-paying secure job for a mechanical press, half a dozen stainless steel vats, oak barrels and the uncertainties of climate vagaries requires more courage than I possess! They are bringing their technical and marketing savoir-faire to this artisanal enterprise in the hopes of ensuring the future of their two children. For example, they are continuing the production of a late-harvest version of their wine, an experiment that was started back in 2005 and repeated only once in 2007. They are also planning to reclaim vines currently leased to another producer and, under the aegis of being a “young farmer”, plant additional vines in a new parcel of land.

She produces three wines: Tradition, Bi de Prat and Vendanges Tardives. They are very fruity, syrupy white wines similar to Sauterne, excellent with foie gras. Unlike most white wines, hers will improve with age, up to 20 years. A few years ago, I had the great fortune of tasting the 1967 vintage: it was the color of maple syrup and was totally sublime. Unfortunately, her wines are not available in North America (which makes the dozen or so bottles I have in my cellar even more precious!).

The French have a poor reputation when it comes to visitors. But that has never been my experience – nor is it for the majority of visitors who continue to make France the single most-visited country in the world. And it most definitely has not been the case during our 6-day stay with Francoise and Pierre. We were welcomed into their home and immediately treated like family. It has been a wonderful experience that I hope I’ll be able to repeat next year.

For further reading (in French):

Posted in VMM | 1 Comment »

class factory

Posted by Wei-Hua Han on 26th August 2009

Weihua Han, CAE, Synopsys

As a well-known object-oriented technique, the class factory has been applied in VMM since its inception. For instance, in the VMM atomic and scenario generators, assigning different blueprints to the randomized_obj and scenario_set[] properties lets the generators produce transactions with user-specified patterns. Using the class factory pattern, users create an instance with a pre-defined method (such as allocate() or copy()) instead of the constructor. This pre-defined method creates an instance of the type registered with the factory, not merely the declared type of the variable being assigned.
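The classic blueprint mechanism in the atomic generator illustrates this. A minimal sketch, assuming a transaction class my_txn created with `vmm_atomic_gen(my_txn, ...) and a hypothetical derived class my_derived_txn:

```systemverilog
my_txn_atomic_gen gen = new("gen", 0);
my_derived_txn    blueprint = new();

// every generated transaction is now a randomized copy of the
// derived blueprint, without touching the generator's code
gen.randomized_obj = blueprint;
```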

VMM1.2 now simplifies the application of the class factory pattern within the whole verification environment so that users can easily replace any kind of object, transaction, scenario and transactor by a similar object. Users can easily follow the steps below to apply the class factory pattern within the verification environment.

1. Define the “new”, “allocate” and “copy” methods for the class, and create the factory for the class.

class vehicle_c extends vmm_object;

   // define the new function; each argument should have default values
   function new(string name="", vmm_object parent=null);
      super.new(parent, name);
   endfunction

   // define the allocate and copy methods
   virtual function vehicle_c allocate();
      vehicle_c it;
      it = new(this.get_object_name(), get_parent_object());
      allocate = it;
   endfunction

   virtual function vehicle_c copy();
      vehicle_c it;
      it = new this;
      copy = it;
   endfunction

   // these two macros define the methods needed by the class factory
   // and create the factory for the class
   `vmm_typename(vehicle_c)
   `vmm_class_factory(vehicle_c)

endclass

`vmm_typename and `vmm_class_factory implement the methods needed to support the class factory pattern, such as get_typename(), create_instance(), override_with_new(), override_with_copy(), etc.

Users can also use `vmm_data_member_begin and `vmm_data_member_end to implement the “new”, “copy”, “allocate” methods conveniently.

2. Create an instance using the pre-defined create_instance() method.

To use the class factory, the class instance should be created with pre-defined create_instance() method instead of the constructor. For example:

class driver_c extends vmm_object;

   vehicle_c myvehicle;

   function new(string name="", vmm_object parent=null);
      super.new(parent, name);
   endfunction

   task drive();
      // create the instance through the factory, not the constructor
      myvehicle = vehicle_c::create_instance(this, "myvehicle");
      $display("%s is driving %s(%s)", this.get_object_name(),
               myvehicle.get_object_name(), myvehicle.get_typename());
   endtask

endclass

program p1;
   driver_c Tom = new("Tom", null);
   initial begin
      Tom.drive();
   end
endprogram

For this example, the output is:

Tom is driving myvehicle(class $unit::vehicle_c)

3. Define a new class.

Let’s now define the following new class which is derived from the original class vehicle_c:

class sedan_c extends vehicle_c;

   function new(string name="", vmm_object parent=null);
      super.new(name, parent);
   endfunction

   virtual function vehicle_c allocate();
      sedan_c it;
      it = new(this.get_object_name(), get_parent_object());
      allocate = it;
   endfunction

   virtual function vehicle_c copy();
      sedan_c it;
      it = new this;
      copy = it;
   endfunction

   `vmm_typename(sedan_c)
   `vmm_class_factory(sedan_c)

endclass

And we would like to create myvehicle instance from this new class without modifying driver_c class.

4. Override the original instance or type with the new class.

VMM1.2 provides two methods for users to override the original instances or type.

  • override_with_new(string name, new_class factory, vmm_log log, string fname="", int lineno=0)

With this method, when create_instance() is called, a new instance of new_class is created through factory.allocate() and returned.

  • override_with_copy(string name, new_class factory, vmm_log log, string fname="", int lineno=0)

With this method, when create_instance() is called, a new instance of new_class is created through factory.copy() and returned.

For both methods, the first argument is the instance name, as specified in the create_instance() method, that users want to override with the type of new_class. Users can use the powerful name-matching mechanism defined in VMM to apply the override to a dedicated instance or to all instances of a class in the whole verification environment.

The code below will override all vehicle_c instances with sedan_c type in the environment:

program p1;
   driver_c Tom = new("Tom", null);
   vmm_log log = new("p1", "log");

   initial begin
      // override all vehicle_c instances with the type sedan_c
      vehicle_c::override_with_new("@%*", sedan_c::this_type(), log);
      Tom.drive();
   end
endprogram

And the output of the above code is:

Tom is driving myvehicle(class $unit::sedan_c)

If users only want to override one dedicated instance with a copy of another instance, they can call override_with_copy using code along the following lines:


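The original code snippet is missing here; a minimal sketch of what such a call might look like, continuing the vehicle/sedan example above (the instance pattern "Tom:myvehicle" and the “golden” sedan instance are hypothetical, and the exact name-matching syntax should be checked against the VMM user guide):

```systemverilog
program p2;
   driver_c Tom = new("Tom", null);
   vmm_log log = new("p2", "log");

   initial begin
      // a pre-configured "golden" instance to copy from
      sedan_c golden = new("golden", null);

      // override only Tom's "myvehicle" instance with a copy of 'golden'
      vehicle_c::override_with_copy("Tom:myvehicle", golden, log);
      Tom.drive();
   end
endprogram
```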
As the above examples show, with the class-factory short-hand macros provided in VMM 1.2, users can easily use the class factory pattern to replace transactors, transactions and other verification components without modifying the testbench code. I find this very useful for increasing the reusability of verification components.

Posted in Configuration, Modeling, SystemVerilog, Tutorial, VMM, VMM infrastructure | 1 Comment »

How to connect your SystemC Reference Models to your verification VMM based framework

Posted by Shankar Hemmady on 17th August 2009


Nasib Naser, Phd, CAE, Synopsys

In this blog I will discuss the use model shown in Figure 1, where a VMM layered testbench is used to verify an RTL DUT against a SystemC transaction-level model. SystemVerilog allows for the creation of reusable layered testbench architectures, and the VMM methodology provides the basis for such a layered architecture. With a layered approach, transaction-level reference models can easily be integrated at the appropriate level to provide self-checking functions. In this use model, the VMM function layer communicates with the SystemC model through the TLI mechanism to perform read/write transactions while, using the same testbench scenarios, the VMM command layer drives the DUT at the pin level.


Figure 1 – VMM driving TLM and RTL with checking

Synopsys’ VCS functional verification solution addresses this challenge with its SystemC-SystemVerilog Transaction-Level Interface (TLI). Using TLI, SystemC interface methods can be invoked from SystemVerilog and vice versa. The value-add of using TLI is that the SystemVerilog DPI-based communication code that synchronizes both domains is automatically generated.

Let’s take a look at the various code components in SystemC and SystemVerilog based on the VMM methodology that enables such a verification use model. In the following example we define the read and write transactions as SC Interface methods.

class Buf_if : virtual public sc_interface {
public:
   // pure virtual read()/write() declarations
   virtual void read(unsigned int addr, unsigned int* data) = 0;
   virtual void write(unsigned int addr, unsigned int data) = 0;
};

The following code shows a VMM transactor invoking the SystemC read and write transactions at the function layer. The VMM transactor tb_master communicates with the SystemC TLM using the vmm_channel tb_mast_out_ch1, and with the RTL model using the vmm_channel tb_mast_out_ch2:

class tb_master extends vmm_xactor;
   virtual tb_if.master ifc;
   tb_data_channel tb_master_in_ch;

   extern virtual task main();
endclass: tb_master

task tb_master::main();
   tb_data tr, tr_out;
   forever begin
      // send the instruction to the SC reference model
      // send the instruction to the RTL
   end
endtask: main

The following code shows VMM reference Transactors calling the SystemC transaction functions via the adaptor alu_tl_if_adpt_vlog which was automatically generated by VCS TLI.

class tb_ref extends vmm_xactor;

tb_data_channel     tb_ref_in_ch;

alu_tl_if_adpt_vlog alu_tl_if_adpt_vlog_inst0;

extern function new (string instance,
integer stream_id = -1,

tb_data_channel tb_ref_in_ch = null);

extern virtual task main();

endclass: tb_ref

task tb_ref::main();
   tb_data tr, tr_out;
   int d;
   forever begin
      tb_ref_in_ch.get(tr);
      case (tr.inst)
         SA_SB_OP_GO : begin
            // adaptor call into the SystemC model producing 'd' not shown
            tr_out.data_out = d;
         end
      endcase
   end
endtask: main
This VCS TLI use model provides a complete and easy way to integrate blocking and non-blocking SystemC reference models into a VMM-based multi-layer verification environment.

Posted in SystemC/C/C++, SystemVerilog, Transaction Level Modeling (TLM), Tutorial, VMM | Comments Off

Introducing VMM 1.2

Posted by Fabian Delguste on 27th July 2009

Fabian Delguste / Synopsys Verification Group

I’m very pleased to announce that VMM 1.2 beta is now available. You’re welcome to enroll in our VMM 1.2 beta program by signing up via the form on VMM Central.

As you know, the VMM methodology defines industry best practices for creating robust, reusable and scalable verification environments using SystemVerilog. Introduced in 2005, it is the industry’s most proven verification methodology for SystemVerilog, with over 500 successful tape outs and over 50 SNUG papers. The VMM methodology enables verification novices and experts alike to develop powerful transaction-level, constrained-random verification environments. A comprehensive set of guidelines, recommendations and rules help engineers avoid common mistakes while creating interoperable verification components. The VMM Standard Library provides the foundation base classes for building advanced testbenches, while VMM Applications like the Register Abstraction Layer, Performance Analyzer and Hardware Abstraction Layer provide higher-level functions for improved productivity.

We’ve gained valuable feedback and insight after working with customers on hundreds of production verification projects using VMM. From time to time, we incorporate these findings back into VMM so the broad VMM community can take advantage of them. Such is the case with VMM 1.2, where we’ve made some great enhancements to improve productivity and ease of use. These changes, like those in last year’s VMM 1.1 update are backward compatible with earlier releases so you won’t have to change your existing code to take advantage of the new features.

Here are some of new features in VMM 1.2:

  • SystemC/SystemVerilog TLM 2.0 support
    • We have added TLM-2.0 support, which adds remote procedure call functionality between components and extends support to SystemC modules
    • TLM-2.0 can connect to VMM channels and notifications. The conjunction of both interfaces creates a robust completion model. You can now easily integrate reference models written in SystemC with the TLM-2.0 transport interface directly in your VMM testbench
  • Enhanced block-to-top reuse
    • Hierarchical phasing: we have added the concept of phasing and timelines for enhanced flexibility and reuse of verification components. You can now control execution order directly from transactors and get all phase to be coordinated in upper layers
    • Class Factory: this allows for faster stimulus configuration and reuse. It’s now possible to replace any kind of transaction, scenario, transactor and class
  • Enhanced ease-of-use
    • Implicit phasing: each transactor comes with its own pre-defined phasing. You can use these phases to ensure your transactors are fully controlled and follow pre-defined phase conventions; each transactor controls its own status. Serial phasing supports multiple timelines within a simulation, improving regression productivity: you can run multiple tests one after the other in the same simulation run
    • Parameterization: VMM 1.2 adds new classes and concepts to provide additional functionality and flexibility. We have added parameterization support to many existing classes including channels, generators, scoreboard, notify and new base classes for making it easy to connect transactors together
    • Configuration Options: VMM 1.2 adds a rich set of methods for controlling testbench functionality from the runtime command line
    • Common Base Class: VMM 1.2 makes possible multiple namespaces and hierarchical naming, along with enhanced search functionality. RTL configuration ensures the same configuration is shared between RTL and testbench.

We’ll cover these new features in more detail in subsequent blogs.

If you are at DAC, you can pick up a copy of the new Doulos VMM Golden Reference Guide, which contains more details on the VMM 1.2 enhancements. This will be available in the VCS suite at the Synopsys booth, at the Synopsys Standards booth while Doulos is presenting, and at the Synopsys Interoperability Breakfast on Wednesday morning.

VMM 1.2 is available today for beta review. You’re welcome to sign up for our beta program. Please note that the beta program runs from today until August 31, 2009.

I look forward to hearing what you think about the new VMM 1.2 features!

Posted in Announcements, SystemVerilog, VMM, VMM infrastructure | Comments Off

How to connect your SystemC reference model to your verification framework

Posted by Janick Bergeron on 8th July 2009

Nasib Naser, Phd, CAE, Synopsys

Architects and designers have many reasons to start using the SystemC language to describe the high-level specifications of their target System-on-Chip (SoC). In this blog I will not discuss the “why” of the SystemC selection but the “how” of bringing these models into the team’s design verification flow.

Mixed abstractions arise because designers often develop models at the transaction level to capture the correct architecture and analyze system performance fairly early in the design cycle. SoC components developed with high-level languages such as C, C++, SystemC, and SystemVerilog are increasingly in demand, for obvious reasons: they are faster to write and faster to simulate. High-level reference models have become increasingly important in verifying the implementation details at the RTL level.

Although SystemVerilog provides several methods (PLI, VPI, DPI, etc.) to support communication between the SystemVerilog and C/C++ domains, it is not so straightforward for SystemC interface methods. One important reason is that SystemC methods can be “blocking”, i.e. they can consume time – and these blocking methods can return values as well. Users would otherwise have to build a very elaborate mechanism to maintain synchronization between the simulations running in the SystemVerilog and SystemC domains.

Synopsys’ VCS functional verification solution addresses the challenge of this use model with its SystemC-SystemVerilog Transaction-Level Interface (TLI). With TLI, VCS automatically generates adapters, instantiated in each domain, that let SystemVerilog make blocking calls into SystemC methods, or vice versa. These TLI adapters take care of the synchronization between the two domains.

To demonstrate the ease of use of this unique interface, we start by identifying the SystemC interface methods that SystemVerilog requires access to. In the following example we define Read and Write transactions as SystemC interface methods:

class Mem_if : virtual public sc_interface {
public:

   // pure virtual function declarations
   virtual void rd_data(int addr, int &data) = 0;
   virtual void wr_data(int addr, int data) = 0;
};

Next, we manually create an interface configuration file that declares the methods required for the interface. The methods themselves are declared in a header file called Mem_if.h. The content of the configuration file is as follows:

interface Mem_if
direction sv_calls_sc
verilog_adapter Mem_if_adapt_vlog
systemc_adapter Mem_if_adapt_sc

#include "Mem_if.h"

task rd_data
input int addVal
inout& int dataVal

task wr_data
input int addVal
input int dataVal


VCS uses this configuration file to generate the necessary SystemC and SystemVerilog code that enables complete synchronization. The command line is as follows:

%syscan -idf Mem_if.idf

This operation creates the following helper files:


The helper file contains a SystemVerilog Interface called Mem_if_adapt_vlog which declares the following tasks:

task wr_data(input int addVal, input int dataVal);
task rd_data(input int addVal, inout int dataVal);

These tasks can then be wrapped and used in the SystemVerilog testbench to access the SystemC methods, as demonstrated in the following code:

module testbench;
   reg clk;
   int data;
   int cnt;

   // Instantiate the helper interface
   Mem_if_adapt_vlog #(0) mem_if_adapt_vlog_inst();

   // Wrapper around the SC TLI task **Write Data**
   task sv_wr_data(input int addr, input int data);
      mem_if_adapt_vlog_inst.wr_data(addr, data);
   endtask

   // Wrapper around the SC TLI task **Read Data**
   task sv_rd_data(input int addr, output int data);
      mem_if_adapt_vlog_inst.rd_data(addr, data);
   endtask

   . . .

   // Use the wrapper code
   always @(posedge clk)
      if (cnt < 100) begin
         cnt = cnt + 1;
         sv_wr_data(cnt, cnt);
      end

   always @(posedge clk)
      if (cnt < 100)
         sv_rd_data(cnt, data);
endmodule

Posted in SystemC/C/C++, SystemVerilog, Tutorial, VMM | 1 Comment »

A generic functional coverage solution based on vmm_notify

Posted by Wei-Hua Han on 15th June 2009

Weihua Han, CAE, Synopsys

Functional coverage plays an essential role in Coverage Driven Verification. In this blog, I’ll explain a modular way of modeling and implementing  functional coverage models.

SystemVerilog users can take advantage of the “covergroup” construct to implement functional coverage. However, this alone is not enough. The VMM methodology provides some important design-independent guidelines on how to implement functional coverage in a reusable verification environment.

Here are a few guidelines  from the VMM book that I consider very important for implementing a functional coverage model:

“Stimulus coverage shall be sampled after submission to the design” because in some cases it is possible that not all generated transactions will be applied to DUT.

“Stimulus coverage should be sampled via a passive transactor stack” for the purpose of reuse, so that the same implementation can be used in a different stimulus-generation structure. For example, in another verification environment the stimulus may be generated by a design block instead of a testbench component.

“Functional coverage should be associated with testbench components with a static lifetime” to avoid creating a large number of coverage group instances. So “Coverage groups should not be added to vmm_data class extensions”. These static testbench components include monitors, generators, self-checking structure, etc.

“The comment option in covergroup, coverpoint, and cross should be used to document the corresponding verification requirements”.

You can find more details on these and other functional-coverage-related guidelines in the VMM book, pages 263-277.

In an earlier post “Did you notice vmm_notify?”, Janick showed how vmm_notify can be used to connect a transactor to a passive observer like a functional coverage model. Here, let me borrow his idea to implement the following VMM-based functional coverage example.

1.    Transaction class

1. class eth_frame extends vmm_data;
2.    typedef enum {UNTAGGED, TAGGED, CONTROL} frame_formats_e;
3.    rand frame_formats_e format;
4.    rand bit[3:0] src;
5.     …
6. endclass

There are some random properties defined in the transaction class. We would like to collect coverage information for these properties, for example “format” and “src”, in our coverage class.

2.    A generic subscribe class

1.  class subscribe #(type T = int) extends vmm_notify_callbacks;
2.     local T obs;
3.     function new(T obs, vmm_notify ntfy, int id);
4.        this.obs = obs;
5.        ntfy.append_callback(id, this);
6.     endfunction
7.     virtual function void indicated(vmm_data status);
8.          this.obs.observe(status);
9.       endfunction
10. endclass

The generic subscribe class implements the “indicated” method, which is called when the vmm_notify “indicate” method is invoked. Using this generic subscribe class, functional coverage and other observer models only need to implement the desired behavior in an “observe” method. Please note that in line 8 the vmm_data object is passed to the “observe” method through the vmm notification status information.
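Since the subscribe class is parameterized on the observer type, the same glue can register any passive component that implements an “observe” method. For instance, a hypothetical scoreboard (the eth_sb class below is my own illustration, not part of the VMM library or of this example) could be connected in exactly the same way:

1.  class eth_sb;
2.     int unsigned n_frames; // simple statistic maintained by the scoreboard
3.     function new(vmm_notify ntfy, int id);
4.        subscribe #(eth_sb) cb = new(this, ntfy, id);
5.     endfunction
6.     virtual function void observe(vmm_data status);
7.        eth_frame tr;
8.        if ($cast(tr, status)) begin
9.           n_frames++;
10.          // compare tr against the expected frames here
11.       end
12.    endfunction
13. endclass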

3.    Coverage class

1.   class eth_cov;
2.      eth_frame tr;

3.      covergroup cg_eth_frame(string cg_name, string cg_comment, int cg_at_least);
4.         type_option.comment = "eth frame coverage";
5.         option.at_least = cg_at_least;
6.         option.comment = cg_comment;
7.         option.per_instance = 1;
8.         cp_format: coverpoint tr.format {
9.            type_option.weight = 10;
10.        }
11.        cp_src: coverpoint tr.src {
12.           illegal_bins ilg = {4'b0000};
13.           wildcard bins src0[] = {4'b0???};
14.           wildcard bins src1[] = {4'b1???};
15.           wildcard bins src2[] = (4'b0??? => 4'b1???);
16.        }
17.        format_src_crs: cross cp_format, cp_src {
18.           bins c1 = !binsof(cp_src) intersect {4'b0000, 4'b1111};
19.        }
20.     endgroup

21.     function new(string name = "eth_cov", vmm_notify notify, int id);
22.        subscribe #(eth_cov) cb = new(this, notify, id);
23.        cg_eth_frame = new("cg_eth_frame", "eth frame coverage", 1);
24.     endfunction
25.     function void observe(vmm_data status);
26.        $cast(this.tr, status);
27.        cg_eth_frame.sample();
28.     endfunction
29. endclass

The covergroup is implemented in the coverage class eth_cov, and this coverage class is registered to a vmm_notify service through the subscribe class. The covergroup is sampled in the “observe” method, so the properties of interest are sampled whenever the notification is indicated.

4.    Monitor

1.   class eth_mon extends vmm_xactor;
2.      int OBSERVED;
3.      eth_frame tr; // Contains reassembled eth_frame transaction

4.      function new(…)
5.         OBSERVED = this.notify.configure();
6.         …
7.      endfunction
8.      protected virtual task main();
9.         forever begin
10.          tr = new;
11.          //catch transaction from interface
12.         …
13.         this.notify.indicate(OBSERVED,tr);
14.         …
15.       end
16.    endtask
17. endclass

The monitor extracts the transaction from the interface and then indicates the notification with the transaction as the status information (line 13).

5.    Connect monitor and coverage object

1.   class eth_env extends vmm_env;
2.      eth_mon mon;
3.      eth_cov cov;
4.      …
5.      virtual function void build();
6.         …
7.         mon = new(…);
8.         cov = new(“eth_cov”,mon.notify,mon.OBSERVED);
9.         …
10.    endfunction
11.     …
12. endclass

In the verification environment, the coverage object is created with the monitor notification service interface and the proper notification ID.

There are other ways to implement functional coverage in VMM-based environments. For example, a callback-based implementation is used in the example located under sv/examples/std_lib/vmm_test in the VMM package, which can be downloaded from

I haven’t discussed assertion coverage in this post, which is another important type of “functional coverage”. If you are interested in using assertions for functional coverage, please check out chapters 3 and 6 of the VMM book for its suggestions.

Posted in Communication, Coverage, Metrics, Register Abstraction Model with RAL, Reuse, SystemVerilog, VMM | 4 Comments »

VMM VIP’s on multiple buses

Posted by Adiel Khan on 27th May 2009


Adiel Khan, Synopsys CAE

Increasingly, more design-oriented engineers are writing VMM code. Some are trying to map typically good design architecture practices to verification development.

A dangerous mapping is parameterization, from modules to classes.

In my old Verilog testbenches I would develop reusable modules and use #parameters extensively to control the settings of the modules I was instantiating. (It was a sad day when I heard IEEE was deprecating my friend the defparam).

1. module vip #(parameter int data_width = 16,

2. parameter int addr_width = 16)

3. (addr, data);

4. output [addr_width-1:0] addr;

5. inout [data_width-1:0] data;


7. endmodule

This would allow me to instantiate this VIP for many bus variants.

8. vip #(64, 32) vip_inst1(…);

9. vip #(32, 128) vip_inst2(…);

Mapping the approach from modules to classes, I could end up with:

1. class pkt_c #( parameter int data_size=16,

2. parameter int addr_size=16)

3. extends vmm_data;

4. rand bit [addr_size-1:0] addr;

5. rand bit [data_size-1:0] data;


7. endclass

8. //specialized class with 64 & 32 sizes

9. pkt_c #(64, 32) pkt1=new();

10. //specialized class with 32 & 128 sizes

11. pkt_c #(32, 128) pkt2=new();

Be warned: in the SystemVerilog testbench-centric view of VIP reusability, parameterization of classes leads to a dead-end path. Moving one layer of abstraction up, I really don’t care whether it is a 32-, 64- or 128-bit wide interface. What I want to do is use pkt_c around the verification environment. The simplest case is creating a reusable driver using pkt_c to drive any bus-width interface.

However, if I try to use a generic class instantiation, I get the default specialization with parameters 16 and 16, and I cannot perform the $cast() needed to put the right pkt_c type onto the bus.

1. class pkt_driver_c extends vmm_xactor;

2. virtual protected task main();

3. forever begin : GET_OBJ_TO_SEND

4. pkt_c pkt_to_send; //default class instance

5. pkt_c #(64, 32) pkt_created;

6. randomize(pkt_created);// generator code

7. $cast(pkt_to_send, pkt_created); //FAILS !!!!!

If you are using VMM channels, they must similarly be specialized and cannot carry generic parameterized classes:

8. `vmm_channel(pkt_c)

9. class pkt_driver_c extends vmm_xactor;

10. pkt_c_channel in_chan; //Can only carry pkt_c#(16,16)!!!

Or you must upfront select which specialization you want for use with a parameterized channel.

8. class pkt_driver_c extends vmm_xactor;

9. vmm_channel_typed #(pkt_c#(64, 32)) in_chan;

Hence, for the driver to operate on the correct object type, I need to instantiate the exact specialization throughout my entire environment and make the driver itself parameterized. Now you can clearly see that instantiating a specific specialization in the driver (or monitor, scoreboard, etc.) stops the code from being really reusable for other bus widths.

1. pkt_c #(64, 32) pkt_to_send;

2. pkt_driver_c #(64, 32) driver;

A better approach is the one described by Janick in the “Size Does Matter” blog: using `define. Let’s expand on this and see how it works for reusable VIPs. The first thing that comes to my mind is that a `define is a global-namespace macro with a single value, whereas I am using my VIP with two different bus architectures. Therefore, the `define alone is not enough: you also need a local constant to be able to exclude unwanted bits when you have a VIP instantiated for various bus widths.

1. //default define values

2. `define MAX_DATA_SIZE 16

3. `define MAX_ADDR_SIZE 16

4. class pkt_c extends vmm_data;

5. static vmm_log log = new(“Pkt”, “class”);

6. //instance constant to control actual bus sizes

7. const int addr_size;

8. logic [`MAX_ADDR_SIZE-1:0] addr;

9. logic [`MAX_DATA_SIZE-1:0] data;

10. // pass a_size as arg to coverage

11. // ensuring valid coverage ranges.

12. covergroup cg (int a_size);

13. coverpoint addr

14. {bins ad_bin[] = {[0:a_size]};}

15. endgroup

16. // sizes specialized at construction for pkts

17. // on buses less than MAX bus widths

18. function new(int a_s=`MAX_ADDR_SIZE);

19. addr_size = a_s;

20. cg = new(addr_size);

21. `vmm_note(log, $psprintf("\nADDR_TYPE: %s\nDATA_TYPE: %s\nMAX_BUS_SIZE: %0d",

22. $typename(addr), $typename(data),

23. addr_size));

24. endfunction

The code above provides a default implementation; all the user needs to do is set the `MAX_ADDR_SIZE and `MAX_DATA_SIZE symbols, and the code will be fully reusable across drivers, monitors, subenvs, SoCs, etc.

For situations where two VIPs of differing bus architectures are used, the compiler symbols need to be set to the biggest bus architecture in the system; smaller bus widths are then set using addr_size. Strictly speaking, addr_size does not have to be an instance constant or set during construction. However, making it an instance constant ensures the bus widths cannot be changed at runtime by users, while setting its value during construction gives users the flexibility to set up the object as they want. For pseudo-static objects such as drivers, monitors, subenvs, masters, slaves and scoreboards, you should check during the vmm_env::start phase that the verification components were constructed for your particular design architecture.

N.B.: not shown above, but assumed, is that the addr_size variable would be used to ensure correct masking occurs when performing do_pack(), do_unpack(), compare(), etc.
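As a sketch of that masking idea (this helper method is my own illustration, not code from the example above), a comparison of two pkt_c objects could mask off the unused upper address bits before checking equality:

1. function bit same_addr(pkt_c other);

2. logic [`MAX_ADDR_SIZE-1:0] mask;

3. mask = '1; // all ones, full MAX width

4. mask = mask >> (`MAX_ADDR_SIZE - this.addr_size); // keep only addr_size low bits

5. return ((this.addr & mask) == (other.addr & mask));

6. endfunction

The same mask can be applied inside do_pack()/do_unpack() so that only the valid bytes of a narrower bus are streamed.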

Just to wrap up some loose ends…

I’m not totally discounting the merits of parameterized classes, just ensuring people look at all the options. For instance, you could parameterize everything, set the SIZE at the vmm_subenv level and map the SIZE parameters to all other objects. But at some point you will want to monitor or scoreboard across different bus widths, and then parameterized-class casting will bite you, reducing you to manually mapping the members between the comparison objects. There is a time and place for everything, so we probably need another blog showing the merits of parameterized classes and where to use them.

The vmm_data class is not the only place you might need to know the size of the bus; the same `define and instance-constant technique can be used throughout your VIP classes.

This blog does not discuss the pros and cons of putting coverage groups in your data object class. I merely used the covergroup in the data-object as a vehicle to demonstrate how you can make your classes more reusable. I think a separate blog about where best to put coverage will clarify the usage models.

All the code snippets can be run with VCS 2009.06 and VMM 1.1. Contact me for more complete code examples, or with any bugs or issues you find.

Posted in Coding Style, Configuration, Register Abstraction Model with RAL, Reuse, Structural Components, VMM | 8 Comments »

The hidden pitfalls of type name hiding in a derived class

Posted by Wei-Hua Han on 9th May 2009

Weihua Han, CAE, Synopsys

Here I describe one major difference between virtual and non-virtual methods, and how type name hiding can collide with these methods. Type name hiding is a commonly used OOP feature in SystemVerilog.

“Type name hiding” here refers to the situation where a type definition in a derived class uses the same type name as one defined in the base class, i.e., the new type definition in the derived class “hides” the type definition in the base class.

The SystemVerilog LRM (Language Reference Manual) does not forbid hiding a base class type definition in a derived class. Here’s a typical example:

  1. class company_xactor_c;
  2.         typedef enum { SOFT, HARD } reset_e;
  3.         function void do_reset (reset_e rst);
  4.                 if(rst==SOFT)
  5.                         $display("Do SOFT reset");
  6.                 else
  7.                         $display("Do HARD reset");
  8.         endfunction
  9. endclass
  10. class pci_prj_xactor_c extends company_xactor_c;
  11.                 typedef enum { SOFT, HARD, PCI } reset_e; 
  12.                 function void do_reset(reset_e rst);
  13.                         if(rst==SOFT)..
  14.                         else if(rst==HARD)…
  15.                         else ….
  16.                 endfunction
  17. endclass

Here we intend to have a new “reset_e” definition in the PCI project by adding a new PCI-specific label called “PCI”. Next, we also rewrite the “do_reset” method, which now comes with PCI-specific “reset” behavior. Then, all transactors derived from “pci_prj_xactor_c” for this PCI project can use this “PCI” label to define PCI-specific behaviors.

It seems everything works fine until now. But problems crop up when virtual functions (polymorphism) come into the picture.

Below is a simplified but realistic scenario.

As you know, VMM allows you to extend the base class library. It’s not uncommon for you to add some specific methods and to create your own base class library, such as:

  1. class company_xactor extends vmm_xactor;
  2.         virtual function void tb_start(int nth_run,
  3. reset_e  reset_e_list[$]);
  4.         …
  5.         endfunction
  6. endclass

Here a company-wide “tb_start” method is added to the company-wide transactor class, and the “reset_e” type is inherited from vmm_xactor. We may also have a project-specific transactor base class like:

  1. class project_xactor extends company_xactor; 
  2.                 typedef enum int { 
  3.                          SOFT_RST, 
  4.                          PROTOCOL_RST, 
  5.                          FIRM_RST, 
  6.                          HARD_RST, 
  7.                          PON, 
  8.                          PCIE 
  9.                 } reset_e; 
  10.     virtual function void tb_start(int nth_run,  reset_e  reset_e_list[$]); 
  11.             … 
  12.             endfunction
  13. endclass

And we have a new “reset_e” definition with some new project-specific labels. “tb_start” is also refined for the project.

Although this may seem fine, there is a fairly serious error in this code: virtual functions in SystemVerilog classes must follow the common OOP polymorphism requirements.

These OOP requirements are described in SystemVerilog LRM. Here is an excerpt:

“Virtual methods provide prototypes for the methods that later override them, i.e., all of the information generally found on the first line of a method declaration: the encapsulation criteria, the type and number of arguments, and the return type if it is needed. Later, when subclasses override virtual methods, they shall follow the prototype exactly by having matching return types and matching argument names, types, and directions. It is not necessary to have matching default expressions, but the presence of a default shall match. ”

In the above code, the “reset_e” definition in the derived project_xactor class actually defines a new type, i.e., project_xactor::reset_e, which is different from the “reset_e” defined in vmm_xactor. The prototype of tb_start in company_xactor is now
      void company_xactor::tb_start(int nth_run, vmm_xactor::reset_e reset_e_list[$])
while in project_xactor it is
      void project_xactor::tb_start(int nth_run, project_xactor::reset_e reset_e_list[$])
These prototypes are different, so the code is not in compliance with the LRM. Sure, a user can choose not to make “tb_start” virtual; technically, this will work fine. However, in that case, all the benefits of polymorphism are lost.

Although type name hiding is allowed, we need to be careful in how it is used, since polymorphism is quite commonly employed in VMM and in most testbench environments. It is a very important aspect of reuse and extensibility.
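One safe pattern (a sketch of the idea, not the only possible fix) is to give the project-specific enum a new name, so the inherited type is not hidden and the virtual prototype matches the base class exactly:

  1. class project_xactor extends company_xactor;
  2.         // A distinct name: the inherited reset_e is not hidden,
  3.         // so the virtual prototype below matches the base class.
  4.         typedef enum int { SOFT_RST, PROTOCOL_RST, FIRM_RST,
  5.                            HARD_RST, PON, PCIE } prj_reset_e;
  6.         virtual function void tb_start(int nth_run,
  7.                                        reset_e reset_e_list[$]);
  8.                 ...
  9.         endfunction
  10. endclass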

In my next blog post, I will discuss in more detail what is overridden and what is hidden in SystemVerilog.

Posted in SystemVerilog, VMM | 1 Comment »

VMM 1.1 is finally out

Posted by Janick Bergeron on 18th December 2008

Even though VMM 1.1 is only the second Open Source release of the VMM library, it follows in a long series of customer-based productivity enhancements that have been made to VMM since the original specification was published back in 2005.

I would like to thank all of the customers who kindly contributed to the requirement specification, reviews and beta-testing. The feedback from late beta customers and early adoptees has been very positive.

So, what’s new in VMM 1.1?

First, we have fixed all of the non-compliant language usage that was reported in VMM 1.0.1.

It also adds two new functional coverage models to the generated Register Abstraction model: one measures that all addresses have been accessed, the other that all interesting field values have been used.

It adds a new Performance Analysis package to measure coverage metrics that are more statistical in nature, compared to the simple singular counts of traditional functional coverage.

It adds a Multi-Stream Scenario Generator (MSSG) that can generate and coordinate stimulus across multiple channels, anywhere in the verification environment. Multi-Stream Scenarios (MSS) can be built of individual transactions, single-stream scenarios (used by the VMM Scenario Generator) or other multi-stream scenarios.

It adds a transactor iterator that brings the simple name-based controllability of vmm_log to vmm_xactor. It makes it possible to control and configure transactors regardless of where they are located in the verification environment.

It adds a message catching mechanism that combines, in a single handler, the capability of vmm_log::modify() and vmm_log::wait_for_msg(). It can catch any message, making it easy to react or mask exceptions and perform negative testing (i.e. making sure that your environment does catch errors when they are present).

It adds a command-line option manager that makes it easy to define, document and manage environment and test options. Options can be specified on the command line or in a series of option files. And the +vmm_help option will display the automatically-generated usage documentation of all the options available in an environment or test.
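As a rough sketch of the usage model (the vmm_opts method names and signatures below reflect my reading of the open-source distribution; check the documentation in your release before relying on them):

   class my_env extends vmm_env;
      int    n_frames;
      string mode;

      virtual function void gen_cfg();
         super.gen_cfg();
         // Options can come from the command line or from option files;
         // the last argument documents the option for +vmm_help.
         n_frames = vmm_opts::get_int("n_frames", 100,
                                      "Number of frames to generate");
         mode     = vmm_opts::get_string("mode", "normal",
                                         "Operating mode of the environment");
      endfunction
   endclass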

And finally, it adds a mechanism for run-time test selection, for those who prefer to compile all of their tests into one simulation binary and then select, via the command line, which test to execute.

Examples are provided that demonstrate how to use each of these new productivity-enhancing capabilities.

There are other minor additions that are too numerous to mention here. Just refer to the file “RELEASE.txt” in the distribution for an exhaustive list.

VMM 1.1 requires the following versions of VCS: 2006.06-SP2-9(*), 2008.09-4 or 2008.12. As usual, we believe that it is implemented using IEEE-compliant SystemVerilog code — but should some non-compliant usage be identified, be assured that it is entirely unintentional and it will be fixed as soon as possible.

The Open Source distribution of VMM 1.1 can be downloaded immediately from but will also be included in VCS2009.06. If you wish to use this version of VMM with OpenVera and/or DesignWare VIPs that require OpenVera interoperability, you must download the “SvOv” version of the distribution and patch your VCS installation using the “patch_vcs” script. This latter version is identical except for some additional encrypted code that enables methodology-level interoperability between OpenVera and SystemVerilog.

Give it a try and let us know what you think. And keep those suggestions and requests coming! We have a healthy backlog of enhancement requests that are sure to keep VMM moving forward for years to come!

(*) See VCS2006.06-SP2.txt in the distribution. The `VCS2006_06 symbol MUST be defined to enable some work-arounds for unsupported SystemVerilog features.

Posted in Announcements, SystemVerilog, VMM, VMM infrastructure | 1 Comment »

Where did this blog go?

Posted by Janick Bergeron on 6th November 2008

Darn. Haven’t posted in over three months!

My apologies to readers: these past few months have been crazy! Two trips to India, one to Japan and many to various parts of North America. Made my 1K status with United in mid-October this year. Lots of customer visits, working on lots of new VMM stuff.

I’ll be back on a regular schedule soon.

Posted in VMM | Comments Off