Verification Martial Arts: A Verification Methodology Blog

Archive for the 'VMM infrastructure' Category

SNUG-2012 Verification Round Up – Language & Methodologies – II

Posted by paragg on 3rd March 2013

In my previous post, we discussed papers that leveraged SystemVerilog language and constructs, as well as those that covered broad methodology topics.  In this post, I will summarize papers that are focused on the industry standard methodologies: Universal Verification Methodology (UVM) and Verification Methodology Manual (VMM).

Papers on Universal Verification Methodology (UVM)

Some users prefer not to use the base classes of a methodology directly. Adding a custom layer enables them to add capabilities specific to their requirements. This layer consists of a set of generic classes that extend the classes of the original methodology, and provides a convenient location to develop and share the processes an organization wants to re-use across different projects. Pierre Girodias of IDT (Canada), in the paper “Developing a re-use base layer with UVM”, focuses on the recommendations that adopters of these methodologies should follow while developing the desired ‘base’ layer. The paper also identifies typical problems encountered while developing this layer, along with possible solutions; some of these include dealing with the lack of multiple inheritance and foraging through class templates.

UVM provides many features but does not define a reset methodology, forcing users to develop their own approach within the UVM framework to test the ‘reset’ of their DUT. Timothy Kramer of The MITRE Corporation in the paper “Implementing Reset Testing” outlines several different reset strategies and enumerates the merits and disadvantages of each. As is the case for all engineering challenges, there are several competing factors to consider, and in this paper the different strategies are compared on flexibility, scalability, code complexity, efficiency, and how easily they can be integrated into existing testbenches. The paper concludes by presenting the reset strategy which proved optimal for their application.

The ‘Factory’ concept in advanced OOP-based verification methodologies like UVM is something that has baffled many verification engineers. But is it all that complicated? Not necessarily, and this is what is explained by Clifford E. Cummings of Sunburst Design, Inc. in his paper “The OVM/UVM Factory & Factory Overrides – How They Work – Why They Are Important”. This paper explains the fundamental details of the OVM/UVM factory, how it works, and how overrides facilitate simple modification of testbench component and transaction structures on a test-by-test basis. The paper not only explains why the factory should be used but also demonstrates how users can create configurable UVM/OVM based environments with it.
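For readers who want the gist before reading the paper, the core factory-override idiom can be sketched in a few lines (the class and test names here are illustrative, not taken from the paper):

```systemverilog
// A driver and an extension of it; both registered with the factory.
class my_driver extends uvm_driver #(uvm_sequence_item);
  `uvm_component_utils(my_driver)
  function new(string name, uvm_component parent); super.new(name, parent); endfunction
endclass

class err_driver extends my_driver;
  `uvm_component_utils(err_driver)
  function new(string name, uvm_component parent); super.new(name, parent); endfunction
endclass

// A test substitutes err_driver wherever the environment asks the
// factory for a my_driver - no environment code is touched.
class err_test extends uvm_test;
  `uvm_component_utils(err_test)
  function new(string name, uvm_component parent); super.new(name, parent); endfunction
  virtual function void build_phase(uvm_phase phase);
    super.build_phase(phase);
    my_driver::type_id::set_type_override(err_driver::get_type());
  endfunction
endclass
```

The override only takes effect if the environment creates components through the factory idiom, i.e. `drv = my_driver::type_id::create("drv", this);` rather than `drv = new(...)`.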

A Register Abstraction Layer has always been an integral component of most of the HVL methodologies defined so far. Doug Smith of Doulos, in his paper “Easier RAL: All You Need to Know to Use the UVM Register Abstraction Layer”, presents a simple introduction to RAL. He distills the adoption of UVM RAL into a few easy and salient steps, which are adequate for most cases. The paper describes the industry-standard automation tools for generating the register model. Additionally, the integration of the generated model, along with the front-door and back-door access mechanisms, is explained in a lucid manner.
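As a flavor of what the generated model buys you, front-door and back-door register accesses end up as one-liners (the register model and register names below are placeholders, not from the paper):

```systemverilog
task check_ctrl_reg(my_reg_block regmodel);
  uvm_status_e   status;
  uvm_reg_data_t value;

  // Front-door: translated by the register adapter and driven
  // through the bus agent, consuming bus cycles.
  regmodel.CTRL.write(status, 32'h0000_00A5, UVM_FRONTDOOR);

  // Back-door: the register's HDL path is read directly,
  // in zero simulation time.
  regmodel.CTRL.read(status, value, UVM_BACKDOOR);
endtask
```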

The SystemVerilog language features, coupled with the DPI & VPI language extensions, can enable the testbench to react generically to value changes on arbitrary DUT signals (which might or might not be part of a standard interface protocol). Jonathan Bromley of Verilab, in “I Spy with My VPI: Monitoring signals by name, for the UVM register package and more”, presents a package which supports both value probing and value-change detection for signals identified at runtime by their hierarchical name, represented as a string. This provides a useful enhancement to the UVM Register package, allowing the same string to be used for backdoor register access.

Proper testing of most digital designs requires that error conditions be stimulated to verify that the design either handles them in the expected fashion or ignores them, but in all cases recovers gracefully. How to do this efficiently and effectively is presented in “UVM Sequence Item Based Error Injection” by Jeffrey Montesano and Mark Litterick of Verilab. A self-checking constrained-random environment can be put to the test when injecting errors, because unlike the device-under-test (DUT), which can potentially ignore an error, the testbench is required to recognize it, potentially classify it, and determine an appropriate response from the design. This paper presents an error injection strategy using UVM that meets all of these requirements. The strategy encompasses both active and reactive components, with code examples provided to illustrate the implementation details.
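A common way to realize such a strategy, sketched here under assumed names rather than the paper's own code, is to put the error knob on the sequence item itself so error injection stays constrained-random:

```systemverilog
// Illustrative sequence item: error injection is just another
// random field, off by default via a soft constraint (SV-2012).
class bus_item extends uvm_sequence_item;
  rand bit [31:0] payload;
  rand bit        inject_crc_err;   // driver corrupts the CRC when set
  constraint c_no_err { soft inject_crc_err == 0; }
  `uvm_object_utils(bus_item)
  function new(string name = "bus_item"); super.new(name); endfunction
endclass
```

An error-injection sequence then simply relaxes the knob, e.g. `` `uvm_do_with(req, { inject_crc_err == 1; }) ``; the monitor must flag the corrupted packet so the scoreboard can classify it instead of reporting a mismatch.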

The Universal Verification Methodology is a huge win for the hardware verification community, but does it have anything to offer Electronic System Level design? David C Black of Doulos Inc. explores UVM on the ESL front in the paper “Does UVM make sense for ESL?”. The paper considers UVM and SystemVerilog enhancements that could make the methodology even more enticing.

Papers on Verification Methodology Manual (VMM)

Joseph Manzella of LSI Corp in “Snooping to Enhance Verification in a VMM Environment” discusses situations in which a verification environment may have to peek at internal RTL states and signals to enhance results, and provides guidelines on what is acceptable practice. The paper explains how the combination of vmm_log (the VMM logger class) and +vmm_opts (the command-line utility for changing configurable values) helps in creating a configurable message wrapper for internal grey-box testing. The techniques show how different assertion failures can be re-routed through the VMM messaging interface. An effective and reusable snooping technique for robust checking is also covered.
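The wrapper idea can be sketched roughly as follows (class and instance names invented for illustration; check the paper and the VMM documentation for the exact pattern):

```systemverilog
// Sketch: a message wrapper around vmm_log, so snoop reports obey
// the runtime verbosity and promotion controls that vmm_log and
// +vmm_opts switches provide, instead of raw $display calls.
class rtl_snooper;
  vmm_log log;
  function new();
    this.log = new("rtl_snooper", "grey_box");
  endfunction
  function void note(string msg);
    `vmm_note(this.log, msg);  // routed through the VMM messaging interface
  endfunction
endclass
```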

At SNUG Silicon Valley, in “Mechanism to allow easy writing of test cases in a SystemVerilog Verification environment, then auto-expand coverage of the test case”, Ninad Huilgol of VerifySys addresses designers’ apprehension about using a class-based environment through a tool that leverages the VMM base classes. It automatically expands the scope of the original test case to cover a larger verification space around it, based on a user-friendly API that looks more like Verilog, hiding the complexity underneath.

Andrew Elms of Huawei in “Verification of a Custom RISC Processor” presents the successful application of VMM to the verification of a custom RISC processor. The challenges in verifying a programmable design, and the solutions to address them, are presented. Three topics are explored in detail: use of the Verification Planner, constrained-random generation of instructions, and coverage closure. The importance of the verification plan as the foundation for the verification effort is explored. Enhancements to the VMM generators are also explored: by default, VMM data generation is independent of the current design state, such as register values and outstanding requests, so RAL and generator callbacks are used to address this. Finally, experiences with coverage closure are presented.

Stay tuned for more on varied verification topics in upcoming blogs! Enjoy reading!

Posted in Announcements, Register Abstraction Model with RAL, UVM, VMM, VMM infrastructure | 1 Comment »

SNUG-2012 Verification Round Up – Language & Methodologies – I

Posted by paragg on 25th February 2013

As in the previous couple of years, last year’s SNUG (Synopsys Users Group) showcased an amazing number of useful user papers leveraging the capabilities of the SystemVerilog language and the verification methodologies centered on it.

I am always excited when I see this plethora of useful papers, and I try to ensure that I set aside some time to go through all these user experiences. Now, as we wait for SNUG Silicon Valley to kick-start this year’s SNUG events, I want to look back at some of the very interesting and useful papers from the different SNUGs of 2012. Let me start with a few papers in the area of the SystemVerilog language and SV methodologies.

Papers leveraging the SystemVerilog language and constructs

Hillel Miller of Freescale in the paper “Using covergroups and covergroup filters for effective functional coverage” uncovers the mechanisms available for carving out coverage goals. The P1800-2012 version of the SystemVerilog LRM provides new constructs just for doing this; the one focused on here is the “with” construct, which provides the ability to carve a sub-range of goals out of a multidimensional range of possibilities. This is very relevant in a “working”, under-development setup that requires frequent reprioritization to meet tape-out goals.
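As a quick illustration of the construct the paper focuses on (the signal names here are made up):

```systemverilog
// The 'with' clause from P1800-2012: only values satisfying the
// filter expression become bins, carving a sub-range out of the
// full value space of the coverpoint.
covergroup cg_addr @(posedge clk);
  coverpoint addr {
    // cover only word-aligned addresses in the low page
    bins low_aligned[] = {[0:255]} with (item % 4 == 0);
  }
endgroup
```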

The paper “Taming Testbench Timing: Time’s Up for Clocking Block Confusions” by Jonathan Bromley and Kevin Johnston of Verilab reviews the key features and purpose of clocking blocks and then examines why they continue to be a source of confusion and unexpected behavior for many verification engineers. Drawing from the authors’ project and mentoring experience, it highlights typical usage errors and how to avoid them. They clarify the internal behavior of clocking blocks to help engineers understand the reasons behind common problems, and show techniques that allow clocking blocks to be used productively and with confidence. Finally, they consider some areas that may cause portability problems across simulators and indicate how to avoid them.
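For context, a representative clocking block looks like this (a generic sketch, not code from the paper):

```systemverilog
// Outputs are driven a little after the clock edge and inputs are
// sampled just before it, so the testbench cannot race the DUT.
interface dut_if (input logic clk);
  logic req;
  logic gnt;
  clocking cb @(posedge clk);
    default input #1step output #2;
    output req;
    input  gnt;
  endclocking
endinterface
```

Testbench code then drives and samples only through `cb`, e.g. `vif.cb.req <= 1'b1;` followed by `@(vif.cb);` to advance one clocking-block cycle.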

The inference of latches and flops based on coding styles has always been a topic that creates multiple viewpoints, and there are other such scenarios of synthesis/simulation mismatch that one typically comes across. To address such ambiguity, language developers have provided constructs that allow an explicit resolution based on intent. To help us gain a deeper understanding of the topic, Don Mills of Microchip Technology Inc. presented the related concepts in the paper “Yet Another Latch and Gotchas Paper” @ SNUG Silicon Valley. This paper discusses, and provides solutions to, issues that designers using SystemVerilog for design come across, such as: case expression issues for casez and casex, latches generated when using unique case or priority case, SRFF coding-style problems with synthesis, and the new SystemVerilog-2009 definition of logic.
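To make the flavor of these gotchas concrete, here is a minimal example of the unique-case latch issue (my own illustration, not taken from the paper):

```systemverilog
// 'unique case' does not prevent the latch: when sel == 2'b11
// nothing assigns q, so q must hold its value. The default
// assignment before the case is what removes the latch.
module sel_mux (
  input  logic [1:0] sel,
  input  logic       a, b, c,
  output logic       q
);
  always_comb begin
    q = 1'b0;                 // delete this line and a latch is inferred
    unique case (sel)
      2'b00: q = a;
      2'b01: q = b;
      2'b10: q = c;
    endcase
  end
endmodule
```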

Gabi Glasser from Intel presented the paper “Utilizing SystemVerilog for Mixed-Signal Validation” @ SNUG Israel, proposing a mechanism for simplifying analysis and increasing coverage for mixed-signal simulations. The method takes advantage of SystemVerilog’s associative (hash) arrays, which have no fixed size. During the simulation, vectors are created for the required analog signals, allowing them to be analyzed within the testbench during or at the end of the simulation, without saving these signals to a file. The flow change makes it possible to launch a large-scale mixed-signal regression while allowing easier analysis of coverage data.
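The idea in miniature (signal names are placeholders): sample an analog node into an associative array keyed by sample time, then post-process inside the testbench instead of writing to a file.

```systemverilog
// Associative array: unbounded, sparse storage of analog samples.
real trace[longint];

always @(posedge sample_clk)
  trace[$time] = v_supply;     // v_supply: real-valued analog signal

final begin : analyze
  foreach (trace[t])
    if (trace[t] < 0.9)
      $display("supply droop at t=%0d: %f", t, trace[t]);
end
```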

A design pattern is a general, reusable solution to a commonly recurring problem within a given context. The benefit of using design patterns is clear: they give designers a common language when approaching a problem, and a widely used set of tools to solve issues as they come up. The paper “Design Patterns In Verification” by Guy Levenbroun of Qualcomm explores several common problems that might arise during the development of a testbench, and how design patterns can solve them. The patterns covered in the paper fall into three major categories: creational (e.g. factory), structural (e.g. composite) and behavioral (e.g. template).

Arik Shmayovitsh, Avishay Tvila and Guy Lidor of Sigma Designs, in their paper “Truly reusable Testbench-to-RTL connection for SystemVerilog”, present a novel approach to connecting the DUT and testbench using consistent semantics while reusing the testbench. This is achieved by abstracting the connection layer of each testbench using the SystemVerilog ‘bind’ construct, which ensures that the only thing required to reuse the testbench for a new DUT is to identify the instance of the corresponding DUT.
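The approach in outline (all names here are illustrative): the connection layer lives in an interface, and one `bind` directive per DUT is all that changes between projects.

```systemverilog
interface conn_if (input logic clk);
  logic        valid;
  logic [31:0] data;
  // modports, tasks and assertions of the connection layer go here
endinterface

// Per-DUT glue: bind the interface into the design so the
// testbench sees identical semantics regardless of the DUT.
bind cpu_core conn_if u_conn_if (.clk(clk));
```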

In the paper “A Mechanism for Hierarchical Reuse of Interface Bindings”, Thomas Zboril of Qualcomm (Canada) explores another method to instantiate SV interfaces, connect them to the DUT and wrap the virtual interfaces for use in the test environment. This method allows the reuse of all the code when the original block-level DUT becomes a lower-level instance in a larger subsystem or chip. The method involves three key mechanisms: hierarchical virtual interface wrappers, a novel approach to hierarchical instantiation of SV interfaces, and automatic management of hierarchical references via SV macros.

Thinh Ngo & Sakar Jain of Freescale Semiconductor, in their paper “100% Functional Coverage-Driven Verification Flow”, propose a coverage-driven verification flow that can efficiently achieve 100% functional coverage during simulation. The flow targets varied functionality, focuses at the transaction level, measures coverage during simulation, and fails a test if 100% of the expected coverage is not achieved. It maps stimulus coverage to functional coverage, with every stimulus transaction being associated with an event in the coverage model and vice versa; this association is derived from the DUT specification and/or the DUT model. Expected events generated along with stimulus transactions are compared against actual events triggered in the DUT, and the comparison results are used to pass or fail the test. 100% functional coverage is achieved via 100% stimulus coverage: the flow enables every passing test to meet 100% of its targeted functional coverage.

Papers on Verification Methodology

In the paper “Top-down vs. bottom-up verification methodology for complex ASICs”, Paul Lungu and Zygmunt Pasturczyk of Ciena (Canada) cover the simulation methodology used for two large ASICs requiring block-level simulations. A top-down verification methodology was used for one of the ASICs, while a larger version needed an expanded bottom-up approach using extended simulation capabilities. Techniques and verification methods such as chaining of sub-environments from block to top level are highlighted, along with challenges and solutions found by the verification team. The paper presents a useful technique of passing a RAL (Register Abstraction Layer) mirror to the C models used as scoreboards in the environment, as well as a method of generating stable clocks inside the “program” block.

In the paper “Integration of Legacy Verilog BFMs and VMM VIP in UVM using Abstract Classes”, Santosh Sarma of Wipro Technologies (India) presents an alternative approach where legacy BFMs written in Verilog, and not implemented using classes, are hooked up to higher-level class-based components to create a standard UVM VIP structure. The paper also discusses an approach where existing VMM transactors tied to such legacy BFMs can be reused inside the UVM VIP with the help of the VCS-provided UVM-VMM Interoperability Library. The implementation uses abstract classes to define functions that invoke the BFM APIs. The abstract class is then concretized using derived classes which give the actual implementation of those functions, and the concrete class is bound to the Verilog instance of the BFM using the SystemVerilog “bind” construct. The concrete class handle is then used by the UVM VIP and the VMM transactor to interact with the underlying Verilog BFM. With this approach the UVM VIP can be made truly reusable through run-time binding of the Verilog BFM instance to the VIP, instead of hardcoded macro names or procedural calls.
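A condensed sketch of the abstract-class technique (identifiers invented for illustration, not the paper's code):

```systemverilog
// The abstract API, visible to the class-based VIP:
virtual class bfm_api;
  pure virtual task bfm_write(bit [31:0] addr, bit [31:0] data);
endclass

// A module bound into the legacy BFM; the concrete class calls the
// BFM's Verilog tasks by name resolution into the bound scope.
module bfm_bridge;
  import uvm_pkg::*;
  class bfm_impl extends bfm_api;
    task bfm_write(bit [31:0] addr, bit [31:0] data);
      write(addr, data);   // resolves to the legacy BFM's 'write' task
    endtask
  endclass
  bfm_impl impl = new();
  initial uvm_config_db #(bfm_api)::set(null, "*", "bfm_api", impl);
endmodule

// Run-time binding: no hardcoded hierarchical paths in the VIP.
bind legacy_bfm bfm_bridge u_bridge ();
```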

“A Unified Self-Check Infrastructure – A Standardized Approach for Creating the Self-Check Block of Any Verification Environment” by John Sotiropoulos, Matt Muresa and Massi Corba of Draper Laboratories (Cambridge, MA, USA) presents a structured approach for developing a centralized “self-check” block for a verification environment. The approach is flexible enough to work with various testbench architectures and is portable across different verification methodologies. Here, all of the design’s responses are encapsulated under a common base class, providing a single “self-check” interface for any checking that needs to be performed. This abstraction, combined with a single centralized scoreboard and a standardized set of components, provides the consistency needed for faster development and easier code maintenance. It expands the concept of self-check to incorporate white-box monitors (tracking internal DUT state changes, etc.) and temporal models (reacting to wire changes), along with traditional methodologies for enabling self-checking.

For VMM users looking at migrating to UVM, another paper, “Transitioning to UVM from VMM” by Courtney Schmitt of Analog Devices, Inc., discusses the process of transitioning to a UVM-based environment from VMM. Differences and parallels between the two verification methodologies are presented to show that updating to UVM is mostly a matter of getting acquainted with a new set of base classes. Topics include UVM phases, agents, TLM ports, configuration, sequences, and register models. Best practices and reference resources are highlighted to make the transition from VMM to UVM as painless as possible.

Posted in Announcements, Coverage, Metrics, Creating tests, Customization, Modeling, Optimization/Performance, Reuse, SystemVerilog, UVM, Uncategorized, VMM, VMM infrastructure | 3 Comments »

Build your own code generator!

Posted by Amit Sharma on 25th June 2012

One of the critical problems developers cope with during the SoC development process (architecture planning, RTL design, verification, software development) is the constant need to synchronize different forms of the same data structure: the SoC register database. This database can be found with the SoC architecture team (who write the SoC register description document), the design engineers (who implement the register structure in RTL code), the verification engineers (who write the verification infrastructure, such as RAL code, and verification tests, such as exhaustive read/write tests of all registers), and the software engineers (who use the register information to write embedded software).

Since the same information is used in different forms, it is only natural to have a single, central database which holds all SoC register data. Ideally, you would like to generate all the required output files (documentation, UVM or VMM Register Abstraction Model, RTL, C headers, …) from this central database. Different vendors and CAD teams provide different automation solutions for doing this.

The RALF specification contains all of the necessary information to generate RTL and testbench code that implements the specified registers. There are many ways to code and implement RTL, so it is not possible to provide a general-purpose solution. As far as the testbench abstraction model is concerned, there are multiple ways of customizing your model post-generation in both UVM and VMM: callbacks, factories and configurable RAL model attributes are some of the ways the desired customization can be brought in. “The ‘user’ in RALF: get ralgen to generate ‘your’ code” highlights a very convenient way of bringing in SystemVerilog-compatible code which will be copied as-is into the RAL model, in the desired scope. When it comes down to generating the RTL and the ‘C’ headers, however, we cannot leave the customization to such a late stage. Also, different organizations and project groups have their own RTL and C coding styles, which means a generated output of a very generic nature might not be very helpful. For RTL generation, engineers want the generated code to be power- and gate-count-efficient; similarly, generated C register headers often need to follow coding styles and match the CPU firmware API. How do we bring all these customizations to the end user?

Using the RALF C++ API, you have full access to the parsed RALF data (through C++ routines), which you can use to implement a customized RTL code generator, or any other feature that needs RALF data. You can use it to generate your C header files or HTML documentation, to translate the input RALF files to another register description format, or to produce custom covergroups and coverage convergence sequences (DAC 2012 User Track poster 6U.8 – “Register Verification on a Fast Lane: Using Automation to Converge on UVM REG Coverage Models”).

I have seen two instances of the need to generate a different register specification in the recent past, and that is one of the reasons I decided to put this down in a blog. Let me talk about the first instance.

One of the project groups was in the process of migrating from their own SV base classes to UVM. They had their own register description format from which they used to generate their register abstraction model. This was a proven flow.

So, when they migrated to UVM, they wanted a flow which would validate the changes they were making.

Given that they were moving to RALF and ‘ralgen’, they didn’t want to create register specifications in the legacy format anymore; what they wanted was automation for generating the earlier format. So, how did they go about doing this? They took the RALF C++ APIs and were able to create the necessary automation to generate the legacy format from RALF in no time (from what I remember, it was half a day’s work). Everyone was involved in doing what they were best at, and that helped in the overall scheme of things.

The other customer had their own format from which they were generating RTL, firmware code and HTML. They had the necessary automation to create RALF for generating the UVM register model, and they also had a mechanism in place to generate IP-XACT from this format and vice versa. So, to complete the traceability matrix, they wanted a RALF-to-IP-XACT conversion. Again, the most logical approach was to take the RALF C++ APIs and use them to iterate through the parsed RALF data and generate IP-XACT. Though this effort is not complete, it took just a day or so to generate a valid IP-XACT 1.5 schema, and all that is required now is some additional work to smooth out the corners.

How do you start using these APIs and build your own code/HTML generators? You need to include “ralf.hpp” (which is in $VCS_HOME/include) in your ‘generator’ code, and then, to compile the code, you need to pick up the shared library from the VCS installation:

$CPP_COMPILER $CFLAGS -I${VCS_HOME}/include -L${VCS_HOME}/lib -lralf your-file.cpp $LDFLAGS

#include "ralf.hpp"
#include <cstdio>

int main(int argc, char *argv[])
{
  // Check basic command line usage...
  if (argc < 3) {
    fprintf(stderr, "Error: Wrong Usage.\n");
    // Show correct usage ...
    return 1;
  }

  /* Parse command line arguments to get the essential
   * constructor arguments. See the documentation of
   * class ralf::vmm_ralf's constructor parameters. */

  /* Create a ralf::vmm_ralf object by passing in proper
   * constructor arguments. */
  ralf::vmm_ralf ralf_data(ralf_file, top, rtl_dir /* , ... remaining
                             arguments per the documentation */);

  /* Get the top-level object storing the parsed RALF
   * block/system data and traverse that, top-down, to get
   * access to the complete RALF data. */
  const ralf::vmm_ralf_blk_or_sys_typ *top_lvl_blk_or_sys
      = ralf_data.getTopLevelBlockOrSys();

  /* TODO: Traverse the parsed RALF data structure top-down,
   * starting from 'top_lvl_blk_or_sys', for complete access
   * to the RALF data; then do whatever you want with it. One
   * typical usage of the parsed RALF data is to generate RTL
   * code in your own style. */

  // TODO: Add your RTL generator code here.

  /* As part of this library, Synopsys also provides a
   * default RTL generator, which can be invoked by calling
   * the 'generateRTL()' method of the 'ralf::vmm_ralf'
   * class, as demonstrated below. */
  ralf_data.generateRTL();
}

Essentially, you have a handle to the parsed database, and with the available APIs you can do whatever you want with it :) . The documentation of the APIs is shipped with the VCS installation. Also, if you are like me and would rather hack away at existing code than start from scratch, you can check with Synopsys support for existing templates that dump out code in a specific format, and start modifying those for your requirements.

Posted in Automation, Coverage, Metrics, Customization, Interoperability, Register Abstraction Model with RAL, Tools & 3rd Party interfaces, UVM, VMM infrastructure | Comments Off

Using the VMM Datastream Scoreboard in a UVM environment

Posted by Amit Sharma on 2nd February 2012

Implementing the response-checking mechanism in a self-checking environment remains one of the most time-consuming tasks. The VMM Data Stream Scoreboard package facilitates verifying the correct transformation, destination and ordering of ordered data streams. The package is intuitively applicable to packet-oriented designs, such as modems, routers and protocol interfaces, but it can also be used to verify any design transforming and moving sequences of data items, such as DSP data paths and floating-point units. Out of the box, the VMM data stream scoreboard can be used to verify single-stream designs that do not modify the data flowing through them; for example, FIFOs, Ethernet media access controllers (MACs) and bridges.

The VMM data stream scoreboard can also be used to verify multi-stream designs with user-defined data transformation and input-to-output stream routing. The transformation from input data items into expected data items is not limited to one-to-one: an input data item may be transformed into multiple expected data items (e.g. segmenters) or none (e.g. reassemblers). Compared to this, the functionality available through the UVM in-order comparator or the algorithmic comparator is significantly less, so users might want access to the functionality of the VMM DS Scoreboard in a UVM environment. Using the UBUS example available in $VCS_HOME/doc/examples/uvm/integrated/ubus as a demo vehicle, this article shows how simple adapters are used to integrate the VMM DS scoreboard in a UVM environment and thus get access to more advanced scoreboarding functionality within UVM.

The UBUS example uses an example scoreboard to verify that the slave agent is operating as a simple memory. It extends the uvm_scoreboard class and implements a memory_verify() function that makes the appropriate calls and comparisons needed to verify a memory operation. A uvm_analysis_export is explicitly created and an implementation for ‘write’ is defined. In the top-level environment, the analysis export is connected to the analysis port of the slave monitor.


The simple scoreboard, with its explicit implementation of the comparison routines, suffices for verifying the basic operations, but would need to be enhanced significantly to provide more detailed information. For example, let’s take the ‘test_2m_4s’ test, where the environment is configured to have 2 masters and 4 slaves. Depending on how the slave memory map is configured, different slaves respond to different transfers on the bus. Now, if we want information on how many transfers went into the scoreboard for a specific combination (e.g. Master 1 to Slave 3), how many were verified to be processed correctly, and so on, it is fair to conclude that the existing scoreboarding scheme will not suffice.

Hence, it was felt that the Data Stream Scoreboard, with its advanced functionality and support for data transformation, data reordering, data loss, and multi-stream data routing, should be available for verification environments not necessarily based on VMM. From VCS 2011.12-1, this integration has been made very simple. The VMM DS scoreboard implements a generic data stream scoreboard that accepts parameters for the input and output packet types; a single instance of this class is used to check the proper transformation, multiplexing and ordering of multiple data streams. The scoreboard class now leverages a policy-based design with parameterized specializations to accept any ‘packet’ class, be it VMM, UVM or OVM.

The central element in policy-based design is a class template (called the host class, which in this case is the VMM DS Scoreboard) taking several type parameters as input, which are specialized with types selected by the user (called policy classes), each implementing a particular implicit method (called a policy) and encapsulating some orthogonal (or mostly orthogonal) aspect of the behavior of the instantiated host class. In this case, the ‘policies’ implemented by the policy classes are the ‘compare’ and ‘display’ routines.

By supplying a host class combined with a set of different, canned implementations for each policy, the VMM DS scoreboard can support all the different behavior combinations, resolved at compile time, and selected by mixing and matching the supplied policy classes in the instantiation of the host class template. Additionally, by writing a custom implementation of a given policy, a policy-based library can be used in situations requiring behaviors unforeseen by the library implementor.

So, let’s go through a set of simple steps to see how you can use the VMM DS scoreboard in a UVM environment.

Step 1: Create the policy class for UVM and define its ‘policies’
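The original post showed the policy class as a screenshot; a plausible reconstruction (the exact prototypes expected by vmm_sb_ds_typed may differ, so treat this as a sketch) looks like:

```systemverilog
// A policy class supplies the 'compare' and 'display' routines the
// host scoreboard invokes; here they delegate to the standard
// uvm_object methods.
class uvm_object_policy;
  static function bit compare(uvm_object actual, uvm_object expected);
    return actual.compare(expected);
  endfunction
  static function void display(uvm_object obj);
    obj.print();
  endfunction
endclass
```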


Step 2: Replace the UVM scoreboard with a VMM one extended from “vmm_sb_ds_typed” and specialize it with the ubus_transfer type and the previously created uvm_object_policy.

class ubus_example_scoreboard extends vmm_sb_ds_typed #(ubus_transfer,ubus_transfer, uvm_object_policy);


endclass: ubus_example_scoreboard

Once this is done, you can either declare a VMM TLM analysis export to connect to the bus monitor in the UBUS environment, or use the pre-defined one in the VMM DS scoreboard:

vmm_tlm_analysis_export #(ubus_example_scoreboard,ubus_transfer) analysis_exp;

Given that for any configuration one master and one slave would be active, define the appropriate streams in the constructor. (Though this is not required when there is only a single stream, we define them explicitly so that this can scale up to multiple input and expect streams for different tests.)

this.define_stream(0, "Slave 0", EXPECT);
this.define_stream(0, "Master 0", INPUT);

Step 2.a: Create the ‘write’ implementation for the analysis export

Since we are verifying the operation of the slave as a simple memory, we just add the appropriate logic to insert a packet into the scoreboard on a ‘WRITE’ and an expect/check when the transfer is a ‘READ’ from an address that has already been written to.
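The code figure for this step is lost; it can be approximated as follows. The insert/expect entry points shown are from my recollection of the vmm_sb_ds API, so verify the exact names and the export's write() prototype against the shipped documentation:

```systemverilog
// Inside ubus_example_scoreboard: WRITEs are recorded as expected
// data, READs are checked against what was written earlier.
virtual function void write(int id = -1, ubus_transfer trans);
  if (trans.read_write == WRITE)
    this.insert(trans);        // post to the expect stream
  else if (trans.read_write == READ)
    this.expect_in(trans);     // compare against the expect stream
endfunction
```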


Step 2.b: Implement the stream_id() method

You can use this method to determine which stream a specific transfer belongs to, based on the packet’s content, such as a source or destination address. In this case, the bus monitor updates the ‘slave’ property of the collected transfer based on where the address falls in the slave memory map.



Step 3: Create the UVM Analysis to VMM Analysis Adapter

The uvm_analysis_to_vmm_analysis adapter is used to connect any UVM component with an analysis port to any VMM component with an analysis export. The adapter converts all incoming UVM transactions to VMM transactions and drives the converted transactions to the VMM component through the analysis port-export connection. If you are using the VMM-UVM interoperability library, you do not have to create the adapter, as it is already available in the library.



Create the ‘write’ implementation for the analysis export in the adapter

The write method, called via the analysis export, simply posts the received UBUS transfer from the UVM analysis port to the VMM analysis port.
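Putting Step 3 and this write() together, an adapter along the following lines forwards each UBUS transfer from the UVM analysis connection to the VMM one. This is only a sketch; the interoperability library’s actual adapter is more general (it handles type conversion), whereas here the same ubus_transfer type is used on both sides.

```systemverilog
class uvm_analysis_to_vmm_analysis extends uvm_component;
  // UVM side: receives transfers from the bus monitor's analysis port
  uvm_analysis_imp #(ubus_transfer, uvm_analysis_to_vmm_analysis) analysis_export;
  // VMM side: drives the transfers into the DS scoreboard's analysis export
  vmm_tlm_analysis_port #(uvm_analysis_to_vmm_analysis, ubus_transfer) an_port;

  function new(string name, uvm_component parent = null);, parent);
    analysis_export = new("analysis_export", this);
    an_port         = new(this, "an_port");
  endfunction

  // Forward each transfer received on the UVM side to the VMM side
  function void write(ubus_transfer t);
    an_port.write(t);
  endfunction
endclass
```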


Step 4: Make the TLM connections

In the original example, the item_collected_port of the slave monitor was connected to the analysis export of the example scoreboard. Here, the DataStream scoreboard has an analysis export which expects a VMM transaction. Hence, we need the adapter created above to sit between the analysis port of the UVM bus monitor and the analysis export of the VMM DS scoreboard.
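In the environment’s connect phase, the hookup then looks roughly like this (instance names such as ubus0, adapter and scoreboard0 follow the ubus example and are illustrative):

```systemverilog
// UVM monitor -> adapter (UVM TLM connect)
ubus0.slaves[0].monitor.item_collected_port.connect(adapter.analysis_export);
// adapter -> VMM DS scoreboard (VMM TLM bind)
adapter.an_port.tlm_bind(scoreboard0.analysis_exp);
```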


Step 5: Define Additional streams if required for multi-master multi-slave configurations

This step is not required for a single master/slave configuration. However, for multi-master/multi-slave configurations you would need to create additional streams so that you can verify correctness across all the different permutations exercised by tests like “test_2m_4s”.

In this case, the following is added to test_2m_2s in the connect_phase():
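For example, for a second master/slave pair, the extra streams might be defined as follows (the stream ids and names are illustrative):

```systemverilog
scoreboard0.define_stream(1, "Slave 1",  EXPECT);
scoreboard0.define_stream(1, "Master 1", INPUT);
```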


Step 6: Add appropriate options to your compile command and analyze your results

Change the Makefile by adding -ntb_opts uvm-1.1+rvm on the command line, and add +define+UVM_ON_TOP:

vcs -sverilog -timescale=1ns/1ns -ntb_opts uvm-1.1+rvm +incdir+../sv -l comp.log +define+UVM_ON_TOP

And that is all; you are ready to go and validate your DUT with a more advanced scoreboard with loads of built-in functionality. This is what you will get when you execute the “test_2m_4s” test:

Thus, not only do you have stream-specific information now, but you also have access to much more functionality, as mentioned earlier. For example, you can model transformations, check for out-of-order matches, allow for dropped packets, and iterate over different streams to get access to specific transfers. Again, depending on your requirements, you can use the simple UVM comparator for your basic checks and switch over to the DS scoreboard for the more complex scenarios with the flip of a switch in the same setup. This is what we did for a UVM PCIe VIP we developed earlier (From the Magician’s Hat: Developing a Multi-methodology PCIe Gen2 VIP), so that users have access to all the information they require. Hopefully, this will keep you going till a more powerful UVM scoreboard arrives in a subsequent UVM version.

Posted in Communication, Interoperability, Reuse, Scoreboarding, UVM, VMM infrastructure | 2 Comments »

The right name at the right space: using ‘namespace’ in VMM to set virtual interfaces

Posted by Amit Sharma on 7th September 2011

Abhisek Verma, CAE, Synopsys

A ‘namespace’ is an abstract container or environment created to hold a logical grouping of unique identifiers or names. Thus, the same identifier can be independently defined in multiple namespaces, and the meaning associated with an identifier defined in one namespace may or may not be the same as that of the same identifier defined in another namespace. ‘Namespace’ in VMM is used to group or tag different VMM objects, resources and transactions with a meaningful namespace for the different components across the testbench environment. This allows the user to identify and access them efficiently. For example, a benefit of this approach is that it relieves the user from making cross-module references to access the various resources. This can be seen in the context of accessing the interfaces associated with a driver or a monitor in the environment, and goes a long way towards making the code more scalable.

Accessing and assigning interface handles to a particular transactor can be done in various ways in VMM, as discussed in the following blogs: Transactors and Virtual Interface and Extending Hierarchical Options in VMM to work with all data types. In addition to these, one can leverage ‘namespaces’ in VMM to achieve this fairly elegantly. The idea here is to put the Virtual Interface instances in the appropriate namespace in the object hierarchy to be retrieved by the verification environment wherever required through simple APIs as shown in the following steps:

STEP 1:: Define a parameterized class extending from vmm_object to act as a wrapper for the interface handle.

STEP 2:: Instantiate the interface wrapper in the top-level MODULE and put it in the “VIF” namespace.

STEP 3:: In the environment, access the interface wrapper by querying for it in the “VIF” namespace, and use the retrieved handle to set the interface in the transactor.

The example below demonstrates the implementation of the above

The Interface and DUT templates..


Step 1: Parameterized wrapper class for the interface:
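A sketch of such a wrapper, parameterized on the interface type so that the same class can be reused for any interface (the class and member names are illustrative):

```systemverilog
class vif_wrapper #(type T = int) extends vmm_object;
  T v_if; // the virtual interface handle being carried

  function new(T v_if, string name, vmm_object parent = null);
    super.new(parent, name);
    this.v_if = v_if;
  endfunction
endclass
```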


The Testbench Top:
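In the testbench top, the wrapper is created and registered in the “VIF” namespace. This is a sketch only: set_object_name() is assumed here as the vmm_object namespace-registration API, and its exact signature may vary with the VMM version.

```systemverilog
module top;
  axi_if if0(); // the physical interface, also wired to the DUT

  vif_wrapper #(virtual axi_if) if_wrap;

  initial begin
    if_wrap = new(if0, "if_wrap");
    // Register the wrapper in the "VIF" namespace (API name assumed)
    if_wrap.set_object_name("if_wrap", "VIF");
  end
endmodule
```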


The Program Block:
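In the program block (or environment), the wrapper is then looked up by name in the “VIF” namespace and its handle assigned to the transactor. Again a sketch: find_object_by_name() is assumed here as the vmm_object namespace query API; check your VMM version for the exact call.

```systemverilog
vif_wrapper #(virtual axi_if) w;
vmm_object obj;

// Query the "VIF" namespace for the wrapper (API name assumed)
obj = vmm_object::find_object_by_name("if_wrap", "VIF");
if (obj != null && $cast(w, obj))
  drv.v_if = w.v_if;   // set the virtual interface in the transactor
else
  `vmm_fatal(log, "No interface wrapper found in the VIF namespace");
```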


Posted in Configuration, Structural Components, VMM infrastructure | Comments Off

Using the VMM Performance Analyzer in a UVM Environment

Posted by Amit Sharma on 23rd August 2011

As a generic VMM package, the Performance Analyzer (PAN) is not based on, nor does it require, specific shared resources, transactions or hardware structures. It can be used to collect statistical coverage metrics relating to the utilization of a specific shared resource, and thus helps measure and analyze many different performance aspects of a design. UVM does not currently have a performance analyzer as part of its base class library. Given that the collection/tracking and analysis of performance metrics of a design has become a key checkpoint in today’s verification, there is a lot of value in integrating the VMM Performance Analyzer into a UVM testbench. To demonstrate this, we will use both VMM and UVM base classes in the same simulation.

Performance is analyzed based on user-defined atomic resource utilization called ‘tenures’. A tenure refers to any activity on a shared resource with a well-defined starting and ending point. A tenure is uniquely identified by an automatically-assigned identifier. We take the XBUS example in  $VCS_HOME/doc/examples/uvm_1.0/simple/xbus as a demo vehicle for the UVM environment.

Step 1: Defining data collection

Data is collected for each resource in a separate instance of the “vmm_perf_analyzer” class. These instances should be allocated in the build phase of the top level environment.

For example, in the build phase of the top-level XBUS testbench:
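A sketch of this allocation (the vmm_perf_analyzer constructor is typically handed a vmm_sql_db handle into which results are written; the argument lists are abbreviated and the names are assumptions, so consult the PAN documentation for the full signatures):

```systemverilog
vmm_sql_db_ascii perf_db;   // database the analyzer writes into (name assumed)
vmm_perf_analyzer bus_perf; // one analyzer instance per shared resource

function void build_phase(uvm_phase phase);
  super.build_phase(phase);
  perf_db  = new("xbus_perf.db");
  bus_perf = new("xbus_pan", perf_db);
endfunction
```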


Step 2: Defining the tenure, and enable data collection

There must be one instance of the “vmm_perf_tenure” class for each operation performed on the shared resource. Tenures are associated with the instance of the “vmm_perf_analyzer” class that corresponds to the resource being operated on. In the case of the XBUS example, let’s say we want to measure transaction throughput (i.e., for the XBUS transfers). This is how we associate a tenure with the XBUS transaction: to denote the start and end of the tenure, we define two additional events in the XBUS master driver (started, ended). ‘started’ is triggered when the driver obtains a transaction from the sequencer, and ‘ended’ once the transaction has been driven on the bus and the driver is about to call seq_item_port.item_done(rsp). When ‘started’ is triggered, a callback is invoked to get the PAN to start collecting statistics. Here is the relevant code.


Now, the Performance Analyzer works on classes extended from vmm_data and uses the base-class functionality for starting/stopping these tenures. Hence, the callback task triggered at the appropriate points has to convert the UVM transactions to corresponding VMM ones. This is how it is done.

Step 2.a: Creating the VMM counterpart of the XBUS Transfer Class


Step 2.b: Using the UVM Callback for starting/stopping data collection and calling the UVM -> VMM conversion routines appropriately.


The callback class needs to be associated with the driver as follows in the top testbench (xbus_demo_tb):


Step 3: Generating the Reports..

In the report_ph of xbus_demo_tb, save and write out the appropriate databases:
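A sketch of what this can look like (the save_db() method name is an assumption for illustration; consult the Performance Analyzer documentation for the exact reporting calls):

```systemverilog
function void report_phase(uvm_phase phase);
  // Flush the collected tenure statistics to the database and
  // emit the textual performance report (method names assumed)
  bus_perf.save_db();;
endfunction
```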


Step 4: Run the simulation, and analyze the reports for possible inefficiencies.

Use -ntb_opts uvm-1.0+rvm +define+UVM_ON_TOP with VCS

Include the new files in the compile file list. The following table shows the text report at the end of the simulation.


You can generate the SQL databases as well, and typically you would do this across multiple simulations. Once you have done that, you can create custom queries to get the desired information out of the SQL database across your regression runs. You can also analyze the results and generate the required graphs in Excel. Please see the following post: Analyzing results of the Performance Analyzer with Excel.

So there you go: the VMM Performance Analyzer can fit into any verification environment you have. So make sure that you leverage this package to make the RTL-level performance measurements needed to validate micro-architectural and architectural assumptions, as well as to tune the RTL for optimal performance.

Posted in Coverage, Metrics, Interoperability, Optimization/Performance, Performance Analyzer, VMM infrastructure, Verification Planning & Management | 6 Comments »

Automatic generation of Register Model for VMM using IDesignSpec

Posted by Amit Sharma on 5th August 2011

Nitin Ahuja, Verification Engineer, Agnisys Technology Pvt Ltd

Generating a register model by hand can take up a lot of time in the design process and may introduce serious bugs. Generating the register model with a generator such as IDesignSpec™ reduces the coding effort and produces more robust code by avoiding those bugs in the first place, making the process more efficient and significantly reducing time to market.

A register model generator is efficient in the following ways:

1. Error-free code from the start: being automatically generated, the register model code is free from human and logical errors.

2. If the register model specification changes, it is easy to modify the spec and regenerate the code in no time.

3. All kinds of hardware, software and industry-standard specifications, as well as verification code, are generated from a single source specification.

IDesignSpec™ (IDS) is capable of generating the RTL as well as verification code such as VMM (RALF) from a register specification defined in Word, Excel, OpenOffice or IDS-XML.

Getting Started

A simple register can be defined inside a block in IDesignSpec™ as:

The above specification is translated into the following RALF code by IDS.
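For reference, RALF for a simple one-field register typically looks along these lines (the names, widths and values here are hypothetical, not the actual IDS output):

```
register ctrl_reg {
  bytes 4;
  field enable {
    bits 1;
    access rw;
    reset 'h0;
  }
}
```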


By convention, ralgen generates backdoor access for every register whose hdl_path is mentioned in the RALF file. Special properties on a register, such as hdl_path and coverage, can be specified inside the IDS specification itself and will be appropriately translated into the RALF file.

The properties can be defined as below:

For Block:


As for the block, hdl_path, coverage or any other such property can be specified for other IDS elements, such as a register or field.

For register/field:



Note: The coverage property can take the following three possible values:

1. ON/on: This enables all coverage types, i.e., address coverage for a block or memory, and REG_BITS and FIELD_VALS coverage for registers and fields.

2. OFF/off: By default, all coverage is off. This option is valid only when coverage has been turned ON at the top of the hierarchy or on the parent; to turn off coverage for a particular register, specify ‘coverage=off’ for that register or field. The coverage for that specific child will be the inverse of its parent’s.

3. abf: Any combination of these three characters can be used to turn ON a particular type of coverage. They stand for:

· a : address coverage

· b : reg_bits

· f : field_vals

For example, to turn on the reg_bits and field_vals coverage, specify:


In addition to these, a few more properties can be specified in a similar way. Some of them are:

1. bins: Various bins for the coverpoints can be specified using this property.

Syntax: {bins="bin_name = {bin} ; <bin_name>…"}

2. constraint: Constraints can also be specified for a register, field or any other element.

Syntax: {constraint="constraint_name {constraint};<constraint_name> …"}

3. [vmm]<code>[/vmm]: This tag gives users the ability to embed their own SystemVerilog code in any element.

4. cross: Crosses for the coverpoints of registers can be specified using the cross property.

Syntax: {cross = "coverpoint_1 <{label label_name}>;< coverpoint_2>…"}

Different types of registers in IDS :

1. Register arrays:

Register arrays in RALF can be defined in IDS using register groups. To define a register array of size ‘n’, place a register inside a regroup with the repeat count equal to the size of the array (n).

For example, a register array named “reg_array” with size equal to 10 can be defined in IDS as follows:


The above specification is translated into the following VMM code by IDS:


2. Memory:

Similar to register arrays, memories can also be defined in IDS using register groups. The only difference between a memory definition and a register array definition is that for a memory, external is set to “true”. The size of the memory is calculated as ((End_Address – Start_Address) * Repeat_Count).

As an example, a memory named “RAM” can be defined in IDS as follows:


The above memory specification will be translated into following VMM code:



3. Regfile: A regfile in RALF can be specified in IDS using a register group containing multiple registers (> 1).

One such regfile, with 3 registers repeated 16 times, is shown below:


Following is the IDS-generated VMM code for the above regfile:



The IDS generated RALF can be used with the Synopsys Ralgen to generate the RAL model as well as the RTL.

To generate the RAL model use the following command:


And for the RTL generation use the following command:



It is beneficial to generate the RALF using a register model generator such as IDesignSpec™, as it produces bug-free code and reduces time and effort. If the register model specification changes, it enables users to regenerate the code in no time.


We will extend this automation further in the next article, where we will cover details about how you can “close the loop” on register verification. The “Closed Loop Register Verification” article will be available on VMM Central soon. Meanwhile, if you have any questions/comments you can reach me at nitin[at]agnisys[dot]com.

Posted in Automation, Organization, Register Abstraction Model with RAL, Tools & 3rd Party interfaces, VMM infrastructure | 1 Comment »

Building & Configuring Coverage Model – VMM Style – Part-III

Posted by paragg on 25th June 2011

Parag Goel, Senior Corporate Application Engineer, Synopsys

In the final blog of this coverage modeling with VMM series, we focus on error coverage. Negative scenario testing is an integral part of verification. But again we have this question: have I covered all negative scenarios?

So it is important to ensure that the generic coverage model tracks all the error scenarios.

Let’s see how a specific mechanism provided in VMM, in the form of vmm_log_catcher, helps track error coverage efficiently and effectively. The VMM log catcher is able to identify/catch a specific string in any of the messages issued through the VMM reporting mechanism.

Typically, the verification environment issues messages to STDOUT when the DUT responds to an error scenario. These messages can be ‘caught’ by the log catcher to update the appropriate coverage groups. Let’s see how this is done in detail.

The verification environment would respond to each negative scenario by issuing a message with a unique text, specific to each error scenario.

In the context of the AXI framework, we can introduce a wide range of error scenarios and test whether the DUT responds correctly. A few possible AXI error scenarios are listed below for your reference.


However, all the scenarios may not always be applicable, and hence configurability is required to enable only the required set of coverpoints tied to the relevant negative scenarios. Thus, we should have the same configurability for error coverage as I discussed in the earlier blogs.

Let’s see how we can catch the relevant responses and sample the appropriate covergroups.

As mentioned earlier, in the example below, we make use of the unique message issued as a result of a negative scenario.

This is how we use the VMM Log catcher.

1. The error coverage class is extended from vmm_log_catcher – VMM base class.

2. The vmm_log_catcher::caught() API is utilized as a means to qualify the covergroup sampling.
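A sketch of such an error coverage class (the class name, covergroup and the throw-back of the caught message are illustrative; check the vmm_log_catcher documentation for the exact hooks):

```systemverilog
class axi_err_coverage extends vmm_log_catcher;
  bit err_seen;

  covergroup err_cg;
    cp_err: coverpoint err_seen;
  endgroup

  function new();
    err_cg = new();
  endfunction

  // Invoked for every message matching the installed catch criteria
  virtual function void caught(vmm_log_msg msg);
    err_seen = 1;
    err_cg.sample();  // qualify the covergroup sampling
    this.throw(msg);  // let the message continue to be issued normally
  endfunction
endclass
```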


In the code above, whenever a message with the text “AXI_WRITE_RESPONSE_SLVERR” is issued from anywhere in the verification environment, the ‘caught’ method is invoked, which in turn samples the appropriate covergroup. Additionally, you can specify more parameters in the catch API to restrict which ‘scenarios’ should be caught.


function void vmm_log::catch(vmm_log_catcher catcher,
                             string name = "",
                             string inst = "",
                             bit recurse = 0,
                             int typs = ALL_TYPS,
                             int severity = ALL_SEVS,
                             string text = "");

The above API installs the specified message handler to catch any message of the specified type and severity, issued by the message service interface instances specified by the name and instance arguments, and containing the specified text. By default, this method catches all messages issued by this message service interface instance.

Hope this set of articles is relevant and useful to you. I have made an attempt to leverage some of the built-in capabilities of the SystemVerilog language and the VMM base classes to target some of the challenges in creating configurable coverage models. These techniques can be improved further to make them more efficient and scalable. I look forward to hearing any inputs you have in this area.

Posted in Automation, Configuration, Coverage, Metrics, Reuse, Structural Components, VMM, VMM infrastructure | Comments Off

Building & Configuring Coverage Model – VMM Style – Part-II

Posted by paragg on 25th June 2011

Parag Goel, Senior Corporate Application Engineer, Synopsys

In the previous post, we looked at how you can enable/disable different types of coverage encapsulated in the coverage model wrapper class. In this post, let’s look at how we can easily create an infrastructure to pass different inputs to the wrapper class so as to configure the coverage collection per user. The infrastructure ensures that these values percolate down to the sub-coverage model groups.

The following are some of the key inputs that need to be passed to the different coverage component classes:

1. SV Virtual Interfaces so that different signal activity can be accessed

2. The Transactions observed and collected by the physical level monitors

3. The ‘Configuration’ information


Let’s look at how we can easily pass the signal-level information to the coverage model.

Step I: Encapsulation of the interface in the class wrapper.

class intf_wrapper extends vmm_object;
  virtual axi_if v_if;

  function new (string name, virtual axi_if mst_if);
    super.new(null, name);
    this.v_if = mst_if;
  endfunction

endclass: intf_wrapper

Step II: In the top-level class/environment, set this object using the vmm_opts API.

class axi_env extends vmm_env;
  intf_wrapper mc_intf;

  function void build_ph();
    mc_intf = new("Master_Port", tb_top.master_if_p0);
    // Set the master port interface
    vmm_opts::set_object("VIP_MSTR:vip_mstr_port", mc_intf, this);
  endfunction

endclass: axi_env

Step III: Connecting in the coverage class.

A. Get the object containing interface in the coverage model class using vmm_opts.

assert($cast(this.mst_port_obj, vmm_opts::get_object_obj(is_set, this, "vip_mstr_port")));

B. Connecting local virtual interface to one contained in the object.

this.cov_vif = this.mst_port_obj.v_if;

Now, the collected transaction object needs to be passed from the monitor to the coverage collector. This can be conveniently done in VMM using TLM communication, through the vmm_tlm_analysis_port, which establishes the communication between a producer and an observer.

class axi_transfer extends vmm_data;
  . . .
endclass

class axi_bus_monitor extends vmm_xactor;
  vmm_tlm_analysis_port #(axi_bus_monitor, axi_transfer) m_ap;

  task collect_trans();
    . . .
    // Write the collected transaction to the analysis port
    m_ap.write(trans);
  endtask
endclass

class axi_coverage_model extends vmm_object;
  vmm_tlm_analysis_export #(axi_coverage_model, axi_transfer) m_export;

  function new (string inst, vmm_object parent = null);
    super.new(parent, inst);
    m_export = new(this, "m_export");
  endfunction

  // Sample the appropriate covergroup once a transaction is
  // received in the write function
  function void write(int id, axi_transfer trans);
    . . .
  endfunction
endclass

To set up the TLM Connections in the agent/environment, we need to do the following:

class axi_subenv extends vmm_group;

  // Instantiate the component classes
  axi_bus_monitor mon;
  axi_coverage_model cov;
  . . .

  virtual function void build_ph();
    mon = new("mon", this);
    cov = new("cov", this);
  endfunction

  virtual function void connect_ph();
    // Bind the TLM ports via VMM's tlm_bind
    mon.m_ap.tlm_bind(cov.m_export);
  endfunction

endclass

To make the coverage model truly configurable, we need to look at some other key requirements at different levels of granularity. These can be summarized as the ability to do the following:

1. Enable/disable coverage collection for each covergroup defined. A covergroup should be created only if the user wishes it, so there should be a configuration parameter which restricts creation of the covergroup altogether; the same parameter should also control the sampling of the covergroup.

2. The user must be able to configure limits on the individual values being covered, within a legal set of values. For example, for the transaction field BurstLength, the user should be able to tell the model which limits to collect coverage on within the legal range of ‘1’ to ‘16’ per the AXI spec. Providing lower and upper limits for transaction parameters like burst size, burst length and address makes the model re-usable. These limits should be modeled as variables which can be overwritten dynamically.

3. The user should be able to control the number of bins to be created, for example for fields like address. The auto_bin_max option can be exploited to achieve this when the user has not explicitly defined bins.

4. The user must be able to control the number of hits for which a bin is considered covered. option.at_least can be used for this purpose, and its input can be a user-defined parameter.

5. The user should also be able to specify the coverage goal, i.e., when the coverage collector should report the covergroup as “covered” even though the coverage is not 100%. This can be achieved using option.goal, where the goal is again a user-defined parameter.

All the parameters required to meet the above requirements can be encapsulated in a coverage configuration class, which can be set and retrieved in the same fashion described for the interface wrapper class, using the vmm_opts APIs.
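Requirements 3-5 above map directly onto SystemVerilog covergroup options. A sketch, with hypothetical fields in the coverage configuration class:

```systemverilog
covergroup addr_cg (coverage_cfg cfg) with function sample(bit [31:0] addr);
  option.per_instance = 1;
  option.at_least = cfg.min_hits;   // hits needed before a bin counts as covered
  option.goal     = cfg.cov_goal;   // report "covered" at this percentage
  cp_addr: coverpoint addr {
    option.auto_bin_max = cfg.num_addr_bins; // cap on automatically created bins
  }
endgroup
```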

class coverage_cfg extends vmm_object;
  int disable_wr_burst_len;
  . . .

  function new(vmm_object parent=null, string name);
    super.new(parent, name);
  endfunction
endclass

// In the coverage model, retrieve the configuration object:
class axi_coverage_model extends vmm_object;
  coverage_cfg cfg;

  function new(vmm_object parent=null, string name);
    bit is_set;
    super.new(parent, name);
    // option name assumed for illustration
    $cast(cfg, vmm_opts::get_object_obj(is_set, this, "coverage_cfg"));
  endfunction
endclass

Wei Hua presents another cool mechanism for collecting these parameters using the vmm_notification mechanism in this earlier blog:

A Generic Functional Coverage Solution Based On vmm_notify

Hope you found this useful. I will be talking about how to track Error Coverage in my next blog, so stay tuned!

Posted in Communication, Configuration, Coverage, Metrics, Reuse, Structural Components, VMM, VMM infrastructure | Comments Off

Building & Configuring Coverage Model – VMM Style – Part-I

Posted by paragg on 24th June 2011

Parag Goel, Senior Corporate Application Engineer, Synopsys

“To minimize wasted effort, coverage is used as a guide for directing verification resources by identifying tested and untested portions of the design.”

- IEEE Standard for SystemVerilog (IEEE Std. 1800-2009)

Configurability and reusability are the buzzwords in the verification of chips, and these are enabled to a big extent by present-day verification methodologies. Through a set of blogs, I plan to show how we can create configurable coverage models in VMM-based environments. Given that AMBA AXI is one of the most commonly used protocols in the industry for communication amongst SoC peripherals, I chose an AXI-based framework for my case study.

The idea here is to create a configurable coverage model leveraging some of the base classes provided in the methodology, so that we can make it completely reusable as we move from block to system level or across projects. Once we enable that, we can move the coverage model inside the sub-environment modeled by vmm_group or vmm_subenv, which are the units of reuse.


Primary Requirements of Configuration Control:

Two important requirements that are needed to be met to ensure that the coverage model is made a part of reusable components are:

1. Ability to enable/disable the coverage model whenever required.

2. Ability to turn ON/OFF different subgroups at the desired granularity. For example, a user may not always want error coverage to be enabled, except under specific circumstances.

To meet the above requirements, we make use of the VMM global and hierarchical configurations.

Through the vmm_opts base classes, VMM provides a mechanism to control the configuration parameters of a verification environment. This can be done in a hierarchical as well as in a global manner. These options are summarized below:


In the environment, the coverage_enable is by default set to 0, i.e. disabled.

coverage_enable = vmm_opts::get_int("coverage_enable", 0);

Now, the user can enable the coverage via either of the two mechanisms.

1. From user code using vmm_opts.

The basic rule is that you need to ‘set’ an option *before* the ‘get’ is invoked, during the time when the components are constructed. As a general recommendation, for structural configuration the build phase is the most appropriate place.

function void axi_test::build_ph();
  // Enable coverage
  vmm_opts::set_int("@%*:axi_subenv:enable_coverage", 1);
endfunction

2. From the command line or an external option file. The option is specified using the command-line +vmm_<name> or +vmm_opts+<name>.

The command line supersedes the option set within code as shown in 1.

Users can also specify options for specific instances, or hierarchically using regular expressions.


Now let’s look at the typical classification of a coverage model.

From the perspective of AXI protocol, we can look at the 4 sub-sections.

Transaction coverage: coverage definition on the user-controlled parameters usually defined in the transaction class & controlled through sequences.

Error coverage: coverage definition on the pre-defined error injection scenarios

Protocol coverage: This is protocol specific (AXI handshake coverage). In the case of AXI, it mainly covers the handshake signals, i.e., READY & VALID, on all 5 channels.

Flow coverage: This is again protocol specific; for AXI it covers various features like outstanding transactions, interleaving, write data before write address, etc.


At this point, let’s look at how these different subgroups within the complete coverage model can be enabled or disabled. Once the coverage configuration class is built and passed to the main coverage model, we need fine-grained control to enable/disable individual coverage models. The code below shows how the user can control all the coverage models in the build phase of the main coverage class.

Here too, we can see how vmm_opts is used to meet the requirement of controlling individual parameters.

vmm_opts::set_int("@%*:disable_transaction_coverage", 0);
vmm_opts::set_int("@%*:disable_error_coverage", 0);
vmm_opts::set_int("@%*:disable_axi_handshake_coverage", 0);
vmm_opts::set_int("@%*:disable_flow_coverage", 0);

In my next blog, I will show how hierarchical VMM configurations are used to dynamically pass signal-level and other configuration-related information to the coverage model. We shall also discuss the usage of the VMM TLM feature towards fulfilling the goal of a configurable coverage model. Stay tuned!

Posted in Configuration, Coverage, Metrics, Organization, Reuse, SystemVerilog, VMM, VMM infrastructure | Comments Off

Transaction Debugging with Discovery Visualization Environment (DVE) Part-2

Posted by JL Gray on 8th March 2011

Asif Jafri, Verilab Inc.

In my previous blog post, I introduced how to dump waves and how to use $tblog for dynamic data and message recording. If you need more control over scope-sensitive transaction debugging, the $msglog task is very useful. This blog is divided into two sections: in the first section, I talk about how to use $msglog; in the second section, I discuss how VMM performs transaction recording by calling $msglog from within the VMM library. The call is protected so as not to confuse other simulators or tools. You can use $msglog in any of your own code as well.

•    The advantage of using $msglog is that we have more control over the debug messaging. If a transaction can be divided into a start and a finish, it is possible to identify cause and effect.
•    Parent and child relationships can be shown.
•    Execution streams can be identified with start and end times.

The following steps need to be followed to invoke $msglog.

Include msglog.svh in testbench code
Add +incdir+${VCS_HOME}/include in the compile line

1) The example below shows how to call the $msglog task in the testbench. The first $msglog statement creates a transaction (read) on a stream (stream1) which has an attribute addr. It also sets the header text (RD) and body text (text 1). This statement can be placed in a read task of your transactor. The second $msglog statement, also placed in the read task, shows when the read completes. Streams are global and do not need to be created explicitly; they are created implicitly as needed.

$msglog("stream1", XACTION, "read", NORMAL, "RD", "text 1", START, addr);

$msglog("stream1", XACTION, "read", NORMAL, "", FINISH);

The table below shows the various possible parameters for the type, severity and relation fields in the $msglog task:
As shown above you can also place $msglog tasks in the response task of the responding transactor if the transaction needs to be followed into the response transactor.

$msglog("stream1", XACTION, "resp", NORMAL, "RESP", START, data);

2) VMM provides built-in transaction recording. To enable it, use “+define+VMM_TR_RECORD” when compiling your code. At simulation runtime, recording of transactions is controlled by setting “+vmm_tr_verbosity=debug” on the command line.
The following VMM base classes have built-in recording support:
vmm_channel, vmm_voter, vmm_env, vmm_subenv, vmm_timeline

The figure below shows an example of the recorded transactions as viewed in the waveform viewer:


You can also do your own transaction recording by using the following VMM functions:


mystream = vmm_tr_record::open_stream(get_instance(), "MyChannel");

vmm_tr_record::start_tr(mystream, "Read", "Text line 1\nText line 2");
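Putting these two calls together, here is a minimal sketch of how user recording might sit inside a transactor. The stream handle type (vmm_tr_stream) and the placement inside main() are assumptions for illustration, not prescribed by VMM:

```systemverilog
class my_xactor extends vmm_xactor;
   vmm_tr_stream mystream; // recording stream handle; type name assumed for this sketch

   virtual task main();;
      // open one recording stream per transactor instance
      mystream = vmm_tr_record::open_stream(get_instance(), "MyChannel");
      forever begin
         // ... retrieve a transaction from the input channel ...
         vmm_tr_record::start_tr(mystream, "Read", "Text line 1\nText line 2");
         // ... drive the transaction on the physical interface ...
      end
   endtask
endclass
```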



As shown in this two-part blog series on transaction debugging, $tblog and $msglog can be very useful constructs. You can choose to dump transactions and follow them through the environment, dump channel data, notification IDs, phase names, and so on. Being able to see all this information in the waveform viewer has been a blessing for me. I hope it is helpful to you.

Posted in Debug, VMM infrastructure | Comments Off

Transaction Debugging with Discovery Visualization Environment (DVE) Part-1

Posted by JL Gray on 25th February 2011

Asif Jafri, Verilab Inc., Austin, TX

The art of verification has evolved dramatically over the last decade. What used to be a very simple Verilog testbench, which could not possibly cover the vast solution space, has evolved into today's constrained-random testbench: a very powerful tool, but one whose debug complexity has gone up exponentially.

VMM has introduced various debug constructs to aid in debugging the design as well as the test environment, such as:
•    Messaging: report regular, debug, or error information.
•    Recording: transactions and components have built-in recording facilities that enable transaction and environment debugging.

Today I want to spend some time looking at DVE as a powerful debug tool in our tool box.

To start things off, let's look at some simple calls used to invoke wave dumping.

1) $vcdpluson() : This call is used to start dumping design signals into a .vpd (VCD Plus) format. "vpd" is a proprietary Synopsys format (binary, highly compressed) generated by VCS, which solves the problem of excessively large .vcd (IEEE standard) format files.
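As a minimal sketch, the dump can be started from an initial block at the top of the testbench. The module and scope names here are illustrative:

```systemverilog
module tb_top;
   // ... DUT and testbench instantiations ...

   initial begin
      $vcdpluson;                 // start dumping all signals below this scope into vcdplus.vpd
      // $vcdpluson(1, tb_top.dut); // or: restrict dumping to one level of a specific scope
   end
endmodule
```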


When compiling, specify -debug_pp (for post-processing debug), -debug (for interactive debug), or -debug_all (for interactive debug with line stepping and breakpoints) to enable VCS dumping.

The calls shown above will generate waves of all design signals for viewing in the DVE waveform viewer. You can also use the UCLI (unified command line interface) command 'dump' for dumping design signals interactively or in scripts.

Wouldn't it be great if we could also view dynamic variables as waveforms?

2) The $tblog() system task is used for recording dynamic (or static) data and simple messages. No additional environment setup is required. $tblog() has to be called at the point in the testbench where you want to record a message or a variable. The next example shows how to record a message in the send_packet task of a transactor.

// Foo transactor
task send_packet();
    int id; // local variable
    $tblog(-1, "Sending packet"); // record all local and class variables
    cnt = cnt + 1; // cnt is a class variable
    if (cnt < 50)
       $tblog(0, "Count is less than 50", cnt, id); // record variables cnt and id
endtask: send_packet

Along with the message and variable values, $tblog automatically records the time and the call stack. To view these messages and variables in the waveform viewer, select a recording from the transaction browser and add it as a waveform.
The figure below shows how a message is displayed in a DVE waveform window.


Another useful construct for transaction debugging is the $msglog task, which will be discussed in the next article.

Posted in Debug, VMM infrastructure | Comments Off

Modeling ISRs with VMM RAL

Posted by Amit Sharma on 4th November 2010

In a verification environment, different components may be trying to access the DUT registers and memories. For example, the BFM might be programming some registers while the bus monitor samples their values. In specific cases, there may be an interrupt monitor which triggers an Interrupt Service Routine (ISR) whenever it sees an interrupt pin toggling on the interface. The ISR might then have to read the interrupt registers and clear the interrupt bit(s) through a front-door access.

To ensure that different components in a verification environment can access the DUT registers at any given point in time, the RAL model instantiated in the environment can be passed to different VMM components. These different components whose methods are executing in separate parallel threads can now access the same set of registers in the DUT through the RAL model. A question many folks ask is: when there are multiple parallel register accesses, how do they get scheduled through the RAL layer?  Here is an explanation of how threads are scheduled in RAL:
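As an illustrative sketch of this sharing, the same RAL model handle can simply be handed to each component at build time. The monitor class names and constructors below are assumptions; the vmm_ral_env/vmm_ral_access infrastructure and set_model() call are standard VMM RAL:

```systemverilog
class tb_env extends vmm_ral_env;
   ral_block_dut  ral_model; // generated RAL model; name assumed
   my_bus_monitor mon;       // hypothetical components that need register access
   my_isr_monitor isr_mon;

   virtual function void build();;
      ral_model = new();
      this.ral.set_model(ral_model); // register the model with the RAL access layer
      // the same handle is shared, so every component sees the same registers
      mon     = new("mon", ral_model);
      isr_mon = new("isr_mon", ral_model);
   endfunction
endclass
```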

A register read/write from different threads is comparable to an 'atomic' channel.put() from different threads. Hence accesses get scheduled in the order in which the threads post them.

A write/read would basically consist of the following atomic operations:

- A generic vmm_rw_access transaction with its fields (addr, data, kind, etc.) being populated and posted to the execute_single() task of the translate transactor

- The transaction being translated in the execute_single() task and pushed into the input channel of the user BFM

- The transaction being retrieved through get/activate in the user BFM's main thread and then driven onto the DUT interface

Thus the 'posting' of RAL accesses, whenever a read/write/mirror/update is invoked, happens in the same order they are issued. The execute_single() task then just translates the generic RW RAL transaction into a transaction the user BFM understands, and doesn't change the order.
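For example, two threads issuing accesses through the same model are simply serviced in the order their calls reach the channel. The register names (CTRL, STATUS) are hypothetical:

```systemverilog
vmm_rw::status_e status1, status2;
bit [63:0] rdata;

// Both accesses funnel through the same translate transactor; whichever
// thread posts first is driven first, just like back-to-back channel.put() calls.
fork
   env.ral_model.CTRL.write(status1, 32'h0000_0001); // e.g. the BFM configuring the DUT
   env.ral_model.STATUS.read(status2, rdata);        // e.g. a monitor polling a status register
join
```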

Now, how do we handle a scenario where specific register accesses, like those coming from an ISR, need to be given a higher priority than accesses coming from other threads in a verification environment? VMM and RAL methods give us specific hooks to achieve this requirement. Here is one option for how this can be done.

If we look at the vmm_ral_reg::write method, we have the given signature:

virtual task write(output vmm_rw::status_e status,
                   input bit [63:0] value,
                   input vmm_ral::path_e path = vmm_ral::DEFAULT,
                   input string domain = "",
                   input int data_id = -1,
                   input int scenario_id = -1,
                   input int stream_id = -1);

Now, a generic RAL transaction that gets created through any register access has the same data_id, scenario_id, and stream_id arguments, which get passed on from the read/write call. These arguments help us tag and track the transactions if desired, and they can be used in the execute_single() task to ensure that accesses from ISRs have the highest priority. But first, if we go back to the earlier post by Varun, Issuing concurrent READ/WRITE accesses to the same register on two physical interfaces using RAL, we note that if the RAL model is processing a register access, it will not initiate the next one until the earlier one has completed. So, this is what we need to get done: we first use the RAL proxy transactor to schedule the ISR register access. After that, we flush out any existing accesses and prevent any new register access through RAL until the access from the ISR is completed.

This is how it will be done:


For a normal register access, a read/write method will be invoked as follows:

          ral_model.<reg_name>.write(stat, wdata, vmm_ral::DEFAULT, "AHB"); // 'wdata' is the value to be driven; "AHB" is the domain/physical interface

For a register access in an ISR modeled in a MS Scenario, we have:

         env.bfm.to_ahb.grab(this); // grabs the channel

         env.ral.write(status, env.ral_model.<reg_name>.get_address_in_system("AHB"), data, 32, "AHB", , , 1); // issued through the RAL access layer; the last argument is the "stream_id"

         env.bfm.to_ahb.ungrab(this); // allows the channel to be accessed from other threads once the ISR is completed

Once this is done, the execute_single() task inside the translate transactor will know whether an access came from an ISR, and can ensure that ISR accesses are processed with priority using the VMM channel methods in combination with a semaphore:

virtual task execute_single(vmm_rw_access tr);

   AHB_tr cyc;
   AHB_tr cyc_active; // to keep any transaction residing in the active slot
   semaphore sem = new(1); // semaphore with a single key to prevent new accesses while a register access from an ISR is processed

   // Translate the generic RW into a simple RW
   cyc = new;
   {, cyc.addr} = tr.addr;
   if (tr.kind == vmm_rw::WRITE) begin
      cyc.cycle = simple_tr::WRITE; =;
   end
   else begin
      cyc.cycle = simple_tr::READ;
   end

   if (tr.stream_id != 1) begin
      sem.get(1); // regular register accesses get blocked here while a register access from an ISR is processed
      this.bfm.to_ahb.put(cyc);
      sem.put(1);
   end
   else begin
      if (this.bfm.to_ahb.is_full()) begin
         sem.get(1); // the ISR access takes the key, blocking regular accesses
         this.bfm.to_ahb.activate(cyc_active); // removes the existing transaction in the channel's active slot so that the current access can be pushed through
         this.bfm.to_ahb.put(cyc); // the ISR access goes through first
         this.bfm.to_ahb.put(cyc_active); // restores the original transaction back into the active slot
         cyc = cyc_active;
         sem.put(1); // the ISR access puts back the key for normal accesses to resume
      end
      else this.bfm.to_ahb.put(cyc);
   end

   // Send the result back to the RAL
   if (tr.kind == vmm_rw::READ) =;

endtask: execute_single

Posted in Register Abstraction Model with RAL, VMM infrastructure | Comments Off

Using VMM template Generator to ramp up your testbench development

Posted by Amit Sharma on 25th October 2010

Amit Sharma, Synopsys
'vmmgen', the template generator for creating robust, extensible VMM-compliant environments, has been available for a long time with VMM, and it was upgraded significantly in VMM 1.2. Though the primary purpose of 'vmmgen' is to help minimize the VIP and environment development cycle by providing detailed templates for VMM-compliant verification environments, a lot of folks also use it to quickly understand how different VMM base classes can be used in different contexts, since the templates use a rich set of the latest VMM features to ensure the appropriate base classes and their features are picked up optimally.

Given its interactive mechanism, which presents the available features and options, users can pick the modes most relevant to their requirements. It also gives them the option to provide their own templates, offering a rich layer of customization. Based on need, one can generate individual templates for different verification components, or generate a complete verification environment that comes with a Makefile and an intuitive directory structure, propelling them on their way to catching the first set of bugs in their DUTs. I am sure all of you know where to pick up 'vmmgen' from: it is available in the <VMM_HOME>/Shared/bin area or in $VCS_HOME/bin.

Some of the rich set of features available now include:

• Template Options:

– Complete environment generation

– Individual templates generation

• Options to create Explicitly phased environments or Implicitly phased environment or to mix Implicitly phased components and Explicitly phased components

• Usage of VMM Shorthand macros

• Creating RAL based environment, and providing Multiplexed domain support if required

• Hooking up VMM Performance Analyzer at the appropriate interfaces

• Hooking up the DS Scoreboard at the relevant interfaces (with options to choose from a range of integration styles, e.g. through callbacks, through TLM 2.0 analysis ports, or directly to transactors, channels or notifications)

• Ability to hook up different generators (atomic, scenario, Multistream generators) at different interfaces

• Creating a scenario library and Multistream scenario creation

• Multi-driver generator support for different kinds of transactions in the same environment

• Factory support for transactions, scenarios and multi stream scenarios. Sample factory testcase which can explain the usage of transaction override from a testcase.

• ‘RTL config’ support for drivers and receivers.

• Various types of unidirectional and bi-directional TLM connections between generator and driver.

• Analysis ports/exports OR parameterized notify observers to broadcast the information from monitor to scoreboard and coverage collectors.

• Multi test concatenation support and management to run the tests

• Creating portable Interface wrapper object, and setting up interface connections to the testbench components using vmm_opts::set_object/get_object_obj

• Creating a Generic slave component

• Option to use default names or user provided names for different components

As you can see, the above list is itself quite comprehensive, and let me tell you that it is not exhaustive, as there are many more features in vmmgen.

With respect to usage as well, there are multiple flavors. In the default mode, the user is taken through multiple detailed choices/options while creating and connecting the different components of the verification environment. However, some folks might want to use 'vmmgen' within their own wrapper script/environment, and for them there are options to generate the environments by providing all required options on the command line or through a configuration file. Some of these switches include:

-SE [y/n] : Generates a complete environment with sub-environments

-RAL [y/n] : Create RAL based verification environments

-RTL [y/n] : Generates RTL configuration for the environment

-ENV <name>, -TR <name> : Provide the names for the environment class and transaction classes. Multiple transaction class names can be provided as well:

vmmgen -l sv -TR tr1+tr2

-cfg_file <file_name> : Option to provide a configuration file for the options

There is an option to generate an environment quickly by taking the user through the minimum number of questions (-q).

Additionally, the user can provide his or her own templates through the –L <template directory> option.

As far as individual template generation goes, you have the complete list. Here, I am outlining it for reference:


I am sure a lot of you have already been using ‘vmmgen’. For those, who haven’t, I encourage you to try out the different options with it. I am sure you will find this immensely useful and it will not only help you create verification components and environments quickly but will also make sure they are optimal and appropriate based on your requirements.

Posted in Automation, Coding Style, Customization, Modeling Transactions, Organization, Performance Analyzer, Scoreboarding, Tools & 3rd Party interfaces, VMM infrastructure | Comments Off

Setting data members for factory-created objects

Posted by Avinash Agrawal on 11th October 2010

Avinash Agrawal, Corporate Applications, Synopsys

One question often asked by verification engineers is: given that we are using the VMM factory, how do we assign the data members of an object created by the VMM class factory service when calling the create_instance() method?

For example, the create_instance() method of the class factory service always calls
the class constructor with default arguments. Therefore, it would appear that extra
arguments cannot be passed to the class instance, i.e., there is no way to pass
additional arguments for class members with the following statement:

  tr = ahb_trans::create_instance(this, "Ahb_Tr0", `__FILE__, `__LINE__);

The answer is that while you cannot use the create_instance() function to initialize
data members, you can use either of the following methods to deal with this assignment:
1. Use a virtual set_* function/task in the base class and the derived class, with the
   values passed through the argument list of the set_* function/task. The limitation
   is that the set_* arguments for the base class and the derived class must be the same.

2. Use a vmm_opts::set_* and vmm_opts::get_* combination to set the values of the
   class properties.


1. By using set_* to pass arguments as described in point 1:

//Source code
`define VMM_12
program P;
`include ""

// Define base class
class ahb_trans extends vmm_data;
  rand int addr;
  static vmm_log log = new("ahb_trans", "object");

    `vmm_data_member_scalar(addr,	DO_ALL)

  virtual function void set_data(int addr=0, int data=0);	// <-- Two arguments
    this.addr = addr;

  virtual function void display_data();
    `vmm_note(log, $psprintf("ahb_trans.addr=='h%0h", this.addr));


// Define derived class
class my_ahb_trans extends ahb_trans;
  rand int data;

  static vmm_log log = new("my_ahb_trans", "object");

    `vmm_data_member_scalar(data,		DO_ALL)

  virtual function void set_data(int addr=0, int data=0);	// <-- Two arguments
    this.addr = addr; = data;

  virtual function void display_data();
    `vmm_note(log, $psprintf("my_ahb_trans.addr=='h%0h,'h%0h", this.addr,;


class env extends vmm_env;

   ahb_trans tr;

   function new;"ENV");

   function void build;;

     ahb_trans::override_with_new("@%*", my_ahb_trans::this_type, log, `__FILE__, `__LINE__);
     tr = ahb_trans::create_instance(this, "Ahb_Tr0", `__FILE__, `__LINE__); 

     if(!(tr.get_typename == "class P.my_ahb_trans"))

     tr.set_data(32'h5555_5555, 32'haaaa_aaaa);	// <-- Call set_* after create_instance



initial begin
  env env = new();;


//Simulation Result
Normal[NOTE] on my_ahb_trans(object) at                    0:
Simulation PASSED on /./ (/./) at                    0 (0 warnings, 0 demoted errors & 0 demoted warnings)

2. By using vmm_opts::set_* and vmm_opts::get_* combination as described in point 2:

//Source code
`define VMM_12
program P;
`include ""

// Define base class
class ahb_trans extends vmm_data;
  rand int addr;
  static vmm_log log = new("ahb_trans", "object");

    `vmm_data_member_scalar(addr,	DO_ALL)

  virtual function void set_data();
    this.addr = vmm_opts::get_int ("ADDR", 0, "Value set for addr");

  virtual function void display_data();	//<-- no arguments passed, so there is no limitation on number of arguments
    `vmm_note(log, $psprintf("ahb_trans.addr=='h%0h", this.addr));


// Define derived class
class my_ahb_trans extends ahb_trans;
  rand int data;

  static vmm_log log = new("my_ahb_trans", "object");

    `vmm_data_member_scalar(data,		DO_ALL)

  virtual function void set_data();	//<-- no arguments passed, so there is no limitation on number of arguments
    this.addr = vmm_opts::get_int ("ADDR", 0, "Value set for addr"); = vmm_opts::get_int ("DATA", 0, "Value set for data");

  virtual function void display_data();
    `vmm_note(log, $psprintf("my_ahb_trans.addr=='h%0h,'h%0h", this.addr,;


class env extends vmm_env;

   ahb_trans tr;

   function new;"ENV");

   function void build;;

     vmm_opts::set_int("ADDR", 32'h55,null);	//	<-- Call vmm_opts::set_*
     vmm_opts::set_int("DATA", 32'haa,null);

     ahb_trans::override_with_new("@%*", my_ahb_trans::this_type, log, `__FILE__, `__LINE__);
     tr = ahb_trans::create_instance(this, "Ahb_Tr0", `__FILE__, `__LINE__); 

     if(!(tr.get_typename == "class P.my_ahb_trans"))




initial begin
  env env = new();;


//Simulation Result
Normal[NOTE] on my_ahb_trans(object) at                    0:
Simulation PASSED on /./ (/./) at                    0 (0 warnings, 0 demoted errors & 0 demoted warnings)

Posted in VMM, VMM infrastructure | Comments Off

Performance verification of a complex bus arbiter using the VMM Performance Analyzer

Posted by Shankar Hemmady on 20th September 2010

Performance verification of system bus fabrics is an increasingly complex problem. An article in EE Times by Kelly Larson, John Dickol and Kari O'Brien of MediaTek Wireless describes how they used the VMM Performance Analyzer to complete performance validation for an AXI bus arbiter.

Posted in Performance Analyzer, VMM infrastructure | Comments Off

Required and Provided Interfaces in VMM 1.2

Posted by John Aynsley on 14th May 2010

John Aynsley, CTO, Doulos

Before diving into more technical detail concerning VMM 1.2, let’s take some time to review a basic concept of transaction-level communication that often causes confusion, particularly for people more familiar with HDLs like Verilog and VHDL than with object-oriented software programming. This is the idea of the transaction-level interface.

A transaction-level interface is a software interface that permits software components to communicate using a specific set of function calls (also known as method calls). In the case of VMM, the software components in question are VMM transactors, and the function calls are the VMM TLM methods such as b_transport, introduced in previous posts on this blog. Such transaction-level interfaces are often depicted diagrammatically as shown here:


Ports and exports are depicted as if they were pins on the periphery of a component, which is accurate in a metaphorical sense, but misleading if taken too literally. A port is a structured way of representing the fact that the Producer transactor above makes a call to a specific function, and thus requires an implementation of that function in order to compile and run. On the other side, an export is a structured way of representing the fact that the Consumer transactor provides an implementation of a specific function. So although the diagram may appear to show two components with a structural connection between them, it actually shows the Producer making a call to a function implemented by the Consumer. What may appear to be a hardware connection turns out to be an object-oriented software dependency between Producer and Consumer.

When it comes to combining multiple transactors, the types of the transaction-level interfaces have to be respected. The declarations of ports and exports are each parameterized with the type of the transaction object to be passed as a function argument:

vmm_tlm_b_transport_port #(Producer, transaction) port;

vmm_tlm_b_transport_export #(Consumer, transaction) export;

The port, which requires a transaction-level interface of a given type, must be bound to an export that provides an interface of the same type. The type in question is provided by the second parameter transaction. The tlm_bind method effectively seals a contract between the transactor that requires the interface and the provider of the interface:

producer.port.tlm_bind( consumer.export );

One benefit of transaction-level interfaces is that this connection is strongly typed, so the SystemVerilog compiler will catch any mismatch between the types of the port and the export.
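A minimal sketch of the two sides is shown below. The transaction class and the exact b_transport argument list are simplified here for illustration; consult the VMM 1.2 TLM documentation for the full prototypes:

```systemverilog
class Producer extends vmm_xactor;
   vmm_tlm_b_transport_port #(Producer, transaction) port; // requires the interface

   virtual task main();
      transaction tx = new();
      int delay = 0;;
      port.b_transport(tx, delay); // the call lands in whoever implements the export
   endtask
endclass

class Consumer extends vmm_xactor;
   vmm_tlm_b_transport_export #(Consumer, transaction) export; // provides the interface

   virtual task b_transport(int id, transaction trans, int delay);
      // consume the transaction here
   endtask
endclass

// In the parent environment:
// producer.port.tlm_bind( consumer.export );
```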

As well as binding a port to an export peer-to-peer, it is also possible to bind chains of ports or exports going up or down the component hierarchy, as shown diagrammatically below:


Child-to-parent port bindings carry the function call up through the component hierarchy to the left, while parent-to-child export bindings carry the function call down through the component hierarchy to the right. A port-to-export binding is only permitted at the top level.

At run-time, a method call to the appropriate function is made through the child port:

port.b_transport(tx, delay);

This will result in the corresponding function implementation being called directly, with no intervening channel to store the transaction en route. Transaction-level interfaces are fast, robust, and simple to use, which is why they have been incorporated into VMM.

Posted in Communication, Transaction Level Modeling (TLM), VMM infrastructure | Comments Off

Changing Functionality: The Factory Service #3

Posted by JL Gray on 21st April 2010

by Jason Sprott

Jason Sprott is CTO at Verilab.

In Factory Service #1 and in Factory Service #2 we discussed what a Factory is and what it can do for us. In this post we’ll take a look at the two general types of objects we might want to replace using the Factory. Typically objects in testbenches fall into two categories:

  • Dynamic: These are objects that get created and garbage collected spuriously during normal operation. An example might be a randomized transaction generated by a scenario, or even the scenario itself.
  • Structural: These objects get created once at the beginning and live throughout the simulation. An example might be vmm_xactor or vmm_group.

The location within the VMM phasing scheme where the factory override is performed differs between the two. A factory override has to be done before any of the objects affected by the override are instantiated (using <class>::create_instance()). Structural components in the testbench are typically instantiated earlier than dynamic ones.

In the case of overriding dynamic objects, we need to perform the override before the test starts running the simulation. A good place to perform dynamic object overrides is the vmm_test::configure_test_ph() method. This is executed in the test phase (see the "Constructing and Controlling Environments > Composing Implicitly Phased Environments/Sub-Environments" section of the VMM User Guide), before test execution is started by the vmm_test::start_of_sim_ph() method, so no dynamic objects are likely to have been created yet. The following example shows where we override a transaction in a testcase:

class my_dynamic_factory_test extends vmm_test;

  virtual function void configure_test_ph();
    my_trans::override_with_new("@%*", my_big_trans::this_type(),
                                log, `__FILE__, `__LINE__); // transaction class names are illustrative
  endfunction
endclass
Structural object overrides, on the other hand, potentially give us a bit of a problem, because these objects are instantiated in the pre-test "build" phase. Performing the factory override in vmm_test::configure_test_ph() would be too late, as it happens after the objects have already been instantiated. Instead, structural object factory overrides must be performed in vmm_test::set_config(), which is executed before the Pre-Test timeline, as long as test concatenation is not being done. The following is an example where we override a VIP (encapsulated in a vmm_group) in the testcase:

class my_structural_factory_test extends vmm_test;

  virtual function void set_config();
    my_vip::override_with_new("@%*", my_better_vip::this_type(),
                              log, `__FILE__, `__LINE__); // VIP class names are illustrative
  endfunction
endclass
The figure below illustrates the phases where overrides should be implemented, with respect to the associated objects types:


Another thing to be aware of when using a factory is test concatenation. In general, if you want a test to be concatenable, it is not a good idea to change components in the environment using factory overrides. Reprogramming the environment, for either structural or dynamic objects, will affect subsequent concatenated tests. If the changes can be undone, for example in the cleanup phase of the test, it may be OK, but changes to structural objects are difficult to undo. Any change to the environment may affect the validity of a test. For this reason, VMM will not execute vmm_test::set_config() if test concatenation is being performed. That might not be enough, though: for reuse, test concatenation is an important point that test developers and users of the tests need to be aware of. Factory overrides incompatible with test concatenation will not necessarily cause a noticeable side effect or failure, so misuse could go unnoticed. This is not a problem with the Factory; it is just a consideration when using factory overrides with concatenation of tests.

Posted in Configuration, Reuse, VMM infrastructure | Comments Off

Changing Functionality: The Factory Service #2

Posted by JL Gray on 19th April 2010

by Jason Sprott

Jason Sprott is CTO at Verilab

In a previous post we took a look at what the VMM Factory is and what it can do for us. In this post, we take a look at the problem the Factory solves, so we can spot situations where a Factory might be appropriate. Let's start by looking at how we might do things in SystemVerilog without using VMM Factories. Typically we might instantiate a class of type Foo like this:

Foo my_foo;

my_foo = new(…);

The problem with that is that my_foo will always create an object of type Foo, no matter what; we can't change that. We might use the class Foo in lots of places around our testbench code. If we want to replace Foo with a new class that adds more variables, or replaces a method, we could have a problem: we would have to go around our testbench looking for everywhere Foo is used, to see if there was a way it could be replaced. Anywhere Foo is constructed using new(), a replacement is highly likely to involve modifying the original code. This is very undesirable, especially if the code is part of a tested IP library. In general, modifying the original source is a Bad Thing.

In VMM we can avoid this problem using a different way to create an instance of the class with the factory. It would look something like this:

Foo my_foo;

my_foo = Foo::create_instance(…);

Although this looks similar to an instantiation with new(), something much more sophisticated is at play. If we factory-enable the class Foo, we never call new() to instantiate it again. We now call the Factory's static method for generating instances of the object, create_instance(). Since the method is static, it can be called without having an instance of Foo, and can therefore be used to create one. What's the point?

Now that we’ve encapsulated the task of creating an instance in a method, we can change what that method does. We can tell create_instance() to return something different. This might be a new type (with some modifications), derived from Foo, or the same type populated with different values for the variables. What’s more, we can do this easily anywhere Foo is used in the code. We can pick specific instances to be replaced, or replace multiple instances globally.

The two methods used to reprogram the factory are:
•    override_with_new() – tells the factory, when creating new instances, to return a brand new instance of the type specified.
•    override_with_copy() – tells the factory, when creating new instances, to return a copy of the instance specified.

This replacement can be done anywhere in the code, as long as it happens before the instances you want to replace are created. Let’s say a test needs to replace our type Foo, with a new class, FooWithAttitude, derived from Foo. Here’s what that might look like:

class my_factory_test extends vmm_test;

  virtual function void configure_test_ph();
    Foo::override_with_new(          // Foo's factory is being overridden
      "@%*",                         // instances matching this pattern will be replaced
      FooWithAttitude::this_type(),  // they will be replaced with this type of class
      log, `__FILE__, `__LINE__);    // some generic log and debug arguments
  endfunction
endclass

As can be seen, the factory override uses pattern matching to specify which instances will be targeted. The syntax is expressive enough to identify single instances, multiple instances (e.g. by hierarchy), or all instances (as in this example). More information on the pattern-matching syntax can be found in the "Common Infrastructure and Services > Simple Match Patterns" section of the VMM User Guide.

This next example shows how the factory can be programmed to return a copy of the class we’ve modified with some values.

class test_read_back2back extends vmm_test;

  virtual function void configure_test_ph();
    FooBusTrans tr = new();          // create a template for the override copy
    tr.address = 'habcd_1234;        // special value we want to set up for the override
    tr.address.rand_mode(0);         // you might want to protect the value during randomization
    FooBusTrans::override_with_copy( // the factory will return copies of tr
      "@top:foobus0:*",              // instances matching this pattern will be replaced
      tr,                            // they will be replaced by a copy of this instance
      log, `__FILE__, `__LINE__);    // some generic log and debug arguments
  endfunction
endclass

The above example shows a Factory override in the configure test phase of the simulation timeline. Exactly when a Factory replacement is done is quite important. As previously mentioned, the replacement has to be done before any instances of the class have been created. This depends on the type of class being replaced. For example, dynamic objects (such as transactions), created many times during normal operation of the testbench, are likely to be created after the test has started. However, structural objects (such as instances of a VIP), are likely to be created once when the testbench is built. The details of where to put Factory overrides are covered in another post.

Posted in Configuration, Reuse, VMM infrastructure | Comments Off

Changing Functionality: The Factory Service #1

Posted by JL Gray on 16th April 2010

by Jason Sprott

Jason Sprott is CTO at Verilab.

Building a testbench using SystemVerilog, an object-oriented testbench language, does not automatically make the end solution reusable, or easily extensible. In SystemVerilog we can certainly implement the kind of object-oriented principles and design patterns that enable reuse, but this requires significant programming skills and experience in understanding the requirements of building substantial reusable software solutions. Also, when users inevitably need to change the original functionality, the mechanisms in place to allow those changes would have to be well documented and understood.

Two common changes we might need to make are:
•    Swap one type of class with another (which may replace or add new constraints, variables and methods).
•    Add new functionality to an existing method without replacing it

Fortunately the VMM provides some standard solutions for handling these types of changes. This post takes a look at how the Factory Service helps with the first of the two requirements, swapping classes.

There are many cases where we might want to swap one class for another in a testbench. For example, a typical requirement in constrained random testbenches is to replace one randomizable class with a derived version, adding or modifying constraints. We might also have various derived versions of a VIP, supporting slightly different versions of a protocol, or injecting different types of errors. When we first develop some VIP, or a testbench, it’s almost impossible to predict what people will want and need from our solution in the future. We can however provide users with a standard way to replace our implementation with something slightly different. This could be a derived version (an extension with additions or modifications) of what we originally implemented, or a copy with modified values.
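For instance, a derived randomizable class that adds constraints might look like this (a sketch; the class and field names are hypothetical, and the base class is assumed to use the factory macro):

```systemverilog
class FooWithAttitude extends Foo;
  // tighten randomization relative to the base class
  // (assumes Foo declares a rand variable called length)
  constraint c_short_bursts { length inside {[1:4]}; }
  `vmm_class_factory(FooWithAttitude)
endclass
```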

Although this sounds quite trivial, testbenches that do not have this capability can be very hard to change without modifying the original source. Changing an original implementation to add new functionality is undesirable and sometimes impossible. Access to original source code is quite often restricted, or at least controlled. The original code may be well tested and proven, and changing the original source could affect other users of the code. Testbench components implementing the Factory are easily replaced at runtime without modifying the original source. Such modification can even be done on a per test basis if required.

However, we don’t get a Factory for free; there’s a bit of up-front thought required. We have to care enough about this capability to implement a Factory in the first place. This seems obvious, but I often come across a class that was in dire need of a factory that wasn’t there, simply because the original developer didn’t think to put one in. I try not to judge too harshly, because we’ve all developed code that didn’t quite meet downstream requirements at some point in time. There’s also a bit of effort (a small amount of additional coding) required. So why should we bother?

The VMM Factory Service (Factory) is a standard mechanism for changing functionality by replacing classes. VMM has always recommended the use of Factories, but The Factory Service was introduced in VMM 1.2 to ease implementation. The Factory provides the following API and utilities:
•    Short-hand macros to make implementing a Factory simple. The macros implement the Factory API methods for a given class.
•    API method to create instances of the class – replacing the use of the new() constructor method.
•    API method to change the instance generated by the Factory to a new derived type.
•    API method to change the instance generated by the Factory to a copy of a class of the same type.
•    Specific and regular-expression-based selection of component factories to override.

As an example, imagine a Bus VIP. The VIP has quite a lot going on inside. It has a master, a slave, some monitors, coverage and checking. All use a particular transaction type. So if we wanted to replace that transaction with a derived type, to maybe inject some errors, it would affect multiple components in the VIP. If any of the components using the transaction create an instance using new(), there’s a pretty good chance we cannot replace the transaction type without modifying that source. This may not be an option, or at the very least not desirable.

If we use the Factory, we can not only do the replacement, we can choose which instances of the VIP in a testbench we want to perform the replacement on. Maybe not all nodes on the bus have to inject errors. In our test, or new testbench environment, we might decide to say something like:
•    When creating new classes of type Foo, instead instantiate my new class FooWithAttitude. Using the Factory in VMM that might look like:
Foo::override_with_new(         // Foo's factory is being overridden
"@%*",                        // match all instances
FooWithAttitude::this_type(), // they will be replaced with this type of class
log, `__FILE__, `__LINE__);   // some generic log and debug stuff

Or, maybe we want to make sure some variables in a configuration object are replaced with something specific. We can make sure a copy of a class is instantiated, with some specific values set. We might decide to say something like:
•    When creating new classes of type Foo, instead instantiate my copy of that class (in this case we have created our own instance, tr), with some values set and randomization turned off:

Foo::override_with_copy(      // Foo's factory is being overridden
"@top:foobus0:*",           // instances matching this pattern will be replaced
tr,                         // they will be replaced by a copy of this instance
log, `__FILE__, `__LINE__); // some generic log and debug stuff

To understand when we should use a Factory, we really need to understand a bit more about the problem it solves. We’ll discuss this in another post.

Posted in Configuration, Reuse, VMM infrastructure | Comments Off

Analysis Ports in VMM 1.2

Posted by John Aynsley on 3rd March 2010


John Aynsley, CTO, Doulos

Analysis ports are another feature from the SystemC TLM-2.0 standard that has been incorporated into VMM 1.2. Analysis ports provide a mechanism for distributing transactions to passive components in a verification environment, such as checkers and scoreboards.

Analysis ports and exports are a variant on the TLM ports and exports that I have discussed in previous blog posts. The main difference between analysis ports and regular ports is that a single analysis port can be bound to multiple exports, in which case the same transaction is sent to each and every export or “subscriber” or “observer” connected to the analysis port. The terms subscriber and observer are used interchangeably in the VMM documentation.

Let us take a look at an example:

class my_tx extends vmm_data;  // User-defined transaction class

class transactor extends vmm_xactor;
vmm_tlm_analysis_port #(transactor, my_tx) m_ap;   // The analysis port

virtual task main;
my_tx tx;

m_ap.write(tx);   // broadcast the transaction to all registered observers
endtask

The transactor above sends a transaction tx out through an analysis port m_ap.

The type of the analysis port is parameterized with the type of the transactor and of the transaction my_tx. The call to write sends the transaction to any object that has registered itself with the analysis port. There could be zero, one, or many such observers registered with the analysis port.

To continue the example, let us look at one observer:

class observer extends vmm_object;
vmm_tlm_analysis_export #(observer, my_tx) m_export;
function new (string inst, vmm_object parent = null);

m_export = new(this, "m_export");
endfunction

function void write(int id, my_tx tx);
The observer has an instance of an analysis export and must implement the write method that the export will provide to the transactors. Note that the observer extends vmm_object. Since an observer is passive, it need not extend vmm_xactor.

The analysis port may be bound to any number of observers in the surrounding environment:

class tb_env extends vmm_group;
transactor  m_transactor;
observer    m_observer_1;
another     m_observer_2;
yet_another m_observer_3;
virtual function void build_ph;
m_transactor = new( "m_transactor", this );
m_observer_1 = new( "m_observer_1", this );
m_observer_2 = new( "m_observer_2", this );
m_observer_3 = new( "m_observer_3", this );
endfunction

virtual function void connect_ph;
m_transactor.m_ap.tlm_bind( m_observer_1.m_export );
m_transactor.m_ap.tlm_bind( m_observer_2.m_export );
m_transactor.m_ap.tlm_bind( m_observer_3.m_export );
endfunction
endclass

Note the use of the predefined phase methods from VMM 1.2. Transactors are created during the build phase, and ports are connected during the connect phase.

Finally, let us compare analysis ports with VMM callbacks:

`vmm_callback(callback_facade, write(tx));

The effect is very similar, but there are differences. Unlike VMM callbacks, the name of the method called through an analysis port is fixed at write. A VMM callback method is permitted to modify the transaction object, whereas a transaction sent through an analysis port cannot be modified. When multiple callbacks are registered, the prepend_callback and append_callback methods allow you to determine the order in which the callbacks are made, whereas you have no control over the order in which write is called for multiple observers bound to an analysis port. Because of these differences, only VMM callbacks are appropriate for modifying the behavior of transactors. Analysis ports are only appropriate for sending transactions to passive components that will not attempt to modify the transaction object. On the other hand, that in itself is the feature and strength of analysis ports; they are only for analysis.

It can make sense to combine a VMM callback with an analysis port in the same transactor, using the callback to inject an error and the analysis port to send the modified transaction to a scoreboard, for example:

`vmm_callback(callback_facade, inject_error(tx));
m_ap.write(tx);

In this situation, the VMM recommendation is to make the analysis call after the callback, as shown here.

Posted in Communication, Reuse, SystemC/C/C++, Transaction Level Modeling (TLM), VMM infrastructure | Comments Off

Handling Incoming Transactions from Multiple Sources in VMM 1.2

Posted by John Aynsley on 16th February 2010

John Aynsley, CTO, Doulos

In the previous post I described TLM ports and exports from VMM 1.2. In this post, we will look at how to handle incoming transactions from multiple sources, that is, multiple producers connected to a single consumer. VMM 1.2 provides two separate mechanisms to handle this situation: peer ids, and shorthand macros. We will explore what these mechanisms have in common, and also the differences between them.

We are discussing the following situation, where two separate producer instances send transactions to a single consumer:

class producer extends vmm_xactor;

vmm_tlm_b_transport_port #(producer, my_tx) m_port;

m_port.b_transport(tx, delay);

class consumer extends vmm_xactor;

vmm_tlm_b_transport_export #(consumer, my_tx) m_export;

function new (string inst, vmm_object parent = null);
super.new("consumer", inst, -1, parent);
m_export = new(this, "m_export", 2); // 3rd argument = max # bindings
endfunction

function void start_of_sim_ph;
`vmm_note(log, $psprintf("Number of peers = %d", m_export.get_n_peers()));
endfunction

task b_transport(int id = -1,
my_tx trans, ref int delay);

class my_env extends vmm_group;

producer m_producer_1;
producer m_producer_2;
consumer m_consumer;

virtual function void connect_ph;
m_producer_1.m_port.tlm_bind( m_consumer.m_export, 0 ); // 2nd argument = id
m_producer_2.m_port.tlm_bind( m_consumer.m_export, 1 );

The first thing to notice is the connect_ph method of the environment, which binds two separate ports to the same export. The tlm_bind method takes a second argument, the peer id, which allows transactions from the two ports to be distinguished.

The second thing to notice is that when the export is instantiated, the constructor new takes a third argument that specifies the maximum number of bindings to this export. The default value of 1 would be inadequate in this case, since the export is bound twice.

Thirdly, the method get_n_peers called from start_of_sim_ph returns the number of peers, which would be 2 in this case.

Finally, the first argument to the b_transport method implemented in the consumer is the peer id passed to the tlm_bind method. The implementation of b_transport can now use the peer id to distinguish between transactions from the two producers.
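A consumer-side implementation might dispatch on the peer id like this (a sketch; the processing bodies are placeholders):

```systemverilog
task consumer::b_transport(int id = -1, my_tx trans, ref int delay);
  case (id)
    0: begin /* transaction arrived from m_producer_1 */ end
    1: begin /* transaction arrived from m_producer_2 */ end
    default: `vmm_error(log, "Unexpected peer id");
  endcase
endtask
```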

So much for peer ids. Now let us take a look at the alternative, that is, shorthand macros. Instead of binding two ports to a single export, we could have used the shorthand macros to create two separate exports:

class consumer extends vmm_xactor;

`vmm_tlm_b_transport_export(_1) // Argument is suffix to name
`vmm_tlm_b_transport_export(_2)
vmm_tlm_b_transport_export_1 #(consumer, my_tx) m_export_1;
vmm_tlm_b_transport_export_2 #(consumer, my_tx) m_export_2;

task b_transport_1(int id = -1,
my_tx trans, ref int delay);

task b_transport_2(int id = -1,
my_tx trans, ref int delay);

The argument passed to the macro is used as the suffix for a new type name and a new method name. Those new types are then used to create two separate exports, and the consumer contains two separate and differently named implementations of the b_transport method, one for each export. It is good practice to use the same suffix when naming the export members themselves (e.g. m_export_1), though this is not strictly necessary. Since peer ids are not being used, the id argument to b_transport will have the value 0 for both methods.

As usual, ports are bound to exports in the surrounding environment, but this time using separate exports rather than peer ids:

class my_env extends vmm_group;

producer m_producer_1;
producer m_producer_2;
consumer m_consumer;

virtual function void connect_ph;
m_producer_1.m_port.tlm_bind( m_consumer.m_export_1 );
m_producer_2.m_port.tlm_bind( m_consumer.m_export_2 );

In conclusion, we have seen peer ids and shorthand macros used to accomplish the same thing, that is, multiple producers sending transactions to a single consumer. With peer ids we instantiate a single export and provide a single b_transport method, distinguishing between the incoming transactions using the peer id argument. With shorthand macros we instantiate two exports and provide two implementations of b_transport, distinguished by the suffix to their names.

Posted in Communication, Reuse, Transaction Level Modeling (TLM), VMM infrastructure | Comments Off

Connecting Multiple Analysis Ports to a Single Analysis Export

Posted by JL Gray on 9th February 2010

Today’s post was written by my colleague Asif Jafri. Enjoy! JL

by Asif Jafri

Asif Jafri is a verification engineer at Verilab.

This post introduces the VMM implementation of the Transaction Level Modeling (TLM) 2.0 specification, showing how you can connect multiple broadcasting ports to the same receiving export using peer IDs. Figure 1 shows multiple initiators communicating with the same target. The initiators can be monitors on either side of your DUT passing transactions to a single scoreboard, which keeps track of the transactions and performs various checks. In TLM 2.0, message broadcast is accomplished through write function calls from the initiator which are then implemented in the target.

Figure 1: Connecting using ID


Posted in Communication, Reuse, Transaction Level Modeling (TLM), VMM, VMM infrastructure | Comments Off

You get real hierarchy with VMM1.2

Posted by Wei-Hua Han on 9th February 2010

If you look at the VMM1.2 classes, you may find that almost all new() functions have an argument, vmm_object parent. The purpose of this argument is to build a parent-child hierarchy within a VMM1.2 based environment, so that VMM1.2 can provide an infrastructure where users can access the components inside the environment through hierarchical paths and names. This parent-child hierarchy also contributes to the implicit phasing implementation.

Here is a small example to illustrate how a hierarchy can be built with VMM1.2:

  1. class mike_c extends vmm_object;
  2. function new(vmm_object parent=null, string name="");
  3. super.new(parent, name);
  4. endfunction
  5. endclass
  6. class ben_c extends vmm_object;
  7. function new(vmm_object parent=null, string name="");
  8. super.new(parent, name);
  9. endfunction
  10. endclass
  11. class jason_c extends vmm_object;
  12. mike_c Mike;
  13. ben_c Ben;
  14. int weight;
  15. function new(vmm_object parent=null, string name="");
  16. bit is_set;
  17. super.new(parent, name);
  18. weight=vmm_opts::get_object_int(is_set, this, "weight", 0, "set weight");
  19. endfunction
  20. function void build();
  21. Mike = new(this, "Mike");
  22. Ben = new(this, "Ben");
  23. endfunction
  24. endclass
  25. program p1;
  26. jason_c Jason;
  27. initial begin
  28. vmm_opts::set_int("Jason:weight", 10);
  29. Jason = new(null, "Jason");
  30. Jason.build();
  31. vmm_object::print_hierarchy(Jason);
  32. $display("Jason has %0d children", Jason.get_num_children());
  33. $display(Jason.Mike.get_object_name());
  34. $display(Jason.Ben.get_object_hiername());
  35. $display(Jason.weight);
  36. end
  37. endprogram

In this small example, line 29 creates an object (Jason) of jason_c and its parent is “null”, so Jason is a root component in the hierarchy. When build() is called in line 30, objects Mike and Ben are created and their parent is set to Jason. So in this small system we build the following hierarchy:




Jason
  Mike
  Ben

Jason has 2 children

This hierarchy can be printed by the vmm_object method print_hierarchy().

Please note that unlike Verilog modules and instances, where the hierarchy is defined as per the Verilog LRM, the VMM1.2 parent-child hierarchy is really user defined. It depends on how the “parent” argument is specified when the object is created, not on where the object variable is declared or created.

As for the component name, you may choose a name different from the variable name, but it is good practice to keep them consistent, which makes the code more readable and avoids confusion.

From the above example, you can find that the hierarchical name for object Jason.Ben is “Jason:Ben”. VMM1.2 uses “:” as the hierarchical separator instead of “.”. The reason is that this hierarchical name is actually a made-up name, and we want to differentiate it from the semantic hierarchical reference name specified in Verilog/SystemVerilog which uses “.” as the separator.

There are many methods provided in VMM1.2 which help users to work with the parent-child hierarchy. Some of these methods are:

  • find_child_by_name(): finds the named object relative to this object
  • get_num_children(): gets the total number of children for this object
  • get_nth_child(): returns the nth child of this object
  • get_object_hiername(): gets the complete hierarchical name of this object
  • get_parent_object(): returns the parent of this object
  • get_root_object(): gets the root parent of this object
  • get_typename(): returns the name of the actual type of this object
  • is_parent_of(): returns true, if the specified object is a parent of this object
  • print_hierarchy(): prints the object hierarchy
  • set_parent_object(): sets or replaces the parent of this object

Dr. Ambar Sarkar has explained how users can traverse the hierarchy in his blog post.
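As a small illustration of these methods, a traversal of the Jason example above might be sketched as follows (illustrative code, not from the original post):

```systemverilog
// Walk the immediate children of Jason and report their names and types
vmm_object child;
for (int i = 0; i < Jason.get_num_children(); i++) begin
  child = Jason.get_nth_child(i);
  $display("%s is a %s", child.get_object_hiername(), child.get_typename());
end
```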

This parent-child hierarchical infrastructure is one of the most important mechanisms in VMM1.2. Many other VMM1.2 features rely on this infrastructure:

1.   Implicit phasing

Implicit phasing is new in VMM1.2. In implicit phasing, structural components (transactors) are aligned with each other automatically. The phase specific methods are called automatically throughout the whole hierarchy in a top-down (for functions) or forked (for tasks) mode. Thus implicit phasing makes integration of Verification IPs into the simulation environment or other structural components a lot easier. Other VMM1.2 users also benefit from implicit phasing when building complicated verification environments.

2.   Factory replacement

Factory is an important feature that enables flexibility and reuse inside a verification environment. Because of the parent-child hierarchy, users can replace components, generated transactions or scenarios with their extension type or other objects by specifying hierarchical paths and names. Support for regular expressions for specifying hierarchies and names makes this utility very powerful.

For example, in the following code segment, we override the type mike_c for Mike with mike_ext:
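A sketch of such an override, assuming mike_c and mike_ext are written with the VMM factory macro, might be:

```systemverilog
class mike_ext extends mike_c;
  // extended behavior goes here
  `vmm_class_factory(mike_ext)
endclass

// In the test, before Jason.build() runs:
mike_c::override_with_new("@Jason:Mike",          // target this instance only
                          mike_ext::this_type(),  // replacement type
                          log, `__FILE__, `__LINE__);
```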


3.   Hierarchical configuration

In addition to supporting runtime configuration through command-line options or files, using the parent-child hierarchy VMM1.2 also supports configuration of components by specifying their hierarchical path and name. All these configuration utilities are provided through vmm_opts.

For example, in the following code segment, we set the property weight of object Jason to 10 using hierarchical configuration:

vmm_opts::set_int("Jason:weight", 10);
As with the factory, users can also use regular expressions with hierarchical configuration.

If you have watched “Growing Pains”, you know that I am not quite accurate when I say

Jason has 2 children

He indeed has three…

Have fun with VMM1.2. :-)

Posted in Debug, Reuse, Tutorial, VMM infrastructure | 1 Comment »

Leverage the built-in callback inside vmm_atomic_gen and be productive with DVE features for VMM debug

Posted by Srinivasan Venkataramanan on 7th February 2010

Srinivasan Venkataramanan, CVC Pvt. Ltd.

Rashmi Talanki, Sasken

John Paul Hirudayasamy, Synopsys

During a recent verification environment creation for a customer, we had to tap an additional copy/reference of each generated transaction to another component in the environment without affecting the flow. So one producer gets more than one consumer (here, 2 consumers). As a first-time VMM coder, the customer tried using “vmm_channel::peek” on the channel that was connecting GEN to BFM. Initially it seemed to work, but with some more complex code being added across the 2 consumers of the channel, things started getting funny – for instance, one of the consumers received the same transaction more than once.

The log file looked like:

@ (N-1) ns the transaction was peeked by Master_BFM 0.0.0

@ (N-1) ns the transaction was peeked by Slave_BFM 0.0.0

… (perform the task) …

@ N ns the Master_BFM gets the transaction 0.0.0

@ N ns the transaction was peeked by Slave_BFM 0.0.0

@ N ns the transaction was peeked by Master_BFM 0.0.1

@ N ns the transaction was peeked by Slave_BFM 0.0.1

With a little reasoning from the CVC team, the customer quickly understood the issue to be a classical race condition: two consumers waiting for the same transaction. What are the options? Well, several indeed:

1. Use vmm_channel::tee() (See our VMM Adoption book for an example)

2. Use callbacks – a flexible, robust means to provide extensions for any such future requirements

3. Use vmm_broadcaster

4. Use the new VMM 1.2 Analysis Ports (See a good thread on this: )

The customer liked the callbacks route but was hesitant to move towards the lengthy route of callbacks – for a few reasons (valid for first-timers):

1. Coding callbacks takes more time than a simple chan.peek(), especially writing the facade class and inserting it at the right place

2. She was using the built-in `vmm_atomic_gen macro to create the generator and didn’t know exactly how to add the callbacks there, as it is pre-coded!

Up for review, we discussed the pros and cons of the approaches, and when I mentioned the built-in post_inst_gen callback inside vmm_atomic_gen she got a pleasant surprise – it takes care of 2 of the 4 steps in the typical callback-addition flow recommended by CVC’s popular DR-VMM course.

Step-1: Declaring a facade class with needed tasks/methods

Step-2: Inserting the callback at “strategic” location inside the component (in this case generator)

This leaves only Steps 3 & 4 for the end user – not bad for a robust solution (especially given that Step-4 is little more than a formality of registration). Now that the customer was convinced, it was time to move to the coding desk to get it working. She opened up the VMM source and got trapped in the multitude of `define vmm_atomic_gen_* macros with all those nice looking “\” at the end – thanks to SV’s style of creating macros with arguments. Though powerful, it is not the easiest code to read and decipher – again, for a first-time SV/VMM user.
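For completeness, Steps 3 & 4 for this generator might be sketched as follows (the facade and task names follow the usual `vmm_atomic_gen naming for an icu_xfer transaction; the forwarding body and the generator handle gen are assumptions):

```systemverilog
// Step-3: extend the facade class generated by `vmm_atomic_gen(icu_xfer)
class tap_cb extends icu_xfer_atomic_gen_callbacks;
  virtual task post_inst_gen(icu_xfer_atomic_gen gen, icu_xfer obj,
                             ref bit drop);
    // forward a copy/reference of every generated transaction
    // to the second consumer here
  endtask
endclass

// Step-4: register the callback with the generator instance
tap_cb cb = new();
gen.append_callback(cb);
```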

Now comes the rescue in the form of the well-proven DVE – VCS’s robust GUI front end. Its macro expansion feature, which works as cleanly as it can get, is at times hard to locate. But with our toolsmiths at CVC ready for assistance, it took hardly a few clicks to reveal the magic behind `vmm_atomic_gen(icu_xfer). Here is a first look at the atomic gen code inside DVE.


Once the desired text macro is selected, DVE has a “CSM – Context Sensitive Menu” to expand the macro with arguments. It is “Show → Macro”, as seen below in the screenshot.


With a quick bang on DVE, the Macro expander popped up, revealing the nicely expanded source code – with all class-name arguments substituted – for the actual atomic generator that gets created by the one-liner macro. Also clearly visible were the facade class name and the actual callback task with its full argument list (something that is not obvious from the raw macro definitions).


Now, what’s more – in DVE, you can bind such “nice feature” to a convenient hot-key if you like (say if you intend to use this feature often). Here is the trick:

Add the following to your $HOME/.synopsys_dve_usersetup.tcl

gui_set_hotkey -menu "Scope->Show->Macro" -hot_key "F6"

Now when you select a macro and type “F6”, the macro expands – no rocket science, but a cool, convenient feature indeed!

Voila – we learnt 2 things today. First, the built-in callback inside vmm_atomic_gen can save more than 50% of the coding and can match the (lack of) effort of a simple chan.peek(). Second, DVE’s macro expansion feature makes debugging real fun!

Kudos to VMM and the ever improving DVE!

Posted in Callbacks, Debug, Reuse, Stimulus Generation, VMM, VMM infrastructure | Comments Off