Verification Martial Arts: A Verification Methodology Blog

Archive for the 'Tutorial' Category

Avoiding Redundant Simulation Cycles with your UVM VIP based simulation with a Simple Save-Restore Strategy

Posted by paragg on 6th March 2014

In many verification environments, you reuse the same configuration cycles across different testcases. These cycles might involve writing to and reading from different configuration and status registers, loading program memories, and other similar tasks to set up a DUT for the targeted stimulus. In many of these environments, the time taken by these configuration cycles is very long. There is also a lot of redundancy, as verification engineers have to run the same set of verified configuration cycles for different testcases, leading to a loss in productivity. This is especially true for complex verification environments with multiple interfaces, which require different components to be configured.

The Verilog language provides an option to save the state of the design and the testbench at a particular point in time. You can then restore the simulation to that state and continue from there. This can be done by adding the appropriate built-in system tasks to the Verilog code. VCS provides the same options from the Unified Command-line Interpreter (UCLI).
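As a rough sketch of the in-code style (the availability and exact names of checkpoint tasks such as `$save` vary by simulator and standard revision, and `config_done` is a hypothetical testbench flag, so treat this as illustrative only):

```systemverilog
// Checkpoint the full simulation state once configuration completes;
// config_done is a hypothetical flag set by the testbench after setup.
initial begin
  wait (config_done);
  $save("post_config.chk");  // write the checkpoint; simulation continues
end
```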

However, it is not enough for you to restore simulation from the saved state. For different simulations, you may want to apply different random stimulus to the DUT. In the context of UVM, you would want to run different sequences from a saved state, as shown below.

In the above example, apart from the last step, which varies to a large extent, the rest of the steps need no iteration once established.

Here we explain how to achieve the above strategy with the simple existing UBUS example available in the standard UVM installation. Simple changes are made in the environment to show what needs to be done to bring in this additional capability. Within the existing set of tests, two of them, “test_read_modify_write” and “test_r8_w8_r4_w4”, differ only with respect to the master sequence being executed – i.e. “read_modify_write_seq” and “r8_w8_r4_w4_seq” respectively.

Let’s say that we have a scenario where we want to save a simulation once the reset_phase is done and then start executing different sequences after the reset_phase in the restored simulations. To demonstrate a similar scenario through the UBUS tests, we introduced a delay in the reset_phase of the base test (in a real test, this may correspond to PLL lock, DDR initialization, or basic DUT configuration).

The following snippet shows how the existing tests are modified to bring in the capability of running different tests in different ‘restored’ simulations.

As is evident in the code, we made two major modifications:

  • Shifted the setting of the phase default_sequence from the build phase to the start of the main phase.
  • Got the name of the sequence as an argument from the command line and processed the string appropriately in the code to execute the sequence on the relevant sequencer.
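A minimal sketch of those two changes, assuming a `+SEQ=<name>` plusarg and the master sequencer path from the standard UBUS example (this is illustrative, not the exact code from the post; `uvm_factory::get()` assumes UVM 1.2 — on UVM 1.1 use the global `factory` handle instead):

```systemverilog
// Inside the test class: choose and run the sequence in main_phase,
// so each restored simulation can pick a different one via +SEQ=<name>.
task main_phase(uvm_phase phase);
  string            seq_name;
  uvm_sequence_base seq;
  phase.raise_objection(this);
  if (!$value$plusargs("SEQ=%s", seq_name))
    seq_name = "read_modify_write_seq";  // sensible default
  // Build the sequence by name through the factory and start it on
  // the UBUS master sequencer.
  if (!$cast(seq,
        uvm_factory::get().create_object_by_name(seq_name, "", seq_name)))
    `uvm_fatal("SEQ", {"Unknown sequence: ", seq_name})
  seq.start(ubus_example_tb0.ubus0.masters[0].sequencer);
  phase.drop_objection(this);
endtask
```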

As you can see, the changes are kept to a minimum. With this, the above generic framework is ready to be simulated. In VCS, the save/restore flow can be enabled in one of several ways, for example as follows.
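One possible flow looks like this; treat it as a sketch, since the file names are placeholders, the `+SEQ` plusarg is just however your test selects its sequence, and the exact UCLI syntax should be checked against the VCS documentation for your release:

```
# checkpoint.do -- run past reset/configuration, then checkpoint
run 500ns
save cfg.chk
exit

# Take the checkpoint once:
% ./simv -ucli -i checkpoint.do

# Restore it as many times as needed, each run with a different
# sequence and seed (restore.do contains: restore cfg.chk ; run):
% ./simv -ucli -i restore.do +ntb_random_seed=1 +SEQ=read_modify_write_seq
% ./simv -ucli -i restore.do +ntb_random_seed=2 +SEQ=r8_w8_r4_w4_seq
```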

Thus the above strategy helps in optimal utilization of compute resources with simple changes to your verification flow. I hope this was useful and that you can easily make the changes in your verification environment to adopt this flow and avoid redundant simulation cycles.

Posted in Automation, Coding Style, Configuration, Creating tests, Customization, Optimization/Performance, Organization, Reuse, Stimulus Generation, SystemVerilog, Tutorial, UVM, Uncategorized, Verification Planning & Management | 1 Comment »

SNUG-2012 Verification Round Up – Miscellaneous Topics

Posted by paragg on 29th March 2013

In my final installment of the series of blogs summing up the various SNUG verification papers of 2012, I try to cover the user papers on Design IP/Verification IP and SystemC and SystemVerilog co-simulation. Please find my earlier blogs on the other domains here: SystemVerilog Language, Methodologies & VCS technologies.

The DesignWare core USB3.0 Controller (DWC_usb3) can be configured as a USB3.0 Device Controller. When verifying a system that comprises a DWC_usb3 Device Controller, the verification environment is responsible for bringing up the DWC_usb3 Device Controller to its proper operation mode to communicate with the USB3.0 Host. The paper “Integrating DesignWare USB3.0 Device Controller In a UVM-based Testbench” by Ning Guo of Paradigm Works describes the process of configuring and driving the DWC_usb3 Device Controller in a UVM-based verification environment using the Discovery USB 3.0 Verification IP. This paper describes how the verification environment needs to be created so that it’s highly configurable and reusable.

The AMBA 4 ACE specification enables system-level cache coherency across clusters of multicore processors, such as the ARM Cortex-A15 and Cortex-A7 MPCore™ processors. This ensures optimum performance and power efficiency of complex SoC designs. However, the design complexity associated with these capabilities is also higher, and it throws up new verification challenges. In the paper “Creating AMBA4 ACE Test Environment With Discovery VIP”, Whitney Huang and Sean Chou of MediaTek Inc. demonstrate how to tackle complex verification challenges and increase verification productivity by using Synopsys Discovery AMBA ACE VIP.

The paper “Verification Methodology of Dual NIC SOC Using VIPs” by A.V. Anil Kumar, Mrinal Sarmah, and Sunita Jain of Xilinx India Technology Services Pvt. Ltd. talks about how various features of Synopsys PCIe and Ethernet Verification IPs can be exploited to help in the efficient verification of the DUT across various traffic configurations. The paper explores how the VIP Application Programming Interfaces (APIs) can be leveraged in the test cases to reach high functional coverage numbers in a very short duration. They also show how a dual-NIC verification environment can effectively use Ethernet VIP APIs to test various Media Access Control (MAC) features. Finally, they conclude with how the implementation can be used across future revisions of their design.

The ability to analyze the performance of an SoC at an early stage of the design can make a significant difference to the end product, leading to a more accurate and earlier estimate of the expected performance. Dayananda Yaraganalu Sadashivappa, Igal Mariasin, and Jayaprakash Naradasi of SanDisk India Device Design Centre Pvt. Ltd., in the paper “Generic MLM environment for SoC Performance Enhancement”, outline the solution that was found by using the Synopsys VIP models. The VIPs were used in conjunction with the interconnect, which in this case is a Multi-Layer-Matrix (MLM). The environment was built leveraging the VMM base classes. The VMM multiple-stream scenario (vmm_ms_scenario) base class was used to create the traffic across the matrix, and the performance meters were constructed using the base classes. Callbacks were leveraged appropriately to help in collating the statistics. Multiple knobs were used to make the environment generic and configurable. The approach helped in finding multiple performance bugs which could not have been easily found using conventional verification.

In the paper “User Experience Verifying Ethernet IP Core”, Puneet Rattia of Altera Corporation presents his experience verifying the Altera® 40-100Gbps Ethernet IP core using a VMM environment while integrating the Ethernet VIP from Synopsys. He explains how he created a full suite of system- and block-level regression tests and then goes on to show how he utilized the coverage mapping capabilities of VCS to merge the results across these various testbenches and produce meaningful reports. Besides showing how to reuse the verification infrastructure at the SoC level, the paper also demonstrates how they went in for horizontal reuse by integrating the reference SystemC-based models developed and prototyped in the early phase of the project.

UVM 1.x includes support for the communication interfaces defined by the SystemC TLM-2.0 standard. This enables integration of SystemC TLM-2.0 IP into a SystemVerilog UVM verification environment. Dr David Long, John Aynsley, and Doug Smith of Doulos, in the paper “A Beginner’s Guide to Using SystemC TLM-2.0 IP with UVM”, describe how this is best done. They point out that the connection between SystemC and SystemVerilog currently requires a tool-specific interface such as the Synopsys Transaction Level Interface (TLI). This paper begins with a brief overview of TLM-2.0 aimed at novice users. It then discusses the steps required to add a SystemC TLM-2.0 model into a SystemVerilog UVM environment and simulate it with VCS. At each step, issues that users will face are explored and suggestions made for practical fixes, showing the relevant pieces of code. Finally, the paper gives a summary of areas where the UVM implementation of TLM-2.0 differs from the SystemC standard and proposes workarounds to ensure correct communication between the SystemVerilog and SystemC domains.

There is an inherent need to enable the horizontal reuse of components created during the architecture and exploration stage. Subhra S Bandyopadhyay and Pavan N M of Intel Technology India Pvt. Ltd., in “Integrating SystemC OSCI TLM 2.0 Models to OVM based System Verilog Verification Environments”, talk about how their architecture team creates SystemC models for early performance analysis and accelerated software development. In the OVM-based verification environment, the objective was to reuse this model as a reference model, thus reducing the overall environment bring-up time. The challenge was not only to integrate the SystemC model in the OVM-based verification environment but also to be able to efficiently send transactions from SV to SystemC and vice versa. The paper explores the successful integration of SystemC TLM2 components in OVM-based verification environments and also highlights how the VCS TLI (Transaction Level Interface) adapters help TLM2.0 sockets in SystemC communicate with those in SV and vice versa.

Truly, I feel overwhelmed by the number of papers and the interesting use of technology across a variety of domains on which users share their experiences at the various SNUG conferences. As we speak, the SNUG events for 2013 have started, and the stage is all set for a new set of very informative and interesting sessions. I am sure most of you will be attending the SNUG conference in your area. You can find the detailed schedule here.

Posted in Announcements, Automation, Callbacks, Coding Style, Communication, Reuse, Structural Components, SystemC/C/C++, SystemVerilog, Transaction Level Modeling (TLM), Tutorial, UVM, VMM | Comments Off

Coverage coding: simple tips and gotchas

Posted by paragg on 28th March 2013

Author - Bhushan Safi (E-Infochips)

Functional coverage is the most widely accepted way to track the completeness of any constrained-random testbench. However, does achieving 100% functional coverage mean that the DUV is bug free? Certainly not, but it boosts the confidence of the verification engineer and the management team.

Based on my experience of defining functional covergroups for different projects, I realized that coverage constructs and options in the SystemVerilog language have their own nuances for which one needs to keep an eye out. These “gotchas” have to be understood so that coverage can be used optimally and the results align correctly with the desired intent. Let me talk about some of these finer aspects of coverage so that you can use the constructs more productively.

Usage of ignore_bins

The ‘ignore_bins’ construct is meant to exclude a collection of bins from coverage. While using this particular construct, you might end up with multiple ‘shapes’ issues (by ‘shapes’ I mean “Guard_OFF” and “Guard_ON”, which appear in the report whenever ‘ignore_bins’ is used). Let’s look at a simple usage of ignore_bins, as shown in figure 1.
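Since figure 1 does not reproduce here, a representative version of that coding style follows; names are illustrative, and the post’s `cfg.disable` is renamed `cfg.dis` because `disable` is a SystemVerilog keyword:

```systemverilog
covergroup cg @(sample_event);
  cp_mode : coverpoint cfg.mode {
    bins mode_bins[] = {0, 1};
    // Intent: when cfg.dis == 1, drop the bin for value 1.
    // Gotcha: the 'iff' guard is evaluated only when sample_event
    // fires, so in a run where the covergroup is never sampled the
    // report still expects both bins (the Guard_ON/Guard_OFF shapes).
    ignore_bins dis_bin = {1} iff (cfg.dis == 1);
  }
endgroup
```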

Looking at the code in figure 1, we would assume that since we have set “cfg.disable = 1”, the bin with value 1 would be excluded from the generated coverage report. Here we use the ‘iff’ condition to try to match our intent of not creating a bin for the variable under the said condition. However, in simulations where the sample_event is not triggered, we see that we end up with an instance of our covergroup which still expects both bins to be hit (see the generated report in figure 1). Why does this happen? If you dig deep into the semantics, you will understand that the “iff” condition only comes into action when the event sample_event is triggered. So if we are writing ‘ignore_bins’ for a covergroup which may or may not be sampled in a given run, then we need to look for an alternative. Indeed there is a way to address this requirement, and that is through the usage of the ternary operator. Look at the code in figure 2 to see how the ternary operator is used to model the same intent.

Now the report is as you expect!!!

Using the above-mentioned coding style, we make sure that the bin which is not desired under specific conditions is ignored irrespective of whether or not the covergroup is being sampled. Also, we use the value “2’b11” to make sure that we don’t end up ignoring a valid value of the variable concerned.
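A sketch of the ternary-operator version described above (figure 2 in the original post; the same illustrative names apply, with `cfg.dis` standing in for the post’s `cfg.disable`):

```systemverilog
covergroup cg @(sample_event);
  // The guard is folded into the sampled expression itself: when the
  // feature is disabled we sample 2'b11, a value outside the bin list,
  // so the unwanted bin can never be hit -- whether or not the
  // covergroup is ever sampled -- and no valid value is masked.
  cp_mode : coverpoint ((cfg.dis == 1) ? 2'b11 : cfg.mode) {
    bins mode_bins[] = {0, 1};
    ignore_bins dis_bin = {2'b11};
  }
endgroup
```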

Using detect_overlap

The coverage option called “detect_overlap” helps by issuing a warning if there is an overlap between the range lists (or transition lists) of two bins of a coverpoint. Whenever we have plenty of ranges to be covered and there is a possibility of overlap, it is important to use this option.

Why is it important and how can you be impacted if you don’t use it? You might actually end up with incorrect and unwanted coverage results!

Let’s look at an example. In this scenario, if a value of 25 is generated, the coverage score reported would be 50% when the desired outcome would ideally have been 25%. This is because the value ‘25’ contributes to two bins out of four when that was probably not wanted. The usage of ‘detect_overlap’ would have warned you about this, and you could have fixed the bins to make sure that such a scenario doesn’t occur.
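The scenario described can be sketched like this (the ranges and names are illustrative, standing in for the figure from the original post):

```systemverilog
covergroup cg_val @(posedge clk);
  cp_val : coverpoint val {
    option.detect_overlap = 1;  // warns that 25, 50, 75 each sit in 2 bins
    bins q1 = {[0:25]};
    bins q2 = {[25:50]};        // 25 hits both q1 and q2: one sample
    bins q3 = {[50:75]};        // scores 2 of 4 bins = 50%, not 25%
    bins q4 = {[75:100]};
  }
endgroup
```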

Coverage coding for crosses and assigning weight

What does the LRM (Table 19-1—Instance-specific coverage options) say about the ‘weight’ attribute?

“If set at the covergroup syntactic level, it specifies the weight of this covergroup instance for computing the overall instance coverage of the simulation. If set at the coverpoint (or cross) syntactic level, it specifies the weight of a coverpoint (or cross) for computing the instance coverage of the enclosing covergroup. The specified weight shall be a non-negative integral value.”

What kinds of surprises can a combination of cross and option.weight create?

The SystemVerilog LRM shows a very simple way of writing a cross. Let’s look at the code below.
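The original snippet does not reproduce here, so below is a representative reconstruction based on the description that follows (the names check_4_a/check_4_b and CHECK_A/CHECK_B come from the post; the behavior noted in the comment is as the post describes it):

```systemverilog
bit [1:0] check_4_a, check_4_b;

covergroup cg_cross;
  CHECK_A : coverpoint check_4_a { option.weight = 0; }
  CHECK_B : coverpoint check_4_b { option.weight = 0; }
  // Crossing the *variables* rather than the labels CHECK_A/CHECK_B
  // makes the tool build two implicit coverpoints named check_4_a and
  // check_4_b, each with the default weight of 1.
  A_X_B : cross check_4_a, check_4_b;
endgroup
```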

The expectation here is that for a single simulation (expecting one of the bins to be hit), we will end up with 25% coverage, as we have set the weight of the individual coverpoints to zero. However, what essentially happens is the following: two internal coverpoints for check_4_a and check_4_b are generated, which are used to compute the coverage score of the ‘crossed’ coverpoint. So you’ll end up having a total of four coverpoints, two of which have option.weight set to 0 (i.e. CHECK_A and CHECK_B) and two of which have option.weight of 1 (i.e. check_4_a and check_4_b). Thus for a single simulation, you will not get the 25% coverage desired.

Now with this report we see the following issues:

  • We see four coverpoints while the expectation was only two.
  • The weights of the individual coverpoints are not zero as expected, even though option.weight is set to ‘0’.
  • The overall coverage numbers are not as desired.

In order to avoid these undesired results, we need to take care of the following aspects:

  • Use type_option.weight = 0 instead of option.weight = 0.
  • Use the coverpoint labels instead of the coverpoint (variable) names to specify the cross.
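A corrected sketch applying both of these points (same illustrative names as before):

```systemverilog
bit [1:0] check_4_a, check_4_b;

covergroup cg_cross;
  // type_option.weight zeroes the type-level contribution without the
  // weight-1 implicit coverpoints that the option.weight version
  // suffered from, and the cross names the labels, not the variables.
  CHECK_A : coverpoint check_4_a { type_option.weight = 0; }
  CHECK_B : coverpoint check_4_b { type_option.weight = 0; }
  A_X_B : cross CHECK_A, CHECK_B;
endgroup
```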

Hope my findings will be useful for you and you will use these options/attributes appropriately to get the best value out of your coverage metrics (without losing any sleep or debug cycles to figure out why they didn’t behave as you expected them to)!

Posted in Coding Style, Coverage, Metrics, Customization, Reuse, SystemVerilog, Tutorial | 9 Comments »

SNUG-2012 Verification Round Up: VCS Technologies

Posted by paragg on 15th March 2013

Continuing from my earlier blog posts about SNUG papers on the SystemVerilog language and verification methodologies, I will now go through some of the interesting papers that highlight core technologies in VCS which users can deploy to improve their productivity. We will walk through various stages of the verification cycle, including simulation bring-up, RTL simulation, gate-level simulation, regression, and simulation debug, each of which benefits from different features and technologies in VCS.

Beating the SoC challenges

One of the biggest challenges that today’s complex SoC architectures pose is rigorous verification of SoC designs. Functional verification of the full system represented by these mammoth-scale (>1 billion transistors per chip) designs calls for the verification environment to employ advanced methodologies, powerful tools, and techniques. Constrained-random stimulus generation, coverage-driven completion criteria, assertion-based checking, faster triage and debug turnaround, C/C++/SystemC co-simulation, gate-level verification, etc. are just some of the methods and techniques which aid in tackling the challenge. Patrick Hamilton, Richard Yin, Bobjee Nibhanupudi, and Amol Bhinge of Freescale, in their paper “SoC Simulation Performance: Bottlenecks and Remedies”, discuss the several simulation and debug bottlenecks experienced during the verification of a complex next-generation SoC; they describe how they identified these bottlenecks and overcame them using VCS diagnostic capabilities, profile reports, VCS arguments, testbench modifications, smarter utilities, fine-tuning of computing resources, etc.

The challenge of the simulation environment is the sheer number of tests that are written and need to be managed. As more tests are added to the regressions, there is a quantifiable impact on several aspects of the project, including a dramatic and unsustainable increase in overall regression time. As regression time increases, the interval for collecting and analyzing results between successive regression runs shrinks. Overlaps arising from having multiple regressions in flight can cause failures to track design bugs across several snapshots, which can also result in the inability to ensure coverage tracking by design management tools. Given constantly shortening project timelines, this affects the time-to-market of core designs and their dependent SoC products. “Simulation-based Productivity Enhancements using VCS Save/Restore” by Scot Hildebrandt and Lloyd Cha of AMD, Inc. looks at using VCS’s Save/Restore feature to capture binary images of sections of simulation. These “sections” consist of aspects replicated in all tests, like the reset sequence, or allow skipping the specific phases of a failing test which are ‘clean’. They further provide statistics on the reduction in regression time and the memory footprint that the saved image typically enables. They also talk about how dynamic re-seeding of the test case with the stored images enabled them to leverage the full strength and capabilities of CRV methodologies.

The paper “SoC Compilation and Runtime Optimization using VCS” by Santhosh K.R. and Sreenath Mandagani of Mindspeed Technologies (India) talks about the Partition Compile flow and associated methodology to improve the turnaround time (TAT) of SoC compilations. The flow leverages v2k configurations, parallel compilation, and various performance-optimization switches of VCS-MX. They further explain how an SoC can be partitioned into multiple functional blocks or clusters, and how each block can be selectively replaced with an empty shell if that particular functionality is not exercised in the desired tests. The paper also demonstrates how new tests can be added and run without recompiling the whole SoC. Thus, using the Partition Compile flow, only a subset of SoC or testbench blocks is recompiled, based on the dependencies across clusters. They share the productivity gains in compile TAT, the overall runtime gains for the current SoC, and the savings in overall disk space. This correlates with reductions in license usage time and disk space, which leads to the desired savings.

By the way, there have now been further developments in the latest VCS release to help ensure isolation of switches between partitions in the SoC. This additional functionality helps reduce memory, decrease runtime, and reduce initial scratch compile time even further while maintaining the advantages of partition compile.

Addressing the X-optimism challenges – X-prop Technology

Gate simulations are onerous, and many of the risks normally mitigated by gate simulations can now be addressed by RTL lint tools, static timing analysis tools, and logic equivalence checking. However, one risk that has persisted, until now, is the potential for optimism in the X semantics of RTL simulation. The semantics of the Verilog language can create mismatches between RTL and gate-level simulation due to X-optimism. Also, the semantics of X’s in gate simulations are pessimistic, resulting in simulation failures that don’t represent real bugs.

“Improved X-Propagation using the xProp technology” by Rafi Spigelman of Intel Corporation presents the motivation for having the relevant semantics for X-propagation. The process by which such semantics were validated and deployed on a major CPU design at Intel is also described. He delves into its merits and limitations, and comments on the effort required to enable such semantics in the RTL regressions.

Robert Booth of Freescale Inc., in the paper “X-Optimism Elimination during RTL Verification”, explains how chips suffer from X-optimism issues that often conceal design bugs. The deployment of low-power techniques such as power shutdown in today’s designs exacerbates these X-optimism issues. To address these problems, they show how they leverage the new simulation semantics in VCS that more accurately model non-deterministic values in logic simulation. The paper describes how X-optimism can be eliminated during RTL verification.

In the paper “X-Propagation: An Alternative to Gate Level Simulation”, Adrian Evans, Julius Yam, and Craig Forward explore X-Propagation technology, which attempts to model X behavior more accurately at the RTL level. In this paper, they review the sources of X’s in simulation and their handling in the Verilog language. They further describe their experience using this feature on design blocks from Cisco ASICs, including several simulation failures that did not represent RTL bugs. They conclude by suggesting how X-Propagation can be used to reduce and potentially eliminate gate-level simulations.

In the paper “Improved x-propagation semantics: CPU server learning”, Peeyush Purohit, Ashish Alexander, and Anees Sutarwala of Intel stress the need to model and simulate silicon-like behavior in RTL simulations. They bring out the fact that traditionally gate-level simulations have been used to fill that void, but come at the cost of time and resources. They then explain the limitations of regular 4-value Verilog/SystemVerilog RTL simulation and also cover the specifications for enhanced simulator semantics to overcome those limitations. They explain how design issues on their next-generation CPU server project were found using the enhanced semantics; the potential flow implications and a sample methodology implementing the new semantics are also provided.

Power-on-Reset (POR) is a key functional sequence for all SoC designs, and any bug not detected in this logic can lead to dead silicon. Complexities in reset logic pose increasing challenges for verification engineers to catch any such design issues during RTL/GL simulations. POR sequence simulations are often accompanied by ‘X’ propagation due to non-resettable flops and uninitialized logic. Generally, uninitialized and non-resettable logic is initialized to 0’s, 1’s, or some random values using forces or deposits to bypass unwanted X propagation. Ideally, one would like stimulus to try all possible combinations of initial values for such logic, but this is practically impossible given short design cycles and limited resources. This practical limitation can leave room for critical design bugs that remain undetected during the design verification cycle. Deepak Jindal of Freescale, India, in the paper “Gaps and Challenges with Reset Logic Verification”, discusses these reset logic simulation challenges in detail and shares the experience of evaluating the new semantics in VCS technology, which can help catch most of the POR bugs/issues during the RTL stage itself.

SNUG allows users to discuss their current challenges and the emerging solutions they are using to address them. You can find all SNUG papers online via SolvNet (a login is required).

Posted in SystemVerilog, Tools & 3rd Party interfaces, Tutorial | 1 Comment »

Namespaces, Build Order, and Chickens

Posted by Brian Hunter on 14th May 2012

As described in the video, vkits are our convenient method of lumping together reusable UVM packages with the interface(s) that they operate on. Because code within packages can only peek or poke wires that are contained by a virtual interface, it is often useful to wrap these together somehow, and vkits are our technique at Cavium for doing that.

What goes in a vkit? Anything that is reusable. From simple agents and the interfaces they work on to complete UVM environments that connect these agents together, including scoreboards, sequence libraries, types, and utility functions.

What does not go in a vkit are items that are bound to a specific testbench, including the tests themselves.

The video describes the wildcard import syntax as an “egregiously bad idea.” First and foremost, wildcard imports can lead to namespace pollution, which comes about when one engineer independently adds types or classes to their package and only later finds out that they conflict with those of another package. Secondly, wildcard imports preclude our shorter naming convention of having an agent_c, drv_c, env_c, etc., within each package, since those common names would collide when imported into the same scope.

Not described in the video are CSR packages that are auto-generated by RAL, IP-XACT, or your script of choice. These packages should be independent of your vkits, such that your vkits refer to them with their explicit scopes (i.e., chx_csr_pkg::PLUCKING_CFG_C).
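As a sketch of that convention (all names here are illustrative except `chx_csr_pkg::PLUCKING_CFG_C`, which appears above; this is not actual Cavium code):

```systemverilog
package chx_env_pkg;               // a vkit package
  import uvm_pkg::*;
  `include "uvm_macros.svh"

  class cfg_c extends uvm_object;  // short name is safe: callers always
    `uvm_object_utils(cfg_c)       // qualify it as chx_env_pkg::cfg_c
    // Refer to the auto-generated CSR package by explicit scope only --
    // never "import chx_csr_pkg::*;" inside a vkit.
    chx_csr_pkg::PLUCKING_CFG_C plucking_cfg;
    function new(string name = "cfg_c");
      super.new(name);
    endfunction
  endclass
endpackage
```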

Future posts will go into more detail about how we architect UVM testbenches and some of our other conventions that work within this framework. Until then, I’ve got a lot of pies to eat.

PS. I’ll be at DAC this year! Come see me on Tuesday, June 5, during the “Industry Leaders Verify with Synopsys” lunch. Hopefully they’ll be serving some of my favorite foods!

Posted in Organization, Structural Components, SystemVerilog, Tutorial, UVM | 6 Comments »

Using VMM template Generator to ramp up your testbench development

Posted by Amit Sharma on 25th October 2010

Amit Sharma, Synopsys
‘vmmgen’, the template generator for creating robust, extensible VMM-compliant environments, has been available for a long time with VMM, and it was upgraded significantly in VMM 1.2. Though the primary functionality of ‘vmmgen’ is to help minimize the VIP and environment development cycle by providing detailed templates for developing VMM-compliant verification environments, a lot of folks also use it to quickly understand how different VMM base classes can be used in different contexts. This works because the templates use a rich set of the latest VMM features to ensure the appropriate base classes and their features are picked up optimally.

Given its wide user-interaction mechanism, which presents the available features and options, the user can pick the modes most relevant to his or her requirement. It also provides the option to supply your own templates, thus providing a rich layer of customization. Based on the need, one can generate individual templates for different verification components, or generate a complete verification environment which comes with a ‘Makefile’ and an intuitive directory structure, thus propelling them on their way to catching the first set of bugs in their DUTs. I am sure all of you know where to pick up ‘vmmgen’ from. It is available in the <VMM_HOME>/Shared/bin area or in $VCS_HOME/bin.

Some of the rich set of features available now include:

• Template Options:

– Complete environment generation

– Individual templates generation

• Options to create Explicitly phased environments or Implicitly phased environments, or to mix Implicitly phased components and Explicitly phased components

• Usage of VMM Shorthand macros

• Creating RAL based environment, and providing Multiplexed domain support if required

• Hooking up VMM Performance Analyzer at the appropriate interfaces

• Hooking up the DS Scoreboard at the relevant interfaces (with options to choose from a range of integration options, e.g.: through callbacks, through TLM2.0 analysis ports, or connecting directly to transactors, channels or notifications)

• Ability to hook up different generators (atomic, scenario, Multistream generators) at different interfaces

• Creating a scenario library and Multistream scenario creation

• Multi-driver generator support for different kinds of transactions in the same environment

• Factory support for transactions, scenarios and multi-stream scenarios, with a sample factory testcase that explains the usage of transaction override from a testcase

• ‘RTL config’ support for drivers and receivers.

• Various types of unidirectional and bi-directional TLM connections between generator and driver.

• Analysis ports/exports OR parameterized notify observers to broadcast the information from monitor to scoreboard and coverage collectors.

• Multi test concatenation support and management to run the tests

• Creating portable Interface wrapper object, and setting up interface connections to the testbench components using vmm_opts::set_object/get_object_obj

• Creating a Generic slave component

• Option to use default names or user provided names for different components

As you can see, the above list itself is quite comprehensive, and let me tell you that it is not exhaustive, as there are many more features in vmmgen.

With respect to usage as well, there are multiple flavors. In the default mode, the user is taken through multiple detailed choices/options as he creates and connects the different components in his verification environment. However, some folks might want to use ‘vmmgen’ within their own wrapper script/environment, and for them there are options to generate the environments by providing all required options on the command line or through a configuration file. Some of these switches include:

-SE [y/n] : Generates a complete environment with sub-environments

-RAL [y/n] : Create RAL based verification environments

-RTL [y/n] : Generates RTL configuration for the environment

-ENV <name>, -TR <name> : Provide the names for the environment class and transaction classes. Names for multiple transaction classes can be provided as well:

vmmgen -l sv -TR tr1+tr2

-cfg_file <file_name> : Option to provide a configuration file for the options

There is an option to generate an environment quickly by taking the user through the minimum number of questions (-q).

Additionally, the user can provide his or her own templates through the –L <template directory> option.

As far as individual template generation goes, the complete list is outlined below for reference:


I am sure a lot of you have already been using ‘vmmgen’. For those who haven’t, I encourage you to try out its different options. I am sure you will find it immensely useful: it will not only help you create verification components and environments quickly, but will also make sure they are optimal and appropriate for your requirements.

Posted in Automation, Coding Style, Customization, Modeling Transactions, Organization, Performance Analyzer, Scoreboarding, Tools & 3rd Party interfaces, VMM infrastructure | Comments Off

Using VMM Consensus to end your test

Posted by JL Gray on 23rd August 2010

Asif Jafri, Verification Consultant, Verilab

One topic that is often overlooked is how to end a test. One common approach has been to use pound delays or to count the number of transactions generated. While this worked well for directed test environments, it is not well suited to constrained-random testbenches. Usually there are several threads running in parallel, and to intelligently tell whether all test criteria are met, we need a more centralized approach to manage and decide test completion. vmm_group now has a mechanism to centrally manage test completion with the use of VMM Consensus. To better explain the usage, let’s try to build an example:

1.    Instantiate a vmm_voter handle that will consent to or oppose the end of test.
vmm_voter     end_voter;
2.    Identify the participants that will need to consent before the test ends. These participants can be transactors (which consent when idle), channels (which consent when empty), notifications, and vmm_consensus objects. Register the voters in vmm_group::build_ph().
3.    Add vmm_consensus::wait_for_consensus() to the vmm_group::wait_for_end() method. Once all participants consent, the test will complete.



class tb_top extends vmm_group;
  `vmm_typename(tb_top)

  vmm_voter           end_voter;
  transaction_channel master_chan;
  slave_transactor    slave_xactor;

  function void build_ph();
    end_voter = end_vote.register_channel(master_chan);
    end_voter = end_vote.register_xactor(slave_xactor);
  endfunction: build_ph

  task wait_for_end();
    end_vote.wait_for_consensus();
  endtask: wait_for_end
endclass: tb_top

The code above shows how we can instantiate various voters that will participate in the test completion.

The figure below shows how a participant opposes test completion, while all other participants have given consent.




One of the simplest forms of usage is to oppose completion before performing some given task, like programming registers or pulling reset, and then to give consent using the command: this.consent(“Programming Completed”); There is often a need to force consensus to end a test if one of the opposing blocks never releases. This can be achieved by using the forced command.




You can use the consensus_force_thru command to propagate the force up.
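The voting calls discussed above can be sketched as follows (the voter handle and the register-programming task are illustrative; oppose(), consent() and forced() are the voting calls from the text):

```systemverilog
// Sketch: hold the test open while programming registers, then release.
task program_registers();
  end_voter.oppose("Programming registers");  // block end-of-test
  // ... write configuration registers, pull reset, etc. ...
  end_voter.consent("Programming Completed"); // allow end-of-test again
endtask

// If an opposing participant never releases, force the consensus:
// end_vote.forced("Watchdog timeout");
```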


Using these techniques to end your test will make your testbench scalable and reusable over various projects.



Posted in Coding Style, VMM | 2 Comments »

Using vmm_opts to create a configurable environment

Posted by S. Prashanth on 19th July 2010

S. Prashanth, Verification & Design Engineer, LSI Logic

To accommodate changing specifications and to support different clusters/subsystems (which could have multiple processors/memories connected through a bridge), I am building a reusable environment which can support any number of masters and slaves of any standard bus protocol. The environment requires a high level of configurability since it should work for different DUTs. So, I decided to use vmm_opts not just to set switches globally/hierarchically, but also to specify ranges (like setting address ranges in scenarios) from the test cases/command line.

I created a list of controls/switches that need to be provided as global options (like simulation timeout, scoreboard/coverage enable, etc.) and hierarchical options to control instance-specific behaviors (like the number of transactions to be generated, burst enables, transaction speed, address ranges to be generated by a specific instance or set of instances, etc.). Let’s see through examples how these options can be used and what is required for them.

Global Options

Step 1:
Declare an integer, say simulation_timeout, in the environment class or wherever it is required and use the vmm_opts::get_int(..) method as shown below. Specify a default value as well, in case the variable is not set anywhere.

simulation_timeout = vmm_opts::get_int("TIMEOUT", 10000);

Step 2:
Then, either in the test case and/or on the command line, I can override the default value using vmm_opts::set_int or the +vmm_opts+ runtime option.

Override from the test case.

vmm_opts::set_int("%*:TIMEOUT", 50000);

Override from the command line.

./simv +vmm_opts+TIMEOUT=80000

Options can also be overridden from a command file if it is difficult to specify all options on the command line. Options can be strings or Booleans as well.

Hierarchical Options

In order to use hierarchical options, I am building the environment with a parent/child hierarchy, which is a very useful feature of vmm_object. This is required since the hierarchy specified externally (from the test case/command line) will be mapped onto the hierarchy of the environment.

Step 1:
Declare options, say burst_enable and num_of_trans, in a subenv class and use vmm_opts::get_object_bit(..) and vmm_opts::get_object_int(..) to retrieve the appropriate values, passing the current hierarchy and default values as arguments. Also, for setting ranges, declare min_addr and max_addr and use get_object_range().

class master_subenv extends vmm_subenv;
  bit burst_enable;
  int num_of_trans;
  int min_addr;
  int max_addr;

  function void configure();
    bit is_set; // to determine if the default value is overridden
    burst_enable = vmm_opts::get_object_bit(is_set, this, "BURST_ENABLE");
    num_of_trans = vmm_opts::get_object_int(is_set, this, "NUM_TRANS", 100);
    vmm_opts::get_object_range(is_set, this, "ADDR_RANGE", min_addr, max_addr, 0, 32'hFFFF_FFFF);
  endfunction: configure
endclass: master_subenv


Step 2:
Build the environment with parent/child hierarchy either by passing the parent handle through the constructor to every child, or using vmm_object::set_parent_object() method. This is easy as almost all base classes are extended from vmm_object by default.

class dut_env extends vmm_env;
  virtual function void build();
    mst0 = new("MST0", ...);
    mst1 = new("MST1", ...);
  endfunction
endclass
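Alternatively, the same parent/child links can be established after construction using vmm_object::set_parent_object() (a sketch; mst0/mst1 are the instances from the snippet above):

```systemverilog
// Sketch: set the environment as the parent of each master explicitly,
// instead of passing the parent through the constructor.
mst0.set_parent_object(this);
mst1.set_parent_object(this);
```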

Step 3:
From the test case, I can override the default values using vmm_opts::set_bit or vmm_opts::set_int(..), specifying the hierarchy. I can use pattern matching as well to avoid specifying every instance:

vmm_opts::set_int("%*:MST0:NUM_TRANS", 50);
vmm_opts::set_int("%*:MST1:NUM_TRANS", 100);
vmm_opts::set_range("%*:MST1:ADDR_RANGE", 32'h1000_0000, 32'h1000_FFFF);

I can also override the default values from the command line:
./simv +vmm_opts+NUM_TRANS=50@%*:MST0+NUM_TRANS=100@%*:MST1+BURST_ENABLE@%*

In summary, vmm_opts provides a powerful and user-friendly way of configuring the environment from the command line or the test case. Users can provide an explanation of each option when calling the get_object_*() and get_*() methods, which is displayed when vmm_opts::get_help() is called.

Posted in Coding Style, Configuration, Tutorial | Comments Off

TLM vs. Channels

Posted by JL Gray on 9th June 2010

JL Gray, Consultant, Verilab, Austin, Texas, and Author of Cool Verification

Confused about the difference between using channels and TLM to connect your testbench components? Check out this short video for a simple explanation of the mechanics of the two approaches.

Posted in Transaction Level Modeling (TLM), Tutorial | 2 Comments »

Implicit vs. Explicit Phasing

Posted by JL Gray on 3rd June 2010

JL Gray, Consultant, Verilab, Austin, Texas, and Author of Cool Verification

Last week I discussed the concepts of phases and threads. This week, I’ll continue that theme by focusing on the difference between implicit and explicit phasing.

Quick quiz: If you had to remember just one thing about the benefits of implicit phasing, what would that one thing be?

Posted in Phasing, Tutorial | Comments Off

Verification in the trenches: TLM2.0 or vmm_channel? Communicating with other components using VMM1.2

Posted by Ambar Sarkar on 1st June 2010

Dr. Ambar Sarkar, Chief Verification Technologist, Paradigm Works Inc.

Did life get easier with the availability of TLM2.0 style communication in VMM1.2?

Or the other way around? Are you asking: Should I use a vmm_tlm port or just stick to the tried and trusted vmm_channel as my communication method within the verification environment? You are not alone.

As you are aware, VMM1.2 provides the following interfaces for exchanging transactions between components:

  1. vmm_channel (Pre VMM1.2)
  2. vmm_tlm based blocking and non-blocking interfaces (TLM based)
  3. vmm_analysis_port (TLM based)
  4. vmm_callback (Pre VMM1.2)

Which option is the best for communicating between components? Under what circumstances?

There are two real requirements here:

  1. Maintain the ability to work with existing VMM (pre VMM 1.2) components.
  2. Be forward compatible with components created with TLM based ports, as is expected as the industry moves toward UVM (Yes, the early adopter version was released recently!).

The TLM2.0 based communication mechanism offers flexible, efficient, and well-defined semantics for communication between two components, and the industry as a whole is moving towards a TLM based approach. For these reasons, I recommend that, going forward, any new VIP use only TLM2.0 based communication.

Since VMM1.2 provides complete interoperability between vmm_channel and TLM2.0 style ports, the user is guaranteed that the created component will keep working with vmm_channel based counterparts:

class subenv extends vmm_group;
  initiator i0;
  target t0;

  virtual function void connect_ph();
    vmm_connect #(.D(my_trans))::tlm_bind(
      t0.in_chan, // Channel
      i0.b_port   // TLM port
    );
  endfunction: connect_ph
endclass: subenv

Similarly, any vmm_notify based callback events can be communicated using tlm_analysis ports. For details, see example 5-48 in the Advanced Usage section in the user guide.

Creating components that depend solely on TLM style communication schemes will greatly facilitate interoperability and the migration of VIP implementations to the currently evolving standard of UVM.

The following summarizes my recommendations:

Type of interaction → Recommended mechanism

• Issuing and receiving transactions → vmm_tlm based blocking interface

• Issuing notifications to passive observers → vmm_analysis_port

For issuing transactions to other components on a point-to-point manner, typically seen in master-slave based communications, use the vmm_tlm based blocking port interfaces.

For slave-like transactors which expect to receive transactions from other transactors, use vmm_tlm based blocking export interface.

For reactive transactors, define additional vmm_tlm based blocking export interface.

For broadcast transactions to be communicated to multiple consumers, such as scoreboards, functional coverage models, etc., use vmm_analysis_port.

So, did life get easier with TLM? I believe so. Especially if you want to be forward compatible with the new and upcoming methodologies.

This article is the 7th in the Verification in the trenches series. Hope you found this article useful. If you would like to hear about any other related topic, please comment or drop me a line at Also, if you are starting out fresh, please check out the free VMM1.2 environment generator.

Posted in Communication, Transaction Level Modeling (TLM), Tutorial | Comments Off

Phases and Threads

Posted by JL Gray on 27th May 2010

JL Gray, Consultant, Verilab, Austin, Texas, and Author of Cool Verification

Back in the fall, I wrote about the differences between Phases and Threads, and how that relates to implicit and explicit phasing in the VMM.  In this video, I’ve taken a step back to describe phases, threads, and timelines using a real-world analogy based on the seasons of the year.

As you’re watching this video, one thing to keep in mind is that threads are dealt with by vmm_xactor-based objects, and phases are handled by vmm_timeline- and vmm_group-based objects.

Posted in Phasing, Tutorial | 1 Comment »

Simplifying test writing with MSSG and constraint switching feature of System Verilog

Posted by Tushar Mattu on 25th May 2010

Pratish Kumar KT  & Sachin Sohale, Texas Instruments
Tushar Mattu,  Synopsys

In an ideal world, one would want maximum automation and minimal effort. The same holds true when you would want to uncover bugs in your DUV. You would want to provide your team members with an environment whereby they can focus their efforts on debugging issues in the design rather than spend time writing hundreds of lines of testcase code. Here we want to share some simple techniques to make the test writer’s life easier by enabling him to achieve the required objectives with minimum lines of test code, and by providing more automation and reuse. Along with automation, it is important to ensure that the test logic remains easy to understand. In one recent project, where the DUV was an image processing block, the following were some of the requirements:

- The relevant configuration for each block had to be driven first on a configuration port before driving the frame data on another port

- The configuration had to be written to the registers in individual blocks and each block had its own unique configuration.

- Several such blocks together constituted the overall sub-system which required an integrated testbench

- Because all these blocks had to be verified in parallel, the requirement was to have a generic Register Model which could work not only with the blocks in parallel but also be reused at the sub-system level

- Also, another aspect was to provide as much of reuse in the testcases as possible from block to system level

Given the requirements, we decided to proceed in the manner described below, using a clever mix of MSS scenarios, named constraint blocks and RAL:


Here is the brief description of the different components and the flow which is represented in the pictorial representation above:

Custom RAL model

As the RAL register model forms the configuration stimulus space, we decided to put all test-specific constraints for register configuration in the top level RAL block itself and had them switched off by default. As seen below, the custom RAL block extended from the generated model has all the test-specific constraints which are switched off.


Basic Multistream Scenario

For each block, one basic scenario (“BASIC SCENARIO”) was always registered with the MSSG. This procedural scenario governs the test flow, which is:

- Randomize the RAL configuration with default constraints,

- Drive the configuration to the configuration port

- Put the desired number of frame data into the driver BFM input channel

The following snippet shows the actual “BASIC SCENARIO”:


Scenario Library

With the basic infrastructure in place, the strategy for creating a scenario library for the test-specific requirements was simple. Each extended scenario class was meant to change the configuration generation through the RAL model of the different blocks. Thus, for the scenario library for each block, each independent scenario extended the basic scenario and overloaded only the required virtual method for configuration change as shown below:



Each test would then select the scenario corresponding to the test and register it with the MSSG.

At the block level, we were able to reuse this approach across all block level tests consistently and effectively.

Later, we were able to reuse all block-level tests in the sub-system level testbench, as tests were written in terms of constraints in the RAL model itself. At the sub-system level, we would then turn on specific constraints per block using the corresponding block-level scenarios. This ensures maximum reuse and a consistent test flow across levels.


At higher levels, the scenarios extend the “basic_system_scenario”. The test flow managed by the system-level scenarios has a slightly different execution flow than block-level tests, but the configuration generation is reused consistently and efficiently from block to system. That means modifications of block-level test constraints do not require any modification of the subsystem-level tests using that configuration.

And voila! Scenario and test dumping steps were automated at block and sub-system level using simple Perl scripting. Test-specific constraints per block were captured in an Excel sheet, which a script takes as input to generate the scenario and test at block/sub-system level. The scenario execution flow is predetermined and defined in the block/sub-system basic scenario. Furthermore, within this automation process, the user was given the flexibility to create multiple such scenarios and reuse existing test constraints.

Posted in Creating tests, Register Abstraction Model with RAL, Tutorial, VMM | 3 Comments »

Verification in the trenches: Configuring your environment using VMM1.2

Posted by Ambar Sarkar on 7th May 2010

Dr. Ambar Sarkar, Chief Verification Technologist, Paradigm Works Inc.

Let’s start with a quick and easy quiz.
Imagine you are verifying an NxN switch where N is configurable, and it supports any configuration from 1×1 to NxN. Assume that for each port you instantiate one set of verification components such as monitors, BFMs, and scoreboards. For example, if this were a PCIe switch with 2 ingress and 3 egress ports, you would instantiate 2 ingress and 3 egress instances of PCIe verification components. So, for N = 5, how many environments should you actually write code for and maintain?

The answer had better be 1. Maintaining NxN = 25 distinct environments would be quixotic at best. What you want is to write code that creates the desired number and configuration of component instances based on some runtime parameters. This is an example of what is known as “structural” configuration, where you are configuring the structure of your environment.

In VMM1.2 parlance, this means that you want to make sure you can communicate to your build_ph() phase the required number of ingress and egress verification component instances. The build phase can then construct the configured number of components. This way, you get to reuse your verification environment code in multiple topologies, and reduce the number of unique environments that need to be separately created and maintained.

Hopefully, this establishes why structural configuration is important. So the questions that arise next are:

How do you declare the structural configuration parameters?

How do you specify the structural configuration values?

How do you use the structural configuration values in your code?

Declaring the configurable parameters

Structural configuration declarations should sit in a class that derives from vmm_unit. A set of convenient macros is provided, with the prefix `vmm_unit_config_xxx, where xxx stands for the data type of the parameter. For now, xxx can be int, boolean, or string.

For example, for integer parameters, you have:

`vmm_unit_config_int (
  <name of the int data member>,
  <description of the purpose of this parameter>,
  <default value>,
  <name of the enclosing class derived from vmm_unit>
);

If you want to declare the number of ports as configurable, you can declare a variable num_ports as shown below:

class switch_env extends vmm_group;
  int num_ports; // Will be configured at runtime
  vip bfm[$];    // Will need to create num_ports instances

  function new(string inst, vmm_unit parent = null);, inst, parent); // NEVER FORGET THIS!!
  endfunction

  // Declare the number of ingress ports. Default is 1.
  `vmm_unit_config_int(num_ports, "Number of ingress ports", 1, switch_env);
endclass: switch_env


Specifying the configuration values

There are two main options:

1.  From code using vmm_opts.

The basic rule is that you need to specify it *before* the build phase gets called, where the construction of the components take place.  A good place to do so is in vmm_test::set_config().
function void my_test::set_config();
  // Override the default with 4. my_env is an instance of the switch_env class.
  vmm_opts::set_int("my_env:num_ports", 4);
endfunction

2.  From command line or external option file. Here is how the number of ingress ports could be set to 5.
./simv +vmm_opts+num_ports=5@my_env

The command line supersedes options set within code as shown in 1.
Do note that one can specify these options for specific instances or hierarchically using regular expressions.

Using the configuration values

This is the easy part. As long as you declare the parameter using `vmm_unit_config_xxx , you are all set. It will be set to the correct value when the containing object derived from vmm_unit is created.

function void switch_env::build_ph();
  // Just use the value of num_ports
  for (int i = 0; i < num_ports; i++) begin
    bfm[i] = new($psprintf("bfm%0d", i), this);
  end
endfunction

This article is the 6th in the Verification in the trenches series. Hope you found this article useful. If you would like to hear about any other related topic, please comment or drop me a line at Also, if you are starting out fresh, please check out the free VMM1.2 environment generator.

Posted in Configuration, Tutorial | Comments Off


Posted by JL Gray on 5th May 2010

by Asif Jafri, verification engineer, Verilab

Atomic Generators

Atomic generators select and randomize transactions based on user constraints. If you do not care about what sequence the transactions are generated, then atomic generators are a good choice.

The code below shows a pcie_config class which defines the read and write transactions. It also has two macros defined that create a channel and the atomic generator to drive the channel.

// filename:

class pcie_config extends vmm_data;
  typedef enum {Read, Write} kind_e;
  rand kind_e instruction;
  rand bit [31:0] address;
  rand bit [31:0] data;
endclass: pcie_config

`vmm_channel(pcie_config)
`vmm_atomic_gen(pcie_config, "PCIE Configuration Atomic Generator")
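A minimal usage sketch of the generated classes (the names pcie_config_channel and pcie_config_atomic_gen assume the standard VMM macro-generated naming):

```systemverilog
// Sketch: instantiate the generated atomic generator and hook up its
// output channel (macro-generated class names are assumed).
pcie_config_channel    gen_chan;
pcie_config_atomic_gen gen;

gen_chan = new("PCIE CONFIG CHANNEL", "chan0");
gen      = new("PCIE CONFIG ATOMIC GEN", 0);
gen.out_chan = gen_chan;
gen.stop_after_n_insts = 100; // stop after 100 transactions
```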

If you are running functional coverage, there are often corner cases or a sequence of events that will require multiple simulation cycles to be covered. The time to get to these specific points in the state space can be drastically reduced by using scenario generators to generate the specific sequence of events.

Scenario Generators

Scenario generators are useful if you need to constrain your generator to generate a specific sequence of transactions.

Let’s say to configure your PCIE device you need to follow a series of steps.

1.    Read the status register
2.    Read the control register and
3.    Write to the control register

In this case you can use your pcie_config class once again but now define a macro for its scenario generator instead of the atomic generator.

// filename:

`vmm_scenario_gen(pcie_config, "PCIE Configuration Scenario Generator")

Here again a single channel will be connected to this scenario generator. We now need to define the scenario as shown below.

// filename:

class pcie_control_scenario extends pcie_config_scenario;
  // Variable to identify the scenario
  int pcie_control_scenario_id;

  constraint pcie_control_scenario_items {
    if ($void(scenario_kind) == pcie_control_scenario_id) {
      // number of elements in the scenario
      length == 3;
      // do not repeat the scenario
      repeated == 0;
      foreach (items[i]) {
        if (i == 0) {
          this.items[i].instruction == pcie_config::Read;
          this.items[i].address == status_address;
        } else if (i == 1) {
          this.items[i].instruction == pcie_config::Read;
          this.items[i].address == control_address;
        } else if (i == 2) {
          this.items[i].instruction == pcie_config::Write;
          this.items[i].address == control_address;
        }
      }
    }
  }
endclass: pcie_control_scenario

These are also called single-stream scenarios. Scenario generators work well for block-level testbenches, but if we need to control multiple blocks in a system-level testbench to generate specific interactions and improve coverage, multi-stream scenario generators should be used.
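To run such a scenario, it must be added to the generated scenario generator's scenario set (a sketch assuming the standard VMM macro-generated class pcie_config_scenario_gen):

```systemverilog
// Sketch: register the custom scenario with the generated scenario
// generator and bound the run (macro-generated names are assumed).
pcie_config_scenario_gen scn_gen;
pcie_control_scenario    ctrl_scn;

scn_gen  = new("PCIE CONFIG SCENARIO GEN", 0);
ctrl_scn = new();
scn_gen.scenario_set.push_back(ctrl_scn);
scn_gen.stop_after_n_scenarios = 1;
```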

Multi-stream Scenario (MSS) Generators

In a typical chip there will be more than one type of peripheral, e.g. PCIE, USB, Ethernet. Each needs to be controlled separately with its own scenario generator. But there are often times when the interfaces on the PCIE and the USB need to be controlled together. This is where MSS generators are useful. MSS generators can feed transactions to multiple channels, unlike single-stream scenario generators and atomic generators, which can feed only one.

Figure 1: Multi-Stream Scenarios


Let’s try to create a MSS with the PCIE scenario and the USB scenario together.

1.    Use the macros to generate  the pcie_config channel and scenario

// filename:

`vmm_scenario_gen(pcie_config, “PCIE Configuration Scenario Generation”)

2.    Use the macros to generate  the usb_config channel and scenario

// filename:

`vmm_scenario_gen(usb_config, “USB Configuration Scenario Generation”)

3.    Define scenarios for the PCIE and USB that you want to control. An example for the PCIE control scenario is shown in the previous section.

4.    Next, extend your scenario from vmm_ms_scenario to put the above-defined scenarios together, and in the execute task define how to use them. Directed stimulus for the ETH module can be reused from the block-level testbench by encapsulating it in the MSS. The directed stimulus can be cut-and-pasted into the MSS, or the MSS can directly call an external function to execute the stimulus.

// filename:
class my_ms_scenario extends vmm_ms_scenario;

  pcie_control_scenario pcie_control_scenario;
  usb_control_scenario  usb_control_scenario;

  task execute();
    fork
      begin
        pcie_config_channel pcie_out_chan;
        usb_config_channel  usb_out_chan;
        // Directed test code goes here
      end
    join
  endtask: execute
endclass: my_ms_scenario

5.    Of course, in your top-level testbench, don’t forget to instantiate your multi-stream scenario and register the various channels it will use to talk to the PCIE and USB peripherals. You will also need to register the multi-stream scenario generator.

// filename:

vmm_ms_scenario_gen mss_gen;
my_ms_scenario my_ms_scenario;

pcie_config_channel pcie_config_chan;
usb_config_channel  usb_config_chan;

mss_gen = new("Multi-stream scenario generator");
my_ms_scenario = new();
pcie_config_chan = new("PCIE CONFIGURATION CHANNEL", "pcie_chan");
usb_config_chan  = new("USB CONFIGURATION CHANNEL", "usb_chan");

mss_gen.register_channel("PCIE_SCENARIO_CHANNEL", pcie_config_chan);
mss_gen.register_channel("USB_SCENARIO_CHANNEL", usb_config_chan);
mss_gen.register_ms_scenario("MSS_SCENARIO", my_ms_scenario);

mss_gen.stop_after_n_scenarios = 10;

As can be seen in Figure 1, you can combine scenarios to make multi-stream scenarios, or you can also have higher level multi-stream scenario generators calling other multi-stream scenario generators and directed tests.

Posted in Reuse, Stimulus Generation, Tutorial | Comments Off

You get real hierarchy with VMM1.2

Posted by Wei-Hua Han on 9th February 2010

If you look at VMM1.2 classes, you may find that almost all new() functions have an argument, vmm_object parent. The purpose of this argument is to build a parent-child hierarchy within a VMM1.2 based environment, so that VMM1.2 can provide an infrastructure where users can access the components inside the environment through hierarchical paths and names. This parent-child hierarchy also contributes to the implicit phasing implementation.

Here is a small example to illustrate how a hierarchy can be built with VMM1.2:

  1. class mike_c extends vmm_object;
  2. function new(vmm_object parent=null, string name="");
  3., name);
  4. endfunction
  5. endclass
  6. class ben_c extends vmm_object;
  7. function new(vmm_object parent=null, string name="");
  8., name);
  9. endfunction
  10. endclass
  11. class jason_c extends vmm_object;
  12. mike_c Mike;
  13. ben_c Ben;
  14. int weight;
  15. function new(vmm_object parent=null, string name="");
  16. bit is_set;
  17., name);
  18. weight = vmm_opts::get_object_int(is_set, this, "weight", 0, "set weight");
  19. endfunction
  20. function void build();
  21. Mike = new(this, "Mike");
  22. Ben = new(this, "Ben");
  23. endfunction
  24. endclass
  25. program p1;
  26. jason_c Jason;
  27. initial begin
  28. vmm_opts::set_int("Jason:weight", 10);
  29. Jason = new(null, "Jason");
  30.;
  31. vmm_object::print_hierarchy(Jason);
  32. $display("Jason has %0d children", Jason.get_num_children());
  33. $display(Jason.Mike.get_object_name());
  34. $display(Jason.Ben.get_object_hiername());
  35. $display(Jason.weight);
  36. end
  37. endprogram

In this small example, line 29 creates an object (Jason) of class jason_c with parent “null”, so Jason is a root component in the hierarchy. When is called in line 30, objects Mike and Ben are created and their parent is set to Jason. So in this small system we build the following hierarchy:




Jason has 2 children



This hierarchy can be printed by vmm_object method print_hierarchy().

Please note that unlike Verilog modules and instances where the hierarchy is defined as per the Verilog LRM, the VMM1.2 parent-child hierarchy is really user defined. It depends on how “parent” argument is specified when the object is created, and not on where the object variable is declared or created.

As for the component name, although you may choose to specify a name different from the variable name, it is good practice to keep them consistent, which makes the code more readable and avoids confusion.

From the above example, you can find that the hierarchical name for object Jason.Ben is “Jason:Ben”. VMM1.2 uses “:” as the hierarchical separator instead of “.”. The reason is that this hierarchical name is actually a made-up name, and we want to differentiate it from the semantic hierarchical reference name specified in Verilog/SystemVerilog which uses “.” as the separator.

There are many methods provided in VMM1.2 which help users to work with the parent-child hierarchy. Some of these methods are:

  • find_child_by_name(): finds the named object relative to this object
  • get_num_children(): gets the total number of children for this object
  • get_nth_child(): returns the nth child of this object
  • get_object_hiername(): gets the complete hierarchical name of this object
  • get_parent_object(): returns the parent of this object
  • get_root_object(): gets the root parent of this object
  • get_typename(): returns the name of the actual type of this object
  • is_parent_of(): returns true, if the specified object is a parent of this object
  • print_hierarchy(): prints the object hierarchy
  • set_parent_object(): sets or replaces the parent of this object

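As an illustration, a small sketch that walks Jason’s children using the methods above (Jason is the object from the earlier example):

```systemverilog
// Sketch: traverse the parent-child hierarchy with vmm_object methods.
vmm_object child;
for (int i = 0; i < Jason.get_num_children(); i++) begin
  child = Jason.get_nth_child(i);
  $display("child %0d: %s", i, child.get_object_hiername());
end
```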
Dr. Ambar Sarkar has explained how users can traverse the hierarchy in his blog post.

This parent-child hierarchical infrastructure is one of the most important mechanisms in VMM1.2. Many other VMM1.2 features rely on this infrastructure:

1.   Implicit phasing

Implicit phasing is new in VMM1.2. In implicit phasing, structural components (transactors) are aligned with each other automatically: the phase-specific methods are called automatically throughout the whole hierarchy in a top-down (for functions) or forked (for tasks) manner. Thus implicit phasing makes integrating verification IP into the simulation environment or into other structural components a lot easier, and also benefits VMM1.2 users building complicated verification environments.

2.   Factory replacement

The factory is an important feature that enables flexibility and reuse inside a verification environment. Because of the parent-child hierarchy, users can replace components, generated transactions or scenarios with their extension type or other objects by specifying a hierarchy path and names. Support for regular expressions when specifying hierarchies and names makes this utility very powerful.

For example, a code segment like the following overrides the type mike_c for the instance named Mike with mike_ext (the match string and this_type() accessor are shown for illustration):

mike_c::override_with_new("@%*:Mike", mike_ext::this_type(), log);
3.   Hierarchical configuration

In addition to supporting runtime configuration through command-line options or files, VMM1.2 also supports configuration of components by specifying their hierarchical path and name, thanks to the parent-child hierarchy. All these configuration utilities are provided through vmm_opts.

For example, a code segment like the following sets the property weight of object Jason to 10 using hierarchical configuration (the exact vmm_opts call is shown for illustration):

vmm_opts::set_int("%*:Jason:weight", 10);
Like factory, users can also use regular expression with hierarchical configuration.

If you have watched “Growing Pains”, you know that I am not quite accurate when I say

Jason has 2 children

He indeed has three…

Have fun with VMM1.2!

Posted in Debug, Reuse, Tutorial, VMM infrastructure | 1 Comment »

Exclusive Access of VMM Channel

Posted by Shankar Hemmady on 4th September 2009

Rahul V. Shah
Director of Customer Solutions, eInfochips

It is always challenging when it comes to controlled randomization. Constraints may be an easier way to think about it, but at chip level we are often interested in generating a few scenarios which are controlled in specific sequences. However, we still don’t want to develop scenarios that are very directed.

Let’s consider an example: we have an AHB bus interface with different masters. We have a DMA controller on the bus along with a few other masters. A chip-level stress scenario might include multiple masters performing data transfers to the memory interface on the bus. The transactions can be completely random. To make the scenario more interesting, we may want to add random reads from the status register and random reads of some read-only registers along with the other data transfers.

One such scenario can include handling an error/exception, where we want to read the status register, followed by the interrupt register, followed by a write transfer to clear the interrupt. Normally, we could generate such a scenario in a directed fashion. But that would take away the random behavior.

Earlier, such scenarios were implemented in a much more complex fashion, as it was difficult to create a random scenario while getting exclusive access whenever required. Here I describe a mechanism to get exclusive access to a channel when required, while retaining the benefits of randomness.

Scenario generators are used to generate a sequence of transactions. Multiple scenario generators may be connected to the same output channel, but such a connection does not prevent other generators from concurrently injecting transactions into that channel. A scenario is thus not guaranteed exclusive access to an output channel. Multiple threads in the same multi-stream scenario, multiple single-stream scenarios, or any transactor may inject transactions into the same channel.

If the requirement is to generate a sequence of transactions without any interruption from another generator or transactor, exclusive access to the channel can be obtained and later released when it is no longer required. Let’s consider the scenario below:

01. class my_scenario extends vmm_ms_scenario;
02. rand atm_cell atm_cell_inst;
03. atm_cell_channel atm_out_chan;
04. int MSC = this.define_scenario("MY SCENARIO", 0);
05. local bit [7:0] id;
07. function new();
09. atm_cell_inst = new;
10. endfunction: new
12. task execute(ref int n);
13. $cast(atm_out_chan, this.get_channel("ATM_SCENARIO_CHANNEL"));
14. atm_out_chan.grab(this);
15. repeat (10) begin
16. atm_out_chan.put(atm_cell_inst, .grabber(this));
17. repeat (10) @ (posedge clk);
18. end
19. atm_out_chan.ungrab(this);
20. endtask: execute
21. endclass: my_scenario

If a scenario requires exclusive access to a channel to ensure the uninterrupted execution of a sequence of transactions, it can grab the channel as shown in line 14, atm_out_chan.grab(this). This grabs the atm_out_chan channel and, once grabbed, access to this channel will not be granted to any other scenario until it is explicitly ungrabbed. As shown in lines 15-18, the scenario sends 10 sequential transactions with a delay of 10 clock cycles after grabbing the channel. To inject transactions into the grabbed channel, a reference to the scenario currently injecting the transaction must be provided to the put method, as shown in line 16, atm_out_chan.put(atm_cell_inst, .grabber(this)). After completion of the sequence of transactions, the channel is ungrabbed at line 19, atm_out_chan.ungrab(this).

When the channel is grabbed by one scenario and other scenarios try to put transactions into the same channel, the put method blocks until the channel is ungrabbed by the scenario that previously grabbed it. To avoid blocking on a grabbed channel, we can check the status of the channel using the vmm_channel::is_grabbed() function, which returns 1 if the channel is grabbed.
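For example, a producer that must not block on a grabbed channel can poll is_grabbed() before injecting (a sketch; the back-off delay is arbitrary):

```systemverilog
// Sketch: yield while another scenario holds exclusive access,
// then inject once the channel has been ungrabbed.
task try_put(atm_cell_channel chan, atm_cell cell);
   while (chan.is_grabbed())
      #10; // back off instead of blocking inside put()
   chan.put(cell);
endtask
```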

Posted in Communication, Stimulus Generation, Transaction Level Modeling (TLM), Tutorial | Comments Off

VMM data macros are cool, but how do I customize the constructor?

Posted by Shankar Hemmady on 27th August 2009

Srinivasan Venkataramanan, CVC

Pawan Bellamkonda, Brocade

During our recent VMM training at CVC, we learned about the VMM data member macros, and our engineers liked them. Some of our teams at Brocade have started adopting them in their projects right away! We can avoid much lengthy code and increase readability with these macros, and we will surely avoid silly mistakes that might be hard to debug later.

However, as with any built-in automation, there are always scenarios wherein user-level customization of some or all of the methods is required. VMM provides this flexibility by allowing the default behavior of the virtual methods of the vmm_data class to be overridden. In one of our blocks we needed to tweak the constructor of the transaction. One question that perplexed us was:

“I have built a transaction class extending from vmm_data. We have used the short hand macro `vmm_data_member…..  to get all the functions automatically. But while creating an object of this transaction, we want to pass a configuration class object as argument in the new function. How should we override the new() function alone when we use the short hand macros? When we tried using do_new() (like overriding other functions), it did not work.”

As we explored a bit, we found another macro specifically meant for this (`vmm_data_new; the exact name is worth confirming against your VMM release):

`vmm_data_new(s2p_xactn)

This macro should be used before the data-member macros. It lets the succeeding macros do all the work except the “new” function implementation.

class s2p_xactn extends vmm_data;
rand bit [7:0] pkt_len, pkt_pld;
static vmm_log log = new("s2p_xactn", "class");

function new(int my_own_arg = 2);
`vmm_note (log, $psprintf ("my val is %0d", my_own_arg));
endfunction : new

`vmm_data_new(s2p_xactn)
`vmm_data_member_begin(s2p_xactn)
`vmm_data_member_scalar(pkt_len, DO_ALL)
`vmm_data_member_scalar(pkt_pld, DO_ALL)
`vmm_data_member_end(s2p_xactn)
endclass : s2p_xactn

As with traditional martial arts, functional verification too has slightly different styles/requirements that make each project interesting and unique. To its credit, we feel that VMM is as proven as traditional martial arts: it can be tailored to different requirements while providing a standardized means of combat.

Posted in Automation, Coding Style, Customization, Modeling | 2 Comments »

class factory

Posted by Wei-Hua Han on 26th August 2009

Weihua Han, CAE, Synopsys

As a well-known Object-Oriented technique, the class factory has actually been applied in VMM since inception. For instance, in the vmm atomic and scenario generators, by assigning different blueprints to the randomized_obj and scenario_set[] properties, these generators can generate transactions with user-specified patterns. Using the class factory pattern, users create an instance with a pre-defined method (such as allocate() or copy()) instead of the constructor. This pre-defined method creates an instance of whatever type is currently registered with the factory, not merely of the declared type of the variable being assigned.
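The blueprint mechanism mentioned above can be sketched as follows (atm_cell and its `vmm_atomic_gen-generated atm_cell_atomic_gen class are assumed for illustration):

```systemverilog
// Sketch: substitute the atomic generator's blueprint so every
// generated transaction is randomized from the extended class.
class atm_cell_ext extends atm_cell;
   // user-specific constraints would go here
endclass

atm_cell_atomic_gen gen = new("gen", 0);
atm_cell_ext blueprint = new;

initial begin
   gen.randomized_obj = blueprint; // blueprint substitution
   gen.start_xactor();
end
```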

VMM1.2 now simplifies the application of the class factory pattern across the whole verification environment, so that users can easily replace any kind of object (transaction, scenario or transactor) with a similar object. Users can follow the steps below to apply the class factory pattern within the verification environment.

1. define “new”, “allocate”, “copy” methods for a class and create the factory for the class.

class vehicle_c extends vmm_object;

// defines the new function; each argument should have default values

function new(string name = "", vmm_object parent = null);, name);
endfunction

// defines allocate and copy methods

virtual function vehicle_c allocate();
   vehicle_c it;
   it = new(this.get_object_name(), get_parent_object());
   allocate = it;
endfunction

virtual function vehicle_c copy();
   vehicle_c it;
   it = new this;
   copy = it;
endfunction

// these two macros will define necessary methods for class factory and create factory for the class

`vmm_typename(vehicle_c)
`vmm_class_factory(vehicle_c)

endclass
`vmm_typename, `vmm_class_factory will implement the necessary methods to support the class factory pattern, like get_typename(), create_instance(), override_with_new(), override_with_copy(), etc.

Users can also use `vmm_data_member_begin and `vmm_data_member_end to implement the “new”, “copy”, “allocate” methods conveniently.

2. create an instance using the pre-defined “create_instance()” method

To use the class factory, the class instance should be created with the pre-defined create_instance() method instead of the constructor. For example:

class driver_c extends vmm_object;

vehicle_c myvehicle;

function new(string name = "", vmm_object parent = null);, name);
endfunction

task drive();
   // create an instance from the create_instance method
   myvehicle = vehicle_c::create_instance(this, "myvehicle");
   $display("%s is driving %s(%s)", this.get_object_name(),
            myvehicle.get_object_name(), myvehicle.get_typename());
endtask

endclass

program p1;

driver_c Tom = new("Tom", null);

initial begin;
end

endprogram

For this example, the output is:

Tom is driving myvehicle(class $unit::vehicle_c)

3.  define a new class

Let’s now define the following new class which is derived from the original class vehicle_c:

class sedan_c extends vehicle_c;

`vmm_typename(sedan_c)

function new(string name = "", vmm_object parent = null);, parent);
endfunction

virtual function vehicle_c allocate();
   sedan_c it;
   it = new(this.get_object_name(), get_parent_object());
   allocate = it;
endfunction

virtual function vehicle_c copy();
   sedan_c it;
   it = new this;
   copy = it;
endfunction

`vmm_class_factory(sedan_c)

endclass

And we would like to create the myvehicle instance from this new class without modifying the driver_c class.

4. override the original instance or type with the new class

VMM1.2 provides two methods for users to override the original instances or type.

  • override_with_new(string name, new_class factory, vmm_log log, string fname = "", int lineno = 0)

With this method, when create_instance() is called, a new instance of new_class is created through factory.allocate() and returned.

  • override_with_copy(string name, new_class factory, vmm_log log, string fname = "", int lineno = 0)

With this method, when create_instance() is called, a new instance of new_class is created through factory.copy() and returned.

For both methods, the first argument is the instance name, as specified in the create_instance() method, that users wish to override with the type of new_class. Users can use the powerful name-matching mechanism defined in VMM to specify whether the override applies to one dedicated instance or to all instances of a class in the whole verification environment.

The code below will override all vehicle_c instances with sedan_c type in the environment:

program p1;

driver_c Tom = new("Tom", null);

vmm_log log;

initial begin
   // override all vehicle_c instances with type of sedan_c
   vehicle_c::override_with_new("@%*", sedan_c::this_type(), log);;
end

endprogram
And the output of the above code is:

Tom is driving myvehicle(class $unit::sedan_c)

If users only want to override one dedicated instance with a copy of another instance, they can call override_with_copy using code such as the following (the instance path is shown for illustration):

sedan_c my_sedan = new("my_sedan", null);
vehicle_c::override_with_copy("@Tom:myvehicle", my_sedan, log);
As the above example shows, with the class-factory short-hand macros provided in VMM1.2, users can easily use the class factory pattern to replace transactors, transactions and other verification components without modifying the testbench code. I find this very useful for increasing the reusability of verification components.

Posted in Configuration, Modeling, SystemVerilog, Tutorial, VMM, VMM infrastructure | 1 Comment »

How to connect your SystemC Reference Models to your verification VMM based framework

Posted by Shankar Hemmady on 17th August 2009


Nasib Naser, PhD, CAE, Synopsys

In this blog I will discuss the use model demonstrated in Figure 1, where a VMM layered testbench is used to verify an RTL DUT against a SystemC transaction-level model. SystemVerilog allows for the creation of reusable layered testbench architectures, and the VMM methodology provides the basis for such a layered architecture. With a layered approach, transaction-level reference models can be easily integrated at the appropriate level to provide self-checking functions. In this use model, the VMM function layer communicates with the SystemC model using the TLI mechanism to perform read/write transactions, and, using the same testbench scenarios, the VMM command layer drives the DUT at the pin level.


Figure 1 – VMM driving TLM and RTL with checking

Synopsys’ VCS functional verification solution addresses the challenge of this use model with its SystemC-SystemVerilog Transaction-Level Interface (TLI). Using TLI, SystemC interface methods can be invoked in SystemVerilog and vice versa. The value-add of using TLI is that the SystemVerilog DPI-based communication code that synchronizes both domains is automatically generated.

Let’s take a look at the various code components in SystemC and SystemVerilog, based on the VMM methodology, that enable such a verification use model. In the following example we define the read and write transactions as SC interface methods.

class Buf_if: virtual public sc_interface {

// do the pure virtual function read()/write() declarations here
virtual void read(unsigned int addr, unsigned int* data) = 0;

virtual void write(unsigned int addr, unsigned int data) = 0;
};

The following code shows a VMM transactor invoking the SystemC read and write transactions at the function layer. The VMM transactor tb_master communicates with the SystemC TLM model using the vmm_channel tb_mast_out_ch1 and with the RTL model using the vmm_channel tb_mast_out_ch2, as shown in the following code:

class tb_master extends vmm_xactor;
virtual tb_if.master ifc;
tb_data_channel tb_master_in_ch;

extern virtual task main();
endclass: tb_master

task tb_master::main();
tb_data tr, tr_out;
forever begin
   // Send the Instruction to SC Reference Model
   // Send the Instruction to RTL
end
endtask: main

The following code shows VMM reference Transactors calling the SystemC transaction functions via the adaptor alu_tl_if_adpt_vlog which was automatically generated by VCS TLI.

class tb_ref extends vmm_xactor;

tb_data_channel     tb_ref_in_ch;

alu_tl_if_adpt_vlog alu_tl_if_adpt_vlog_inst0;

extern function new (string instance,
integer stream_id = -1,

tb_data_channel tb_ref_in_ch = null);

extern virtual task main();

endclass: tb_ref

task tb_ref::main();
tb_data tr, tr_out;
forever begin
. . .
SA_SB_OP_GO : begin
. . .
tr_out.data_out = d;
. . .
end
. . .
end
endtask: main

This VCS TLI use model provides a complete and easy way to integrate blocking and non-blocking SystemC reference models into a VMM-based multi-layer verification environment.

Posted in SystemC/C/C++, SystemVerilog, Transaction Level Modeling (TLM), Tutorial, VMM | Comments Off

Navigate through sea of log messages in a SoC env – smart application of vmm_log

Posted by Shankar Hemmady on 13th August 2009

Srinivasan Venkataramanan & Bagath Singh, CVC Pvt. Ltd.; Jaya Chelani, Quartics Technologies

Consider an SoC with several interfaces being verified. It is quite common to have each interface report its activity via display messages. Now let’s take up a case wherein a specific user is debugging, say, the AXI interface for a failure, performance analysis, etc. While the full log file provides the overall picture, it is quite possible to get lost quickly in the sea of messages.

Wouldn’t it be nice to have AXI report all its activities to its own log file? This would greatly reduce the analysis/debug time for a given task. However, hacking the testbench itself via `ifdef, configuration switches, etc. is not good practice. Also, such a change might be needed for only a few tests/runs, not the entire regression.

Huh, such a common scenario, you wonder. Yes, and that’s why vmm_log has that capability built in. By default, loggers direct messages to STDOUT, but they can easily be asked to direct them to a specific file. The function vmm_log::log_start(file_pointer) does just this and can be used at will during run time.

1. program automatic sep_log_files;

2.   `include ""

3.   vmm_log axi_logger, cvc_prop_if_logger;

4.   int axi_fp;

5.   initial begin : b1

6.     axi_fp = $fopen ("axi.log", "w");

7.     axi_logger = new ("AXI Log", "0");

8.     cvc_prop_if_logger = new ("CVC Log", "0"); // Defaults to STDOUT

9.     axi_logger.log_start(axi_fp); // AXI alone is being sent to a separate LOG file

10.    `vmm_note (axi_logger, "Message from AXI Interface");

11.    `vmm_note (cvc_prop_if_logger, "Messages from CVC Proprietary Interface");

12.   end : b1

13.  endprogram : sep_log_files

Line 6 opens a file named “axi.log” for writing. Line 9 directs all messages from axi_logger to the new log file pointer (created in line 6). Note that one can reach any logger inside the env via its hierarchical path, so this can be done at the testcase level if desired.

A few additional notes

1. As these methods work on specific log instance, they ought to be used once the log instance is constructed, typically after the vmm_env::build() phase.

2. There is also a counterpart, vmm_log::log_stop(file_pointer), to stop logging to a separate file (messages continue to be sent to STDOUT).

3. By design, the vmm_log::log_start() sends a copy of the message to the specified file in addition to STDOUT. This is done so that the complete log (sent to STDOUT) is intact from all loggers in the environment.

4. If it is desired by the user to avoid the duplication of log messages, the user can use an explicit call to vmm_log::log_stop(STDOUT).
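Putting notes 3 and 4 together, the duplication can be suppressed like this (a sketch, assuming STDOUT is the predefined descriptor accepted by log_stop()):

```systemverilog
// Sketch: send AXI messages only to axi.log, not to the console.
axi_logger.log_start(axi_fp); // copy AXI messages to axi.log
axi_logger.log_stop(STDOUT);  // suppress the duplicate copy on STDOUT
```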

5. These two methods can also be useful for performance measurement during a specific time window in a simulation run. One can set up notifiers to indicate the start & stop of the time window, and run a parallel thread to correspondingly perform log_start()/log_stop(). More on this in another blog post.

Posted in Debug, Messaging, SystemVerilog, Tutorial | 2 Comments »

Address alignment in Memory Allocation Manager

Posted by Janick Bergeron on 10th August 2009


Janick Bergeron, Synopsys Fellow

The VMM Memory Allocation Manager (vmm_mam) can be used to manage a shared address space. It does not physically allocate memory but dishes out address ranges that are guaranteed to be previously unallocated.

By default, the only constraints on the Memory Allocation Manager are 1) the entire address range must not be already allocated, 2) the starting offset must be greater than or equal to the base offset of the memory, and 3) the ending offset must be less than or equal to the offset limit.

That’s it.

So randomly allocating memory will result in memory regions starting at random positions. For example, the following code allocates ten 4-byte regions in a 256-byte memory:

program test;

`include ""
`include ""

vmm_mam_cfg    cfg;
vmm_mam        mgr;
vmm_mam_region bfr;

initial begin
   cfg = new;
   cfg.n_bytes      = 1;
   cfg.start_offset = 8'h00;
   cfg.end_offset   = 8'hFF;

   mgr = new("Mem Mgr", cfg);

   repeat (10) begin
      bfr = mgr.request_region(4);
      $write("%s\n", bfr.psdisplay());
   end
end
endprogram

And produces the following results:


But what if you needed the regions to be aligned on quad-word boundaries?


You can add constraints by extending the vmm_mam_allocator object and supplying the new allocator to the request_region() call, or by making it the default allocation policy on the Memory Allocation Manager instance:

program test;

`include ""
`include ""

class qword_aligned_allocator extends vmm_mam_allocator;
   constraint qword_aligned {
      start_offset[1:0] == 0;
   }
endclass

vmm_mam_cfg    cfg;
vmm_mam        mgr;
vmm_mam_region bfr;

qword_aligned_allocator alloc = new;

initial begin
   cfg = new;
   cfg.n_bytes      = 1;
   cfg.start_offset = 8'h00;
   cfg.end_offset   = 8'hFF;

   mgr = new("Mem Mgr", cfg);

   repeat (5) begin
      bfr = mgr.request_region(4, alloc);
      $write("%s\n", bfr.psdisplay());
   end

   mgr.default_alloc = alloc;

   repeat (5) begin
      bfr = mgr.request_region(4);
      $write("%s\n", bfr.psdisplay());
   end
end
endprogram



The resulting regions are now QWORD aligned:


Other kinds of constraints you can add via a customized allocator include:

  • Region must be within a 1-k address page
  • 75% of regions must be in the upper half of the address space
  • Regions must be allocated from a pool of pre-allocated regions

Remember that regions are allocated by randomizing the allocator object. That means that a highly directed and procedural allocation policy (such as the last one mentioned above) can be implemented using the post_randomize() method.
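For instance, the second bullet above can be sketched with a distribution constraint (illustrative only; it assumes the 256-byte memory from the earlier examples, where address bit 7 selects the upper half):

```systemverilog
// Sketch: bias roughly 75% of allocated regions into the upper
// half of the 256-byte address space.
class upper_half_allocator extends vmm_mam_allocator;
   constraint upper_half_bias {
      start_offset[7] dist {1'b1 := 3, 1'b0 := 1};
   }
endclass
```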

Once you have a region, if you have associated the Memory Allocation Manager with a RAL memory abstraction class, you may access it as if it were a tiny memory in and of itself. The integration of MAM and RAL effectively implements a mini virtual-memory system.

Posted in Customization, Tutorial | Comments Off

How to connect your SystemC reference model to your verification framework

Posted by Janick Bergeron on 8th July 2009

Nasib Naser, Phd, CAE, Synopsys

Architects and designers have many reasons to start using the SystemC language to describe the high-level specifications of their target System-on-Chip (SoC). In this blog I will not discuss the “why” of the SystemC selection but the “how” of bringing these models into the team’s design verification flow.

Mixed abstractions arise because designers often develop models at the transaction level to capture the correct architecture and analyze system performance, and do so fairly early in the design cycle. Components of an SoC developed in high-level languages such as C, C++, SystemC and SystemVerilog are increasingly in demand for obvious reasons: they are faster to write and faster to simulate. High-level reference models have become increasingly important in verifying the implementation details at the RTL level.

Although SystemVerilog provides different methods (PLI, VPI, DPI, etc.) to support communication between the SystemVerilog and C/C++ domains, it is not so straightforward for SystemC interface methods. One important reason is that these SystemC methods can be “blocking”, i.e. they can “consume” time, and they can return values as well. Users would otherwise have to build a very elaborate mechanism to maintain synchronization between the simulations running in the SystemVerilog and SystemC domains.

Synopsys’ VCS functional verification solution addresses the challenge of this use model with its SystemC-SystemVerilog Transaction-Level Interface (TLI). With TLI, VCS automatically generates adapters to be instantiated in each domain to support SystemVerilog blocking calls of SystemC methods, or vice versa. These TLI adapters take care of the synchronization between both domains.

To demonstrate the ease of use of this unique interface, we start by identifying the SystemC interface methods that SystemVerilog requires access to. In the following example we define the Read and Write transactions as SC interface methods as such:

class Mem_if : virtual public sc_interface {
public :

   // do the pure virtual function declaration here
virtual void rd_data(int addr, int &data)=0;
virtual void wr_data(int addr, int data)=0;
};

We manually create an interface configuration file that declares the methods required for the interface. The methods themselves are declared in a file called Mem_if.h. The content of this configuration file is as follows:

interface Mem_if
direction sv_calls_sc
verilog_adapter Mem_if_adapt_vlog
systemc_adapter Mem_if_adapt_sc

#include "Mem_if.h"

task rd_data
input int addVal
inout& int dataVal

task wr_data
input int addVal
input int dataVal


VCS uses this configuration file to generate the necessary SystemC and SystemVerilog code that enables complete synchronization. The command line is as follows:

%syscan -idf Mem_if.idf

This operation creates the following helper files:


The helper file contains a SystemVerilog Interface called Mem_if_adapt_vlog which declares the following tasks:

task wr_data(input int addVal, input int dataVal);
task rd_data(input int addVal, inout int dataVal);

These tasks could then be wrapped and used in the SystemVerilog testbench to access SystemC methods as demonstrated in the following code:

module testbench;
reg clk;
int data;
int cnt;

// Instantiate the helper interface
Mem_if_adapt_vlog #(0) mem_if_adapt_vlog_inst();


// Calling SC TLI function **Write Data**
sv_wr_data(input int addr, input int data);

// Calling SC TLI function **Read Data**
sv_rd_data(input int addr, output int data);

. . .

// Use the wrapper code
always @(posedge clk)
   if (cnt < 100)
      cnt = cnt + 1;

always @(posedge clk)
   if (cnt < 100)
      . . .

endmodule

Posted in SystemC/C/C++, SystemVerilog, Tutorial, VMM | 1 Comment »

Protocol Layering Using Transactors

Posted by Janick Bergeron on 9th June 2009

jb_blog Janick Bergeron
Synopsys Fellow

Bus protocols, such as AHB, are ubiquitous and often used in examples because they are simple to use: some control algorithm decides which address to read or write and what value to expect or to write. Pretty simple.

But data protocols can be made a lot more complex because they can often be layered arbitrarily. For example, an ethernet frame may contain a segment of an IP frame that contains a TCP packet which carries an FTP frame. Some ethernet frames in that same stream may contain HDLC-encapsulated ATM cells carrying encrypted PPP packets.

How would one generate stimulus for these protocol layers?

One way would be to generate a hierarchy of protocol descriptors representing the layering of the protocol. For example, for an ethernet frame carrying an IP frame, you could do:

class eth_frame extends vmm_data;
rand bit [47:0] da;
rand bit [47:0] sa;
rand bit [15:0] len_typ;
rand ip_frame payload;
rand bit [31:0] fcs;
endclass


class ip_frame extends vmm_data;
eth_frame transport;
rand bit [3:0] version;
rand bit [3:0] IHL;

rand bit [7:0] data;
endclass

That works if you have exactly one IP frame per ethernet frame. But what if your IP frame does not fit into the ethernet frame and it needs to be segmented? This approach works when you have a one-to-one layering granularity, but not when you have to deal with one-to-many (i.e. segmentation), many-to-one (i.e. reassembly, concatenation) or plesio-synchronous (e.g. justification) payloads.

This approach also limits the reusability of the protocol transactions: the ethernet frame above can only carry an IP frame. How could it carry other protocols? or random bytes? How could the IP frame above be transported by another protocol?

And let’s not even start to think about error injection…

One solution is to use transactors to perform the encapsulation. The encapsulator would have an input channel for the higher layer protocol and an output channel for the lower layer protocol.

class ip_on_ethernet extends vmm_xactor;
ip_frame_channel in_chan;
eth_frame_channel out_chan;
endclass

The protocol transactions are generic and may contain generic references to their payload or transport layers.

class eth_frame extends vmm_data;
vmm_data transport[$];
vmm_data payload[$];

rand bit [47:0] da;
rand bit [47:0] sa;
rand bit [15:0] len_typ;
rand bit [  7:0] data[];
rand bit [31:0] fcs;
endclass


class ip_frame extends vmm_data;

vmm_data transport[$];
vmm_data payload[$];

rand bit [3:0] version;
rand bit [3:0] IHL;

rand bit [7:0] data;
endclass

The transactor’s main() task simply waits for higher-layer protocol transactions, packs them into a byte stream, then lays the byte stream into the payload portion of new instances of the lower-layer protocol.

virtual task main();;

forever begin
   bit [7:0] bytes[];
   ip_frame ip;
   eth_frame eth;

   this.in_chan.get(ip);

   // Pre-encapsulation callbacks (for delay & error injection)...

   ip.byte_pack(bytes, 0);
   if (bytes.size() > 1500) begin
      `vmm_error(log, "IP packet is too large for Ethernet frame");
      continue;
   end

   eth = new(); // Should really use a factory here

   eth.da = …; = …;
   eth.len_typ = 'h0800;  // Indicate IP payload = bytes;
   eth.fcs = 32'h0000_0000;

   // Pre-tx callbacks (for delay and ethernet-level error injection)...

   this.out_chan.put(eth);

   // Post-encapsulation callbacks (for functional coverage)...
end
endtask: main


When setting the header fields in the lower-layer protocol, you can use values from the higher-layer protocols (like setting the len_typ field to 0x0800 above, indicating an IP payload), you can use values configured in the encapsulator (e.g. a routing table), or they can be randomly generated with appropriate constraints:

if (!route.exists(ip.da)) begin
   bit [47:0] da = {$urandom, $urandom};  // $urandom is only 32-bit

   da[41:40] = 2'b00; // Unicast, global address
   route[ip.da] = da;
end
eth.da = route[ip.da];

The protocol layers observed by your DUT are then defined by the combination and order of these encapsulation transactors.

vmm_scheduler instances may also be used at various points in the layering to combine multiple streams (maybe carrying different protocol stacks and layers) into a single stream.
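Such a merge point can be sketched as follows (the vmm_scheduler constructor and new_source() signatures shown are from memory and should be checked against the VMM reference):

```systemverilog
// Sketch: merge two ethernet streams, possibly carrying different
// protocol stacks, into the single stream seen by the DUT driver.
eth_frame_channel ip_stream  = new("ip_stream",  "chan");
eth_frame_channel atm_stream = new("atm_stream", "chan");
eth_frame_channel to_driver  = new("to_driver",  "chan");

vmm_scheduler sched = new("sched", "mux", to_driver);

initial begin
   void'(sched.new_source(ip_stream));
   void'(sched.new_source(atm_stream));
   sched.start_xactor();
end
```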

Posted in Modeling, Modeling Transactions, Phasing, Structural Components, SystemVerilog, Tutorial | 3 Comments »

How VMM can help control transactors easily

Posted by Fabian Delguste on 29th May 2009

Fabian Delguste / Synopsys Verification Group

Controlling VMM transactors can sometimes be a bit hectic. A typical situation I see is when I have registered a list of transactors for driving some DUT interfaces but only want to start a few of them. Another common situation is when I want to turn off scenario generators and replay transactions directly from a file. Yet another task I often face is registering transactor callbacks without knowing where they are exactly located in the environment.

As you can see, there are many situations where fine-grain functional control of transactors is necessary.

Since VMM 1.1 came out, I have been using a new base class called vmm_xactor_iter that allows accessing any transactor directly by name. All I need to do is construct a vmm_xactor_iter with a regular expression and use the iterator to loop through all matching transactors.

To understand better how this base class works, I’ll show you a real-life example. The scope of this example is to show how to start generators only when vmm_channel playback has not been turned on. As you know, vmm_channel can be used to replay transactions directly from files that were recorded in a previous session. This can speed up simulation by turning off constraint solving.

1. string match_xactors = (cfg.mode == tb_cfg::PLAYBACK) ? "/Drivers/" : "/./";


3. `foreach_vmm_xactor(vmm_xactor, "/./", match_xactors)

4. begin

5.  `vmm_note(log, $psprintf("Starting %s", xact.get_instance()));

6.   xact.start_xactor();

7. end

  • In line 1, the match_xactors string takes the value "/Drivers/" when playback mode is selected; otherwise it takes "/./". In the first case only transactors whose names match "Drivers" are selected; otherwise all transactors, including generators, match
  • In line 3, the `foreach_vmm_xactor macro creates a vmm_xactor_iter from the regular expressions above. The macro traverses all matching transactors, making each one available through the xact object so it can be started

In case you’d like more control over vmm_xactor_iter, you can use its first() / next() / xactor() methods to traverse the matching transactors. It is also possible to check that the regular expression returns at least one transactor. Here is the same example written using these methods.

1. string match_xactors = (cfg.mode == tb_cfg::PLAYBACK) ? "/Drivers/" : "/./";

2. vmm_xactor xact;

3. vmm_xactor_iter iter = new("/./", match_xactors);

4. if (iter.xactor() == null)

5.   `vmm_fatal(log, $psprintf("No matching transactors for '%s'", match_xactors));

7. while (iter.xactor() != null) begin

8.   xact = iter.xactor();

9.   xact.start_xactor();

10.;

11. end

Should you need to reclaim the memory required to store all these transactors, you can enable garbage collection by invoking vmm_xactor::kill().
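The same iteration pattern also solves the callback-registration problem mentioned at the start of this post (a sketch; ahb_master and my_ahb_cb are hypothetical names for a transactor class and its callback extension):

```systemverilog
// Sketch: append a callback to every matching AHB transactor
// without knowing where the transactors live in the environment.
my_ahb_cb cb = new;

`foreach_vmm_xactor(ahb_master, "/./", "/AHB/")
begin
   xact.append_callback(cb);
end
```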

The good news is that vmm_xactor_iter allows me to:

  • Configure the transactor without knowing its hierarchy
  • Provide dynamic access to transactors
  • Reduce code for multiple configurations and callback extensions
  • Use powerful regular expressions for name matching
  • Reuse transactors: no need to modify code when changing the environment content

I hope you find vmm_xactor_iter, and all of the other VMM features, as useful as I do.

Posted in Configuration, Structural Components, SystemVerilog, Tutorial | Comments Off