Verification Martial Arts: A Verification Methodology Blog

Archive for the 'UVM' Category

Universal Verification Methodology (UVM)

Avoiding Redundant Simulation Cycles with your UVM VIP based simulation with a Simple Save-Restore Strategy

Posted by paragg on 6th March 2014

In many verification environments, you reuse the same configuration cycles across different testcases. These cycles might involve writing to and reading from different configuration and status registers, loading program memories, and other similar tasks to set up a DUT for the targeted stimulus. In many of these environments, the time taken by these configuration cycles is very long. There is also a lot of redundancy, as verification engineers have to run the same set of verified configuration cycles for different testcases, leading to a loss in productivity. This is especially true for complex verification environments with multiple interfaces that require different components to be configured.

The Verilog language provides an option of saving the state of the design and the testbench at a particular point in time, so that you can restore the simulation to the same state later and continue from there. This can be done by adding the appropriate built-in system tasks to the Verilog code. VCS provides the same capability from its Unified Command line Interpreter (UCLI).

However, it is not enough to simply restore a simulation from the saved state. For different simulations, you may want to apply different random stimulus to the DUT. In the context of UVM, you would want to run different sequences from a saved state, as shown below.

In the above flow, apart from the last step, which varies to a large extent, the rest of the steps need no iteration once established.

Here we explain how to achieve this strategy with the simple UBUS example available in the standard UVM installation. Simple changes are made in the environment to show what needs to be done to bring in this additional capability. Within the existing set of tests, two of them, “test_read_modify_write” and “test_r8_w8_r4_w4”, differ only with respect to the master sequence being executed – i.e. “read_modify_write_seq” and “r8_w8_r4_w4_seq” respectively.

Let’s say we have a scenario where we want to save a simulation once the reset_phase is done and then start executing different sequences after the reset_phase in the restored simulations. To demonstrate such a scenario with the UBUS tests, we introduced a delay in the reset_phase of the base test (in a real test, this may correspond to PLL lock, DDR initialization, or basic DUT configuration).
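As a rough sketch (the delay value and objection message are illustrative, not the exact code from the original post), the reset_phase of the base test could be stretched like this:

```systemverilog
// Illustrative only: a long delay in the base test's reset_phase,
// standing in for PLL lock, DDR initialization or basic configuration
class ubus_example_base_test extends uvm_test;
  `uvm_component_utils(ubus_example_base_test)

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  virtual task reset_phase(uvm_phase phase);
    phase.raise_objection(this, "long initialization");
    #100us; // placeholder for the time-consuming bring-up activity
    phase.drop_objection(this);
  endtask
endclass
```

The save point would then be taken once this phase completes, so none of the restored simulations pay this cost again.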

The following snippet shows how the existing tests are modified to bring in the capability of running different tests in different ‘restored’ simulations.

As evident in the code, we made two major modifications:

  • Shifted the setting of the phase default_sequence from the build phase to the start of the main phase.
  • Took the name of the sequence as a command-line argument and processed the string appropriately in the code to execute the sequence on the relevant sequencer.
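A rough sketch of what these two modifications might look like (the +SEQ_NAME plusarg, message tags, and class name are illustrative; the sequencer path follows the standard UBUS example testbench, and uvm_pkg is assumed to be imported):

```systemverilog
class ubus_restore_test extends ubus_example_base_test;
  `uvm_component_utils(ubus_restore_test)

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  virtual task main_phase(uvm_phase phase);
    string            seq_name;
    uvm_object        obj;
    uvm_sequence_base seq;

    // 1. The sequence is now selected at the start of main_phase,
    //    i.e. after the point at which the simulation was saved.
    // 2. The sequence name comes from the command line, so each
    //    restored simulation can run a different sequence.
    if (!$value$plusargs("SEQ_NAME=%s", seq_name))
      `uvm_fatal("NOSEQ", "No +SEQ_NAME=<sequence_type> specified")

    obj = factory.create_object_by_name(seq_name);
    if (obj == null || !$cast(seq, obj))
      `uvm_fatal("BADSEQ", {"Cannot create sequence ", seq_name})

    phase.raise_objection(this);
    seq.start(ubus_example_tb0.ubus0.masters[0].sequencer);
    phase.drop_objection(this);
  endtask
endclass
```

Each restored simulation can then be launched with a different +SEQ_NAME value, e.g. +SEQ_NAME=r8_w8_r4_w4_seq.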

As you can see, the changes are kept to a minimum. With this, the above generic framework is ready to be simulated. In VCS, one way to enable the save/restore flow is to pause the simulation at the desired point from the UCLI prompt, save the state with the UCLI save command, and later bring up new simulations from that state with the restore command.

This strategy thus helps in optimal utilization of compute resources with simple changes to your verification flow. Hope this was useful, and that you can easily make the changes in your verification environment to adopt this flow and avoid redundant simulation cycles.

Posted in Automation, Coding Style, Configuration, Creating tests, Customization, Optimization/Performance, Organization, Reuse, Stimulus Generation, SystemVerilog, Tutorial, UVM, Uncategorized, Verification Planning & Management | 1 Comment »

SNUG-2012 Verification Round Up – Miscellaneous Topics

Posted by paragg on 29th March 2013

In this final installment of the series of blogs summing up the various SNUG verification papers of 2012, I try to cover the user papers on Design IP/Verification IP and SystemC and SystemVerilog co-simulation. Please find my earlier blogs on the other domains here: SystemVerilog Language, Methodologies & VCS technologies.

The DesignWare core USB 3.0 controller (DWC_usb3) can be configured as a USB 3.0 device controller. When verifying a system that comprises a DWC_usb3 device controller, the verification environment is responsible for bringing the controller up to its proper operation mode so that it can communicate with the USB 3.0 host. The paper “Integrating DesignWare USB3.0 Device Controller In a UVM-based Testbench” by Ning Guo of Paradigm Works describes the process of configuring and driving the DWC_usb3 device controller in a UVM-based verification environment using the Discovery USB 3.0 Verification IP. The paper also describes how the verification environment needs to be created so that it is highly configurable and reusable.

The AMBA 4 ACE specification enables system-level cache coherency across clusters of multicore processors, such as the ARM Cortex-A15 and Cortex-A7 MPCore™ processors. This ensures optimum performance and power efficiency of complex SoC designs. However, the design complexity associated with these capabilities is also higher, and it throws up new verification challenges. In the paper “Creating AMBA4 ACE Test Environment With Discovery VIP”, Whitney Huang and Sean Chou of MediaTek Inc. demonstrate how to tackle these complex verification challenges and increase verification productivity by using the Synopsys Discovery AMBA ACE VIP.

The paper “Verification Methodology of Dual NIC SOC Using VIPs” by A.V. Anil Kumar, Mrinal Sarmah, and Sunita Jain of Xilinx India Technology Services Pvt. Ltd. talks about how various features of the Synopsys PCIe and Ethernet Verification IPs can be exploited to help in the efficient verification of the DUT across various traffic configurations. The paper explores how the VIP Application Programming Interfaces (APIs) can be leveraged in the test cases to reach high functional coverage numbers in a very short duration. The authors also show how a dual-NIC verification environment can effectively use Ethernet VIP APIs to test various Media Access Control (MAC) features, and finally conclude with how the implementation can be reused across future revisions of their design.

The ability to analyze the performance of an SoC at an early stage of the design can make a significant difference to the end product, leading to a more accurate and earlier estimate of the expected performance. Dayananda Yaraganalu Sadashivappa, Igal Mariasin, and Jayaprakash Naradasi of SanDisk India Device Design Centre Pvt. Ltd., in the paper “Generic MLM environment for SoC Performance Enhancement”, outline the solution they built using the Synopsys VIP models. The VIPs were used in conjunction with the interconnect, which in this case is a Multi-Layer-Matrix (MLM). The environment was built leveraging the VMM base classes. The VMM multiple-stream scenario (vmm_ms_scenario) base class was used to create the traffic across the matrix, and the performance meters were constructed using the base classes. Callbacks were leveraged appropriately to help in collating the statistics. Multiple knobs were used to make the environment generic and configurable. The approach helped in finding multiple performance bugs that could not have been easily found using conventional verification.

In the paper “User Experience Verifying Ethernet IP Core”, Puneet Rattia of Altera Corporation presents his experience verifying the Altera® 40-100Gbps Ethernet IP core using a VMM environment while integrating the Ethernet VIP from Synopsys. He explains how he created a full suite of system- and block-level regression tests, and then goes on to show how he utilized the coverage-mapping capabilities of VCS to merge the results across these various testbenches and produce meaningful reports. Besides showing how to reuse the verification infrastructure at the SoC level, the paper also demonstrates how they achieved horizontal reuse by integrating the reference SystemC-based models developed and prototyped in the early phase of the project.

UVM 1.x includes support for the communication interfaces defined by the SystemC TLM-2.0 standard. This enables integration of SystemC TLM-2.0 IP into a SystemVerilog UVM verification environment. Dr. David Long, John Aynsley, and Doug Smith of Doulos, in the paper “A Beginner’s Guide to Using SystemC TLM-2.0 IP with UVM”, describe how this is best done. They note that the connection between SystemC and SystemVerilog currently requires a tool-specific interface such as the Synopsys Transaction Level Interface (TLI). The paper begins with a brief overview of TLM-2.0 aimed at novice users. It then discusses the steps required to add a SystemC TLM-2.0 model into a SystemVerilog UVM environment and simulate it with VCS. At each step, issues that users will face are explored and suggestions made for practical fixes, showing the relevant pieces of code. Finally, the paper summarizes the areas where the UVM implementation of TLM-2.0 differs from the SystemC standard and proposes workarounds to ensure correct communication between the SystemVerilog and SystemC domains.

There is an inherent need to enable the horizontal reuse of components created during the architecture and exploration stage. Subhra S. Bandyopadhyay and Pavan N. M. of Intel Technology India Pvt. Ltd., in “Integrating SystemC OSCI TLM 2.0 Models to OVM based System Verilog Verification Environments”, talk about how their architecture team creates SystemC models for early performance analysis and accelerated software development. In the OVM-based verification environment, the objective was to reuse this model as a reference model, which helped in reducing the overall environment bring-up time. The challenge was not only to integrate the SystemC model in the OVM-based verification environment but also to efficiently send transactions from SV to SystemC and vice versa. The paper explores the successful integration of SystemC TLM-2.0 components in OVM-based verification environments and also highlights how the VCS TLI (Transaction Level Interface) adapters help TLM-2.0 sockets in SystemC communicate with those in SV and vice versa.

Truly, I feel overwhelmed by the number of papers and the interesting use of technology across a variety of domains on which users share their experiences at the various SNUG conferences. As we speak, the SNUG events for 2013 have started, and the stage is all set for a new set of very informative and interesting sessions. I am sure most of you will be attending the SNUG conference in your area. You can find the detailed schedule here.

Posted in Announcements, Automation, Callbacks, Coding Style, Communication, Reuse, Structural Components, SystemC/C/C++, SystemVerilog, Transaction Level Modeling (TLM), Tutorial, UVM, VMM | Comments Off

SNUG-2012 Verification Round Up – Language & Methodologies – II

Posted by paragg on 3rd March 2013

In my previous post, we discussed papers that leveraged SystemVerilog language and constructs, as well as those that covered broad methodology topics.  In this post, I will summarize papers that are focused on the industry standard methodologies: Universal Verification Methodology (UVM) and Verification Methodology Manual (VMM).

Papers on Universal Verification Methodology (UVM)

Some users prefer not to use the base classes of a methodology directly. Adding a custom layer enables them to add capabilities specific to their requirements. This layer consists of a set of generic classes that extend the classes of the original methodology, and it provides a convenient location to develop and share the processes that are relevant to an organization for reuse across different projects. Pierre Girodias of IDT (Canada), in the paper “Developing a re-use base layer with UVM”, focuses on the recommendations that adopters of these methodologies should follow while developing the desired ‘base’ layer. The paper also identifies typical problems and possible solutions encountered while developing this layer, some of which include dealing with the lack of multiple inheritance and foraging through class templates.

UVM provides many features but fails to define a reset methodology, forcing users to develop their own methodology within the UVM framework to test the ‘reset’ of their DUT. Timothy Kramer of The MITRE Corporation in the paper “Implementing Reset Testing” outlines several different reset strategies and enumerates the merits and disadvantages of each. As is the case for all engineering challenges, there are several competing factors to consider, and in this paper the different strategies are compared on flexibility, scalability, code complexity, efficiency, and how easily they can be integrated into existing testbenches. The paper concludes by presenting the reset strategy which proved to be the most optimal for their application.

The ‘factory’ concept in advanced OOP-based verification methodologies like UVM is something that has baffled many verification engineers. But is it all that complicated? Not necessarily, and this is what is explained by Clifford E. Cummings of Sunburst Design, Inc. in his paper “The OVM/UVM Factory & Factory Overrides – How They Work – Why They Are Important”. The paper explains the fundamental details of the OVM/UVM factory, how it works, and how overrides facilitate simple modification of testbench component and transaction structures on a test-by-test basis. It not only explains why the factory should be used but also demonstrates how users can create configurable UVM/OVM-based environments without it.

The Register Abstraction Layer has always been an integral component of most of the HVL methodologies defined so far. Doug Smith of Doulos, in his paper “Easier RAL: All You Need to Know to Use the UVM Register Abstraction Layer”, presents a simple introduction to RAL. He distills the adoption of UVM RAL into a few easy and salient steps, which is adequate for most cases. The paper describes the industry-standard automation tools for generating the register model. Additionally, the integration of the generated model along with the front-door and back-door access mechanisms is explained in a lucid manner.

The SystemVerilog language features, coupled with the DPI and VPI language extensions, can enable the testbench to generically react to value changes on arbitrary DUT signals (which may or may not be part of a standard interface protocol). Jonathan Bromley of Verilab, in “I Spy with My VPI: Monitoring signals by name, for the UVM register package and more”, presents a package which supports both value probing and value-change detection for signals identified at runtime by their hierarchical name, represented as a string. This provides a useful enhancement to the UVM register package, allowing the same string to be used for backdoor register access.
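The paper’s package itself is not shown here, but the flavor of name-based signal access can be sketched with the standard UVM DPI backdoor routines, a simplified stand-in for the VPI package the paper describes (the function name is hypothetical):

```systemverilog
import uvm_pkg::*;
`include "uvm_macros.svh"

// Simplified sketch: read a DUT signal identified at runtime by a
// hierarchical name string, in the style of backdoor register access
function automatic bit [63:0] probe_by_name(string path);
  uvm_hdl_data_t value;
  if (!uvm_hdl_read(path, value))
    `uvm_error("PROBE", {"Cannot read signal ", path})
  return value[63:0];
endfunction
```

The same string that names a register’s backdoor HDL path can then be passed to a routine like this, which is what makes the approach a natural fit for the UVM register package.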

Proper testing of most digital designs requires that error conditions be stimulated to verify that the design either handles them in the expected fashion, or ignores them, but in all cases recovers gracefully. How to do it efficiently and effectively is presented in “UVM Sequence Item Based Error Injection” by Jeffrey Montesano and Mark Litterick, Verilab. A self-checking constrained-random environment can be put to the test when injecting errors, because unlike the device-under-test (DUT) which can potentially ignore an error, the testbench is required to recognize it, potentially classify it, and determine an appropriate response from the design. This paper presents an error injection strategy using UVM that meets all of these requirements. The strategy encompasses both active and reactive components, with code examples provided to illustrate the implementation details.

The Universal Verification Methodology is a huge win for the Hardware Verification community, but does it have anything to offer Electronic System Level design? David C Black from Doulos Inc. explores UVM on the ESL front in the paper “Does UVM make sense for ESL?” The paper considers UVM and SystemVerilog enhancements that could make the methodology even more enticing.

Papers on Verification Methodology Manual (VMM)

Joseph Manzella of LSI Corp in “Snooping to Enhance Verification in a VMM Environment” discusses situations in which a verification environment may have to peek at internal RTL states and signals to enhance results, and provides guidelines on what is acceptable practice. The paper explains how the combination of vmm_log (the logger class for VMM) and +vmm_opts (the command-line utility to change the different configurable values) helps in creating a configurable message wrapper for internal grey-box testing. The techniques show how different assertion failures can be re-routed through the VMM messaging interface. An effective and reusable snooping technique for robust checking is also covered.

At SNUG Silicon Valley, in “Mechanism to allow easy writing of test cases in a SystemVerilog Verification environment, then auto-expand coverage of the test case”, Ninad Huilgol of VerifySys addresses designers’ apprehension about using a class-based environment through a tool that leverages the VMM base classes. It automatically expands the scope of the original test case to cover a larger verification space around it, based on a user-friendly API that looks more like Verilog, hiding the complexity underneath.

Andrew Elms of Huawei, in “Verification of a Custom RISC Processor”, presents the successful application of VMM to the verification of a custom RISC processor. The challenges in verifying a programmable design and the solutions to address them are presented. Three topics are explored in detail: use of the verification planner, constrained-random generation of instructions, and coverage closure. The importance of the verification plan as the foundation for the verification effort is explored, and enhancements to the VMM generators are also discussed. By default, VMM data generation is independent of the current design state, such as register values and outstanding requests; RAL and generator callbacks are used to address this. Finally, experiences with coverage closure are presented.

More on the varied verification topics in the upcoming blogs ahead!!! Enjoy reading!!!

Posted in Announcements, Register Abstraction Model with RAL, UVM, VMM, VMM infrastructure | 1 Comment »

SNUG-2012 Verification Round Up – Language & Methodologies – I

Posted by paragg on 25th February 2013

As in the previous couple of years, last year’s SNUG (Synopsys User Group) showcased an amazing number of useful user papers leveraging the capabilities of the SystemVerilog language and the verification methodologies centered on it.

I am always excited when I see this plethora of useful papers, and I try to ensure that I set aside some time to go through all these user experiences. Now, as we wait for SNUG Silicon Valley to kick-start the SNUG events for this year, I would like to look back at some of the very interesting and useful papers from the different SNUGs of 2012. Let me start with a few papers in the area of the SystemVerilog language and SV methodologies.

Papers leveraging the SystemVerilog language and constructs

Hillel Miller of Freescale, in the paper “Using covergroups and covergroup filters for effective functional coverage”, uncovers the mechanisms available for carving out coverage goals. The P1800-2012 revision of the SystemVerilog LRM provides new constructs just for doing this; the construct focused on here is the “with” clause. It provides the ability to carve a sub-range of goals out of a multidimensional range of possibilities. This is very relevant in a “working” or under-development setup that requires frequent reprioritization to meet tape-out goals.
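As an illustrative example (not taken from the paper; the module and signal names are hypothetical), the 2012 “with” clause lets a bins definition carve a filtered sub-range out of a larger value space:

```systemverilog
module cov_example (input logic clk, input logic [7:0] addr);
  covergroup addr_cg @(posedge clk);
    coverpoint addr {
      // Only word-aligned addresses in the low quarter of the space
      // count toward the current coverage goal
      bins low_aligned[] = {[8'h00:8'h3F]} with ((item % 4) == 0);
    }
  endgroup
  addr_cg cg = new();
endmodule
```

Reprioritizing for tape-out then becomes a matter of editing the filter expression rather than restructuring the coverage model.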

The paper “Taming Testbench Timing: Time’s Up for Clocking Block Confusions” by Jonathan Bromley and Kevin Johnston of Verilab reviews the key features and purpose of clocking blocks and then examines why they continue to be a source of confusion and unexpected behavior for many verification engineers. Drawing from the authors’ project and mentoring experience, it highlights typical usage errors and how to avoid them. They clarify the internal behavior of clocking blocks to help engineers understand the reasons behind common problems, and show techniques that allow clocking blocks to be used productively and with confidence. Finally, they consider some areas that may cause portability problems across simulators and indicate how to avoid them.

The inference of latches and flops based on coding styles has always been a topic that creates multiple viewpoints, and there are other scenarios of synthesis/simulation mismatch that one typically comes across. To address such ambiguity, language developers have provided constructs that allow an explicit resolution based on the intent. To help us gain a deeper understanding of the topic, Don Mills of Microchip Technology Inc. presented the related concepts in the paper “Yet Another Latch and Gotchas Paper” at SNUG Silicon Valley. The paper discusses and provides solutions to issues that designers using SystemVerilog for design come across, such as: case expression issues for casez and casex, latches generated when using unique case or priority case, SRFF coding-style problems with synthesis, and the SystemVerilog 2009 redefinition of logic.

Gabi Glasser from Intel presented the paper “Utilizing SystemVerilog for Mixed-Signal Validation” at SNUG Israel, where he proposed a mechanism for simplifying analysis and increasing coverage for mixed-signal simulations. The method proposed is to take advantage of SystemVerilog’s associative (hash) arrays, which have unlimited size. During the simulation, vectors are created for the required analog signals, allowing them to be analyzed within the testbench during or at the end of the simulation, without saving these signals to a file. This flow change makes it possible to launch a large-scale mixed-signal regression while allowing easier analysis of coverage data.
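A minimal sketch of the idea (all names are illustrative, not from the paper): sample an analog node into an associative array keyed by simulation time, so it can be analyzed in the testbench without dumping to a file.

```systemverilog
// Capture samples of a real-valued signal into an unbounded
// associative array keyed by simulation time
module analog_capture (input real vout, input logic sample_clk);
  real samples [longint];

  always @(posedge sample_clk)
    samples[longint'($time)] = vout;
endmodule
```

Because the array grows only as samples arrive, it avoids both file I/O and the need to pre-size a capture buffer.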

A design pattern is a general, reusable solution to a commonly recurring problem within a given context. The benefit of using design patterns is clear: they give designers a common language when approaching a problem, and a widely used set of tools to solve issues as they come up. The paper “Design Patterns In Verification” by Guy Levenbroun of Qualcomm explores several common problems that might arise during the development of a testbench, and how design patterns can be used to solve them. The patterns covered in the paper fall mainly into the following categories: creational (e.g. factory), structural (e.g. composite) and behavioral (e.g. template).

Arik Shmayovitsh, Avishay Tvila, and Guy Lidor of Sigma Designs, in their paper “Truly reusable Testbench-to-RTL connection for SystemVerilog”, present a novel approach to connecting the DUT and testbench with consistent semantics while reusing the testbench. This is achieved by abstracting the connection layer of each testbench using the SystemVerilog ‘bind’ construct, which ensures that the only thing required to reuse the testbench for a new DUT is to identify the instance of the corresponding DUT.
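The general shape of such a bind-based connection layer can be sketched as follows (the DUT, interface, and signal names are all hypothetical):

```systemverilog
// A throwaway DUT standing in for the real design
module dut (input logic clk, input logic valid, input logic [7:0] data);
  // design internals elided
endmodule

// The connection interface mirrors only the signals the testbench
// needs; the testbench reaches it through a virtual interface handle
interface dut_conn_if (input logic clk, valid, input logic [7:0] data);
endinterface

// Instantiate the interface inside every 'dut' instance without
// editing either the DUT or the testbench top
bind dut dut_conn_if conn_if_i (.clk(clk), .valid(valid), .data(data));
```

Retargeting the testbench to a new DUT then reduces to rewriting this one bind line for the new module and port names.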

In the paper “A Mechanism for Hierarchical Reuse of Interface Bindings”, Thomas Zboril of Qualcomm (Canada) explores another method to instantiate SV interfaces, connect them to the DUT, and wrap the virtual interfaces for use in the test environment. This method allows reuse of all the code when the original block-level DUT becomes a lower-level instance in a larger subsystem or chip. The method involves three key mechanisms: hierarchical virtual interface wrappers, a novel approach of hierarchical instantiation of SV interfaces, and another novel approach of automatic management of hierarchical references via SV macros.

Thinh Ngo and Sakar Jain of Freescale Semiconductor, in their paper “100% Functional Coverage-Driven Verification Flow”, propose a coverage-driven verification flow that can efficiently achieve 100% functional coverage during simulation. The flow targets varied functionality, focuses at the transaction level, measures coverage during simulation, and fails a test if 100% of the expected coverage is not achieved. The flow maps stimulus coverage to functional coverage, with every stimulus transaction being associated with an event in the coverage model and vice versa. This association is derived from the DUT specification and/or the DUT model. Expected events generated along with stimulus transactions are compared against actual events triggered in the DUT, and the comparison results are used to pass or fail the test. 100% functional coverage is achieved via 100% stimulus coverage; the flow enables every passing test, with its targeted functionality, to meet 100% functional coverage.

Papers on Verification Methodology

In the paper “Top-down vs. bottom-up verification methodology for complex ASICs”, Paul Lungu and Zygmunt Pasturczyk of Ciena (Canada) cover the simulation methodology used for two large ASICs requiring block-level simulations. A top-down verification methodology was used for one of the ASICs, while a larger version needed an expanded bottom-up approach using extended simulation capabilities. Techniques and verification methods such as chaining of sub-environments from block to top level are highlighted, along with challenges and solutions found by the verification team. The paper presents a useful technique of passing a RAL (Register Abstraction Layer) mirror to the C models that are used as scoreboards in the environment, as well as a method of generating stable clocks inside the “program” block.

The paper “Integration of Legacy Verilog BFMs and VMM VIP in UVM using Abstract Classes” by Santosh Sarma of Wipro Technologies (India) presents an alternative approach where legacy BFMs written in Verilog, and not implemented using classes, are hooked up to higher-level class-based components to create a standard UVM VIP structure. The paper also discusses an approach where existing VMM transactors tied to such legacy BFMs can be reused inside the UVM VIP with the help of the VCS-provided UVM-VMM interoperability library. The implementation uses abstract classes to define functions that invoke the BFM APIs. The abstract class is then concretized using derived classes which give the actual implementation of the functions, and the concrete class is bound to the Verilog instance of the BFM using the SystemVerilog “bind” concept. The concrete class handle is then used by the UVM VIP and the VMM transactor to interact with the underlying Verilog BFM. Using this approach, the UVM VIP can be made truly reusable through run-time binding of the Verilog BFM instance to the VIP, instead of using hard-coded macro names or procedural calls.
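The abstract-class trick this relies on can be sketched roughly as follows (all package, module, and task names are hypothetical; the real BFM APIs and the handle publication mechanism will differ):

```systemverilog
package bfm_api_pkg;
  // Abstract API that the class-based VIP sees
  virtual class bfm_api_base;
    pure virtual task write_reg(bit [31:0] addr, bit [31:0] data);
  endclass
endpackage

// Legacy, non-class-based Verilog BFM
module legacy_bfm;
  task do_write(input bit [31:0] addr, input bit [31:0] data);
    // drives the pins...
  endtask
endmodule

// Concretization module, bound into the BFM so its class can reach
// the BFM's tasks through the bind scope
module bfm_binder;
  import bfm_api_pkg::*;

  class bfm_api_concrete extends bfm_api_base;
    task write_reg(bit [31:0] addr, bit [31:0] data);
      do_write(addr, data); // resolves to the enclosing legacy_bfm
    endtask
  endclass

  // concrete handle the VIP would pick up at run time
  bfm_api_concrete api = new();
endmodule

bind legacy_bfm bfm_binder binder_i ();
```

The VIP only ever holds a bfm_api_base handle, so swapping in a different BFM instance is a matter of changing the bind target, not the VIP code.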

“A Unified Self-Check Infrastructure - A Standardized Approach for Creating the Self-Check Block of Any Verification Environment” by John Sotiropoulos, Matt Muresa, and Massi Corba of Draper Laboratories, Cambridge, MA, USA, presents a structured approach for developing a centralized “self-check” block for a verification environment. The approach is flexible enough to work with various testbench architectures and is portable across different verification methodologies. Here, all of the design’s responses are encapsulated under a common base class, providing a single “self-check” interface for any checking that needs to be performed. This abstraction, combined with a single centralized scoreboard and a standardized set of components, provides the consistency needed for faster development and easier code maintenance. It expands the concept of self-check to incorporate white-box monitors (tracking internal DUT state changes, etc.) and temporal models (reacting to wire changes), along with traditional methodologies for enabling self-checking.

For VMM users looking at migrating to UVM, there is another paper, “Transitioning to UVM from VMM” by Courtney Schmitt of Analog Devices, Inc., which discusses the process of transitioning to a UVM-based environment from VMM. Differences and parallels between the two verification methodologies are presented to show that updating to UVM is mostly a matter of getting acquainted with a new set of base classes. Topics include UVM phases, agents, TLM ports, configuration, sequences, and register models. Best practices and reference resources are highlighted to make the transition from VMM to UVM as painless as possible.

Posted in Announcements, Coverage, Metrics, Creating tests, Customization, Modeling, Optimization/Performance, Reuse, SystemVerilog, UVM, Uncategorized, VMM, VMM infrastructure | 3 Comments »

Reusable Sequences in UVM

Posted by haribali on 12th November 2012

In this blog, I will share the steps one has to take while writing a sequence to make sure it is reusable. Having developed sequences and tests in UVM while using Verification IPs, I think writing sequences is the most challenging part of verifying any IP. Careful planning is required to write sequences; without it, we end up writing one sequence for every scenario from scratch, which makes the sequences hard to maintain and debug.

As we know, sequences are made up of several data items which together form an interesting scenario. Sequences can be hierarchical, thereby creating more complex scenarios. In its simplest form, a sequence should be a derivative of the uvm_sequence base class, specifying the request and response item type parameters and implementing the body() task with the specific scenario you want to execute.

class usb_simple_sequence extends uvm_sequence #(usb_transfer);

    rand int unsigned sequence_length;
    constraint reasonable_seq_len { sequence_length < 10; }

    //Register with factory
    `uvm_object_utils(usb_simple_sequence)

    function new(string name = "usb_simple_sequence");
        super.new(name);
    endfunction : new

    //the body() task is the actual logic of the sequence
    virtual task body();
        `uvm_do_with(req, {
            //Setting the device_id to 2
            req.device_id == 8'd2;
            //Setting the transfer type to BULK ('type' is a reserved
            //word in SystemVerilog, so the field is assumed to be
            //named transfer_type)
            req.transfer_type == usb_transfer::BULK_TRANSFER;
        })
    endtask : body

endclass : usb_simple_sequence

In the above sequence we are trying to send a USB bulk transfer to a device whose id is 2. Test writers can invoke this by simply assigning this sequence to the default sequence of the sequencer in the top-level test.

class usb_simple_bulk_test extends uvm_test;

    `uvm_component_utils(usb_simple_bulk_test)

    function new(string name, uvm_component parent);
        super.new(name, parent);
    endfunction : new

    virtual function void build_phase(uvm_phase phase);
        super.build_phase(phase);
        uvm_config_db#(uvm_object_wrapper)::set(this,
            "sequencer_obj.main_phase", "default_sequence",
            usb_simple_sequence::type_id::get());
    endfunction : build_phase

endclass : usb_simple_bulk_test

So far, things look simple and straightforward. To make sure the sequence is reusable for more complex scenarios, we have to follow a few more guidelines.

  • First off, it is important to manage the end of test by raising and dropping objections in the pre_start and post_start tasks of the sequence class. This way, we raise and drop objections only in the topmost sequence instead of doing it in all the sub-sequences.

task pre_start();
    if (starting_phase != null)
        starting_phase.raise_objection(this);
endtask : pre_start

task post_start();
    if (starting_phase != null)
        starting_phase.drop_objection(this);
endtask : post_start

Note that starting_phase is defined only for a sequence which is started as the default sequence of a particular phase. If you have started it explicitly by calling the sequence’s start() method, then it is the user’s responsibility to set starting_phase.

class usb_simple_bulk_test extends uvm_test;

    usb_simple_sequence seq;

    virtual task main_phase(uvm_phase phase);
        seq = usb_simple_sequence::type_id::create("seq");
        //User needs to set starting_phase because the sequence's
        //start method is called explicitly to invoke the sequence
        seq.starting_phase = phase;
        seq.start(sequencer_obj);  //sequencer_obj: handle to the target sequencer
    endtask : main_phase

endclass : usb_simple_bulk_test


  • Use UVM configurations to get values from the top-level test. In the above example, no controllability is given to test writers, as the sequence does not use configurations to take values from the top-level test or sequence (which may use this sequence to build a complex scenario). Below we modify the sequence to give more control to the top-level test or sequence that uses this simple sequence.

class usb_simple_sequence extends uvm_sequence #(usb_transfer);

    rand int unsigned sequence_length;
    constraint reasonable_seq_len { sequence_length < 10; }

    virtual task body();
        usb_transfer::type_enum local_type;
        bit [7:0] local_device_id;
        //Get the values for the variables in case the top-level
        //test/sequence sets them.
        uvm_config_db#(int unsigned)::get(null, get_full_name(),
            "sequence_length", sequence_length);
        uvm_config_db#(usb_transfer::type_enum)::get(null,
            get_full_name(), "local_type", local_type);
        uvm_config_db#(bit [7:0])::get(null, get_full_name(),
            "local_device_id", local_device_id);

        repeat (sequence_length) begin
            `uvm_do_with(req, {
                req.device_id == local_device_id;
                req.type == local_type;
            })
        end
    endtask : body

endclass : usb_simple_sequence


With the above modifications we have given the top-level test or sequence control over device_id, sequence_length and type. One thing to note here: the parameter type and the string (third argument) used in uvm_config_db#()::set must match those used in uvm_config_db#()::get. Make sure to 'set' and 'get' with the exact same datatype; otherwise the value will not be set properly, and debugging will become a nightmare.
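For illustration, here is what matching set calls from a hypothetical top-level test's build_phase might look like (the sequencer path and the values are assumptions); note that each parameter type mirrors the corresponding get call in the sequence exactly:

```systemverilog
//Sketch: set calls whose types must exactly match the get calls
//in usb_simple_sequence::body() (path and values are assumptions)
virtual function void build_phase(uvm_phase phase);
    super.build_phase(phase);
    uvm_config_db#(int unsigned)::set(this, "sequencer_obj.*",
        "sequence_length", 5);
    uvm_config_db#(usb_transfer::type_enum)::set(this, "sequencer_obj.*",
        "local_type", usb_transfer::BULK_TRANSFER);
    //bit [7:0] here, not int -- a mismatched type would silently fail the get
    uvm_config_db#(bit [7:0])::set(this, "sequencer_obj.*",
        "local_device_id", 8'd2);
endfunction : build_phase
```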

One problem with the above sequence: if there are any constraints on device_id or type in the usb_transfer class, then the top-level test or sequence is restricted to values within those constraints.

For example, if there is a constraint on device_id in the usb_transfer class constraining it to be below 10, then the top-level test or sequence must constrain it within this range. If the top-level test or sequence sets it to a value like 15 (which is over 10), then you will see a constraint failure at runtime.

Sometimes the top-level test or sequence may need to take full control, and may not want to enable the constraints defined inside the lower-level sequences or data items. One example where this is required is negative testing: the host wants to make sure devices do not respond to a transfer with a device_id greater than 10, and so wants to send a transfer with device_id 15. To give full control to the top-level test or sequence, we can modify the body task as shown below:

virtual task body();

    usb_transfer::type_enum local_type;
    bit [7:0] local_device_id;
    int status_seq_len = 0;
    int status_type = 0;
    int status_device_id = 0;

    status_seq_len = uvm_config_db#(int unsigned)::get(null,
        get_full_name(), "sequence_length", sequence_length);
    status_type = uvm_config_db#(usb_transfer::type_enum)::get(null,
        get_full_name(), "local_type", local_type);
    status_device_id = uvm_config_db#(bit [7:0])::get(null,
        get_full_name(), "local_device_id", local_device_id);

    repeat (sequence_length) begin
        //If the status of uvm_config_db::get is true then use the values
        //set by the top-level test or sequence instead of random values.
        `uvm_create(req)
        void'(req.randomize());
        if (status_type)
            //Using the value set by the top-level test or sequence
            //instead of the random value.
            req.type = local_type;
        if (status_device_id)
            //Using the value set by the top-level test or sequence
            //instead of the random value.
            req.device_id = local_device_id;
        `uvm_send(req)
    end

endtask : body

It is always good to be cautious while using `uvm_do_with, as it adds constraints on top of any existing constraints in a lower-level sequence or sequence item.

Also note that if you have many variables to 'set' and 'get', then I recommend you create an object, set the values in that object, and then set the object itself using uvm_config_db from the top-level test/sequence (instead of setting each and every variable inside the object explicitly). This improves runtime performance because uvm_config_db::get does not have to search for each and every variable; instead we get all the variables in one shot through the object.

virtual task body();

    usb_simple_sequence local_obj;
    int status = 0;

    status = uvm_config_db#(usb_simple_sequence)::get(null,
        get_full_name(), "local_obj", local_obj);

    if (status) begin
        //If the status of uvm_config_db::get is true then use
        //the values set in the object we received.
        this.sequence_length = local_obj.sequence_length;
        `uvm_create(req)
        //Copy the entire req object inside the object which we
        //received from uvm_config_db to the local req.
        req.copy(local_obj.req);
        `uvm_send(req)
    end
    else begin
        //If we did not get the object from the top-level test/sequence
        //then create one and randomize it.
        `uvm_do(req)
    end

endtask : body
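The matching top-level side would then build and publish a single object (a sketch; the "local_obj" key and the sequencer path are assumptions):

```systemverilog
//Sketch: configure the whole sequence object in one set call
virtual function void build_phase(uvm_phase phase);
    usb_simple_sequence cfg_seq;
    super.build_phase(phase);
    cfg_seq = usb_simple_sequence::type_id::create("cfg_seq");
    cfg_seq.sequence_length = 5;
    //One set/get pair replaces a set/get per variable
    uvm_config_db#(usb_simple_sequence)::set(this, "sequencer_obj.*",
        "local_obj", cfg_seq);
endfunction : build_phase
```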

  • Always try to reuse simple sequences by creating a top-level sequence for complex scenarios. For example, in the sequence below I am trying to send a bulk transfer followed by an interrupt transfer to 2 different devices. For this scenario I will be using our usb_simple_sequence as shown below:

class usb_complex_sequence extends uvm_sequence #(usb_transfer);

    //Object of simple sequence used for sending bulk transfer
    usb_simple_sequence simp_seq_bulk;
    //Object of simple sequence used for sending interrupt transfer
    usb_simple_sequence simp_seq_int;

    virtual task body();
        //Variable for getting device_id for bulk transfer
        bit [7:0] local_device_id_bulk;
        //Variable for getting device_id for interrupt transfer
        bit [7:0] local_device_id_int;
        //Variable for getting sequence length for bulk
        int unsigned local_seq_len_bulk;
        //Variable for getting sequence length for interrupt
        int unsigned local_seq_len_int;

        //Get the values for the variables in case the top-level
        //test/sequence sets them.
        uvm_config_db#(int unsigned)::get(null, get_full_name(),
            "local_seq_len_bulk", local_seq_len_bulk);
        uvm_config_db#(int unsigned)::get(null, get_full_name(),
            "local_seq_len_int", local_seq_len_int);
        uvm_config_db#(bit [7:0])::get(null, get_full_name(),
            "local_device_id_bulk", local_device_id_bulk);
        uvm_config_db#(bit [7:0])::get(null, get_full_name(),
            "local_device_id_int", local_device_id_int);

        //Set the values for the variables in the lower-level
        //sequence/sequence item, which we got from the
        //above uvm_config_db::get calls.
        //Setting the values for the bulk sequence
        uvm_config_db#(int unsigned)::set(null, {get_full_name(), ".",
            "simp_seq_bulk"}, "sequence_length", local_seq_len_bulk);
        uvm_config_db#(usb_transfer::type_enum)::set(null, {get_full_name(),
            ".", "simp_seq_bulk"}, "local_type", usb_transfer::BULK_TRANSFER);
        uvm_config_db#(bit [7:0])::set(null, {get_full_name(), ".",
            "simp_seq_bulk"}, "local_device_id", local_device_id_bulk);

        //Setting the values for the interrupt sequence
        uvm_config_db#(int unsigned)::set(null, {get_full_name(), ".",
            "simp_seq_int"}, "sequence_length", local_seq_len_int);
        uvm_config_db#(usb_transfer::type_enum)::set(null, {get_full_name(),
            ".", "simp_seq_int"}, "local_type", usb_transfer::INT_TRANSFER);
        uvm_config_db#(bit [7:0])::set(null, {get_full_name(), ".",
            "simp_seq_int"}, "local_device_id", local_device_id_int);

        //Execute the bulk sequence followed by the interrupt sequence
        `uvm_do(simp_seq_bulk)
        `uvm_do(simp_seq_int)
    endtask : body

endclass : usb_complex_sequence


Note that in the above sequence, we get the values using uvm_config_db::get from the top-level test or sequence, and then set them on the lower-level sequence, again using uvm_config_db::set. This is important: without it, if we tried to use `uvm_do_with and pass the values inside the constraint block, they would be applied as additional constraints instead of simply setting these values.

I came across these guidelines and learned them, at times the hard way, so I am sharing them here. I sure hope they will come in handy when you use sequences that come pre-packaged with VIPs to build more complex scenarios, and also when you wish to write your own sequences from scratch. If you come across more such guidelines or rules of thumb for writing reusable, maintainable and debuggable sequences, please share them with me.

Posted in UVM, Uncategorized | Comments Off

Webinar on Deploying UVM Effectively

Posted by Shankar Hemmady on 22nd October 2012

Over the past two years, several design and verification teams have begun using SystemVerilog testbench with UVM. They are moving to SystemVerilog because coverage, assertions and object-oriented programming concepts like inheritance and polymorphism allow them to reuse code much more efficiently.  This helps them in not only finding the bugs they expect, but also corner-case issues. Building testing frameworks that randomly exercise the stimulus state space of a design-under-test and analyzing completion through coverage metrics seems to be the most effective way to validate a large chip. UVM offers a standard method for abstraction, automation, encapsulation, and coding practice, allowing teams to build effective, reusable testbenches quickly that can be leveraged throughout their organizations.

However, for all of its value, UVM deployment has unique challenges, particularly in the realm of debugging. Some of these challenges are:

  • Phase management: objections and synchronization
  • Thread debugging
  • Tracing issues through automatically generated code, macro expansion, and parameterized classes
  • Default error messages that are verbose but often imprecise
  • Extended classes with methods that have implicit (and maybe unexpected) behavior
  • Object IDs that are distinct from object handles
  • Visualization of dynamic types and ephemeral classes

Debugging even simple issues can be an arduous task without UVM-aware tools. Here is a public webinar that reviews how to utilize VCS and DVE to most effectively deploy, debug and optimize UVM testbenches.

Link at

Posted in Coverage, Metrics, Debug, UVM | Comments Off

Non blocking communication in TLM2.0

Posted by haribali on 2nd July 2012

In this blog I want to introduce how non-blocking communication works in TLM-2.0. It is the same in UVM, VMM and SystemC; only the names of the functions and arguments may differ.

As the name suggests, a non-blocking call returns immediately. Because of this, the target may or may not process the request at the moment nb_transport_fw is called. For this very reason there is a backward path, through which the target can inform the initiator that the request has been processed, using the nb_transport_bw method. We can look at this as two function calls: one from initiator to target for sending the request, and a second from target to initiator for sending the response once the request is processed.

If it were really this simple, why would we need the additional phase argument and a return type? Let's see what these additional arguments convey. I have used the term "partner" (as in link partner) in this blog to refer to either the target or the initiator.

The phase argument of the nb_transport function calls communicates the phase of the transaction. The phase can be one of the following:

BEGIN_REQ: This is sent by the initiator along with the transaction to tell the target that this is a new request.

END_REQ: This is sent by the target to tell the initiator that the request has been accepted but not yet processed.

BEGIN_RESP: This is sent by the target to tell the initiator that the request has been processed and this is the response for the corresponding request.

END_RESP: This is sent by the initiator to tell the target that the response has been accepted. With this, the transaction is complete.

The return type of the nb_transport function calls tells the partner (initiator/target, whoever initiated the call) whether the transaction has been processed at this stage or not. The return type can be one of the following:

TLM_ACCEPTED: This tells the partner that the transaction has been accepted but processing has not yet started. Whenever this is returned, the function arguments can simply be ignored as they have not been changed.

  • The target returns TLM_ACCEPTED when it has accepted the request sent by the initiator but not yet processed it. The initiator need not examine the function arguments as they have not been changed by the target.

TLM_UPDATED: This tells the partner that the transaction has been accepted and the function arguments have been modified.

  • The target returns TLM_UPDATED when it has accepted the request sent by the initiator and modified the function arguments. The initiator needs to take action depending on the phase argument.

TLM_COMPLETED: This tells the partner that the transaction has been completed and the function arguments have been modified.

  • The target returns TLM_COMPLETED when it has accepted the request sent by the initiator and completed processing it. The initiator needs to take action depending on the phase argument.

In summary, the non-blocking interface has a protocol associated with it which ensures that each transaction is properly processed and its response communicated back. With the blocking interface this is not required, as the initiator waits for the target to complete the transaction. With the non-blocking interface, the initiator would have no way of knowing whether the target had processed the transaction without this protocol.
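In UVM terms, the backward-path callback on the initiator side might be sketched like this (using UVM's uvm_tlm_sync_e and uvm_tlm_phase_e enums; the reaction logic is a placeholder, not a complete initiator):

```systemverilog
//Sketch of an initiator's nb_transport_bw implementation
function uvm_tlm_sync_e nb_transport_bw(uvm_tlm_generic_payload t,
                                        ref uvm_tlm_phase_e phase,
                                        input uvm_tlm_time delay);
    case (phase)
        END_REQ: begin
            //Target accepted the request; a new request may be issued
            return UVM_TLM_ACCEPTED;
        end
        BEGIN_RESP: begin
            //Target finished processing; accept the response to
            //complete the transaction
            phase = END_RESP;
            return UVM_TLM_COMPLETED;
        end
        default: return UVM_TLM_ACCEPTED;
    endcase
endfunction : nb_transport_bw
```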

Non-blocking coding style is generally used for writing timing-accurate models in virtual prototypes. If you plan to use TLM models from a virtual platform to speed up your simulations, or for any other reason, then you will end up connecting non-blocking TLM models to your verification environment, where the non-blocking interface is useful. I welcome you to share other use cases of the non-blocking interface.

Posted in Transaction Level Modeling (TLM), UVM | Comments Off

Build your own code generator!

Posted by Amit Sharma on 25th June 2012

One of the critical problems developers cope with during SoC development process (architecture plan, RTL design, verification, software development) is the constant need to synchronize between different forms of the same data structure: the SoC registers database. The SoC registers database can be found at the SoC architecture team (who write the SoC registers description document), design engineers (who implement the registers structure in RTL code), verification engineers (who write the verification infrastructure – such as RAL code, and write verification tests – such as exhaustive read/write tests from all registers), and software engineers (who use the registers information to write embedded software).

Since the same information is used in different forms, it is only natural to have a single, central database which holds all SoC register data. You would ideally like to generate all the required output files (documentation, UVM or VMM Register Abstraction Model, RTL, C headers, …) from this central database. Different vendors and CAD teams provide different automation solutions for doing this.

The RALF specification contains all of the necessary information to generate RTL and testbench code that implements the specified registers. There are many ways to code and implement RTL, so it is not possible to provide a general-purpose solution. As far as the testbench abstraction model is concerned, there are multiple ways of customizing your model post-generation in both UVM and VMM: callbacks, factories and configurable RAL model attributes are some of the ways through which the desired customization can be brought in. "The 'user' in RALF: get ralgen to generate 'your' code" highlights a very convenient way of bringing in SystemVerilog-compatible code which will be copied as-is into the RAL model in the desired scope. When it comes down to generating the RTL and the C headers, we cannot leave the customization to such a late stage. Also, different organizations and project groups have their own RTL and C-code coding styles, which means a generated output of a very generic nature might not be very helpful. For RTL generation, engineers want the generated code to be power- and gate-count-efficient. Similarly, generated C register headers often need to follow coding styles and match the CPU firmware API. How do we bring all these customizations to the end user?

Using the RALF C++ API, you have full access to the parsed RALF data (through C++ routines), which you can use to implement a customized RTL code generator, or any other feature that needs RALF data. So you can use it to generate your C header files or HTML documentation, translate the input RALF files to another register description format, or create custom covergroups and coverage convergence sequences (DAC 2012 User Track poster 6U.8 — Register Verification on a Fast Lane: Using Automation to Converge on UVM REG Coverage Models).

I have seen two instances of the need to generate a different register specification in the recent past, and that is one of the reasons I decided to put this down in a blog. Let me talk about the first instance.

One project group was in the process of migrating from their own SV base classes to UVM. They had their own register description format from which they used to generate their register abstraction model. This was a proven flow.

So, when they migrated to UVM, they wanted a flow which would validate the changes that they were making.

Given that they were moving to RALF and 'ralgen', they didn't want to create register specifications in the legacy format anymore, so they wanted some automation for generating specifications in the earlier format. How did they go about doing this? They took the RALF C++ APIs and used them to create the necessary automation to generate the legacy format from RALF in no time (from what I remember, it was half a day's work). Everyone was involved in doing what they were best at, and that helped in the overall scheme of things.

The other customer had their own format from which they were generating RTL, firmware code and HTML. They had the necessary automation to create RALF for generating the UVM register model. They also had a mechanism in place to convert between this format and IP-XACT in both directions. So, to complete the traceability matrix, they wanted a RALF-to-IPXACT conversion. Again, the most logical approach was to take the RALF C++ APIs and use them to iterate through the parsed RALF data and generate IP-XACT. Though this effort is not complete, it took only a day or so to be able to generate a valid IPXACT 1.5 schema, and all that is required now is some additional work to smooth out the corners.

How do you start using these APIs and build your own code/HTML generators? You need to include "ralf.hpp" (which is in $VCS_HOME/include) in your generator code. Then, to compile the code, you need to pick up the shared library from the VCS installation:

$CPP_COMPILER $CFLAGS -I${VCS_HOME}/include -L${VCS_HOME}/lib -lralf your-file.cpp $LDFLAGS

#include <cstdio>
#include <string>
#include "ralf.hpp"

int main(int argc, char *argv[])
{
   // Check basic command line usage...
   if (argc < 3) {
      fprintf(stderr, "Error: Wrong Usage.\n");
      // Show correct usage ...
      return 1;
   }

   /* Parse command line arguments to get the essential
    * constructor arguments. See the documentation
    * of class ralf::vmm_ralf's constructor parameters. */
   std::string ralf_file = argv[1];
   std::string top       = argv[2];
   std::string rtl_dir   = (argc > 3) ? argv[3] : ".";

   /* Create a ralf::vmm_ralf object by passing in proper
    * constructor arguments. */
   ralf::vmm_ralf ralf_data(ralf_file, top, rtl_dir /* , ... see docs */);

   /* Get the top level object storing the parsed RALF
    * block/system data and traverse that, top-down, to get
    * access to complete RALF data. */
   const ralf::vmm_ralf_blk_or_sys_typ *top_lvl_blk_or_sys
      = ralf_data.getTopLevelBlockOrSys();

   /* TODO--Traverse the parsed RALF data structure top-down
    * using/starting-from 'top_lvl_blk_or_sys' for getting
    * complete access to the RALF data and then, do whatever
    * you would want to do with the parsed RALF data. One
    * typical usage of parsed RALF data could be to generate
    * RTL code in your own style. */
   // TODO - Add your RTL generator code here.

   /* As part of this library, Synopsys also provides a
    * default RTL generator, which can be invoked by calling
    * the 'generateRTL()' method of the 'ralf::vmm_ralf'
    * class, as demonstrated below. */
   ralf_data.generateRTL();

   return 0;
}

Essentially, you have a handle to the parsed database, and with the available APIs you can do whatever you want with it :) The documentation for the APIs ships with the VCS installation. Also, if you are like me and would rather hack away at existing code than start with something from scratch, you can check with Synopsys support for existing templates that dump out code in a specific format, and start modifying those for your requirements.

Posted in Automation, Coverage, Metrics, Customization, Interoperability, Register Abstraction Model with RAL, Tools & 3rd Party interfaces, UVM, VMM infrastructure | Comments Off

DVE and UVM on wheels

Posted by Yaron Ilani on 22nd May 2012

Sometimes driving to work can be a little bit boring so a few days ago I decided to take advantage of this time slot to introduce myself and tell you a little bit about the behind-the-scenes of my video blog. Hope you’ll like it !

Hey if you’ve missed any of my short DVE videos – here they are:

Posted in Debug, UVM, Verification Planning & Management | Comments Off

Are You Afraid of Breakpoints?

Posted by Yaron Ilani on 17th May 2012

Are you afraid of breakpoints? Don’t worry, many of us have been.  After all, breakpoints are for software folks, not for us chip heads, right??
Well, not really… In many ways, chip verification is pretty much a software project.
Still, most of the people I know fall into one of these two categories – the $display person, or the breakpoints person.

The former doesn’t like breakpoints. He or she would rather fill up their code with $display’s and UVM_INFO’s and recompile their code every time around.
That’s cool.
The latter likes and appreciates breakpoints and uses them whenever possible instead of traditional $display commands.

So if you are the second type – here’s some great news from DVE that will help you be even more efficient.
And $display folks – stay tuned as it may be time for you to finally convert :-)

In this short video I show how to use Object IDs – a new infrastructure in DVE – to break in a specific class instance!


Want more?

Go here to catch up with our latest DVE and UVM short videos:

Posted in Debug, SystemVerilog, UVM, Uncategorized | Comments Off

Namespaces, Build Order, and Chickens

Posted by Brian Hunter on 14th May 2012

As described in the video, vkits are our convenient method of lumping together reusable UVM packages with the interface(s) that they operate on. Because code within packages can only peek or poke wires that are contained by a virtual interface, it is often useful to wrap these together somehow, and vkits are our technique at Cavium for doing that.

What goes in a vkit? Anything that is reusable. From simple agents and the interfaces they work on to complete UVM environments that connect these agents together, including scoreboards, sequence libraries, types, and utility functions.

What does not go in a vkit are items that are bound to a specific testbench, including the tests themselves.

The video describes the wildcard import syntax as an "egregiously bad idea." First and foremost, wildcard imports can lead to namespace pollution, which comes about when one engineer independently adds types or classes to their package and only later finds out that they conflict with those of another package. Secondly, wildcard imports prevent the use of our shorter naming convention of having an agent_c, drv_c, env_c, etc., within each package.
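To illustrate (the package and class names here are invented), explicit scoping keeps identically named classes from two vkits unambiguous without any wildcard imports:

```systemverilog
//Two vkit packages may each define an agent_c, drv_c, env_c, ...
package chx_pkg;
   class agent_c;
   endclass : agent_c
endpackage : chx_pkg

package chy_pkg;
   class agent_c;
   endclass : agent_c
endpackage : chy_pkg

//In testbench code, explicit scopes avoid namespace pollution:
chx_pkg::agent_c chx_agent;   //no "import chx_pkg::*;" needed
chy_pkg::agent_c chy_agent;   //no collision between the two agent_c classes
```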

Not described in the video are CSR packages that are auto-generated by RAL, IP-XACT, or your script of choice. These packages should be independent of your vkits, such that your vkits refer to them with their explicit scopes (i.e., chx_csr_pkg::PLUCKING_CFG_C).

Future posts will go into more detail about how we architect UVM testbenches and some of our other conventions that work within this framework. Until then, I’ve got a lot of pies to eat.

PS. I’ll be at DAC this year! Come see me on Tuesday, June 5, during the “Industry Leaders Verify with Synopsys” lunch. Hopefully they’ll be serving some of my favorite foods!

Posted in Organization, Structural Components, SystemVerilog, Tutorial, UVM | 6 Comments »

Customizing UVM Messages Without Getting a Sunburn

Posted by Brian Hunter on 24th April 2012

The code snippets presented are available below the video.

   `define my_info(MSG, VERBOSITY) \
      begin \
         if(uvm_report_enabled(VERBOSITY,UVM_INFO,get_full_name())) \
            uvm_report_info(get_full_name(), $sformatf MSG, VERBOSITY, `uvm_file, `uvm_line); \
      end

   `define my_err(MSG)         \
      begin \
         if(uvm_report_enabled(UVM_NONE,UVM_ERROR,get_full_name())) \
            uvm_report_error(get_full_name(), $sformatf MSG, UVM_NONE, `uvm_file, `uvm_line); \
      end

   `define my_warn(MSG)        \
      begin \
         if(uvm_report_enabled(UVM_NONE,UVM_WARNING,get_full_name())) \
            uvm_report_warning(get_full_name(), $sformatf MSG, UVM_NONE, `uvm_file, `uvm_line); \
      end

   `define my_fatal(MSG)       \
      begin \
         if(uvm_report_enabled(UVM_NONE,UVM_FATAL,get_full_name())) \
            uvm_report_fatal(get_full_name(), $sformatf MSG, UVM_NONE, `uvm_file, `uvm_line); \
      end

  initial begin
      int unsigned fwidth, hwidth;
      my_report_server_c report_server = new("my_report_server");

      if($value$plusargs("fname_width=%d", fwidth)) begin
         report_server.file_name_width = fwidth;
      end
      if($value$plusargs("hier_width=%d", hwidth)) begin
         report_server.hier_width = hwidth;
      end

      uvm_report_server::set_server(report_server);

      // all "%t" shall print out in ns format with 8 digit field width
      $timeformat(-9, 0, "ns", 8);
  end

class my_report_server_c extends uvm_report_server;

   string filename_cache[string];
   string hier_cache[string];

   int    unsigned file_name_width = 28;
   int    unsigned hier_width = 60;

   uvm_severity_type sev_type;
   byte   fill_char;
   string prefix, time_str, code_str, file_str, hier_str;
   int    last_slash, flen, hier_len;

   function new(string name="my_report_server");
      super.new();
   endfunction : new

   virtual function string compose_message(uvm_severity severity, string name, string id, string message,
                                           string filename, int line);
      // format filename & line-number
      last_slash = filename.len() - 1;
      if(file_name_width > 0) begin
         if(filename_cache.exists(filename))
            file_str = filename_cache[filename];
         else begin
            while(filename[last_slash] != "/" && last_slash != 0)
               last_slash--;
            file_str = (filename[last_slash] == "/")?
                       filename.substr(last_slash+1, filename.len()-1) :
                       filename;

            flen = file_str.len();
            file_str = (flen > file_name_width)?
                       file_str.substr((flen - file_name_width), flen-1) :
                       {{(file_name_width-flen){" "}}, file_str};
            filename_cache[filename] = file_str;
         end
         $swrite(file_str, "(%s:%6d) ", file_str, line);
      end else
         file_str = "";

      // format hier
      hier_len = id.len();
      if(hier_width > 0) begin
         if(hier_cache.exists(id))
            hier_str = hier_cache[id];
         else begin
            if(hier_len > 13 && id.substr(0,12) == "uvm_test_top.") begin
               id = id.substr(13, hier_len-1);
               hier_len -= 13;
            end
            if(hier_len < hier_width)
               hier_str = {id, {(hier_width - hier_len){" "}}};
            else if(hier_len > hier_width)
               hier_str = id.substr(hier_len - hier_width, hier_len - 1);
            else
               hier_str = id;
            hier_str = {"[", hier_str, "]"};
            hier_cache[id] = hier_str;
         end
      end else
         hier_str = "";

      // format time
      $swrite(time_str, " {%t}", $time);

      // determine fill character
      sev_type = uvm_severity_type'(severity);
      case(sev_type)
         UVM_INFO:    begin code_str = "%I"; fill_char = " "; end
         UVM_ERROR:   begin code_str = "%E"; fill_char = "_"; end
         UVM_WARNING: begin code_str = "%W"; fill_char = "."; end
         UVM_FATAL:   begin code_str = "%F"; fill_char = "*"; end
         default:     begin code_str = "%?"; fill_char = "?"; end
      endcase

      // create line's prefix (everything up to time)
      $swrite(prefix, "%s-%s%s%s", code_str, file_str, hier_str, time_str);
      if(fill_char != " ") begin
         for(int x = 0; x < prefix.len(); x++)
            if(prefix[x] == " ")
               prefix.putc(x, fill_char);
      end

      // append message
      return {prefix, " ", message};
   endfunction : compose_message
endclass : my_report_server_c

Posted in Debug, Messaging, SystemVerilog, UVM, Verification Planning & Management | Comments Off

Leveraging UVM for verifying High-speed IOs with Verilog-AMS

Posted by Shankar Hemmady on 17th April 2012

When the industry decided to standardize on UVM, we had high hopes that some day we will be able to use the standard to raise the level of abstraction and solve many open issues. Take the case of AMS verification of High-speed IOs, which has largely been the turf of hand-crafted custom designs verified mostly with directed tests.

Warren Anderson and his team at AMD here describe a simple, innovative approach they used with UVM and Verilog-AMS running VCS and CustomSim/XA… yes, UVM really works wonders when used prudently with the right flow.

Posted in AMS, UVM | Comments Off

Using the VMM Datastream Scoreboard in a UVM environment

Posted by Amit Sharma on 2nd February 2012

Implementing the response-checking mechanism in a self-checking environment remains the most time-consuming task. The VMM Data Stream Scoreboard package facilitates verifying the correct transformation, destination and ordering of ordered data streams. This package is intuitively applicable to packet-oriented designs, such as modems, routers and protocol interfaces. It can also be used to verify any design that transforms and moves sequences of data items, such as DSP data paths and floating-point units. Out of the box, the VMM data stream scoreboard can be used to verify single-stream designs that do not modify the data flowing through them; for example, FIFOs, Ethernet media access controllers (MACs) and bridges.

The VMM data scoreboard can also be used to verify multi-stream designs with user-defined data transformation and input-to-output stream routing. The transformation from input data items into expected data items is not limited to one-to-one: an input data item may be transformed into multiple expected data items (e.g. segmenters) or none (e.g. reassemblers). Compared to this, the functionality available through the UVM in-order comparator or the algorithmic comparator is significantly less. Thus, users might want access to the functionality of the VMM DS scoreboard in a UVM environment. Using the UBUS example available in $VCS_HOME/doc/examples/uvm/integrated/ubus as a demo vehicle, this article shows how simple adapters are used to integrate the VMM DS scoreboard in a UVM environment and thus gain access to more advanced scoreboarding functionality within the UVM environment.

The UBUS example uses an example scoreboard to verify that the slave agent operates as a simple memory. It extends the uvm_scoreboard class and implements a memory_verify() function that makes the appropriate calls and comparisons needed to verify a memory operation. A uvm_analysis_export is explicitly created and an implementation for 'write' defined. In the top-level environment, the analysis export is connected to the analysis port of the slave monitor.


The simple scoreboard, with its explicit implementation of the comparison routines, suffices for verifying the basic operations, but would need significant enhancement to provide the more detailed information a user might need. For example, let’s take the ‘test_2m_4s’ test, where the environment is configured to have 2 masters and 4 slaves. Depending on how the slave memory map is configured, different slaves respond to different transfers on the bus. Now, if we want information such as how many transfers went into the scoreboard for a specific combination (e.g. Master 1 to Slave 3) and how many were verified to be processed correctly, it is fair to conclude that the existing scoreboarding scheme will not suffice.

Hence, it was felt that the Data Stream Scoreboard, with its advanced functionality and support for data transformation, data reordering, data loss, and multi-stream data routing, should be available to verification environments not necessarily based on VMM. From VCS 2011.12-1, this integration has been made very simple. The VMM DS scoreboard implements a generic data stream scoreboard that accepts parameters for the input and output packet types. A single instance of this class is used to check the proper transformation, multiplexing and ordering of multiple data streams. The scoreboard class now leverages a policy-based design and parameterized specializations to accept any ‘Packet’ class, be it VMM, UVM or OVM.

The central element in policy-based design is a class template (called the host class, which in this case is the VMM DS Scoreboard) taking several type parameters as input. These are specialized with types selected by the user (called policy classes), each implementing a particular implicit method (called a policy) and encapsulating some orthogonal (or mostly orthogonal) aspect of the behavior of the instantiated host class. In this case, the ‘policies’ implemented by the policy classes are the ‘compare’ and ‘display’ routines.

By supplying the host class combined with a set of different, canned implementations for each policy, the VMM DS scoreboard can support all the different behavior combinations, resolved at compile time and selected by mixing and matching the supplied policy classes in the instantiation of the host class template. Additionally, by writing a custom implementation of a given policy, a policy-based library can be used in situations requiring behaviors unforeseen by the library implementor.

So, let’s go through a set of simple steps to see how you can use the VMM DS scoreboard in the UVM environment.

Step 1: Create the policy class for UVM and define its ‘policies’
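The policy class supplies the ‘compare’ and ‘display’ routines that the scoreboard will apply to UVM objects. A minimal sketch is shown below; the exact method names and signatures expected by vmm_sb_ds_typed may differ across VCS versions, so treat this as illustrative and check the documentation shipped with your release:

```systemverilog
// Illustrative policy class for UVM objects. The 'compare' policy
// delegates to UVM's field-automation compare(), and the 'display'
// policy to sprint(). Method names are assumptions -- verify them
// against the vmm_sb_ds_typed documentation in your VCS install.
class uvm_object_policy;
   static function bit compare(uvm_object actual, uvm_object expected);
      return actual.compare(expected);
   endfunction

   static function string display(uvm_object obj, string prefix = "");
      return {prefix, obj.sprint()};
   endfunction
endclass
```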


Step 2: Replace the UVM scoreboard with a VMM one extended from “vmm_sb_ds_typed”, specialized with the ubus_transfer type and the previously created uvm_object_policy.

class ubus_example_scoreboard extends vmm_sb_ds_typed #(ubus_transfer,ubus_transfer, uvm_object_policy);


endclass: ubus_example_scoreboard

Once this is done, you can either declare a VMM TLM analysis export to connect to the bus monitor in the UBUS environment, or use the pre-defined one in the VMM DS scoreboard:

vmm_tlm_analysis_export #(ubus_example_scoreboard,ubus_transfer) analysis_exp;

Given that for any configuration one master and one slave would be active, define the appropriate streams in the constructor. (This is not strictly required when there is only a single stream, but we define it explicitly so the environment can scale up to multiple INPUT and EXPECT streams in different tests.)

this.define_stream(0, "Slave 0", EXPECT);
this.define_stream(0, "Master 0", INPUT);

Step 2 .a: Create the ‘write’ implementation for the Analysis export

Since we are verifying the operation of the slave as a simple memory, we add the logic to insert a packet into the scoreboard on a ‘WRITE’, and to expect/check when the transfer is a ‘READ’ from an address that has already been written to.
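A sketch of that ‘write’ implementation is shown below. The insert() and expect_in_order() calls are from the VMM DS scoreboard API, and addr_seen is a hypothetical helper used here for illustration; verify the exact scoreboard call signatures against your VCS documentation:

```systemverilog
// Sketch: a WRITE inserts expected data into the scoreboard; a READ
// from a previously written address is checked against it.
virtual function void write(ubus_transfer trans);
   if (trans.read_write == WRITE) begin
      // addr_seen is a hypothetical associative array tracking
      // which addresses have been written
      addr_seen[trans.addr] = 1;
      this.insert(trans);                 // record the expected data
   end
   else if (trans.read_write == READ && addr_seen.exists(trans.addr)) begin
      void'(this.expect_in_order(trans)); // check against expected data
   end
endfunction
```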


Step 2.b: Implement the stream_id() method

You can use this method to determine which stream a specific ‘transfer’ belongs to based on the packet’s content, such as a source or destination address. In this case, the bus monitor updates the ‘slave’ property of the collected transfer based on where the address falls in the slave memory map.
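A minimal sketch of that routing decision follows; the exact stream_id() signature expected by the VMM DS scoreboard may differ, so treat this as illustrative:

```systemverilog
// Sketch: route each transfer to a stream based on which slave's
// address range it hit. The 'slave' property is assumed to have been
// set by the bus monitor from the slave memory map.
virtual function int stream_id(ubus_transfer tr);
   return tr.slave; // e.g. traffic to Slave 3 goes to stream 3
endfunction
```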



Step 3: Create the UVM Analysis to VMM Analysis Adapter

The uvm_analysis_to_vmm_analysis adapter is used to connect any UVM component with an analysis port to any VMM component via an analysis export. The adapter converts each incoming UVM transaction to a VMM transaction and drives the converted transaction to the VMM component through the analysis port-export pair. If you are using the VMM-UVM interoperability library, you do not have to create the adapter, as it is already available in the library.



Create the ‘write’ implementation for the analysis export in the adapter

The write method, called via the analysis export, simply posts the received UBUS transfer from the UVM analysis port to the VMM analysis port.
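A sketch of such an adapter is shown below. It assumes both sides carry the same ubus_transfer type (so no field-by-field conversion is needed) and uses the VMM 1.2 TLM port constructor conventions; if you use the VMM-UVM interoperability library, an equivalent adapter already exists:

```systemverilog
// Illustrative UVM-analysis-to-VMM-analysis adapter. Port/constructor
// signatures follow VMM 1.2 TLM conventions -- verify against your
// VCS release before use.
class uvm_analysis_to_vmm_analysis extends uvm_component;
   uvm_analysis_imp #(ubus_transfer, uvm_analysis_to_vmm_analysis) analysis_export;
   vmm_tlm_analysis_port #(uvm_analysis_to_vmm_analysis, ubus_transfer) analysis_port;

   function new(string name, uvm_component parent);
      super.new(name, parent);
      analysis_export = new("analysis_export", this);
      analysis_port   = new(this, "analysis_port");
   endfunction

   // Post the received UVM transfer on to the VMM analysis port
   virtual function void write(ubus_transfer trans);
      analysis_port.write(trans);
   endfunction
endclass
```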


Step 4: Make the TLM connections

In the original example, the item_collected_port of the slave monitor was connected to the analysis export of the example scoreboard. Here, the DataStream scoreboard has an analysis export which expects a VMM transaction. Hence, we need the adapter created above to mediate between the analysis port of the UVM bus monitor and the analysis export of the VMM DS scoreboard.
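The hookup can be sketched as below; instance names (ubus0, adapter, scoreboard0) are illustrative, and tlm_bind() is assumed from the VMM 1.2 TLM binding API:

```systemverilog
// Sketch of the connect_phase hookup:
//   UVM slave monitor -> adapter -> VMM DS scoreboard
function void connect_phase(uvm_phase phase);
   // UVM side: monitor analysis port to the adapter's analysis export
   ubus0.slaves[0].monitor.item_collected_port.connect(
      adapter.analysis_export);
   // VMM side: adapter's analysis port to the scoreboard's export
   adapter.analysis_port.tlm_bind(scoreboard0.analysis_exp);
endfunction
```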


Step 5: Define Additional streams if required for multi-master multi-slave configurations

This step is not required for a single master/slave configuration. However, you would need to create additional streams to verify the correctness of all the different master/slave permutations in tests like “test_2m_4s”.

In this case, the following is added in the connect_phase() of the test_2m_2s test:
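For a multi-master, multi-slave configuration the stream definitions can be sketched as below, following the define_stream() calls from the single-stream example earlier; the loop bounds and instance name are illustrative:

```systemverilog
// Sketch: one INPUT stream per master and one EXPECT stream per slave,
// so that per-combination statistics are tracked by the DS scoreboard.
for (int i = 0; i < num_masters; i++)
   scoreboard0.define_stream(i, $sformatf("Master %0d", i), INPUT);
for (int j = 0; j < num_slaves; j++)
   scoreboard0.define_stream(j, $sformatf("Slave %0d", j), EXPECT);
```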


Step 6: Add appropriate options to your compile command and analyze your results

Change the Makefile by adding “rvm” to the -ntb_opts switch and adding +define+UVM_ON_TOP to the command line:

vcs -sverilog -timescale=1ns/1ns -ntb_opts uvm-1.1+rvm +incdir+../sv -l comp.log +define+UVM_ON_TOP

And that is all; you are ready to validate your DUT with a more advanced scoreboard with loads of built-in functionality. This is what you will get when you execute the “test_2m_4s” test:

Thus, not only do you now have stream-specific information, but you also have access to much more functionality, as mentioned earlier. For example, you can model transformations, check for out-of-order matches, allow for dropped packets, and iterate over different streams to get access to specific transfers. Depending on your requirements, you can use the simple UVM comparator for your basic checks and switch over to the DS scoreboard for more complex scenarios with the flip of a switch in the same setup. This is what we did for a UVM PCIe VIP we developed earlier (From the Magician’s Hat: Developing a Multi-methodology PCIe Gen2 VIP) so that users have access to all the information they require. Hopefully, this will keep you going until a more powerful UVM scoreboard arrives in a subsequent UVM version.

Posted in Communication, Interoperability, Reuse, Scoreboarding, UVM, VMM infrastructure | 2 Comments »

How do I debug issues related to UVM objections?

Posted by Vidyashankar Ramaswamy on 19th January 2012

Recently one of the engineers I work with in the networking industry was describing the challenges in debugging the UVM timeout error message. I was curious and looked into his test bench. After spending an hour or so, I was able to point out the master/slave driver issue where an objection was not dropped and the simulation thread hung waiting for the objections to drop. Then I started thinking: why not use the run-time option to track the status of the objections, +UVM_OBJECTION_TRACE? Well, this printed detailed messages about the objections — a lot more than what I was looking for! The problem now was to decipher the overwhelming messages spewed by the objection trace option. In a hierarchical test bench there could be hundreds of components, and you might be debugging an SoC-level test bench which you didn’t write or aren’t familiar with. Here is an excerpt of the message log using the built-in objection trace:

UVM_INFO @ 0: reset [OBJTN_TRC] Object uvm_test_top.env.master_agent.mast_drv dropped 1 objection(s): count=0 total=0
UVM_INFO @ 0: reset [OBJTN_TRC] Object uvm_test_top.env.master_agent.mast_mon raised 1 objection(s): count=1 total=1
UVM_INFO @ 0: reset [OBJTN_TRC] Object uvm_test_top.env.master_agent added 1 objection(s) to its total (raised from source object ): count=0 total=2
UVM_INFO @ 0: reset [OBJTN_TRC] Object uvm_test_top.env added 1 objection(s) to its total (raised from source object ): count=0 total=2
UVM_INFO @ 0: reset [OBJTN_TRC] Object uvm_test_top.env.master_agent.mast_mon dropped 1 objection(s): count=0 total=0
UVM_INFO @ 0: reset [OBJTN_TRC] Object uvm_test_top.env.slave_agent.drv raised 1 objection(s): count=1 total=1

As a verification engineer, you want to begin debugging the component or part of the test bench code which did not lower the objection as soon as possible. You want to minimize looking into the unfamiliar test bench code as much as possible or stepping through the code using a debugger.

The best way is to call display_objections() just before the timeout is reached. As there is no callback available in the timeout procedure, I wrote the following few lines of code, which can be forked off in any task-based phase. I would recommend doing this in your base test, which can then be extended to create feature-specific tests. You can save some CPU cycles by guarding this with a run-time option:

uvm_root top = uvm_root::get();
fork begin
   #(top.phase_timeout - 1);
   // 'phase' is the argument of the task-based phase this runs in
   phase.phase_done.display_objections(null, 1); // show who is still objecting
end join_none

Output of the log message using the above code is shown below:

The total objection count is 2

Source  Total
Count   Count   Object
------  -----   --------------------------
  0       2     uvm_top
  0       2       uvm_test_top
  0       2         env
  0       1           master_agent
  1       1             mast_drv
  0       1           slave_agent
  1       1             slv_drv

From the above table, it is clear that the master and slave drivers did not drop their objections. Now you can look into the master and slave driver components and further debug why they did not drop them. There are many different ways to achieve the same result; I welcome you to share your thoughts and ideas on this.

Posted in Debug, Messaging, UVM | 1 Comment »