Verification Martial Arts: A Verification Methodology Blog

Archive for the 'VMM' Category

Verification Methodology Manual

SNUG-2012 Verification Round Up – Miscellaneous Topics

Posted by paragg on 29th March 2013

In my final installment of the series of blogs summing up the various SNUG verification papers of 2012, I try to cover the user papers on Design IP/Verification IP and on SystemC and SystemVerilog co-simulation. Please find my earlier blogs on the other domains here: SystemVerilog Language, Methodologies & VCS technologies.

DesignWare core USB3.0 Controller (DWC_usb3) can be configured as a USB3.0 Device Controller. When verifying a system that comprises a DWC_usb3 Device Controller, the verification environment is responsible for bringing up the DWC_usb3 Device Controller to its proper operation mode to communicate with the USB3.0 Host. The paper “Integrating DesignWare USB3.0 Device Controller In a UVM-based Testbench” by Ning Guo of Paradigm Works describes the process of configuring and driving the DWC_usb3 Device Controller in a UVM-based verification environment using the Discovery USB 3.0 Verification IP. The paper also describes how the verification environment needs to be created so that it is highly configurable and reusable.

The AMBA 4 ACE specification enables system-level cache coherency across clusters of multicore processors, such as the ARM Cortex-A15 and Cortex-A7 MPCore™ processors. This ensures optimum performance and power efficiency of complex SoC designs. However, the design complexity associated with these capabilities is also higher, and it throws up new verification challenges. In the paper “Creating AMBA4 ACE Test Environment With Discovery VIP”, Whitney Huang and Sean Chou of MediaTek Inc. demonstrate how to tackle these complex verification challenges and increase verification productivity by using the Synopsys Discovery AMBA ACE VIP.

The paper, “Verification Methodology of Dual NIC SOC Using VIPs” by A.V. Anil Kumar, Mrinal Sarmah and Sunita Jain of Xilinx India Technology Services Pvt. Ltd., talks about how various features of the Synopsys PCIe and Ethernet Verification IPs can be exploited to help in the efficient verification of the DUT across various traffic configurations. The paper explores how the VIP Application Programming Interfaces (APIs) can be leveraged in the test cases to reach high functional coverage numbers in a very short duration. They also show how a dual NIC verification environment can effectively use Ethernet VIP APIs to test various Media Access Control (MAC) features. Finally, they conclude with how the implementation can be reused across future revisions of their design.

The ability to analyze the performance of the SoC at an early stage of the design can make a significant difference to the end product, leading to an earlier and more accurate estimate of the expected performance. Dayananda Yaraganalu Sadashivappa, Igal Mariasin and Jayaprakash Naradasi of SanDisk India Device Design Centre Pvt. Ltd., in the paper “Generic MLM environment for SoC Performance Enhancement”, outline the solution that was found by using the Synopsys VIP models. The VIPs were used in conjunction with the interconnect, which in this case is a Multi-Layer-Matrix (MLM). The environment was built leveraging the VMM base classes. The VMM multiple-stream scenario (vmm_ms_scenario) base class was used to create the traffic across the matrix, and the performance meters were constructed using the base classes. Callbacks were leveraged appropriately to help collate the statistics. Multiple knobs were used to make the environment generic and configurable. The approach helped in finding multiple performance bugs which could not have been easily found using conventional verification.

In the paper, “User Experience Verifying Ethernet IP Core”, Puneet Rattia of Altera Corporation presents his experience with verifying the Altera® 40-100Gbps Ethernet IP core using a VMM environment while integrating the Ethernet VIP from Synopsys. He explains how he created a full suite of system- and block-level regression tests and then goes on to show how he utilizes the coverage mapping capabilities of VCS to merge the results across these various testbenches and produce meaningful reports. Besides showing how to reuse the verification infrastructure at the SoC level, the paper also demonstrates how they went in for horizontal reuse by integrating the reference SystemC-based models developed and prototyped in the early phase of the project.

UVM 1.x includes support for the communication interfaces defined by the SystemC TLM-2.0 standard. This enables integration of SystemC TLM-2.0 IP into a SystemVerilog UVM verification environment. Dr David Long, John Aynsley and Doug Smith of Doulos, in the paper “A Beginner’s Guide to Using SystemC TLM-2.0 IP with UVM”, describe how this is best done. They point out that the connection between SystemC and SystemVerilog currently requires a tool-specific interface such as the Synopsys Transaction Level Interface (TLI). The paper begins with a brief overview of TLM-2.0 aimed at novice users. It then discusses the steps required to add a SystemC TLM-2.0 model into a SystemVerilog UVM environment and simulate it with VCS. At each step, issues that users will face are explored and suggestions made for practical fixes, showing the relevant pieces of code. Finally, the paper gives a summary of areas where the UVM implementation of TLM-2.0 differs from the SystemC standard and proposes workarounds to ensure correct communication between the SystemVerilog and SystemC domains.

There is an inherent need to enable the horizontal reuse of components created during the architecture and exploration stage. Subhra S Bandyopadhyay and Pavan N M of Intel Technology India Pvt. Ltd, in “Integrating SystemC OSCI TLM 2.0 Models to OVM based System Verilog Verification Environments”, talk about how their architecture team creates SystemC models for early performance analysis and accelerated software development. In the OVM-based verification environment, the objective was to reuse this model as a reference model, thereby reducing the overall environment bring-up time. The challenge was not only to integrate the SystemC model in the OVM-based verification environment but also to be able to efficiently send transactions from SV to SystemC and vice versa. This paper explores the successful integration of SystemC TLM2 components in OVM-based verification environments and also highlights how the VCS TLI (Transaction Level Interface) adapters help TLM2.0 sockets in SystemC communicate with those in SV and vice versa.

Truly, I feel overwhelmed by the number of papers and the interesting use of technology across a variety of domains on which users share their experiences across the various SNUG conferences. As we speak, the SNUG events for 2013 have started, and the stage is all set for a new set of very informative and interesting sessions. I am sure most of you will be attending the SNUG conferences in your area. You can find the detailed schedule of those here.

Posted in Announcements, Automation, Callbacks, Coding Style, Communication, Reuse, Structural Components, SystemC/C/C++, SystemVerilog, Transaction Level Modeling (TLM), Tutorial, UVM, VMM | Comments Off

SNUG-2012 Verification Round Up – Language & Methodologies – II

Posted by paragg on 3rd March 2013

In my previous post, we discussed papers that leveraged SystemVerilog language and constructs, as well as those that covered broad methodology topics.  In this post, I will summarize papers that are focused on the industry standard methodologies: Universal Verification Methodology (UVM) and Verification Methodology Manual (VMM).

Papers on Universal Verification Methodology (UVM)

Some users prefer not to use the base classes of a methodology directly. Adding a custom layer enables them to add in additional capabilities specific to their requirements. This layer would consist of a set of generic classes that extend the classes of the original methodology. These classes provide a convenient location to develop and share the processes that are relevant to an organization for re-use across different projects. Pierre Girodias of IDT (Canada), in the paper “Developing a re-use base layer with UVM”, focuses on the recommendations that adopters of these “methodologies” should follow while developing the desired ‘base’ layer. The paper also identifies typical problems and possible solutions encountered while developing this layer, some of which include dealing with the lack of multiple inheritance and foraging through class templates. A minimal sketch of what such a layer might look like is shown below.
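
As an illustration only (not taken from the paper), a re-use base layer often starts as thin wrappers around the UVM base classes so that project code never derives from UVM directly; the “acme” prefix and the classes below are hypothetical:

import uvm_pkg::*;
`include "uvm_macros.svh"

// Organization-level base component: common services live here once and are
// inherited by every project component that derives from this layer.
virtual class acme_component extends uvm_component;
  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction
  function void report_build();
    `uvm_info("ACME", $sformatf("Built %s", get_full_name()), UVM_LOW)
  endfunction
endclass

// Organization-level driver: projects extend acme_driver, not uvm_driver.
class acme_driver #(type REQ = uvm_sequence_item) extends uvm_driver #(REQ);
  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction
endclass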

UVM provides many features but fails to define a reset methodology, forcing users to develop their own methodology within the UVM framework to test the ‘reset’ of their DUT. Timothy Kramer of The MITRE Corporation, in the paper “Implementing Reset Testing”, outlines several different reset strategies and enumerates the merits and disadvantages of each. As is the case for all engineering challenges, there are several competing factors to consider, and in this paper the different strategies are compared on flexibility, scalability, code complexity, efficiency, and how easily they can be integrated into existing testbenches. The paper concludes by presenting the reset strategy which proved to be the most optimal for their application.

The ‘Factory’ concept in advanced OOP-based verification methodologies like UVM is something that has baffled most verification engineers. But is it all that complicated? Not necessarily, and this is what is explained by Clifford E. Cummings of Sunburst Design, Inc. in his paper “The OVM/UVM Factory & Factory Overrides – How They Work – Why They Are Important”. The paper explains the fundamental details related to the OVM/UVM factory, how it works, and how overrides facilitate simple modification of the testbench component and transaction structures on a test-by-test basis. It not only explains why the factory should be used but also demonstrates how users can create configurable UVM/OVM based environments without it.
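
For readers new to the factory, here is a minimal, hypothetical illustration (not from the paper) of a test-by-test override using the standard UVM factory API; wb_driver and wb_err_driver are placeholder class names, and the snippet assumes import uvm_pkg::*; and `include "uvm_macros.svh":

class err_inject_test extends uvm_test;
  `uvm_component_utils(err_inject_test)
  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction
  function void build_phase(uvm_phase phase);
    super.build_phase(phase);
    // Replace every factory-created wb_driver with wb_err_driver for this test only.
    set_type_override_by_type(wb_driver::get_type(), wb_err_driver::get_type());
    // Or override just one instance in the hierarchy:
    set_inst_override_by_type("env.agent.drv", wb_driver::get_type(), wb_err_driver::get_type());
  endfunction
endclass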

The Register Abstraction Layer has always been an integral component of most of the HVL methodologies defined so far. Doug Smith of Doulos, in his paper, “Easier RAL: All You Need to Know to Use the UVM Register Abstraction Layer”, presents a simple introduction to RAL. He distills the adoption of UVM RAL into a few easy and salient steps which are adequate for most cases. The paper describes the industry-standard automation tools for the generation of the register model. Additionally, the integration of the generated model, along with the front-door and backdoor access mechanisms, is explained in a lucid manner.
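
As a quick, hypothetical flavor of what front-door and backdoor access look like once a UVM register model is integrated (my_regmodel, regmodel and CTRL are placeholder names for a generated model, not from the paper):

task do_reg_accesses(my_regmodel regmodel);
  uvm_status_e   status;
  uvm_reg_data_t data;
  // Front-door: goes through the bus agent and consumes simulation time.
  regmodel.CTRL.write(status, 32'h0000_0001, UVM_FRONTDOOR);
  // Backdoor: uses the hdl_path to peek/poke the RTL directly, in zero simulation time.
  regmodel.CTRL.read(status, data, UVM_BACKDOOR);
endtask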

The combination of SystemVerilog language features coupled with the DPI and VPI language extensions can enable the testbench to generically react to value changes on arbitrary DUT signals (which might or might not be part of a standard interface protocol). Jonathan Bromley of Verilab, in “I Spy with My VPI: Monitoring signals by name, for the UVM register package and more”, presents a package which supports both value probing and value-change detection for signals identified at runtime by their hierarchical name, represented as a string. This provides a useful enhancement to the UVM register package, allowing the same string to be used for backdoor register access.
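
The paper presents its own package; purely as a hedged illustration of the underlying idea, the standard UVM DPI routine uvm_hdl_read can already probe a signal identified only by a hierarchical string name (the path and component below are hypothetical):

import uvm_pkg::*;
`include "uvm_macros.svh"

class probe_demo extends uvm_component;
  `uvm_component_utils(probe_demo)
  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction
  task run_phase(uvm_phase phase);
    uvm_hdl_data_t val;
    // Read the current value of a DUT signal known only as a string path.
    if (uvm_hdl_read("top.dut.ctrl_unit.state", val))
      `uvm_info("PROBE", $sformatf("state = 'h%0h", val), UVM_LOW)
    else
      `uvm_error("PROBE", "Signal path not found or not accessible")
  endtask
endclass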

Proper testing of most digital designs requires that error conditions be stimulated to verify that the design either handles them in the expected fashion, or ignores them, but in all cases recovers gracefully. How to do it efficiently and effectively is presented in “UVM Sequence Item Based Error Injectionby Jeffrey Montesano and Mark Litterick, Verilab. A self-checking constrained-random environment can be put to the test when injecting errors, because unlike the device-under-test (DUT) which can potentially ignore an error, the testbench is required to recognize it, potentially classify it, and determine an appropriate response from the design. This paper presents an error injection strategy using UVM that meets all of these requirements. The strategy encompasses both active and reactive components, with code examples provided to illustrate the implementation details.
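
A common flavor of this idea, shown here only as a hypothetical sketch (not the paper's code), is to carry a constrained-random error knob in the sequence item itself, so the driver knows when to corrupt the transfer and the scoreboard knows what response to expect; eth_frame and its fields are made-up names, and import uvm_pkg::*; plus the UVM macros are assumed:

class eth_frame extends uvm_sequence_item;
  rand bit [47:0] dest_addr;
  rand bit        inject_crc_error;   // error-injection knob seen by driver and scoreboard
  // Inject an error in roughly 5% of frames by default; tests can override this.
  constraint err_rate_c { inject_crc_error dist {1'b0 := 95, 1'b1 := 5}; }
  `uvm_object_utils_begin(eth_frame)
    `uvm_field_int(dest_addr, UVM_ALL_ON)
    `uvm_field_int(inject_crc_error, UVM_ALL_ON)
  `uvm_object_utils_end
  function new(string name = "eth_frame");
    super.new(name);
  endfunction
endclass

In such a scheme the driver corrupts the CRC whenever the knob is set, while the scoreboard uses the same flag to expect the DUT to flag or drop the frame.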

The Universal Verification Methodology is a huge win for the hardware verification community, but does it have anything to offer Electronic System Level design? David C Black from Doulos Inc. explores UVM on the ESL front in the paper “Does UVM make sense for ESL?” The paper considers UVM and SystemVerilog enhancements that could make the methodology even more enticing.

Papers on Verification Methodology Manual (VMM)

Joseph Manzella of LSI Corp, in “Snooping to Enhance Verification in a VMM Environment”, discusses situations in which a verification environment may have to peek at internal RTL states and signals to enhance results, and provides guidelines on what is acceptable practice. This paper explains how the combination of vmm_log (the logger class for VMM) and +vmm_opts (the command-line utility to change the different configurable values) helps in creating a configurable message wrapper for internal grey-box testing. The techniques show how different assertion failures can be re-routed through the VMM messaging interface. An effective and reusable snooping technique for robust checking is also covered.

At SNUG Silicon Valley, in “Mechanism to allow easy writing of test cases in a SystemVerilog Verification environment, then auto-expand coverage of the test case”, Ninad Huilgol of VerifySys addresses designers’ apprehension about using a class-based environment through a tool that leverages the VMM base classes. It automatically expands the scope of the original test case to cover a larger verification space around it, based on a user-friendly API that looks more like Verilog, hiding the complexity underneath.

Andrew Elms of Huawei, in “Verification of a Custom RISC Processor”, presents the successful application of VMM to the verification of a custom RISC processor. The challenges in verifying a programmable design and the solutions to address them are presented. Three topics are explored in detail: use of the verification planner, constrained-random generation of instructions, and coverage closure. The importance of the verification plan as the foundation for the verification effort is explored. Enhancements to the VMM generators are also explored. By default, VMM data generation is independent of the current design state, such as register values and outstanding requests; RAL and generator callbacks are used to address this. Finally, experiences with coverage closure are presented.

We will keep you covered on varied verification topics in the upcoming blogs! Enjoy reading!

Posted in Announcements, Register Abstraction Model with RAL, UVM, VMM, VMM infrastructure | 1 Comment »

SNUG-2012 Verification Round Up – Language & Methodologies – I

Posted by paragg on 25th February 2013

As in the previous couple of years, last year’s SNUG – Synopsys User Group – showcased an amazing number of useful user papers leveraging the capabilities of the SystemVerilog language and the verification methodologies centered on it.

I am always excited when I see this plethora of useful papers, and I try to ensure that I set aside some time to go through all these user experiences. Now, as we wait for SNUG Silicon Valley to kick-start the SNUG events for this year, I want to look back at some of the very interesting and useful papers from the different SNUGs of 2012. Let me start by talking about a few papers in the area of the SystemVerilog language and SV methodologies.

Papers leveraging the SystemVerilog language and constructs

Hillel Miller of Freescale, in the paper “Using covergroups and covergroup filters for effective functional coverage”, uncovers the mechanisms available for carving out the coverage goals. The P1800-2012 version of the SystemVerilog LRM provides new constructs just for doing this; the construct the paper focuses on is “with”. The new construct provides the ability to carve a sub-range of goals out of a multidimensional range of possibilities. This is very relevant in a “working” or under-development setup that requires frequent reprioritization to meet tape-out goals.
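
As a small, hypothetical illustration of the SystemVerilog-2012 “with” bin filter (module and signal names are made up, not from the paper):

module cov_demo (input logic clk, input logic [31:0] addr);
  covergroup addr_cg @(posedge clk);
    coverpoint addr {
      // Carve a sub-range of goals out of the full address space:
      // only word-aligned addresses in the low 4 KB are coverage goals.
      bins aligned_low[] = {[0:4095]} with (item % 4 == 0);
    }
  endgroup
  addr_cg cg = new();
endmodule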

The paper “Taming Testbench Timing: Time’s Up for Clocking Block Confusions” by Jonathan Bromley and Kevin Johnston of Verilab reviews the key features and purpose of clocking blocks and then examines why they continue to be a source of confusion and unexpected behavior for many verification engineers. Drawing from the authors’ project and mentoring experience, it highlights typical usage errors and how to avoid them. They clarify the internal behavior of clocking blocks to help engineers understand the reasons behind common problems, and show techniques that allow clocking blocks to be used productively and with confidence. Finally, they consider some areas that may cause portability problems across simulators and indicate how to avoid them.
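
For context, a bare-bones clocking block (hypothetical signal names and skews, not taken from the paper) looks like the sketch below; the testbench drives and samples only through cb, so races against the DUT clock edge are avoided:

interface bus_if (input logic clk);
  logic req;
  logic gnt;
  clocking cb @(posedge clk);
    default input #1step output #2ns;  // sample just before the edge, drive 2 ns after it
    output req;
    input  gnt;
  endclocking
  modport tb (clocking cb);
endinterface

// In the testbench: vif.cb.req <= 1'b1;  and sample with  vif.cb.gnt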

Inference of latches and flops based on coding styles has always been a topic that creates multiple viewpoints. There are other such scenarios of synthesis and simulation mismatches that one typically comes across. To address all such ambiguity, language developers have provided constructs that allow explicit resolution based on the intent. To help us gain a deeper understanding of the topic, Don Mills of Microchip Technology Inc. presented the related concepts in the paper “Yet Another Latch and Gotchas Paper” @ SNUG Silicon Valley. This paper discusses and provides solutions to issues that designers using SystemVerilog for design come across, such as case expression issues for casez and casex, latches generated when using unique case or priority case, SRFF coding style problems with synthesis, and the SystemVerilog 2009 definition of logic.

Gabi Glasser from Intel presented the paper “Utilizing SystemVerilog for Mixed-Signal Validation” @ SNUG Israel, where he proposed a mechanism for simplifying analysis and increasing coverage for mixed-signal simulations. The method proposed was to take advantage of SystemVerilog capabilities that enable defining an associative (hash) array of unbounded size. During the simulation, vectors are created for the required analog signals, allowing them to be analyzed within the testbench during or at the end of the simulation, without requiring these signals to be saved into a file. The flow change makes it possible to launch a large-scale mixed-signal regression while allowing easier analysis of coverage data.
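
A minimal sketch of that idea (all names hypothetical, not the author's code): capture an analog node into an associative array keyed by sample time instead of dumping it to a file, and post-process it inside the testbench:

module ms_probe (input real vout);
  timeunit 1ns; timeprecision 1ps;
  real trace[longint];                 // associative array: unbounded, keyed by sample time
  always #10ns trace[$time] = vout;    // periodic in-memory sampling of the analog value
  final begin
    longint t;
    if (trace.first(t))
      do $display("t=%0d ns  vout=%g", t, trace[t]); while (trace.next(t));
  end
endmodule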

A design pattern is a general, reusable solution to a commonly recurring problem within a given context. The benefit of using design patterns is clear: they give designers a common language when approaching a problem, and a set of widely used tools to solve issues as they come up. The paper “Design Patterns In Verification” by Guy Levenbroun of Qualcomm explores several common problems that might arise during the development of a testbench and how design patterns can be used to solve them. The patterns covered in the paper fall broadly into three categories: creational (e.g. factory), structural (e.g. composite) and behavioral (e.g. template).

Arik Shmayovitsh, Avishay Tvila and Guy Lidor of Sigma Designs, in their paper “Truly reusable Testbench-to-RTL connection for SystemVerilog”, present a novel approach to connecting the DUT and testbench using consistent semantics while reusing the testbench. This is achieved by abstracting the connection layer of each testbench using the SystemVerilog ‘bind’ construct. This ensures that the only thing required to reuse the testbench for a new DUT is to identify the instance of the corresponding DUT.
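
As a hedged sketch of the general ‘bind’ idea (module and signal names below are hypothetical, not the paper's code), an interface can be instantiated inside every DUT instance so the testbench never references DUT ports directly:

// Testbench-side view of the DUT signals; the testbench only ever sees this interface.
interface dut_probe_if (input logic clk, input logic [7:0] data_out);
  // driving/sampling tasks can be added here
endinterface

// Compiled alongside the DUT: instantiates the interface inside every my_dut instance
// and connects it to the DUT's signals by name. Reusing the testbench with a new DUT
// only requires changing this one bind directive.
bind my_dut dut_probe_if probe_i (.clk(clk), .data_out(data_out));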

In the paper “A Mechanism for Hierarchical Reuse of Interface Bindings”, Thomas Zboril of Qualcomm (Canada) explores another method to instantiate SV interfaces, connect them to the DUT and wrap the virtual interfaces for use in the test environment. This method allows the reuse of all the code when the original block-level DUT becomes a lower-level instance in a larger subsystem or chip. The method involves three key mechanisms: hierarchical virtual interface wrappers, a novel approach of hierarchical instantiation of SV interfaces, and automatic management of hierarchical references via SV macros.

Thinh Ngo and Sakar Jain of Freescale Semiconductor, in their paper “100% Functional Coverage-Driven Verification Flow”, propose a coverage-driven verification flow that can efficiently achieve 100% functional coverage during simulation. The flow targets varied functionality, focuses on the transaction level, measures coverage during simulation, and fails a test if 100% of the expected coverage is not achieved. This flow maps stimulus coverage to functional coverage, with every stimulus transaction being associated with an event in the coverage model and vice versa. This association is derived from the DUT specification and/or the DUT model. Expected events generated along with stimulus transactions are compared against actual events triggered in the DUT, and the comparison results are used to pass or fail the test. 100% functional coverage is achieved via 100% stimulus coverage. The flow enables every test, with its targeted functionality, to meet 100% functional coverage provided that it passes.

Papers on Verification Methodology

In the paper “Top-down vs. bottom-up verification methodology for complex ASICs”, Paul Lungu and Zygmunt Pasturczyk of Ciena (Canada) cover the simulation methodology used for two large ASICs requiring block-level simulations. A top-down verification methodology was used for one of the ASICs, while a larger version needed an expanded bottom-up approach using extended simulation capabilities. Some techniques and verification methods, such as chaining of sub-environments from block to top level, are highlighted along with challenges and solutions found by the verification team. The paper presents a useful technique of passing a RAL (Register Abstraction Layer) mirror to the C models which are used as scoreboards in the environment. The paper also presents a method of generating stable clocks inside the “program” block.

The paper “Integration of Legacy Verilog BFMs and VMM VIP in UVM using Abstract Classes” by Santosh Sarma of Wipro Technologies (India) presents an alternative approach where legacy BFMs written in Verilog, and not implemented using ‘classes’, are hooked up to higher-level class-based components to create a standard UVM VIP structure. The paper also discusses an approach where existing VMM transactors that are tied to such legacy BFMs can be reused inside the UVM VIP with the help of the VCS-provided UVM-VMM Interoperability Library. The implementation makes use of abstract classes to define functions that invoke the BFM APIs. The abstract class is then concretized using derived classes which give the actual implementation of the functions in the abstract class. The concrete class is then bound to the Verilog instance of the BFM using the SystemVerilog “bind” concept. The concrete class handle is then used by the UVM VIP and the VMM transactor to interact with the underlying Verilog BFM. Using this approach, the UVM VIP can be made truly reusable by using run-time binding of the Verilog BFM instance to the VIP instead of using hardcoded macro names or procedural calls.
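
The abstract-class-plus-bind pattern roughly looks like the hypothetical, simplified single-instance sketch below (package, module, class and task names are placeholders, not the paper's implementation):

package bfm_api_pkg;
  // Abstract "contract" class seen by the class-based VIP.
  virtual class mdio_api;
    pure virtual task bfm_write(input bit [4:0] addr, input bit [15:0] data);
  endclass
  mdio_api api_handle;   // run-time binding point published to the VIP
endpackage

// Shim module bound into the legacy Verilog BFM.
module mdio_bfm_shim;
  import bfm_api_pkg::*;

  // Thin wrapper task in module scope; the upward hierarchical reference
  // resolves to the legacy BFM instance this shim is bound into.
  task automatic shim_write(input bit [4:0] addr, input bit [15:0] data);
    mdio_bfm.do_write(addr, data);   // assumed legacy BFM task name
  endtask

  class mdio_api_impl extends mdio_api;   // concrete implementation
    task bfm_write(input bit [4:0] addr, input bit [15:0] data);
      shim_write(addr, data);
    endtask
  endclass

  initial begin
    mdio_api_impl impl = new();
    api_handle = impl;               // the class-based VIP picks this handle up at run time
  end
endmodule

// Elsewhere, e.g. in a top-level file:  bind mdio_bfm mdio_bfm_shim shim_i();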

“A Unified Self-Check Infrastructure - A Standardized Approach for Creating the Self-Check Block of Any Verification Environment” by John Sotiropoulos, Matt Muresa and Massi Corba of Draper Laboratories, Cambridge, MA, USA, presents a structured approach for developing a centralized “Self-Check” block for a verification environment. The approach is flexible enough to work with various testbench architectures and is portable across different verification methodologies. Here, all of the design’s responses are encapsulated under a common base class, providing a single “Self-Check” interface for any checking that needs to be performed. This abstraction, combined with a single centralized scoreboard and a standardized set of components, provides the consistency needed for faster development and easier code maintenance. It expands the concept of ‘self-check’ to incorporate white-box monitors (tracking internal DUT state changes, etc.) and temporal models (reacting to wire changes) along with traditional methodologies for enabling self-checking.

For VMM users looking at migrating to UVM, there is another paper, from Courtney Schmitt of Analog Devices, Inc.: “Transitioning to UVM from VMM” discusses the process of transitioning to a UVM-based environment from VMM. Differences and parallels between the two verification methodologies are presented to show that updating to UVM is mostly a matter of getting acquainted with a new set of base classes. Topics include UVM phases, agents, TLM ports, configuration, sequences, and register models. Best practices and reference resources are highlighted to make the transition from VMM to UVM as painless as possible.

Posted in Announcements, Coverage, Metrics, Creating tests, Customization, Modeling, Optimization/Performance, Reuse, SystemVerilog, UVM, Uncategorized, VMM, VMM infrastructure | 3 Comments »

Build your own code generator!

Posted by Amit Sharma on 25th June 2012

One of the critical problems developers cope with during the SoC development process (architecture plan, RTL design, verification, software development) is the constant need to synchronize between different forms of the same data structure: the SoC registers database. The SoC registers database can be found at the SoC architecture team (who write the SoC registers description document), design engineers (who implement the registers structure in RTL code), verification engineers (who write the verification infrastructure – such as RAL code – and write verification tests, such as exhaustive read/write tests of all registers), and software engineers (who use the registers information to write embedded software).

Since the same information is used in different forms, it is only natural to have a single, central database which holds all SoC registers data. You would ideally like to generate all the required output files (documentation, UVM or VMM Register Abstraction Model, RTL, C headers, …) from this central database. Different vendors and CAD teams provide different automation solutions for doing this.

The RALF specification contains all of the necessary information to generate RTL and testbench code that implements the specified registers. There are many ways to code and implement RTL, so it is not possible to provide a general-purpose solution. As far as the testbench abstraction model is concerned, there are multiple ways of customizing your model post generation in both UVM and VMM. Callbacks, factories, and configurable RAL model attributes are some of the ways through which the desired customization can be brought in. “The ‘user’ in RALF : get ralgen to generate ‘your’ code” highlights a very convenient way of bringing in SystemVerilog-compatible code which will be copied as-is into the RAL model in the desired scope. When it comes down to generating the RTL and the ‘C’ headers, we cannot leave the customization to such a late stage. Also, different organizations and project groups have their own RTL and C-code coding styles, which means a generated output of a very generic nature might not be very helpful. For RTL generation, engineers want the generated code to be power- and gate-count-efficient. Similarly, C register header generation often needs to follow coding styles and match the CPU firmware API. How do we bring all these customizations to the end user?

Using the RALF C++ API, you have full access to the parsed RALF data (through C++ routines), which you can use to implement a customized RTL code generator, or any other feature that needs RALF data. So, you can use it to generate your C header files or HTML documentation, to translate the input RALF files to another register description format, or to create custom covergroups and coverage convergence sequences (DAC 2012 User Track poster 6U.8 — Register Verification on a Fast Lane: Using Automation to Converge on UVM REG Coverage Models).

I have seen two instances of the need to generate a different register specification in the recent past, and that is one of the reasons I decided to put this down in a blog. Let me talk about the first instance.

One of the project groups was in the process of migrating from their own SV base classes to UVM. They had their own register description format from which they used to generate their register abstraction model. This was a proven flow.

So, when they migrated to UVM, they wanted to have a flow which would validate the changes that they were making.

Given that they were moving to RALF and ‘ralgen’, they didn’t want to create register specifications in the legacy format anymore. So, they wanted some automation for generating specifications in the earlier format. How did they go about doing this? They took the RALF C++ APIs and, using them, were able to create the necessary automation to generate the legacy format from RALF in no time (from what I remember, it was half a day’s work). Everyone was involved in doing what they were best at, and that helped in the overall scheme of things.

The other customer had their own format from which they were generating RTL, firmware code and HTML. They had the necessary automation to create RALF from it for generating the UVM register model. They also had a mechanism in place to generate IPXACT from this format, as well as vice versa. So, to complete the traceability matrix, they wanted a RALF-to-IPXACT conversion. Again, the most logical approach was to take the RALF C++ APIs and use them to iterate through the parsed RALF data and generate IPXACT. Though this effort is not yet complete, it took just a day or so to generate output valid against the IPXACT 1.5 schema, and all that is required now is some additional work to smooth out the corners.

How do you start using these APIs and build your own code/HTML generators? You need to include "ralf.hpp" (which is in $VCS_HOME/include) in your ‘generator’ code, and then, to compile the code, you need to pick up the shared library libralf.so from the VCS installation.

$CPP_COMPILER $CFLAGS -I${VCS_HOME}/include -L${VCS_HOME}/lib -lralf your-file.cpp $LDFLAGS

#include <cstdio>    // fprintf
#include <cstdlib>   // exit
#include "ralf.hpp"

int main(int argc, char *argv[])
{
  // Check basic command line usage.
  if (argc < 3) {
    fprintf(stderr, "Error: Wrong Usage.\n");
    // Show correct usage ...
    exit(1);
  }

  /*
   * Parse command line arguments to get the essential constructor
   * arguments. See the documentation of class ralf::vmm_ralf's
   * constructor parameters. (The argument order assumed below is
   * only illustrative.)
   */
  const char *ralf_file = argv[1];
  const char *top       = argv[2];
  const char *rtl_dir   = (argc > 3) ? argv[3] : ".";
  const char *inc_dir   = (argc > 4) ? argv[4] : ".";

  {
    /*
     * Create a ralf::vmm_ralf object by passing in the proper
     * constructor arguments.
     */
    ralf::vmm_ralf ralf_data(ralf_file, top, rtl_dir, inc_dir);

    /*
     * Get the top level object storing the parsed RALF block/system
     * data and traverse that, top-down, to get access to the complete
     * RALF data.
     */
    const ralf::vmm_ralf_blk_or_sys_typ *top_lvl_blk_or_sys
        = ralf_data.getTopLevelBlockOrSys();

#ifndef GEN_RTL_IN_SNPS_STYLE
    /*
     * TODO - Traverse the parsed RALF data structure top-down,
     * starting from 'top_lvl_blk_or_sys', for complete access to the
     * RALF data, and then do whatever you want with it. One typical
     * usage of the parsed RALF data is to generate RTL code in your
     * own style.
     */
    //
    // TODO - Add your RTL generator code here.
    //
#else
    /*
     * As part of this library, Synopsys also provides a default RTL
     * generator, which can be invoked through the 'generateRTL()'
     * method of the 'ralf::vmm_ralf' class, as demonstrated below.
     */
    ralf_data.generateRTL();
#endif
  }

  return 0;
}

Essentially, you have a handle to the parsed database, and with the available APIs you can do whatever you want with it :) .. The API documentation is shipped with the VCS installation. Also, if you are like me and would rather hack away at existing code than start with something from scratch, you can just check with Synopsys support for existing templates that dump out code in a specific format, and start modifying those for your requirements.

Posted in Automation, Coverage, Metrics, Customization, Interoperability, Register Abstraction Model with RAL, Tools & 3rd Party interfaces, UVM, VMM infrastructure | Comments Off

Closed Loop Register Verification using IDesignSpec and the Register Abstraction Layer

Posted by Amit Sharma on 26th September 2011

Nitin Ahuja, Agnisys Technology Pvt. Ltd

In the previous article, titled “Automatic generation of Register Model for VMM using IDesignSpecTM”, we discussed how it is advantageous to use a register model generator such as IDesignSpecTM to automate the process of RALF model generation. Taking it forward, in this article we will discuss how to close the loop on register verification.

Various forms of coverage are used to ensure that registers are functioning properly. There are three coverage models in VMM. They are:

1. reg_bits coverage: this model is used to make sure that all the bits in the register are covered. This model works by writing and reading both 1 and 0 on every register bit, hence the name. This is specified using “cover +b” in the RALF model.

2. field_vals coverage: the field value coverage model is implemented at the register level and supports value coverage of all fields and cross coverage between fields and other cross coverage points within the same register. This is specified using “cover +f” in the RALF model. Users can specify the cross coverage depending on the functionality.

3. Address map: this coverage model is implemented at block level and ensures that all registers and the memories in the block have been read from and written to. This is specified using “cover +a” in the RALF model.

We will discuss how coverage can be switched on/off and how the type of coverage can be controlled for each field directly from the register specification.

Once the RALF model is generated, the next step in verification is to generate the RTL and the SystemVerilog RAL model using ‘ralgen’. The generated RAL model along with the RTL can be compiled and simulated in the VMM environment to generate the coverage database. This database is used for the report generation and analysis.

Reports can be generated using IDesignSpecTM (IDS). IDS-generated reports have an advantage over other reports in that they present all the coverage much more concisely, at a single glance.

Turning Coverage ON or OFF

IDesignSpecTM enables the users to turn ON/OFF all the three types of coverage from within the MS Word specification itself.

Coverage can be specified and controlled using the “coverage” property in IDesignSpecTM which has the following possible values:

image

The hierarchical “coverage” property enables users to control the coverage of the whole block or at the chip level.

Here is a sample of how coverage can be specified in IDesignSpecTM:

image

image

This would be the corresponding RALF file:

agnisys_ralf

image

The coverage bins for each CoverPoint along with the cross for the various CoverPoints can also be defined in the specification as shown below:

image

This would translate to the following RALF:

image

Now, the next step after RALF generation would be to generate the RAL Model from the IDS generated RALF.

RAL MODEL AND RTL GENERATION FROM RALF:

The IDS generated RALF can be used with the Synopsys ‘ralgen’ to generate the RAL  (VMM or UVM) model as well as the RTL.

RAL model can be generated by using the following command:

image

If you specify -uvm in the first ralgen invocation above, a UVM Register Model is generated instead.

COMPILATION AND REPORT GENERATION:

Once the RTL and the RAL model are generated using the ‘ralgen’, the complete model can be compiled and simulated in the VMM environment using VCS.

To compile the model use the following command on the command line:

vcs -R +plusarg_save -sverilog -o "simv1" -ntb_opts rvm+dtm +incdir+<directories to search `defines> <files to be compiled> +define+RAL_COVERAGE

The compilation and simulation generates the simulation database which is used for the generation of the coverage reports.

Coverage reports can be generated in various forms, but the most concise form is graphics showing all the coverage at a glance. For this, a Tcl script, “ivs_simif.tcl”, takes the simulation database and generates the text-based report on execution of the following command:

% ivs_simif.tcl -in simv.vdb -svg

Before running the above command, set the environment variable “IDS_SIM_DIR”; the text reports are generated at this location. This also tells IDS where to look for the simulation data file.

A detailed graphical view of the report can be generated from IDS with the help of this text report. To generate the graphical report in the form of “scalable vector graphics” (SVG) select the “SVG” output from the IDS config and regenerate.

Another way of generating the SVG could be by using the IDS-XML or the Doc/Docx specification of the model as the input to the IDS in batch mode to generate the graphical report of the simulation by using the following command:

% idsbatch <IDS_generated_XML or doc/docx specification> -out “svg” -dir output_directory

Coverage Reports

IDesignSpec generates two types of reports from the input database.

They are:

1. Field_vals report

2. Reg_bits report

Field_vals report:

Field_vals report gives the graphical view of the field_vals coverage and the address coverage of the various registers and their respective fields.

The amount of coverage for a field (CoverPoint) is depicted by the level of green color in the field, while that for the complete register (CoverGroup) is shown by the color of the name of the register.

The address coverage for the individual register (CoverPoint) is shown by the color of the address of the register (green if addressed; black if not addressed), while that of the entire block (CoverGroup) is shown by the color of the name of the block.

The coloring scheme for all the CoverGroups, i.e. the register name in the case of field_vals coverage and the block name in the case of address coverage, is:

1. If the overall coverage is greater than or equal to 80% then the name appears in GREEN color

2. If the coverage is greater than 70% but less than 80% then it appears in YELLOW

3. For coverage less than 70% name appears in RED color

Figure 1 shows the field_vals and address coverage.

image

Figure:  Closed loop register verification using RALF and IDS

The above sample gives the following coverage information:

a. Two registers, T and resetvalue, out of a total of 9 registers are not addressed. Thus the overall coverage of the block falls in the range >70% and <80%, which is depicted by the color of Stopwatch (the name of the block).

b. All the fields of the registers are filled with some amount of green color, which shows the amount of coverage. For example, field T1 of register arr is covered 100% and is thus completely filled, while FLD4 of register X is covered only about 10%. The exact coverage value can be obtained by hovering over the field to get a tooltip showing it.

c. The color of the name of a register shows the overall coverage of the whole register; for example, X appears in red because its coverage is less than 70%.

Reg_bits report:

Reg_bits report gives the detailed graphical view of the reg_bits coverage and address coverage.

Address coverage for reg_bits is shown in the same way as for the address coverage in field_vals. Reg_bits coverage has 4 components, that is,

1. Written as 1

2. Read as 1

3. Written as 0

4. Read as 0

Each of the 4 components is allocated a specific region inside a bit. If that component of the coverage is hit then the corresponding region is shown as green else it is blank. The overall coverage of the entire register is shown by the color of the name of the register as in the case of the field_vals.

image

The above sample report shows that there is no issue with “Read as 1” for the ‘resetvalue’ register, while the other types of read/write have not been hit completely.

Thus, in this article we described what the various coverage models for a register are and how to generate the RALF coverage model of the registers automatically with minimum effort. An intuitive visualization of the register coverage data eases the effort involved in deciphering the coverage reports from lengthy simulation log files. This type of closed-loop register verification ensures better coverage and high-quality results in less time. Hope you found this useful. Do share your feedback, and also let me know if you want any additional details to get the maximum benefit from this flow.

Posted in Automation, Coverage, Metrics, Organization, Register Abstraction Model with RAL, Tools & 3rd Party interfaces, Verification Planning & Management | 1 Comment »

The ‘user’ in RALF : get ralgen to generate ‘your’ code

Posted by S. Varun on 11th August 2011

A lot of times, registers in a device could be associated with configuration fields that do not exist physically inside the DUT. For example, there could be a register field meant for enabling the scrambler, a field that would need to be set to “1” only when the protocol is PCIE. As this protocol mode is not a physical field, one cannot write it as a memory-mapped register. For such cases ralgen reserves a “user area” wherein users can write SystemVerilog-compatible code which will be copied as-is into the RAL model. This gives users the flexibility to add any variables/constraints that are not necessarily physical registers/fields while maintaining the automated flow. It ensures that the additional parameters are part of the ‘spec’, in this case the RALF from which the model generation happens, and thus creates a more seamless sharing of variables across the register model and the testbench.

Let’s look at how it works. If I have a requirement to randomize the register values based on additional testbench parameters, this is what can be done:

block mdio {
  bytes 2;
  register mdio_reg_1_0 @'h0000000 {
    field bit_31_0 {
      bits 32;
      access rw;
      reset 'h00000000;
      constraint c_bit_31_0 {
        value inside {[0:15]};
      }
    }
  }
  user_code lang=SV {
    rand enum {PCIE,XAUI} protocol;
    constraint protocol_reg1 {
      if (protocol == PCIE) mdio_reg_1_0.bit_31_0.value == 16'hFF;
    }
  }
}

As shown above, the “user_code” RALF construct enables users to add user code inside the generated RAL model. Note that this construct allows you to weave in custom code without having to modify the generated code. The construct can also be used to generate custom coverage. In the context of the above example, the “protocol mode” will not be a coverpoint in the coverage generated by ralgen, as it is not a physical field in the DUT. So the user can fill a separate covergroup for it using “user_code”. The new RALF spec and the generated RAL model with the added coverage are shown below:

block mdio {
  bytes 2;
  register mdio_reg_1_0 @'h0000000 {
    field bit_15_0 {
      bits 16;
      access rw;
      reset 'h00000000;
      constraint c_bit_15_0 {
        value inside {[0:15]};
      }
    }
  }
  user_code lang=SV {
    rand enum {PCIE,XAUI} protocol;
    constraint protocol_reg1 {
      if (protocol == XAUI)
        mdio_reg_1_0.bit_15_0.value == 16'hff;
    }
  }
  user_code lang=sv {
    covergroup protocol_mode;
      option.name = name;
      mode : coverpoint protocol {
        bins pcie = {PCIE};
      }
      mdio_reg : coverpoint mdio_reg_1_0.bit_15_0.value {
        bins set = {'hff};
      }
      cross mode, mdio_reg;
    endgroup
    protocol_mode = new();
  }
}

Figure: Generated model snippet

User code gets embedded in the generated RAL classes, but there is no way to embed user code in the “sample” method that exists inside each block. So for any user-embedded covergroups, the sampling needs to be done manually (perhaps inside a post_write callback of the registers/fields) within the user testbench using <covergroup>.sample(). The construct could also be used to embed additional data members and user-defined methods – for example, a sampling method to sample all the newly defined covergroups. Thus “user_code”, as a RALF construct, comes in as a very handy solution for embedding user code in the automated model generation flow. A possible callback-based sampling hook is sketched below.
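
Purely as a hedged sketch of the manual sampling just described, and assuming a UVM register model was generated (ralgen -uvm), a register callback can sample the embedded covergroup after every write; mdio_blk is a placeholder for the generated block class, and the VMM RAL callback facility offers an analogous hook:

class protocol_sample_cb extends uvm_reg_cbs;
  mdio_blk model;   // generated block that holds the user_code covergroup
  function new(string name = "protocol_sample_cb", mdio_blk model = null);
    super.new(name);
    this.model = model;
  endfunction
  // Called after each front-door write to the register this callback is registered on.
  virtual task post_write(uvm_reg_item rw);
    model.protocol_mode.sample();
  endtask
endclass

// Registration, e.g. from the environment:
//   protocol_sample_cb cb = new("cb", regmodel);
//   uvm_reg_cb::add(regmodel.mdio_reg_1_0, cb);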

Posted in Register Abstraction Model with RAL, Stimulus Generation | 1 Comment »

Automatic generation of Register Model for VMM using IDesignSpec

Posted by Amit Sharma on 5th August 2011

Nitin Ahuja, Verification Engineer, Agnisys Technology Pvt Ltd

Generating a register model by hand can take a lot of time in the design process and may result in serious bugs, which make the code inefficient. On the other hand, generating the register model using a register model generator such as IDesignSpecTM reduces the coding effort and produces more robust code by avoiding those bugs in the first place, thus making the process more efficient and significantly reducing the time to market.

A register model generator is beneficial in the following ways:

1. Error-free code in the first place: being automatically generated, the register model code avoids human as well as logical errors.

2. In the case of a change in the register model specification, it is easy to modify the spec and regenerate the code in no time.

3. All kinds of hardware, software and industry-standard specifications, as well as verification code, are generated from a single source specification.

IDesignSpecTM (IDS) is capable of generating the RTL as well as verification code such as VMM (RALF) from a register specification defined in Word, Excel, OpenOffice or IDS-XML.

Getting Started

A simple register can be defined inside a block in IDesignSpecTM as:

The above specification is translated into the following RALF code by IDS.

image

By convention, ralgen generates backdoor access for all registers whose hdl_path is mentioned in the RALF file. Thus special properties on the register, such as hdl_path and coverage, can be mentioned inside the IDS specification itself and will be appropriately translated into the RALF file.

The properties can be defined as below:

For Block:

image

As for the block, hdl_path, coverage or any other such property can be mentioned for other IDS elements, such as a register or field.

For register/field:

clip_image002OR

clip_image004

Note: Coverage property can take up the following three possible values:

1. ON/on: This enables all the coverage types, i.e. address coverage for blocks and memories, and REG_BITS and FIELD_VALS coverage for registers and fields.

2. OFF/off: By default all coverage is off. This option is valid only when coverage has been turned ON at the top level of the hierarchy or at the parent; to turn off the coverage for some particular register or field, specify ‘coverage=off’ for that register or field. The coverage setting for that specific child will be the inverse of its parent’s.

3. abf: Any combination of these three characters can be used to turn ON the particular type of the coverage. These characters stand for:

· a : address coverage

· b : reg_bits

· f : field_vals

For example to turn on the reg_bits and field_vals coverage, it can be mentioned as:

“coverage=bf”.

In addition to these properties, there are few more properties that can be mentioned in a similar way as above. Some of them are:

1. bins: various bins for the coverpoints can be specified using this property.

Syntax: {bins=”bin_name = {bin} ; <bin_name>…”}

2. constraint : constraints can also be specified for the register or field or for any element.

Syntax : {constraint=”constraint_name {constraint};<constraint_name> …”}

3. [vmm]<code>[/vmm]: This tag gives users the ability to specify their own piece of SystemVerilog code in any element.

4. cross: cross for the coverpoints of the registers can be specified using cross property in the syntax:

Syntax: {cross = “coverpoint_1 <{label label_name}>;< coverpoint_2>…”}

Different types of registers in IDS :

1. Register Arrays:

Register arrays in RALF can be defined in IDS using register groups. To define a register array of size ‘n’, place a register inside a regroup with the repeat count equal to the size of the array (n).

For example a register array of the name ”reg_array” with size equal to 10 can be defined in the IDS as follows:

clip_image006

The above specification will be translated into the following VMM code by IDS:

clip_image008

2. Memory:

Similar to the register array, memories can also be defined in IDS using register groups. The only difference between the memory and register array definitions is that in the case of memory, the ‘external’ property is set to “true”. The size of the memory is calculated as ((End_Address – Start_Address)*Repeat_Count).

As an example a memory of name “RAM” can be defined in IDS as follows:

clip_image010

The above memory specification will be translated into following VMM code:

clip_image012

3. Regfile:

A regfile in RALF can be specified in IDS using a register group containing multiple registers (> 1).

One such regfile with 3 registers, repeated 16 times is shown below:

clip_image014

Following is the IDS generated VMM code for the above reg file:

clip_image016

RAL MODEL AND RTL GENERATION FROM RALF:

The IDS generated RALF can be used with the Synopsys Ralgen to generate the RAL model as well as the RTL.

To generate the RAL model use the following command:

clip_image018

And for the RTL generation use the following command:

clip_image020

SUMMARY:

It is beneficial to generate the RALF using the register model generator IDesignSpecTM, as it produces reliable code and reduces the time and effort involved. In case of modifications in the register model specification, it enables users to regenerate the code in no time.

Note:

We will extend this automation further in the next article where we will cover details about how you can “close the loop” on register verification. The “Closed Loop Register Verification” article will be available on VMM Central soon. Meanwhile if you have any questions/comments you can reach me at nitin[at]agnisys[dot]com .

Posted in Automation, Organization, Register Abstraction Model with RAL, Tools & 3rd Party interfaces, VMM infrastructure | 1 Comment »

The One stop shop: get done with everything you need to do with your registers

Posted by Amit Sharma on 14th July 2011

Ballori Bannerjee, Design Engineer, LSI India

Processes are created, refined and improved upon. The productivity gain, which starts with a big leap, subsequently slows down, and as the complexity of tasks increases, the existing processes can no longer scale up. This drives the next paradigm shift of moving towards new processes and automation. As in all realms of technology, this is true in the context of the register development and validation flow as well. So, let’s look at how we changed our process to get the boost in productivity that we wanted.

The following flowchart represents our legacy register design and validation process. This was a closed process and served us well initially, when the number of registers, their properties etc. were limited. However, with the complex chips that we are designing and validating today, does this scale up?

register_verif

As an example, in a module that we are implementing, there are four thousand registers. Translating into number of fields, for 4000 32-bit registers we have 128,000 fields, with different hardware and software properties!

Coding the RTL with address decoding for 4000 registers, with fields having different properties is a week’s effort by a designer. Developing a re-usable randomized verification environment with tests like reset value check, read-write is another 2 weeks, at the least. Closure on bugs requires several feedbacks from verification to update design or document. So overall, there is at least a month’s effort plus maintenance overhead anytime the address mapping is modified or a register updated/added.

This flow is susceptible to errors, where there could be a disconnect between document, design, verification and software.

So, what do we do? We redefine the process! And this is what I will be talking about, our automated register design and verification (DV) flow which streamlines this process.

AUTOMATED REGISTER DESIGN AND VERIFICATION FLOW

The flow starts with the designer modeling the registers using a high-level register description language. In our case, we use SystemRDL, and then leverage available third-party tools to generate the various downstream components from the RDL file:

· RTL in Verilog/VHDL

· C/C++ code for firmware

· Documentation ( different formats)

· High level verification environment code (HVL) in VMM

This is shown below. The RDL file serves as a one-stop point for any register update required following a requirement change.

register2

Automated Register DV Flow

Given that it is critical to create an efficient object-oriented abstraction layer to model registers and memories in a design under test, we exploit VMM RAL for the same. How do we generate the VMM RAL model? This is generated from RALF. Many third-party tools are available to generate RALF from various input formats, and we use one of them to generate RALF from SystemRDL.

Thus, a complete VMM compliant randomized, coverage driven register verification environment can be created by extending the flow such that:

i. Using 3rd party tool, from SystemRDL the verification component generated is RALF, Synopsys’ Register Abstraction Layer File.

ii. RALF is passed through ralgen, a Synopsys utility which converts the RALF information into a complete VMM-based register verification environment. This includes automatic generation of pre-defined tests, like reset value checks and bit-bash tests of registers, and a complete functional coverage model, which would otherwise have taken considerable staff-days of effort to write.

The flowchart below elucidates the process.

register3

Adopting the automated flow, it took 2 days to write the RDL. The rest of the components were generated from this source. A small amount of manual effort may be required for items like back-door path definition, but it is minimal and a one-time effort. The overall benefits are much more than the number of staff-days saved, and we see this as something which gives us perpetual returns. I am sure a lot of you are already bringing some amount of automation into your register design and verification setup, and if you aren’t, it’s time you do it :)

While we are talking about abstraction and automation, let’s look at another aspect of register verification.

Multiple Interfaces/Views for a register

It is possible to have registers in today’s complex SoC designs which need to be connected to two or more different buses and accessed differently. The register address will be different for the different physical interfaces it is shared between. So, how do we model this?

This can be defined in SystemRDL by using a parent addressmap with bridge property, which contains sub addressmaps representing the different views.

For example:

addrmap dma_blk_bridge {
  bridge; // top level address map
  reg commoncontrol_reg {
    shared; // register will be shared by multiple address maps
    field {
      hw = rw;
      sw = rw;
      reset = 32'h0;
    } f1[32];
  };

  addrmap { // Define the map for the AHB side of the bridge
    commoncontrol_reg cmn_ctl_ahb @0x0; // at address = 0
  } ahb;

  addrmap { // Define the map for the AXI side of the bridge
    commoncontrol_reg cmn_ctl_axi @0x40; // at address = 0x40
  } axi;
};

The equivalent of a multiple-view address map in RALF is the domain.

This allows one definition of the shared register while allowing access to it from each domain, where the register address associated with each domain may be different. The following code is the RALF with the domain implementation for the above RDL.

register commoncontrol_reg {
  shared;
  field f1 {
    bits 32;
    access rw;
    reset 'h0;
  }
}

block dma_blk_bridge {
  domain ahb {
    bytes 4;
    register commoncontrol_reg = cmn_ctl_ahb @'h00;
  }

  domain axi {
    bytes 4;
    register commoncontrol_reg = cmn_ctl_axi @'h40;
  }
}

Each physical interface is a domain in RALF. Only blocks and systems have domains; registers live in the block. To access a register from one interface/domain, RAL provides read/write methods which can be called with the domain name as an argument. This is shown below.

ral_model.STATUS.write(status, data, "pci");

ral_model.STATUS.read(status, data, "ahb");

This considerably simplifies the verification environment code for the shared register accesses. For more on the same, you can look at: Shared Register Access in RAL through multiple physical interfaces

However, unfortunately, in our case the tools we used did not support multiple interfaces, and the automated flow created a RALF having effectively two or more top-level systems re-defining the registers. This can blow up the RALF file size as well as the verification environment code.

system dma_blk_bridge {
bytes 4;
block ahb (ahb) @0x0 {
bytes 4;
register cmn_ctl_ahb @0x0 {
bytes 4;
field cmn_ctl_ahb_f1 (cmn_ctl_ahb_f1) @0 {
bits 32;
access rw;
reset 0x0;
} }
}

block axi (axi) @0x0 {
bytes 4;
register cmn_ctl_axi @0x40 {
bytes 4;
field cmn_ctl_axi_f1 (cmn_ctl_axi_f1) @0 {
bits 32;
access rw;
reset 0x0;
} }
}
}

Thus, as seen above, the tool generates two blocks, 'ahb' and 'axi', and re-defines the register in each block. With many shared registers, the resulting verification code will be much bigger than if domains had been used.

Also, without the domain-associated read/write methods shown above, accessing a shared register from a particular domain/interface takes at least a few extra lines of code per register. This makes writing the test scenarios complicated and wordy.
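For illustration, a sketch of what a test then has to do for each shared register when the tool re-defines it per block (the hierarchical paths are assumed from the generated model above, and status/data/use_ahb_interface are hypothetical testbench variables); compare this with the single domain-qualified call shown earlier:

// Without domains, the test must pick the per-interface register instance explicitly
if (use_ahb_interface)
   ral_model.ahb.cmn_ctl_ahb.write(status, data);   // AHB-side copy of the register
else
   ral_model.axi.cmn_ctl_axi.write(status, data);   // AXI-side copy of the same register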

Using 'domain' in RALF and VMM RAL makes shared register implementation and access in the verification environment easy. We hope that we will soon be able to have our automated flow leverage this effectively.

If you are interested to go through more details about our automation setup and register verification experiences, you might want to look at: http://www10.edacafe.com/link/DVCon-2011-Automated-approach-Register-Design-Verification-complex-SOC/34568/view.html

Posted in Automation, Modeling, Register Abstraction Model with RAL, Tools & 3rd Party interfaces | 4 Comments »

Building & Configuring Coverage Model – VMM Style – Part-III

Posted by paragg on 25th June 2011

Parag Goel, Senior Corporate Application Engineer, Synopsys

In the final blog of this coverage modeling with VMM series, we focus on error coverage. Negative scenario testing is an integral part of verification. But again, we have this question – have I covered all the negative scenarios?

So it is important to ensure that the generic coverage model tracks all the error scenarios.

Let's see how a specific mechanism provided in VMM, in the form of vmm_log_catcher, helps to track error coverage efficiently and effectively. The VMM log catcher is able to identify/catch a specific string in any of the messages issued through the VMM reporting mechanism.

Typically, the verification environment issues messages to STDOUT when the DUT responds to an error scenario. These messages can be 'caught' by the log catcher to update the appropriate coverage groups. Let's see how this is done in detail.

The verification environment responds to each negative scenario by issuing a message with a unique text specific to that error.

In the context of the AXI framework, we can introduce a wide range of error scenarios and test whether the DUT responds correctly or not. A few possible error scenarios in AXI are listed below for your reference.

[Figure: possible AXI error scenarios]

However, all the scenarios may not always be applicable, and hence configurability is required to enable only the required set of coverpoints tied to the relevant negative scenarios. Thus, we should have similar configurability for error coverage as I talked about in the earlier blogs.

Let’s see how we can catch the relevant responses and sample the appropriate covergroups.

As mentioned earlier, in the example below, we make use of the unique message issued as a result of a negative scenario.

This is how we use the VMM Log catcher.

1. The error coverage class is extended from the vmm_log_catcher VMM base class.

2. The vmm_log_catcher::caught() API is utilized as a means to qualify the covergroup sampling.

[Figure: error coverage class extending vmm_log_catcher]
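For reference, here is a minimal sketch of such a catcher class (the covergroup contents are only illustrative, and the caught()/throw() signatures shown are assumptions based on typical vmm_log_catcher usage, not taken from the original code):

class axi_err_coverage extends vmm_log_catcher;
   bit slverr_seen;

   covergroup err_cg;
      coverpoint slverr_seen { bins slverr = {1}; }
   endgroup

   function new();
      err_cg = new();
   endfunction

   // Invoked when a registered message (e.g. "AXI_WRITE_RESPONSE_SLVERR") is caught
   virtual function void caught(vmm_log_msg msg);
      slverr_seen = 1;
      err_cg.sample();   // qualify/sample the error covergroup on the caught message
      this.throw(msg);   // assumed pass-through so the message still reaches the normal reporting path
   endfunction
endclass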

In the code above, whenever a message with the text "AXI_WRITE_RESPONSE_SLVERR" is issued from anywhere in the verification environment, the 'caught' method is invoked, which in turn samples the appropriate covergroup. Additionally, you can specify more parameters in the caught API to restrict which 'scenarios' should be caught.

vmm_log_catcher::caught(
   string name = "",
   string inst = "",
   bit recurse = 0,
   int typs = ALL_TYPS,
   int severity = ALL_SEVS,
   string text = "");

The above API installs the specified message handler to catch any message of the specified type and severity, issued by the message service interface instances selected by the name and instance arguments, and containing the specified text. By default, this method catches all messages issued by this message service interface instance.

Hope this set of articles is relevant and useful to you. I have made an attempt to leverage some of the built-in capabilities of the SV language and the VMM base classes to target some of the challenges in creating configurable coverage models. These techniques can be improved further to make them more efficient and scalable. I look forward to hearing any inputs you have in this area.

Posted in Automation, Configuration, Coverage, Metrics, Reuse, Structural Components, VMM, VMM infrastructure | Comments Off

Building & Configuring Coverage Model – VMM Style – Part-II

Posted by paragg on 25th June 2011

Parag Goel, Senior Corporate Application Engineer, Synopsys

In the previous post, we looked at how you can enable/disable different types of coverage encapsulated in the coverage model wrapper class. In this post, let's look at how we can easily create an infrastructure to pass different inputs to the wrapper class, so as to be able to configure the coverage collection per the user's needs. The infrastructure ensures that these values percolate down to the sub-coverage model groups.

The following are some of the key inputs that need to be passed to the different coverage component classes:

1. SV Virtual Interfaces so that different signal activity can be accessed

2. The Transactions observed and collected by the physical level monitors

3. The ‘Configuration’ information

[Figure: key inputs passed to the coverage model]

Let's look at how we can easily pass the signal-level information to the coverage model.

Step I: Encapsulation of the interface in the class wrapper.

class intf_wrapper extends vmm_object;

virtual axi_if v_if ;

function new (string name, virtual axi_if mst_if);
super.new(null, name);
this.v_if = mst_if;
endfunction

endclass: intf_wrapper

Step II: In the top class/environment- Set this object using vmm_opts API.

class axi_env extends vmm_env;
`vmm_typename(axi_env)
intf_wrapper mc_intf;

function void build_ph();
mc_intf = new("Master_Port", tb_top.master_if_p0);
// Set the master port interface
vmm_opts::set_object("VIP_MSTR:vip_mstr_port", mc_intf, this);
endfunction: build_ph
endclass: axi_env

Step III: Connecting in the coverage class.

A. Get the object containing interface in the coverage model class using vmm_opts.

assert($cast(this.mst_port_obj, vmm_opts::get_object_obj(is_set, this, "vip_mstr_port")));

B. Connecting local virtual interface to one contained in the object.

this.cov_vif = mst_port_obj.v_if;

Now, the collected transaction object needs to be passed from the monitor to the coverage collector. This can be conveniently done in VMM using TLM communication. This is achieved through the vmm_tlm_analysis_port, which establishes the communication between the producer and a subscriber/observer.

class axi_transfer extends vmm_data;

. . .

class axi_bus_monitor  extends  vmm_xactor;

vmm_tlm_analysis_port#(axi_bus_monitor, axi_transfer)  m_ap;
task collect_trans();

//Writing to the analysis port.

m_ap.write(trans);
endtask
endclass

class axi_coverage_model extends vmm_object;
vmm_tlm_analysis_export #( axi_coverage_model, axi_transfer) m_export;

function new (string inst, vmm_object parent = null);
m_export = new(this, “m_export”);

endfunction

function void write(int id, axi_transfer trans);

//Sample the appropriate covergroup once the transaction is received in the write function.

endfunction

endclass

To set up the TLM Connections in the agent/environment, we need to do the following:

class axi_subenv extends vmm_group;

//Instantiate the model classes and create them.

axi_bus_monitor mon;

axi_coverage_model cov;

. . .

virtual function void build_ph;
mon = new( “mon”, this);
cov = new( “cov”, this);
endfunction
virtual function void connect_ph;

//Bind the TLM ports via VMM – tlm_bind

mon.m_ap.tlm_bind( cov.m_export );

endfunction
endclass

To make the coverage model truly configurable, we need to look at some of the other key requirements as well, at different levels of granularity. These can be summarized as the ability to do the following.

1. Enable/disable coverage collection for each covergroup defined. Every covergroup should be created only if the user wishes to do so, so there should be a configuration parameter which restricts the creation of the covergroup altogether. This parameter should also be used to control the sampling of the covergroup.

2. The user must be able to configure the limits on the individual values being covered, within the legal set of values. For example, for the transaction field BurstLength, the user should be able to tell the model the limits on this field for which coverage is wanted, within the legal range of '1' to '16' as per the AXI spec. Providing lower and upper limits for transaction parameters like burst size, burst length, address, etc. makes the model reusable. These limits should be modeled as variables which can be overwritten dynamically.

3. The user should be able to control the number of bins to be created, for example for fields like address. The auto_bin_max option can be exploited to achieve this when the user has not explicitly defined bins.

4. The user must be able to control the number of hits for which a bin can be considered covered. option.at_least can be used for this purpose, and its input can be a user-defined parameter.

5. The user should also have control over the coverage goal, i.e. when the coverage collector should report the covergroup as "covered" even though the coverage is not 100%. This can be achieved by using option.goal, where the goal is again a user-defined parameter.

All the parameters required to meet the above requirements can be encapsulated in a class (i.e. a coverage configuration class), which can be set and retrieved in a fashion similar to that described for setting and getting the interface wrapper class using the vmm_opts APIs.

class coverage_cfg extends vmm_object;
  int disable_wr_burst_len;
   . . .
  function new(vmm_object parent=null, string name);
     super.new(parent, name);
  endfunction
endclass

Inside the coverage model class, this configuration object is then retrieved using vmm_opts:

  coverage_cfg cfg;
  function new(vmm_object parent=null, string name);
     bit is_set;
     super.new(parent, name);
     $cast(cfg, vmm_opts::get_object_obj(is_set, this, "COV_CFG_OBJ"));
  endfunction
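As a minimal sketch of how such knobs can drive the covergroup construction and options (the transaction fields and the specific knob names on coverage_cfg are hypothetical, not part of the original example):

class axi_trans_coverage;
   coverage_cfg cfg;   // knob object retrieved via vmm_opts as shown above
   axi_transfer tr;    // last transaction received from the monitor

   covergroup trans_cg;
      option.at_least = cfg.at_least_hits;        // hits needed before a bin counts as covered
      option.goal     = cfg.coverage_goal;        // covergroup reported "covered" at this percentage
      BURST_LEN: coverpoint tr.burst_length {
         bins legal[] = {[cfg.burst_len_min:cfg.burst_len_max]}; // user-limited legal range
      }
      ADDR: coverpoint tr.addr {
         option.auto_bin_max = cfg.addr_auto_bin_max;            // cap on automatically created bins
      }
   endgroup

   function new(coverage_cfg cfg);
      this.cfg = cfg;
      if (!cfg.disable_transaction_coverage)       // create the covergroup only if enabled
         trans_cg = new();
   endfunction

   function void sample(axi_transfer t);
      tr = t;
      if (trans_cg != null) trans_cg.sample();     // sampling gated by the same knob
   endfunction
endclass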

Wei Hua presents another cool mechanism for collecting these parameters using the vmm_notify mechanism in this earlier blog:

A Generic Functional Coverage Solution Based On vmm_notify

Hope you found this useful. I will be talking about how to track Error Coverage in my next blog, so stay tuned!

Posted in Communication, Configuration, Coverage, Metrics, Reuse, Structural Components, VMM, VMM infrastructure | Comments Off

Building & Configuring Coverage Model – VMM Style – Part-I

Posted by paragg on 24th June 2011

Parag Goel, Senior Corporate Application Engineer, Synopsys

To minimize wasted effort, coverage is used as a guide for directing verification resources by identifying tested and untested portions of the design.”

- IEEE Standard for System Verilog (IEEE Std. 1800-2009)

Configurability and reusability are the buzzwords in the verification of chips, and these are enabled to a big extent by present-day verification methodologies. Through a set of blogs, I plan to show how we can create configurable coverage models in VMM-based environments. Given that AMBA AXI is one of the most commonly used protocols in the industry for communication amongst SoC peripherals, I chose an AXI-based framework for my case study.

The idea here is to create a configurable coverage model leveraging some of the base classes provided in the methodology so that we can make it completely reusable as we move from block to system level or across projects. Once we enable that, we can move the coverage model inside the sub-environment modeled by vmm_group or vmm_subenv, which are the units of reuse.

[Figure: coverage model within the reusable sub-environment]

Primary Requirements of Configuration Control:

Two important requirements need to be met to ensure that the coverage model can be made part of reusable components:

1. Ability to enable/disable the coverage model whenever required.

2. Ability to turn ON/OFF different subgroups at the desired granularity. For example, a user may not always want error coverage to be enabled, unless under specific circumstances.

To meet the above requirements, we make use of the VMM global and hierarchical configurations.

Through the vmm_opts base classes, VMM provides a mechanism to control the configuration parameters of a verification environment. This can be done in a hierarchical as well as in a global manner. These options are summarized below:

[Figure: vmm_opts global and hierarchical configuration options]

In the environment, the coverage enable flag is by default set to 0, i.e. coverage is disabled.

coverage_enable = vmm_opts::get_int("enable_coverage", 0);

Now, the user can enable the coverage via either of the two mechanisms.

1. From user code using vmm_opts.

The basic rule is that you need to 'set' it *before* the 'get' is invoked, and during the time when the construction of the components takes place. As a general recommendation, for the construction of structural configuration, the build phase is the most appropriate place.
function axi_test::build_ph();
// Enable Coverage.
vmm_opts::set_int("@%*:axi_subenv:enable_coverage", 1);
endfunction

2. From command line or external option file. The option is specified using the command-line +vmm_name or +vmm_opts+name.
./simv
+vmm_opts+enable_coverage=1@axi_env.axi_subenv

The command line supersedes the option set within code as shown in 1.

User can also specify options for specific instances or hierarchically using regular expressions.

[Figure: specifying options for specific instances or hierarchically]
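As a sketch reusing the forms already shown above (the instance path is hypothetical):

// Pattern-based: apply to any sub-environment instance matching the pattern
vmm_opts::set_int("@%*:axi_subenv:disable_error_coverage", 1);

// Instance-specific: target one instance from the command line
// ./simv +vmm_opts+enable_coverage=1@axi_env.axi_subenv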

Now let’s look at the typical classification of a coverage model.

From the perspective of AXI protocol, we can look at the 4 sub-sections.

Transaction coverage: coverage definition on the user-controlled parameters usually defined in the transaction class & controlled through sequences.

Error coverage: coverage definition on the pre-defined error injection scenarios

Protocol coverage: This is protocol specific (AXI handshake coverage). In the case of AXI, it is mainly coverage on the handshake signals, i.e. READY & VALID, on all the 5 channels.

Flow coverage: This is again protocol specific, and for AXI it covers various features like outstanding transactions, interleaving, write data before write address, etc.

[Figure: classification of the coverage model]

At this point, let's look at how these different sub-groups within the complete coverage model can be enabled or disabled. Once the coverage configuration class is built and passed on to the main coverage model, we need fine-grained control to enable/disable individual coverage models. The code below shows how the user can control all the coverage models in the build phase of the main coverage class.

Here too, we can see how vmm_opts comes in to meet the requirement of controlling individual parameters.

vmm_opts::set_int("@%*:disable_transaction_coverage", 0);
vmm_opts::set_int("@%*:disable_error_coverage", 0);
vmm_opts::set_int("@%*:disable_axi_handshake_coverage", 0);

vmm_opts::set_int("@%*:disable_flow_coverage", 0);

In my next blog, I will show how the hierarchical VMM configurations are used to dynamically pass signal-level and other configuration-related information to the coverage model. We shall also discuss the usage of the VMM TLM features towards fulfilling the goal of a configurable coverage model. Stay tuned!

Posted in Configuration, Coverage, Metrics, Organization, Reuse, SystemVerilog, VMM, VMM infrastructure | Comments Off

Pipelined RAL Access

Posted by Amit Sharma on 12th May 2011

Ashok Chandran, Analog Devices

Many times, we come across scenarios where a register can be accessed from multiple physical interfaces in a system. An example would be a homogenous multi-core system. Here, each core may be able to access registers within the design through its own interfaces. In such scenarios, defining a “domain” (a testbench abstraction for physical interfaces) for each interface may be an overhead.

· From a system verification point of view, it does not make any difference as to which core accesses the registers since they are identical. The flexibility to bring in ‘random selection of interfaces’ can provide additional value.

· Defining a 'domain' for each interface in such a scenario requires duplication of the registers or their instantiation.

· Also, the usage of multiple "domains" for homogenous multi-core systems would prevent us from seamlessly reusing our code from block level to system level. This is because we would have to incorporate the domain definition within the testbench RAL access code when we migrate to the system level, whereas it would not have been needed in the block-level register abstraction code.

Another related scenario is where we need to support multiple outstanding transactions at a time. Different threads could initiate distinct transactions which can return data out of order (as in AXI protocol). The default implementation of RAL allows only one transaction at a time for each domain in consideration.

VMM pipelined RAL comes to our rescue in such cases. This mechanism allows multiple RAL accesses to be simultaneously processed by the RAL access layer. This feature of VMM can be enabled with `define VMM_RAL_PIPELINED_ACCESS. This define adds a new state to vmm_rw::status_e – vmm_rw::PENDING. When vmm_rw::PENDING is returned as the status from execute_single()/execute_burst(), the transaction-initiating thread is kept blocked till the vmm_data::ENDED notification is received for the vmm_rw_access. New transactions can now be initiated from other testbench threads, and pending transactions are cleared in parallel when responses are received from the system.
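For instance, one way to pass this define at compile time (assuming VCS; other compile options and file lists are elided):

% vcs -sverilog +define+VMM_RAL_PIPELINED_ACCESS <testbench and RAL model files>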

[Figure: pipelined RAL accesses from multiple threads]

As shown in the figure above, transactions initiated by thread A (A0 and A1) can be processed / queued even while transactions from thread B (B0 and B1) are in progress. Here A can be processed by one interface and B by the other. Alternately, A and B can be driven together from same interface in case the protocol supports multiple outstanding accesses.

The code below shows how the user can create his execute_single() functionality to use pipelined RAL for a simple protocol like APB. For protocols like AXI which allow multiple outstanding transactions from same interface, the physical layer transactor can control the sequence further using the vmm_data::ENDED notification of the physical layer transaction.

virtual task execute_single(vmm_rw_access tr);

   apb_trans apb = new; //Physical layer transaction

   apb.randomize() with {
      addr == tr.addr;
      if(tr.kind == vmm_rw::READ) {
         dir == READ;
      } else {
         dir == WRITE;
      }
      resp == OKAY;
      interface_id inside{0,1}; //the interface_id property in the physical layer transaction maps to the different physical interface instances
      };

   if(tr.kind == vmm_rw::WRITE) apb.data = tr.data;
   //Fork out the access in  parallel
   
fork begin
   //Get copies for thread
      automatic apb_trans pend = apb;
      automatic vmm_rw_access rw = tr;

      //Push into the physical layer BFM
      
      this.my_intf[pend.interface_id].in_chan.sneak(pend);

      //Wait for transaction completion from the physical layer BFM
      
pend.notify.wait_for(vmm_data::ENDED);

      //Get the response and read data
      
if(pend.resp == apb_trans::OKAY) begin
         rw.status = vmm_rw::IS_OK;
      end else begin
         rw.status = vmm_rw::ERROR;
      end

      if(rw.kind == vmm_rw::READ) begin
         rw.data = pend.data;
      end

      // End of this transaction – Indicate to RAL
      
rw.notify.indicate(vmm_data::ENDED);
   end join_none

   //Return pending status to RAL access layer
   
tr.status = vmm_rw::PENDING;

endtask:execute_single

For more details on creating “Pipelined Accesses”,  you might want to go through the section “Concurrently Executing Generic Transactions” in the VMM RAL User Guide

Posted in Register Abstraction Model with RAL | 3 Comments »

A RAL example with Designware VIP

Posted by S. Varun on 17th March 2011

I often get asked how best RAL ought to be used with Designware VIP. Since several of these VIPs provide a mechanism to
program registers across different DUTs, I felt it would be useful to create an example with Designware AMBA AHB VIP and
RAL. 

The example has a structure as shown in the block diagram below,

     ---------------------------------------------
    |                                             |
    |         Register Abstraction Layer          |
    |                                             |
     ---------------------------------------------
       |
  ----------------
 |                |
 |    RAL2AHB     |
 |  ------------  |     ------------------       -----------      -----------
 | | AHB MASTER | |----| HDL Interconnect |-----| AHB SLAVE |----| Resp Gen  |
 |  ------------  |     ------------------       -----------      -----------
 |                |
  ----------------


It uses a dummy HDL interconnect that has two AHB interfaces to connect a VIP AHB master component with a VIP AHB slave
component. I have created a dummy register specification in "ahb_advanced_ral_slave.ralf". 

This example lacks a real AHB based DUT with real registers and hence the AHB slave VIP components' internal memory is
used to model the register space of the system. The RALF specification is then used to generate the System Verilog RAL
model using the "ralgen" utility as shown by the command-line below.

% ralgen -l sv -t ahb_advanced_slave ahb_advanced_ral_slave.ralf -c b -c a -c f

The above command will dump an SV based RAL model in a file named "ral_ahb_advanced_slave.sv". This model is instantiated
within the top-level environment class, which by the way is an extension of the "vmm_ral_env" class. You may already be
aware that a "vmm_ral_env" based environment has to be used for RAL verification. Once instantiated it is registered using
vmm_ral_access::set_model() method. This would complete the RAL model instantiation and registration leaving only the
translation logic which is the crux of the task.

Note: "vmm_ral_env" is extended from "vmm_env". Check the RAL userguide to see the additional members.

In the block diagram above, RAL2AHB is the block that translates a generic RAL access into a command, in this
case an AHB transfer. The functional logic that translates a generic READ/WRITE into an AHB master
transfer is within "vmm_rw_xactor::execute_single()". The code snippet below shows how the translation is done.

// --------------------------------------------------------------------
task ral2ahb_xlate::execute_single(vmm_rw_access tr);
  ...

  // The generic read/write being translated into a AHB transfer.
  ahb_xact_fact.data_id        = tr.data_id;
  ahb_xact_fact.scenario_id    = tr.scenario_id;
  ahb_xact_fact.stream_id      = tr.stream_id;

  // Copying over the data width from the system configuration
  ahb_xact_fact.m_enHdataWidth = vip_cfg.m_oSystemCfg.m_enHdataWidth;

  // Setting the burst type to SINGLE & number of beats to 1
  ahb_xact_fact.m_enBurstType  = dw_vip_ahb_transaction::SINGLE;
  ahb_xact_fact.m_nNumBeats    = 1;

  // Copying the address generated by RAL into the AHB address of the AHB transfer
  ahb_xact_fact.m_bvAddress    = tr.addr;

  // Copying the size information over to the AHB transaction. RAL provides
  // the size in terms of bits and the dw_vip_ahb_master_transaction class takes
  // it in terms of bytes.
  ahb_xact_fact.m_nNumBytes    = tr.n_bits/8;
  ahb_xact_fact.m_enXferSize   = dw_vip_ahb_transaction::xfer_size_enum'(func_log(tr.n_bits) - 3);
  ahb_xact_fact.m_bvvData      = new[ahb_xact_fact.m_nNumBytes];

  // Setting the transfer type WRITE/READ based on the kind value generated by RAL
  if (tr.kind == vmm_rw::WRITE) begin
    ahb_xact_fact.m_enXactType = dw_vip_ahb_transaction::WRITE;
  end
  else begin
    ahb_xact_fact.m_enXactType = dw_vip_ahb_transaction::READ;
  end

  // Unpacking the RAL write data element into the byte sized data queue available within
  // dw_vip_ahb_master_transaction class
  if(tr.kind == vmm_rw::WRITE) begin
    for (int i = 0; i < tr.n_bits/8; i++) begin
       ahb_xact_fact.m_bvvData[i] = tr.data >> 8*i;
    end
  end

  // Put the AHB transaction object into the AHB master's input channel
  ahb_xactor.m_oTransactionInputChan.put(xact);

  // Wait for the transfer to END
  xact.notify.wait_for(vmm_data::ENDED);

  // Pack the READ data from the AHB transfer back into the RAL transactions data
  // member
  if (tr.kind == vmm_rw::READ) begin
     int i;
     tr.data = 0;
     for (i = 0; i < xact.m_nNumBytes; i++) begin
        tr.data += xact.m_bvvData[i] << 8*i;
     end
  end

  // Collecting the status of the transfer and returning it to RAL
  if (xact.m_nvRespLast[0] == 0)
     tr.status = vmm_rw::IS_OK;
  else
     tr.status  = vmm_rw::ERROR;

endtask: execute_single
// --------------------------------------------------------------------

The created translator class also has to be instantiated within the environment class and has to be registered using
"vmm_rw_access::add_xactor()" as shown below,

// --------------------------------------------------------------------
class ahb_advanced_ral_env extends vmm_ral_env ;

  ral2ahb_xlate    ral_to_ahb;

    ...

  virtual function void build() ;
  begin
    super.build();
    ral_to_ahb      = new("AHB RAL MASTER XACTOR", master_mp, cfg.cfg_master);

    this.ral.add_xactor(ral_to_ahb);
  end
  endfunction

endclass
// --------------------------------------------------------------------

At this juncture, we are set to run the RAL tests and easily program the registers using the abstracted model
with simple APIs, as shown below:

     env.ral_model.slave_block.REGA.set( ... );
     env.ral_model.slave_block.REGB.set( ... );
     env.ral_model.update();

The above section shows how to setup for single transfers. For burst transfer, there is a slight variation
wherein you provide the burst translation within the vmm_rw_xactor::execute_burst() task. The code snippet
below shows this.

// --------------------------------------------------------------------
task ral2ahb_xlate::execute_burst(vmm_rw_burst tr);
 ...

  ahb_xact_fact.data_id        = tr.data_id;
  ahb_xact_fact.scenario_id    = tr.scenario_id;
  ahb_xact_fact.stream_id      = tr.stream_id;
  ahb_xact_fact.m_enHdataWidth = vip_cfg.m_oSystemCfg.m_enHdataWidth;
  ahb_xact_fact.m_nNumBeats    = tr.n_beats;
  ahb_xact_fact.m_bvAddress    = tr.addr;
  ahb_xact_fact.m_nNumBytes    = tr.n_bits/8*tr.n_beats;
  ahb_xact_fact.m_enXferSize   = dw_vip_ahb_transaction::xfer_size_enum'(func_log(tr.n_bits) - 3);

  if (tr.kind == vmm_rw::WRITE) begin
    ahb_xact_fact.m_bvvData    = new[ahb_xact_fact.m_nNumBytes];
    ahb_xact_fact.m_enXactType = dw_vip_ahb_transaction::WRITE;
  end
  else begin
    ahb_xact_fact.m_enXactType = dw_vip_ahb_transaction::READ;
  end

  case(tr.n_beats)
    4  : begin
               ahb_xact_fact.m_enBurstType = dw_vip_ahb_transaction::INCR4;
         end
    8  : begin
               ahb_xact_fact.m_enBurstType = dw_vip_ahb_transaction::INCR8;
         end
    16 : begin
               ahb_xact_fact.m_enBurstType = dw_vip_ahb_transaction::INCR16;
         end
    default : begin
               ahb_xact_fact.m_enBurstType = dw_vip_ahb_transaction::INCR;
              end
  endcase

  // Write cycle
  if(tr.kind == vmm_rw::WRITE) begin
     for(int i=0; i<tr.n_beats; i++) begin
       for(int j = 0; j < tr.n_bits/8; j++) begin
         ahb_xact_fact.m_bvvData[i*(tr.n_bits/8) + j] = tr.data[i] >> 8*j;
       end
     end
  end

  // Put the AHB burst transaction into the AHB master's input channel
 ahb_xactor.m_oTransactionInputChan.put(xact);

 // Read cycle
 if(tr.kind == vmm_rw::READ) begin
     tr.data = new[tr.n_beats];
     for(int i=0; i<tr.n_beats; i++) begin
        tr.data[i] = 0;
        for(int j = 0; j<tr.n_bits/8; j++) begin
           tr.data[i] += xact.m_bvvData[i*tr.n_bits/8 +  j] <<8*j;
        end
     end
 end

  // Collecting the status of the transfer and returning it to RAL
  if (xact.m_nvRespLast[0] == 0)
     tr.status = vmm_rw::IS_OK;
  else
     tr.status  = vmm_rw::ERROR;

endtask: execute_burst
// --------------------------------------------------------------------

For invoking burst transfers in RAL, the vmm_ral_access::burst_write() & vmm_ral_access::burst_read() tasks
have to be used as shown below;

     env.ral.burst_write(status, 8'h00, 4, 8'h0f, exp_data, , 32);
     env.ral.burst_read(status, 8'h00, 4, 8'h0f, 4, act_data, , 32);

The complete example is downloadable from solvnet "tb_ahb_vmm_10_advanced_ral_sys.tar.gz". You will need DesignWare
licenses to compile and run the example. Please follow the instructions in the README to compile and run this example.
The example can be used as a reference for creating a RAL environment with other Designware VIP titles as well.
Do write to me if you have any questions on the example.

Posted in Register Abstraction Model with RAL, VMM | 1 Comment »

Register Programming using RAL package

Posted by Vidyashankar Ramaswamy on 15th February 2011

There are many methods available to program (read/write) registers in a design using RAL.

1.    ral_model::read()/write(): This is the old-fashioned method where you specify the address and the data. There is no need to know the register by name.

              EX:    ral_model.ral.read(status, addr, data, . . .);

 2.    ral_model::read_by_name()/write_by_name(): You have to specify the register name and data to execute this method. Here the register name is hard-coded.

              EX: ral_model.read_by_name(status, “reg_name”, data, . . .);

 3.    ral_reg::read()/write(): Here you have to specify the hierarchical path to the register to execute the task. Only the value needs to be specified.

             EX:   ral_model.block_name.reg1.read(status,data);

 Option 1 works fine at all levels of the test bench – block, core or system – if a portable addressing scheme is used for register programming. The only issue is that it is not self-documenting. For instance, a sequence of writes/reads to program a DDR memory controller makes it hard to understand and debug the code when the controller does not behave as expected.

Option 2 uses the register name to perform the read/write. This code is self-documenting and works great in a block-level test bench. However, this option breaks when used at the core or system level. Why? Let us analyze the following situation:

A unit-level test bench is using read_by_name()/write_by_name(). This works as the register names are unique. However, when multiple instances of the same block exist at the system level, multiple registers exist with the same name, creating a name conflict. To ensure that the name is properly scoped and that the same scope is used from block to top, read_by_name()/write_by_name() should not be used, as it uses a flat name space. To reuse the same code across different levels of the test bench, one should use option 3, which is scalable. The following code segment demonstrates this concept:

task init_ddr_controller (vmm_ral_block ddr_block);

     vmm_ral_reg  reg_in_use;

     reg_in_use = ddr_block.mode_reg_0;

     reg_in_use.write(…);

      . . .

endtask 

Note: You can use reg_in_use = ddr_block.get_reg_by_name("reg_name"); if the task is written to take in the register name as well.
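As a minimal sketch of such a name-based variant (the value argument, error handling and register name are hypothetical; get_reg_by_name and the register write call follow what is described elsewhere on this blog):

task init_reg_by_name (vmm_ral_block blk, string reg_name, bit [63:0] value);
   vmm_ral_reg      reg_in_use;
   vmm_rw::status_e status;
   reg_in_use = blk.get_reg_by_name(reg_name);
   if (reg_in_use == null) begin
      $display("ERROR: register %s not found in block", reg_name);
      return;
   end
   reg_in_use.write(status, value); // front-door write through the abstract register handle
endtask

It can then be called at any level of the testbench, just like init_ddr_controller in the snippet below.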

Then in core/integration or in SoC  level:

   init_ddr_controller(system.blk1);
   init_ddr_controller (system.blk2);

Please do share your comments/ideas on this.

Posted in Register Abstraction Model with RAL | 1 Comment »

Migrating Legacy File-Based Testbenches to VMM

Posted by JL Gray on 9th February 2011

Scott Roland, Verilab Inc, Austin, TX

As a verification engineer, it is common to be given a design to test that is based on an earlier design. Presumably, that existing design also comes with a proven verification environment and suite of tests. Unfortunately, the legacy verification environment might be rather rudimentary. If we are going to create a modern, VMM-based verification environment for the new design, then what can and should be done with the existing "simple" tests? As Joel Spolsky once said, "the single worst strategic mistake [is deciding] to rewrite the code from scratch."

Verification reuse is just as valuable as design reuse. This can be true even if the legacy tests are a set of text command files that are read in at runtime and perform "dumb" directed testing. If the original tests are of good quality, then they will still cover important functionality of the design. They might also test critical corner cases or problems that were seen during the initial development of the design being reused. In addition, reusing existing directed tests could help you achieve some early testing of your device faster than if you had started your verification effort from scratch.

The first way you can leverage a legacy testbench is to reuse some of the code responsible for stimulating and monitoring the DUT. As you build the new VMM environment you can properly encapsulate the existing code into the relevant VMM transactors.

Next, it would be nice to reuse the original tests themselves. The tests could be a set of task calls or a text file containing commands, as mentioned before. You could translate each test individually into a proper VMM test that generates transactions directly, but it would be better to create an adaptation layer between the original tests and the new environment. That would obviate the need to modify the tests and allow the VMM environment to also handle new tests written for the old testbench.

To illustrate an example of using a file-based test in a VMM environment, I chose to extend the memsys_cntrlr example that is distributed with VMM release 1.2.1 in the directory sv/examples/std_lib. The example contains a number of scenarios that are implemented as extensions of the VMM multi-stream scenario class. I want to create an additional scenario that reads in a command file and generates transactions based on the file. First, assume that my command file contains lines that specify the command, address and data:

WRITE 8888_8888 2A
READ  2222_2222 24
WRITE 3333_3333 1A
READ  5555_5555 81
READ  7777_7777 42

Using the existing cpu_directed_scenario as a template, I created a new cpu_filebased_scenario. The execute() task, which is responsible for defining what the scenario does, takes care of reading in the test file and calling the proper write/read tasks based on the individual commands. Since the original command file specified the expected return value of every read command, the read task checks the actual return value against the given expected value. Eventually, you might create a reference model that would enable the VMM environment to predict the expected read values. Implementing the directed check in the scenario enables you to run the legacy tests before a reference model is completed, and later validate the initial tests and reference model against each other. Here is the implementation of the scenario:

/// Scenario that executes commands read from a directed test file.
class cpu_filebased_scenario extends cpu_rand_scenario;
…
  /// Overloaded version of vmm_ms_scenario::execute(). Body of our scenario.
  /// Reads each line in the file and performs the specified action.
  task execute(ref int n);
    integer      fileID;
    bit [8*5:1]  cmd_str;
    bit   [7:0]  data;
    bit  [31:0]  addr;
    if (chan == null) chan = get_channel("cpu_chan");
    fileID = $fopen("test.file", "r" );
    // Lines look like: "CMD ADDRESS DATA"
    while ($fscanf(fileID, "%s %h %h", cmd_str, addr, data) != -1) begin
      unique case (cmd_str)
        "WRITE": this.write(addr, data);
        "READ" : this.read (addr, data);
        default: `vmm_error(this.log, $psprintf("Unknown command %s", cmd_str));
      endcase
      n += 1;
    end
    $fclose(fileID);
  endtask

  /// Send a write transaction for the given address and data.
  task write(input bit [31:0]  addr,
             input bit [ 7:0]  data);
    cpu_trans  tr = new();
    tr.randomize() with {tr.address == addr; tr.kind == WRITE;tr.data == data;};
    chan.put(tr);
  endtask

  /// Send a read transaction for the given address and check for the given expected data.
  task read(input bit [31:0]  addr,
            input bit [ 7:0]  exp_data);
    cpu_trans  tr = new();
    tr.randomize() with {tr.address == addr; tr.kind == READ;};
    chan.put(tr);
    if (tr.data !== exp_data) begin
      `vmm_error(this.log, $psprintf("READ(A:%X, D:%X) did not match expected:%X",
                                     addr, tr.data, exp_data));
    end
  endtask
endclass

After defining the scenario class, I created an extension of vmm_test that tells the VMM factory to use the file-based scenario for this test and run it once. The VMM architecture and factory makes it possible to run the legacy tests just like any randomized VMM tests. Plus, it does not require modifying any other component in the environment. Here is the implementation of the test class:

class test_filebased extends vmm_test;
…
  function void configure_test_ph();
    // Tell the factory which scenario class to use for this test.

    cpu_rand_scenario::override_with_new("@%*:CPU:rand_scn",

          cpu_filebased_scenario::this_type(), log, `__FILE__, `__LINE__);
  endfunction

  function void build_ph();
    // Run the scenario only once.
    vmm_opts::set_int("%*:num_scenarios", 1);
  endfunction
endclass

Since the directed test file is read into a new environment, it stands to reason that the driver in the new environment could act differently than the one in the legacy environment. For example, the driver might try to combine multiple transactions into bursts or reorder them. While this should probably be done in a specific scenario in the VMM environment, not the driver, you should compare the final stimulus that is performed on the DUT between the two environments. Only after you have done such an assessment can you say that the testcase is being reused for its original intent.

Once you have the legacy tests running in a modern VMM environment, you can enhance the environment to have randomization, self-checking and functional coverage. You can analyze the existing tests with a coverage model to determine what new tests you need to write to verify functionality that was initially missed or has been added or modified. The coverage model can also tell you which legacy tests duplicate functionality in other tests, providing justification for getting rid of legacy tests and giving confidence in the quality of your VMM environment.

Posted in VMM | 3 Comments »

Verification in the trenches: Transform your sc_module into a vmm_xactor

Posted by Ambar Sarkar on 19th January 2011

Dr. Ambar Sarkar, Chief Verification Technologist, Paradigm Works Inc.

Say you have SystemC VIP lying around, tried and true. More likely than not, they are BFMs that connect at the signal level to the DUT and have a procedural API supporting transaction level abstraction.

What would be the best way to hook these components up with a VMM environment? With VMM now being available in SystemC as well, you really want to make these models look and behave as vmm_xactor derived objects that interact seamlessly across the SystemC/SystemVerilog language boundary. Your VMM environment can thus take full advantage of your existing SystemC components. And your sc_module can still be used, just as before, in other non VMM environments!

Enough motivation. Can this be done? Since SystemC is really C++, and it supports multiple inheritance, is there a way to just create a class that inherits from both your SystemC component as well as vmm_xactor?

Here is an example.

Originally, suppose you had a consumer BFM defined (keeping the example simple for illustration purposes).

   1:  SC_MODULE(consumer) {
   2:    sc_out<sc_logic>   reset;
   3:    sc_out<sc_lv<32> > sample;
   4:    
   5:    sc_in_clk    clk;
   6:      SC_CTOR(consumer):    clk("clk"),    reset("reset"),   sample("sample") {
   7:    }
   8:    
   9:    . . .  
  10:  };

Solution Attempt 1) The first thing to try would be to simply create a new class called consumer_vmm as follows and define the required vmm_xactor methods.

   1:  class consumer_vmm : public consumer, public vmm_xactor 
   2:  {
   3:    consumer_vmm(vmm_object* parent, sc_module_name _nm) 
   4:           : vmm_xactor(_nm,"consumer",0,parent)
   5:              ,reset("reset") 
   6:              ,sample("sample") 
   7:              ,clk("clk")   
   8:       {   
   9:           SC_METHOD(entry);
  10:           sensitive << clk.pos();
  11:          . . .
  13:        
  14:       }
  15:      . . . define the remaining vmm_xactor methods as needed . . .
  16:  };
Unfortunately, this does not work. Reason? As it turns out, vmm_xactor also inherits from sc_module. So consumer_vmm will end up inheriting the same sc_module through two separate classes, the consumer and the vmm_xactor. This is known as the Diamond Problem. For some fun reading, check out http://en.wikipedia.org/wiki/Diamond_problem. 
 
Okay, so what can be done? Well, luckily, we can get all of this to work reasonably well with some additional tweaks/steps. Yes, you will need to very slightly modify the original source code, but in a backward compatible way. 
 

Solution Attempt 2) Make the original consumer class  derive from vmm_xactor instead of sc_module. This is the only change to existing code, and this will be backward compatible since vmm_xactor inherits from sc_module as well. Of course, add any further vmm_xactor:: derived methods using the old api as needed.

   1:  class consumer: public vmm_xactor
   2:  {
   3:   public:
   4:    sc_out<sc_logic>   reset;
   5:    sc_out<sc_lv<32> > sample;
   6:    sc_in_clk    clk;
   7:    . . . 
   8:  }
 
 
Solution) Here are all the steps. It looks like quite a few steps, but other than creating the 
wrappers and hooking them, the rest of the steps remain the same regardless of whether you use 
the sc_module or the vmm_xactor. 

Step 1. Make the original consumer class  derive from vmm_xactor instead of sc_module. This is the only change to existing code, and this will be backward compatible since vmm_xactor inherits from sc_module as well. Of course, add any further vmm_xactor:: derived methods using the old api as needed.

   1:  class consumer: public vmm_xactor
   2:  {
   3:   public:
   4:    sc_out<sc_logic>   reset;
   5:    sc_out<sc_lv<32> > sample;
   6:    sc_in_clk    clk;
   7:    . . . 
   8:  }

Step 2. Define SC_MODULE(consumer_wrapper), a class that has the same set of pins as needed by consumer.

   1:  SC_MODULE(consumer_wrapper) {
   2:    sc_out<sc_logic>   reset;
   3:    sc_out<sc_lv<32> > sample;
   4:    sc_in_clk    clk;
   5:    
   6:    SC_CTOR(consumer_wrapper):    clk("clk"),    reset("reset"),   sample("sample") {
   7:    }
   8:      
   9:  };
 
Step 3. Declare pointers to these wrappers (not instances) in the env class.
   1:  class env: public vmm_group
   2:  {
   3:  public:
   4:     consumer *consumer_inst0;
   5:     consumer *consumer_inst1;
   6:     consumer_wrapper *wrapper0, *wrapper1;
   7:   . . .
   8:  }

Step 4. In the connect_ph phase, connect the pins of the consumer instances to the corresponding wrapper instances.

   1:  virtual void env::connect_ph() {
   2:      consumer_inst0->reset(wrapper0->reset);
   3:      consumer_inst0->clk(wrapper0->clk);
   4:      consumer_inst0->sample(wrapper0->sample);
   5:   
   6:      consumer_inst1->reset(wrapper1->reset);
   7:      consumer_inst1->clk(wrapper1->clk);
   8:      consumer_inst1->sample(wrapper1->sample);
   9:  }
  10:     

Step 5. In the constructor for sc_top, after the vmm_env instance is created, make sure the pointers in the env point to these wrappers.

   1:  class sc_top : public sc_module
   2:  {
   3:  public: 
   4:    
   5:    vmm_timeline*  t1;
   6:    env*           e1;
   7:   
   8:    sc_out<sc_logic>   reset0;
   9:    sc_out<sc_lv<32> > sample0;
  10:    sc_in_clk    clk;
  11:   
  12:    sc_out<sc_logic>   reset1;
  13:    sc_out<sc_lv<32> > sample1;
  14:      
  15:    consumer_wrapper wrapper0;
  16:    consumer_wrapper wrapper1;
  17:   
  18:    SC_CTOR(sc_top):
  19:      wrapper0("wrapper0")
  20:      ,wrapper1("wrapper1")
  21:      ,reset0("reset0")
  22:      ,sample0("sample0")
  23:      ,reset1("reset1")
  24:      ,sample1("sample1")
  25:      ,clk("clk")
  26:     {
  27:        t1 = new vmm_timeline("timeline","t1");
  28:        e1 = new env("env","e1",t1);
  29:   
  30:        e1->wrapper0 = &wrapper0;
  31:        e1->wrapper1 = &wrapper1;
  32:   
  33:        vmm_simulation::run_tests();
  34:   
  35:        wrapper0.clk(clk);
  36:        wrapper0.reset(reset0);
  37:        wrapper0.sample(sample0);
  38:   
  39:        wrapper1.clk(clk);
  40:        wrapper1.reset(reset1);
  41:        wrapper1.sample(sample1);
  42:   
  43:     }
  44:   
  45:  };
 
 
So while it looks like a few more steps than we had hoped, you do it only once, and mechanically. A small price to pay for reuse. Maybe someone can create a simple script. 


 

Also, contact me if you want the complete example. The example also shows how you can add TLM ports as well.
 

This article is the 10th in the Verification in the trenches series. Hope you found this article useful. If you would like to hear about any other related topic, please comment or drop me a line at ambar.sarkar@paradigm-works.com. Also, if you are starting out fresh, please check out the free VMM1.2 environment generator.

 

Posted in Interoperability, SystemC/C/C++, VMM | 1 Comment »

You can also “C” with RAL.

Posted by S. Varun on 18th November 2010

The RAL C interface provides a convenient way for verification engineers to develop firmware code which can be debugged on an RTL simulation of the design. This interface provides a rich set of C APIs, using which one can access fields, registers and memories included within the RAL model. The developed firmware code can be used to interface with the RAL model running on a SystemVerilog simulator using DPI (Direct Programming Interface), and can also be reused as application-level code to be compiled on the target processor in the final application, as illustrated in the figure below.

[Figure: reusing firmware code between simulation and the target processor]

The "-gen_c" option available with "ralgen" has to be used for this; it causes the generator to produce the necessary files containing the APIs to interface to the C domain. The generated APIs can be used in one of two forms:

1. To interface to the System Verilog RAL model running on a System Verilog simulator using DPI and

2. As a pure standalone-C code designed to compile on the target processor.
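As a sketch of the generation step (reusing the ralgen command form shown in the earlier RAL example post; the block and file names are hypothetical):

% ralgen -l sv -t router_blk -gen_c router_blk.ralf   # emits the SV RAL model plus the C API files for the ROUTER_BLK registers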

Typically firmware runs a set of pre-defined sequences of writes/reads to the registers on a device performing functions for boot-up, servicing interrupts etc. You generally have these functions coded in C, and these would need to access the registers in the DUT. Using the RAL C model, these functions can be generated so that the firmware can now perform the register access through the SystemVerilog RAL model. Thus, this allows firmware and application-level code to be developed and debugged on a simulation of the design and the same functions can later be used as part of the device drivers to perform the same tasks on the hardware.

In the first scenario, when executing the C code within a simulation, the C code must be called by the simulation in order to execute, and hence the application software's main() routine must be replaced by one or more entry points known to the simulation. All entry points must take at least one argument that will receive the base address of the RAL model to be used by the C code. The C-side reference is then used by the RAL C API to access the required fields, registers or memories.

[Figure: entry points from the simulation into the C code]

Consider a sequence of register accesses performed at the boot time of a router as defined in function system_bootup() shown below,

File : router_test.c

#include <stdlib.h>

void system_bootup (unsigned int blk); /* forward declaration of the entry point */

#ifdef VMM_RAL_PURE_C_MODEL

/* when used as a pure C model for the target processor */

int main() {

void *blk = calloc(410024,1);

system_bootup((unsigned int)(((size_t)blk)>>2)+1);

}

#endif

void system_bootup (unsigned int blk){

unsigned int dev_id = 0xABCD;

unsigned int ver_num = 0x1234;

/* Invoking the generated RAL C APIs */

ral_write_DEV_ID_in_ROUTER_BLK(blk, &dev_id);

ral_write_VER_NUM_in_ROUTER_BLK(blk, &ver_num);

}

In the above example, "ral_write_DEV_ID_in_ROUTER_BLK()" and "ral_write_VER_NUM_in_ROUTER_BLK()" are C APIs generated by "ralgen" to access registers named "DEV_ID" and "VER_NUM" located in a block named "ROUTER_BLK". In general, registers can be accessed using the "ral_read_<reg>_in_<blk>" and "ral_write_<reg>_in_<blk>" macros present within the RAL C interface.

The above C function can also be called from within a SystemVerilog testbench via DPI.

File : ral_boot_seq_test.sv

import "DPI-C" context task system_bootup(int unsigned ral_model_ID);

program boot_seq_test();

initial

begin

router_blk_env env = new();

// Configuring the DUT

env.cfg_dut();

// Calling the C function using DPI

system_bootup(env.ral_model.get_block_ID()); // 'ral_model' is the instance of the generated SV model in the env class

end

endprogram

In the example above, system_bootup is the service entry point which is called from the SV simulation and is passed the RAL model reference. The 'C' code then executes while the simulation freezes, until one of the register accesses is made, which shifts execution over to the SV side.

[Figure: execution flow between the C code and the SystemVerilog simulation]

The entire execution in 'C' happens in '0' time on the simulation timeline. The RAL C API hides the physical addresses of registers and the position and size of fields. The hiding is performed by functions and macros rather than an object-oriented structure like the native RAL model in SystemVerilog. This eliminates the need to compile a complete object-oriented model into embedded-processor object code with a limited amount of memory.

Thus, by using the RAL C interface one can develop compact and efficient firmware code, while preserving as much as possible of the abstraction offered by RAL. I hope this was useful; do let me know your thoughts if you plan to use this flow to have your firmware and application-level code developed and debugged on a simulation of the design.

Please refer to the RAL userguide for more information. You can also refer to the example present within VCS installation located at $VCS_HOME/doc/examples/vmm/applications/vmm_ral/C_api_ex.

Posted in Register Abstraction Model with RAL, SystemC/C/C++, SystemVerilog, VMM | Comments Off

Reusing Your Block Level Testbench

Posted by JL Gray on 12th November 2010

JL Gray, Vice President, Verilab, Austin, Texas, and Author of Cool Verification

Building reusable testbench components is a desirable, but not always achievable goal on most verification projects. Engineers love the idea of not having to rewrite the same code over and over again, and managers love the idea that they can get more work out of their existing team by simply reusing code within a project and/or between projects. It is interesting, then, that I frequently encounter code written by engineers that cannot work in a reusable way.

Consider the following scenario. You’ve just coded up a new block level testbench to verify an OCP to AHB bridge. As any good engineer knows, your environment must be self-checking, so you create a VMM scenario that does the following:

class my_dma_scenario extends vmm_scenario;

  rand ocp_trans ocp;
  ahb_trans ahb;

  // ... 

  virtual task execute(ref int n);
    vmm_channel ocp_chan = this.get_channel("OCP");
    vmm_channel ahb_chan = this.get_channel("AHB");
    this.ocp.randomize();
    ocp_chan.put(this.ocp.copy());

    // Wait for the transaction to complete...

    ahb_chan.get(ahb);

    // Compare actual and expected values
    // in the stimulus instead of the scoreboard.
    my_compare_function(ocp, ahb);

  endtask

endclass

Or perhaps you decide to get fancy and create a situation where you add the expected transaction from the scenario directly to a scoreboard, which compares the results with transactions on the AHB interface as observed by a monitor. Either way, you go merrily about your business verifying the bridge. You get excellent coverage numbers, and eventually, with the help of the block's designer, you feel you've fully verified the module at the block level. However, things are not going well in the full chip environment. Full chip tests are failing in unexpected ways, and other engineers debugging the issue feel it must be a bug in the bridge! They start assigning bugs to you and the block designer. If only you had some checkers operating at the full chip level you could prove your module was operating correctly…

Unfortunately, you have a problem. In your block level testbench, your checkers only work in the presence of stimulus, and in the full chip environment the stimulus components cannot be used. And at this stage of the project, there is no time to go back and create the additional infrastructure (monitor(s) and possibly a new scoreboard) required to run your self-checking module-level testbench at the full chip level.

Fortunately, there is another way to build a module level testbench so that it will be guaranteed to work at the full chip. Follow these simple rules:

  1. Always create a monitor, driver, and scenario generator for every testbench component.
  2. Never populate a scoreboard from a driver or scenario. Always pull in this information from a passive monitor.

In general, when writing your testbench components always assume they will need to run in the absence of testbench stimulus. Yes – this means additional up-front work. Writing checkers that work with passive monitors instead of the information you have available to you in the driver can be time consuming. Often, though, the amount of additional work required is much less than the time you will spend debugging issues without checkers at the full chip level. That being said, sometimes it is too difficult to implement purely passive checkers. You will have a good idea of whether or not this is the case based on the upfront work you did writing a comprehensive testplan. You do have a testplan, right?

Posted in Reuse, VMM, Verification Planning & Management | 3 Comments »

Modeling ISRs with VMM RAL

Posted by Amit Sharma on 4th November 2010

In a verification environment, different components may be trying to access the DUT registers and memories. For example, the BFM might be programming some registers while the bus monitor might be sampling the values of these registers. In specific cases, there may be an interrupt monitor which triggers an Interrupt Service Routine (ISR) whenever it sees an interrupt pin toggling on the interface. The ISR might end up having to read the interrupt registers and clear the interrupt bit(s) through a front-door access.

To ensure that different components in a verification environment can access the DUT registers at any given point in time, the RAL model instantiated in the environment can be passed to different VMM components. These different components whose methods are executing in separate parallel threads can now access the same set of registers in the DUT through the RAL model. A question many folks ask is: when there are multiple parallel register accesses, how do they get scheduled through the RAL layer?  Here is an explanation of how threads are scheduled in RAL:

A register read/write from different threads is comparable to an ‘atomic’ channel.put() from different threads. Hence the accesses get scheduled in the order in which the threads issue them.

A write/read would basically consist of the following atomic operations:

- A generic vmm_rw_access transaction with its fields (addr, data, kind etc) being populated and posted onto the execute_single()   task of the Translate Transactor

- The transaction being translated in the execute_single() task and pushed into the input channel of the user BFM

- The transaction being retrieved through get/activate in the User BFM main thread and then subsequently driven to the DUT interface

Thus the ‘posting’ of RAL accesses, whenever a Read/Write/Mirror/Update is invoked, happens in the same order in which they are issued. Subsequently, the execute_single() task simply translates the generic RW RAL transaction into a transaction the user BFM understands, and doesn’t change the order.
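
For instance, two threads issuing register accesses in parallel, as in the sketch below (the block and register names are illustrative), are simply serialized and driven to the DUT one after the other, in the order in which the write()/read() calls reach the RAL layer:

vmm_rw::status_e st1, st2;
bit [63:0] rdata;

fork
   env.ral_model.blk.CTRL.write(st1, 64'h1);    // thread 1: configuration write
   env.ral_model.blk.STATUS.read(st2, rdata);   // thread 2: status read
join
// Both calls return, but each access is posted to execute_single() and
// driven by the user BFM atomically, back-to-back on the physical bus.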

Now, how do we handle a scenario where specific register accesses, like those coming from an ISR, need to be given a higher priority than accesses coming from other threads in the verification environment?  VMM and RAL methods give us specific hooks to achieve this requirement. Here is one of the options for how this can be done.

If we look at the vmm_ral_reg::write method, it has the following signature:

virtual task write(output vmm_rw::status_e status,
                   input  bit [63:0]        value,
                   input  vmm_ral::path_e   path = vmm_ral::DEFAULT,
                   input  string            domain = "",
                   input  int               data_id = -1,
                   input  int               scenario_id = -1,
                   input  int               stream_id = -1)

Now, a generic RAL transaction that gets created through any register access carries the same data_id, scenario_id and stream_id arguments that are passed in from the read/write call. These arguments help us tag and track the transactions if that is so desired, and they can be used in the execute_single() task to ensure that accesses from ISRs get the highest priority. But first, if we go back to the earlier post by Varun, Issuing concurrent READ/WRITE accesses to the same register on two physical interfaces using RAL, we note that if the RAL model is processing a register access, it will not initiate the next one until the earlier one is completed. So, this is what we need to get done: we first use the RAL Proxy transactor to schedule the ISR register access simultaneously. After that, we flush out any existing accesses and prevent any new register access through RAL until the access from the ISR is completed.

This is how it will be done:

image

For a normal register access, a read/write method will be invoked as follows:

          ral_model.<reg_name>.write(stat, wdata, , "AHB"); // 'wdata' is the value to be driven, "AHB" is the domain/physical interface

For a register access in an ISR modeled in a MS Scenario, we have:

         env.bfm.to_ahb.grab(this); //grabs the channel

         env.ral.read(status, env.ral_model.<reg_name>.get_address_in_system("AHB"), data, 32, "AHB",,,1); // the last argument is the "stream_id" argument

         env.bfm.to_ahb.ungrab(this); //allows the channel to be accessed from other threads once the ISR is completed

Once this is done, the execute_single() task inside the Translate Transactor will know whether an access comes from an ISR and can ensure that ISR accesses are processed with priority, using the VMM channel methods in combination with a semaphore:

virtual task execute_single(vmm_rw_access tr);
   AHB_tr cyc;
   AHB_tr cyc_active;        // to hold any transaction residing in the channel's active slot
   semaphore sem = new(1);   // single-key semaphore used to hold off normal accesses while an ISR access is processed
                             // (note: to block across calls, 'sem' should really be a class-level member of the transactor)

   // Translate the generic RW into a simple RW
   cyc = new;
   {cyc.dev, cyc.addr} = tr.addr;
   if (tr.kind == vmm_rw::WRITE) begin
      cyc.cycle = simple_tr::WRITE;
      cyc.data  = tr.data;
      …
   end
   else begin
      cyc.cycle = simple_tr::READ;
      …
   end

   if (tr.stream_id != 1) sem.get(1); // regular register accesses block here while an ISR access is processed
   else begin
      if (this.bfm.to_ahb.is_full()) begin
         this.bfm.to_ahb.activate(cyc_active); // remove any transaction in the channel's active slot so the ISR access can be pushed through
         this.in_chan.start();
         this.in_chan.complete();
         this.in_chan.remove();
         this.bfm.to_ahb.put(cyc);
         this.bfm.to_ahb.put(cyc_active);      // restore the original transaction back into the channel
         cyc = cyc_active;
         sem.put(1);                            // the ISR access puts back the key so that normal accesses can resume
      end
      else this.bfm.to_ahb.put(cyc);
   end

   // Send the result back to RAL
   if (tr.kind == vmm_rw::READ) begin
      tr.data = cyc.data;
   end
endtask: execute_single

Posted in Register Abstraction Model with RAL, VMM infrastructure | Comments Off

Setting data members for factory-created objects

Posted by Avinash Agrawal on 11th October 2010

Avinash Agrawal, Corporate Applications, Synopsys

One question asked by verification engineers is: given that we are using the VMM class factory, how do we assign the data members of an object while calling the create_instance() method?

For example, the create_instance() method of the class factory service always calls
the class constructor with default arguments. Therefore, it would appear that extra
arguments cannot be passed to the class instance, i.e., there is no way to pass
additional arguments for class members with the following statement:

  tr = ahb_trans::create_instance(this, "Ahb_Tr0", `__FILE__, `__LINE__);

The answer is that while you cannot use the create_instance() function to initialize
data members, you can use either of the following methods to deal with this assignment:
1. Use a virtual set_* function/task in the base class and derived class, and pass
   the values through the argument list of that set_* function/task. The limitation
   is that the set_* function/task arguments for the base class and the derived
   class must be the same.

2. Use the vmm_opts::set_* and vmm_opts::get_* combination to set the value of
   the class properties.

Examples:

1. By using set_* to pass arguments as described in point 1:

////////////////////
//Source code
////////////////////
`define VMM_12
program P;
`include "vmm.sv"

// Define base class
class ahb_trans extends vmm_data;
  `vmm_typename(ahb_trans)
  rand int addr;
  static vmm_log log = new("ahb_trans", "object");

  `vmm_data_member_begin(ahb_trans)
    `vmm_data_member_scalar(addr,	DO_ALL)
  `vmm_data_member_end(ahb_trans)

  virtual function void set_data(int addr=0, int data=0);	// <-- Two arguments
    this.addr = addr;
  endfunction

  virtual function void display_data();
    `vmm_note(log, $psprintf("ahb_trans.addr=='h%0h", this.addr));
  endfunction

  `vmm_class_factory(ahb_trans)
endclass

// Define derived class
class my_ahb_trans extends ahb_trans;
  `vmm_typename(my_ahb_trans)
  rand int data;

  static vmm_log log = new("my_ahb_trans", "object");

  `vmm_data_member_begin(my_ahb_trans)
    `vmm_data_member_scalar(data,		DO_ALL)
  `vmm_data_member_end(my_ahb_trans)

  virtual function void set_data(int addr=0, int data=0);	// <-- Two arguments
    this.addr = addr;
    this.data = data;
  endfunction

  virtual function void display_data();
    `vmm_note(log, $psprintf("my_ahb_trans.addr=='h%0h, my_ahb_trans.data=='h%0h", this.addr, this.data));
  endfunction

  `vmm_class_factory(my_ahb_trans)
endclass

class env extends vmm_env;

   ahb_trans tr;

   function new;
      super.new("ENV");
   endfunction;

   function void build;
      super.build;

     ahb_trans::override_with_new("@%*", my_ahb_trans::this_type, log, `__FILE__, `__LINE__);
     tr = ahb_trans::create_instance(this, "Ahb_Tr0", `__FILE__, `__LINE__); 

     if(!(tr.get_typename == "class P.my_ahb_trans"))
        `vmm_error(log,"ERROR");

     tr.set_data(32'h5555_5555, 32'haaaa_aaaa);	// <-- Call set_* after create_instance
     tr.display_data();

   endfunction

endclass

initial begin
  env env = new();
  env.run;
end

endprogram

////////////////////
//Simulation Result
////////////////////
Normal[NOTE] on my_ahb_trans(object) at                    0:
    my_ahb_trans.addr=='h55555555, my_ahb_trans.data=='haaaaaaaa
Simulation PASSED on /./ (/./) at                    0 (0 warnings, 0 demoted errors & 0 demoted warnings)

2. By using vmm_opts::set_* and vmm_opts::get_* combination as described in point 2:

////////////////////
//Source code
////////////////////
`define VMM_12
program P;
`include "vmm.sv"

// Define base class
class ahb_trans extends vmm_data;
  `vmm_typename(ahb_trans)
  rand int addr;
  static vmm_log log = new("ahb_trans", "object");

  `vmm_data_member_begin(ahb_trans)
    `vmm_data_member_scalar(addr,	DO_ALL)
  `vmm_data_member_end(ahb_trans)

  virtual function void set_data();	// <-- no arguments passed, so there is no limitation on the argument list
    this.addr = vmm_opts::get_int ("ADDR", 0, "Value set for addr");
  endfunction

  virtual function void display_data();
    `vmm_note(log, $psprintf("ahb_trans.addr=='h%0h", this.addr));
  endfunction

  `vmm_class_factory(ahb_trans)
endclass

// Define derived class
class my_ahb_trans extends ahb_trans;
  `vmm_typename(my_ahb_trans)
  rand int data;

  static vmm_log log = new("my_ahb_trans", "object");

  `vmm_data_member_begin(my_ahb_trans)
    `vmm_data_member_scalar(data,		DO_ALL)
  `vmm_data_member_end(my_ahb_trans)

  virtual function void set_data();	//<-- no arguments passed, so there is no limitation on number of arguments
    this.addr = vmm_opts::get_int ("ADDR", 0, "Value set for addr");
    this.data = vmm_opts::get_int ("DATA", 0, "Value set for data");
  endfunction

  virtual function void display_data();
    `vmm_note(log, $psprintf("my_ahb_trans.addr=='h%0h, my_ahb_trans.data=='h%0h", this.addr, this.data));
  endfunction

  `vmm_class_factory(my_ahb_trans)
endclass

class env extends vmm_env;

   ahb_trans tr;

   function new;
      super.new("ENV");
   endfunction;

   function void build;
      super.build;

     vmm_opts::set_int("ADDR", 32'h55,null);	//	<-- Call vmm_opts::set_*
     vmm_opts::set_int("DATA", 32'haa,null);

     ahb_trans::override_with_new("@%*", my_ahb_trans::this_type, log, `__FILE__, `__LINE__);
     tr = ahb_trans::create_instance(this, "Ahb_Tr0", `__FILE__, `__LINE__); 

     if(!(tr.get_typename == "class P.my_ahb_trans"))
        `vmm_error(log,"ERROR");

     tr.set_data();
     tr.display_data();

   endfunction

endclass

initial begin
  env env = new();
  env.run;
end

endprogram

////////////////////
//Simulation Result
////////////////////
Normal[NOTE] on my_ahb_trans(object) at                    0:
    my_ahb_trans.addr=='h55, my_ahb_trans.data=='haa
Simulation PASSED on /./ (/./) at                    0 (0 warnings, 0 demoted errors & 0 demoted warnings)

Posted in VMM, VMM infrastructure | Comments Off

Shared Register Access in RAL through multiple physical interfaces

Posted by Amit Sharma on 19th September 2010

Amit Sharma, Synopsys

Usually, designs have a single interface but some designs may have more than one physical interface, each with accessible registers or memories. RAL supports designs with multiple physical interfaces, as well as registers and memories shared across multiple interfaces.

In RAL, a physical interface is called a domain. Only blocks and systems can have domains. Domains can contain registers and memories.

image

Now, how do you enable a register or a memory to be shared across multiple physical interfaces or ‘domains’ in RAL? This is done by declaring the register/memory as ‘shared’ in the RALF file and instantiating it in more than one domain. Once this is done, the functionality will be enabled in the generated RAL model.

Let us take an example. Suppose the design block has two domains or interfaces (say pci/ahb) which can have shared access to the register STATUS. This can be specified through the ‘shared’ keyword in the register description in the RALF file; the generated RAL block model is shown below.

image

In the example above, the text box on the left shows a snippet of the generated code. You can see that domains do not create an additional level of hierarchy. The different registers belonging to the two domains in the block are available in the flat namespace of the block and can be accessed directly, without specifying the domain name in the access hierarchy.

Once a register is defined as ‘shared’, you can go ahead and access that register through different domains. A typical usage scenario for a ‘shared’ register is that it is read from one interface and written from another. The two different physical BFMs are mapped to the corresponding domains easily in the SystemVerilog environment.

The following snippet of code shows how this is done:

this.ahb = new(…);                     // AHB BFM instantiation
this.ral.add_xactor(this.ahb, "ahb");  // tying the AHB BFM to the "ahb" domain
this.pci = new(…);                     // PCI BFM instantiation
this.ral.add_xactor(this.pci, "pci");  // tying the PCI BFM to the "pci" domain

Once the mapping is done, the way you access a specific ‘shared’ register is quite simple: you additionally specify the ‘domain’ name in your read/write task.
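
For example, assuming the shared register is named STATUS and the block instance is blk (illustrative names), the accesses through the two domains might look like:

vmm_rw::status_e status;
bit [63:0] wdata, rdata;

// Write the shared STATUS register through the PCI interface
env.ral_model.blk.STATUS.write(status, wdata, , "pci");

// Read the same register back through the AHB interface
env.ral_model.blk.STATUS.read(status, rdata, , "ahb");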

image

When you make the writes/reads as described above, the accesses are routed through the BFMs corresponding to the different domains, as specified through the ‘domain’ argument in the read/write access tasks seen in the code above.

In the test environment, if simulation is run with ‘trace’ or ‘debug’ verbosity, details of the accesses to the shared register through the different domains can be observed.

The following lines show a snippet of the messages going to STDOUT:

//———
Trace[DEBUG] on RVM RAL Access(Main) at 135:
Writing 'h00000000000000aa at 'h0000000000000A004 via domain "pci"...
Trace[DEBUG] on RAL(register) at 245:
Wrote register "ral_model.INT_MASK" via frontdoor with: 'h00000000000000aa
Trace[DEBUG] on RVM RAL Access(Main) at 355:
Read 'h00000000000000aa from 'h0000000000000002 via domain "ahb"...
Trace[DEBUG] on RAL(register) at 355:
Read register "ral_model.INT_MASK" via frontdoor: 'h00000000000000aa
//———

By the way, one of the pre-defined tests that ship with the VMM RAL is the “shared_access” test. It exercises all shared registers and memories using the following process for each domain (requires at least one domain with READ capability or backdoor access).

- Write a random value via one domain

- Check the content via all other domains.

Hence, without writing a single line of test code, you can verify the basic functionality of all shared registers/memories of your DUT through this test.


Posted in Register Abstraction Model with RAL | 1 Comment »

Using the Factory Infrastructure in VMM

Posted by Avinash Agrawal on 15th September 2010

Avinash Agrawal, Corporate Applications, Synopsys

As a well-known object-oriented technique, the factory concept has been applied in VMM ever since its inception.

For example, if you assign different blueprints to randomized_obj and scenario_set[]
in the VMM atomic and scenario generators, these generators can generate transactions
with user specified patterns. By using the class factory pattern, you can create an
instance with a pre-defined method such as allocate() or copy() instead of the
constructor.

VMM now simplifies the application of the class factory pattern across the complete
verification environment, helping you easily replace any kind of object,
transaction, scenario or transactor with a similar object.

The following simple procedure helps you to apply the class factory pattern within
the verification environment:

1. Define "new", "allocate" and "copy" methods for a class and use the factory macros
   to create the necessary infrastructure for factory replacement.

class vehicle_c extends vmm_object;
`vmm_typename(vehicle_c);
//defines the new function. each argument should have default values
   function new(string name="",vmm_object parent=null);
      super.new(parent,name);
   endfunction

// defines allocate and copy methods; you can use the VMM data shorthand macros
// for the same if classes are extended from vmm_data

   virtual function vehicle_c allocate();
      vehicle_c it;
      it = new(this.get_object_name,get_parent_object());
      allocate = it;
   endfunction

   virtual function vehicle_c copy();
      vehicle_c it;
      it = new this;
      copy = it;
   endfunction

//`vmm_typename and `vmm_class_factory will  define necessary infrastructure
 `vmm_class_factory(vehicle_c);
endclass

2. Create an instance using the pre-defined "create_instance()" method. To use the
   class factory, the class instance should be created with the pre-defined
   create_instance() method instead of the constructor. 

For example,

class driver_c extends vmm_xactor;
   vehicle_c myvehicle;
   function new(string name="",vmm_object parent=null);
      super.new(parent,name);
   endfunction

   task drive();
   //create an instance from create_instance method
      myvehicle = vehicle_c::create_instance(this,"myvehicle");
      $display("%s is driving %s(%s)", this.get_object_name(),
       myvehicle.get_object_name(),myvehicle.get_typename());
   endtask
endclass

program p1;
   driver_c Tom = new("Tom", null);
   initial begin
      Tom.drive();
   end
endprogram

The output for the example will be,
Tom is driving myvehicle(class $unit::vehicle_c)

3.  Create a new class which should replace the existing one as shown in the
    following example:

class sedan_c extends vehicle_c;
   `vmm_typename(sedan_c);
    function new(string name="",vmm_object parent=null);
       super.new(name,parent);
    endfunction

    virtual function vehicle_c allocate();
       sedan_c it;
       it = new(this.get_object_name,get_parent_object());
       allocate = it;
    endfunction

    virtual function vehicle_c copy();
       sedan_c it;
       it = new this;
      copy = it;
    endfunction
    `vmm_class_factory(sedan_c);
endclass

You are now required to create a 'myvehicle' instance from this new class without
modifying the driver_c class.

4. Override the original instance or type with the new class. VMM provides two
   methods for you to override the original instances or type:

- override_with_new(string name, new_class factory, vmm_log log, string
  fname="", int lineno=0);
  With this method, when create_instance() is called, a new instance of new_class
  will be created through factory.allocate() and returned.

- override_with_copy(string name, new_class factory, vmm_log log, string
  fname="", int lineno=0);
  With this method, when create_instance() is called, a new instance of new_class
  will be created through factory.copy() and returned.

For both methods, the first argument is the instance name, as specified in
the create_instance() method, which you want to override with the type of
new_class. You can use the powerful pattern matching mechanism defined in VMM
to specify whether the override applies to specific instances or to all
instances of a class in the whole verification environment.

The code below will override all vehicle_c instances with sedan_c type in the
environment:

program p1;
   driver_c Tom=new("Tom",null);
   vmm_log log;
   initial begin
      //override all vehicle_c instances with type of sedan_c
      vehicle_c::override_with_new("@%*",sedan_c::this_type,log);
      Tom.drive();
    end
endprogram

And the output of the above code is:

  Tom is driving myvehicle(class $unit::sedan_c)

If you only want to override one dedicated instance with a copy of another
instance, you can call override_with_copy by using the following code:

vehicle_c::override_with_copy("@%Tom:myvehicle",another_sedan_c_instance,log);

As the above example shows, with the class factory, match patterns and shorthand
macros provided with VMM, you can easily use the class factory pattern to replace
transactors, transactions and other verification components without modifying the
testbench code.

Posted in VMM | 5 Comments »

RAL: Using TCL to conditionally generate registers

Posted by Vidyashankar Ramaswamy on 12th September 2010

 

Let us say you have a DMA RTL block and its registers are specified in the RALF format. Now you want to instantiate it 4 times at the system level. The most common approach is to use the array notation. What if you want to include one (or more) extra registers in only one of the instances? The trivial approach is to manually create two block definitions with the appropriate registers. But what if you want to create 5 different flavors, each with a very small addition or deletion of a register? Do you have to create 5 of them by duplicating the same content? No! This can be done automatically using the ralgen tool and its built-in Tcl interpreter. Let us take a look at the following simple example.

# File: dma_registers.ralf

# Register definitions

register device_id {
   bytes 4;
   field device_id {
      bits 32;
      access ro;
      soft_reset 32'h0;
   }
}

register command_status {
   field cmd_status {
      bits 31;
      access rw;
      reset 32'h0;
   }
}

# DMA Block code generation

for {set i 0} {$i < 2} {incr i} {
   block dma_$i {
      bytes 4;
      register device_id @'h100;
      if {$i == 1} {
         register command_status @'h124;
      }
   }
}

# System top level

system SYSTEM_TOP {
   bytes 4;
   block dma_0 @'h0000;
   block dma_1[3] @'h1000 + 'h1000;
}

This example shows a DMA block that contains two registers. Say you want to create two different DMA blocks, of which only one has the command status register. Let us call them dma_0 and dma_1. This can easily be achieved using a for loop construct to conditionally include the register, as shown above. The system top instantiates one dma_0 block and three dma_1 blocks. The following is the output from the ralgen tool.

class ral_sys_SYSTEM_TOP extends vmm_ral_sys;

   rand ral_block_dma_0 dma_0;
   rand ral_block_dma_1 dma_1[3];
   . . .

endclass : ral_sys_SYSTEM_TOP

Can I use the loop construct even at the system level? Yes, you can replace the array notation with the loop construct. Though this is a bit more complex, you will have better control over the instance names generated by the tool. The following code demonstrates the concept.

system SYSTEM_TOP {
   bytes 4;
   for {set i 0} {$i < 2} {incr i} {
      set OFFSET [expr $i * 2048];
      if {$i == 1} {
         for {set j 0} {$j < 3} {incr j} {
            set OFFSET_1 [expr $OFFSET + ($j+1) * 2048];
            block dma_$i=dma_$i$j @$OFFSET_1;
         }
      }
      else {
         block dma_$i @$OFFSET;
      }
   }
}

The tool output is shown below.

 class ral_sys_SYSTEM_TOP extends vmm_ral_sys;

   rand ral_block_dma_0 dma_0;
   rand ral_block_dma_1 dma_10;
   rand ral_block_dma_1 dma_11;
   rand ral_block_dma_1 dma_12;
   . . .

endclass : ral_sys_SYSTEM_TOP

Please consider creating separate files for blocks and systems, which will help with block-to-top reuse of the RALF descriptions.

Posted in Register Abstraction Model with RAL | Comments Off

Using VMM Consensus to end your test

Posted by JL Gray on 23rd August 2010

Asif Jafri, Verification Consultant, Verilab

One topic that is often overlooked is how to end a test. One common approach has been to use pound delays or to count the number of transactions generated. While this worked well for directed test environments, the approach is not well suited to constrained-random testbenches. Usually there are several threads running in parallel, and to be able to intelligently tell whether all test criteria are met, we need a more centralized approach to manage and decide test completion. vmm_group now has a mechanism to centrally manage test completion with the use of VMM Consensus. To better explain the usage, let’s build an example:

1. Instantiate a vmm_voter handle to indicate consensus on, or opposition to, the end of the test:
   vmm_voter end_voter;
2. Identify the participants that need to consent before the test ends. These participants can be transactors (which consent when idle), channels (which consent when empty), notifications and other vmm_consensus instances. Register the voters in vmm_group::build_ph() using the vmm_consensus::register_* methods.
3. Call vmm_consensus::wait_for_consensus() in the vmm_group::wait_for_end() method. Once all participants consent, the test will complete.

class tb_top extends vmm_group;
   `vmm_typename(tb_top)
   vmm_voter end_voter;

   function void build_ph();
      transaction_channel master_chan;
      slave_transactor    slave_xactor;
      // 'end_vote' is assumed to be the vmm_consensus instance used for end-of-test voting
      end_voter = end_vote.register_channel(master_chan);
      end_voter = end_vote.register_xactor(slave_xactor);
   endfunction

   task wait_for_end();
      super.wait_for_end();
      end_vote.wait_for_consensus();
   endtask

endclass: tb_top

The code above shows how we can instantiate various voters that will participate in the test completion.

The figure below shows how a participant opposes test completion, while all other participants have given consent.
image

One of the simplest forms of usage is to oppose completion, using the command above, before completing some given task such as programming registers or pulling reset, and then to give consent with a call like this.consent(“Programming Completed”), as sketched below. There is often a need to force consensus to end a test if one of the opposing blocks does not release its opposition. This can be achieved by using the forced command.
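
A minimal sketch of this oppose/consent handshake (the voter handle and the register-programming task below are illustrative):

// Oppose end-of-test while the registers are being programmed...
end_voter.oppose("Programming registers");
program_registers();   // illustrative task doing the actual configuration
// ...and give consent once programming is complete
end_voter.consent("Programming Completed");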
 

 

 

image

You can choose to use the consensus_force_thru command to propagate the force up.

image

Using these techniques to end your test will make your testbench scalable and reusable across various projects.

Posted in Coding Style, VMM | 2 Comments »

Issuing concurrent READ/WRITE accesses to the same register on two physical interfaces using RAL.

Posted by S. Varun on 9th August 2010

Currently, RAL does not schedule multiple concurrent READ/WRITE accesses to the same register at the same time. RAL
assumes all physical transactions to be atomic, i.e. a transaction is completed before the next one is started.
Even if we try to do a parallel READ, RAL will block one of the two accesses and schedule it sequentially. This is
to ensure that the accesses are not corrupted, and also because the main idea behind RAL is to abstract
accesses to the registers.

However, in specific cases, there might be a verification requirement to create specific corner cases on the buses.

A typical corner case to check parallel access to a shared register would be as shown in the code snippet below,

Sample code :

     fork
       /* READ the CONTROL_REG via the APB interface */
       env.ral_model.blk_inst.CONTROL_REG.read(status, data , , "APB");

       /* Read the CONTROL_REG via the AHB interface */
       env.ral_model.blk_inst.CONTROL_REG.read(status, data , , "AHB");
     join

Our intention here is to query the control register from two different domains "APB" & "AHB" in parallel, or to write
through one interface and do a read access through another. But as already mentioned, a controlling semaphore
mechanism tied to each register will sequentially schedule the above two accesses. To create such corner cases we
can use the RAL proxy transactor with "vmm_ral_access::read()/write()", which is not going to be blocked as the
semaphore resides in the register. The code below illustrates how we can do this.

Sample code :

     fork
       /* READ the CONTROL_REG via the APB interface */
       env.ral.read(status, env.ral_model.blk_inst.CONTROL_REG.get_address_in_system("APB"), data, 32, "APB");

       /* READ the CONTROL_REG via the AHB interface */
       env.ral.read(status, env.ral_model.blk_inst.CONTROL_REG.get_address_in_system("AHB"), data, 32, "AHB");
     join

The "vmm_ral_reg::get_address_in_system()" method can be used here to generate the correct address corresponding
to the domain of interest and the same can be passed with the "vmm_ral_access::read()/write()" task. 

We can also do the following:

1)	Read CONTROL_REG through one of the interfaces using "env.ral_model.blk_inst.CONTROL_REG.read()" &
2)	Read the same register via vmm_ral_access::read() through the other interface as illustrated in the
        sample code below.

Sample code :

     fork
       /* READ the CONTROL_REG via the APB interface */
       env.ral_model.blk_inst.CONTROL_REG.read(status, data , , "APB");

       /* READ the CONTROL_REG via the AHB interface */
       env.ral.read(status, env.ral_model.blk_inst.CONTROL_REG.get_address_in_system("AHB"), data, 32, "AHB");
     join

Because the semaphore is in the register, the access through the proxy transactor is not going to be blocked. 

Also, if the DUT and physical transactor support pipelined read/write accesses, the generic transactions can
also be executed concurrently. You must first enable this capability by defining the compile-time macro
VMM_RAL_PIPELINED_ACCESS.

Posted in Register Abstraction Model with RAL | Comments Off
