Verification Martial Arts: A Verification Methodology Blog

Archive for the 'Automation' Category

Avoiding Redundant Simulation Cycles in your UVM VIP based simulation with a Simple Save-Restore Strategy

Posted by paragg on 6th March 2014

In many verification environments, you reuse the same configuration cycles across different testcases. These cycles might involve writing and reading different configuration and status registers, loading program memories, and other similar tasks to set up a DUT for the targeted stimulus. In many of these environments, the time taken by these configuration cycles is very long. Also, there is a lot of redundancy, as the verification engineers have to run the same set of verified configuration cycles for different testcases, leading to a loss in productivity. This is especially true for complex verification environments with multiple interfaces, which require different components to be configured.

The Verilog language provides an option of saving the state of the design and the testbench at a particular point in time. You can restore the simulation to the same state and continue from there. This can be done by adding the appropriate built-in system tasks to the Verilog code. VCS provides the same options from the Unified Command line Interpreter (UCLI).

However, it is not enough for you to restore simulation from the saved state. For different simulations, you may want to apply different random stimulus to the DUT. In the context of UVM, you would want to run different sequences from a saved state as shown below.

In the above example, apart from the last step, which varies to a large extent, the rest of the steps, once established, need no iteration.

Here we explain how to achieve the above strategy with the simple existing UBUS example available in the standard UVM installation. Simple changes are made in the environment to show what needs to be done to bring in this additional capability. Within the existing set of tests, two of them, namely “test_read_modify_write” and “test_r8_w8_r4_w4”, differ only with respect to the master sequence being executed – i.e. “read_modify_write_seq” and “r8_w8_r4_w4_seq” respectively.

Let’s say that we have a scenario where we would want to save a simulation once the reset_phase is done and then start executing different sequences post the reset_phase in the restored simulations. To demonstrate a similar scenario through the UBUS tests, we introduced a delay in the reset_phase of the base test (in a real test, this may correspond to PLL lock, DDR initialization, or basic DUT configuration).

The following snippet shows how the existing tests are modified to bring in the capability of running different tests in different ‘restored’ simulations.

As evident in the code, we made two major modifications:

  • Shifted the setting of the phase default_sequence from the build phase to the start of the main phase.
  • Obtained the name of the sequence as an argument from the command line and processed the string appropriately in the code to execute the sequence on the relevant sequencer.
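A minimal sketch of what these two modifications could look like in the test is shown here (the +SEQ= plusarg name and the exact sequencer hierarchy path are illustrative assumptions, not the exact code from the UBUS example):

```systemverilog
class ubus_save_restore_test extends ubus_example_base_test;
  `uvm_component_utils(ubus_save_restore_test)

  function new(string name = "ubus_save_restore_test", uvm_component parent = null);
    super.new(name, parent);
  endfunction

  // The default_sequence is now selected at the start of main_phase
  // (instead of build_phase), so each restored simulation can pick a
  // different sequence from the command line.
  virtual task main_phase(uvm_phase phase);
    string seq_name;
    if (!$value$plusargs("SEQ=%s", seq_name))
      seq_name = "read_modify_write_seq"; // fallback default
    case (seq_name)
      "read_modify_write_seq":
        uvm_config_db#(uvm_object_wrapper)::set(this,
          "ubus_example_tb0.ubus0.masters[0].sequencer.main_phase",
          "default_sequence", read_modify_write_seq::type_id::get());
      "r8_w8_r4_w4_seq":
        uvm_config_db#(uvm_object_wrapper)::set(this,
          "ubus_example_tb0.ubus0.masters[0].sequencer.main_phase",
          "default_sequence", r8_w8_r4_w4_seq::type_id::get());
      default:
        `uvm_fatal("SEQSEL", {"Unknown sequence name: ", seq_name})
    endcase
    super.main_phase(phase);
  endtask
endclass
```

Since the sequence is picked off the command line, the same compiled snapshot serves every restored run.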

As you can see, the changes are kept to a minimum. With this, the above generic framework is ready to be simulated. In VCS, the save/restore flow can be enabled as follows.
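As an illustration, the flow could look roughly like the following from the UCLI prompt (the checkpoint name, run time, and +SEQ= plusarg are made-up placeholders, and the exact command syntax should be verified against the VCS/UCLI documentation):

```
# First run: simulate through the expensive configuration/reset cycles,
# then checkpoint the simulation state
./simv -ucli
ucli% run 1000ns
ucli% save cfg_done
ucli% quit

# Later runs: restore the checkpoint and continue with a different sequence
./simv -ucli +SEQ=r8_w8_r4_w4_seq
ucli% restore cfg_done
ucli% run
```

Each restored run skips the configuration cycles entirely and spends its simulation time only on the new stimulus.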

Thus, the above strategy helps in optimal utilization of compute resources with simple changes in your verification flow. Hope this was useful, and that you manage to easily make the changes in your verification environment to adopt this flow and avoid redundant simulation cycles.

Posted in Automation, Coding Style, Configuration, Creating tests, Customization, Optimization/Performance, Organization, Reuse, Stimulus Generation, SystemVerilog, Tutorial, UVM, Uncategorized, Verification Planning & Management | 1 Comment »

SNUG-2012 Verification Round Up – Miscellaneous Topics

Posted by paragg on 29th March 2013

In my final installment of the series of blogs summing up the various SNUG verification papers of 2012, I try to cover the user papers on the Design IP/Verification IP and SystemC and SystemVerilog co-simulation. Please find my earlier blogs on the other domains here: System Verilog Language, Methodologies & VCS technologies

DesignWare core USB3.0 Controller (DWC_usb3) can be configured as a USB3.0 Device Controller. When verifying a system that comprises a DWC_usb3 Device Controller, the verification environment is responsible for bringing up the DWC_usb3 Device Controller to its proper operation mode to communicate with the USB3.0 Host. The paper Integrating DesignWare USB3.0 Device Controller In a UVM-based Testbench from Ning Guo of Paradigm Works describes the process of configuring and driving the DWC_usb3 Device Controller in a UVM based verification environment using the Discovery USB 3.0 Verification IP. This paper describes how the verification environment needs to be created so that it’s highly configurable and reusable.

The AMBA 4 ACE specification enables system-level cache coherency across clusters of multicore processors, such as the ARM Cortex-A15 and Cortex-A7 MPCore™ processors. This ensures optimum performance and power efficiency of complex SoC designs. However, the design complexity associated with these capabilities is also higher, and it throws up new verification challenges. In the paper, “Creating AMBA4 ACE Test Environment With Discovery VIP”, Whitney Huang and Sean Chou, MediaTek Inc, demonstrate how to tackle complex verification challenges and increase their verification productivity by using Synopsys Discovery AMBA ACE VIP.

The paper, “Verification Methodology of Dual NIC SOC Using VIPs” by A.V. Anil Kumar, Mrinal Sarmah, and Sunita Jain of Xilinx India Technology Services Pvt. Ltd, talks about how various features of Synopsys PCIe and Ethernet Verification IPs can be exploited to help in the efficient verification of the DUT across various traffic configurations. The paper explores how the VIP Application Programming Interfaces (APIs) can be leveraged in the test cases to reach high functional coverage numbers in a very short duration. They also show how a dual NIC verification environment can effectively use Ethernet VIP APIs to test various Media Access Control (MAC) features. Finally, they conclude with how the implementation can be used across future revisions of their design.

The ability to analyze the performance of the SoC at an early stage of the design can make a significant difference to the end product. It can lead to a more accurate and earlier estimate of the expected performance. Dayananda Yaraganalu Sadashivappa, Igal Mariasin, and Jayaprakash Naradasi of SanDisk India Device Design Centre Pvt. Ltd., in the paper “Generic MLM environment for SoC Performance Enhancement”, outline the solution that was found by using the Synopsys VIP models. The VIPs were used in conjunction with the interconnect, which in this case is a Multi-Layer-Matrix (MLM). The environment was built leveraging the VMM base classes. The VMM multiple-stream scenario (vmm_ms_scenario) base class was used to create the traffic across the matrix, and the performance meters were constructed using the base classes. Callbacks were leveraged appropriately to help in collating the statistics. Multiple knobs were used to make the environment generic and configurable. The approach helped in finding multiple performance bugs which could not have been easily found using conventional verification.

In the paper, “User Experience Verifying Ethernet IP Core”, Puneet Rattia of Altera Corporation presents his experience with verifying the Altera® 40-100Gbps Ethernet IP core using a VMM environment while integrating the Ethernet VIP from Synopsys. He explains how he created a full suite of system- and block-level regression tests and then goes on to show how he utilized the coverage mapping capabilities of VCS to merge the results across these various testbenches and produce meaningful reports. Besides showing how to reuse the verification infrastructure at the SoC level, the paper also demonstrates how they went in for horizontal reuse by integrating the reference SystemC-based models developed and prototyped in the early phase of the project.

UVM 1.x includes support for the communication interfaces defined by the SystemC TLM-2.0 standard. This enables integration of SystemC TLM-2.0 IP into a SystemVerilog UVM verification environment. Dr David Long, John Aynsley, and Doug Smith of Doulos, in the paper “A Beginner’s Guide to Using SystemC TLM-2.0 IP with UVM”, describe how this is done best. They point out that the connection between SystemC and SystemVerilog currently requires a tool-specific interface such as the Synopsys Transaction Level Interface (TLI). The paper begins with a brief overview of TLM-2.0 aimed at novice users. It then discusses the steps required to add a SystemC TLM-2.0 model into a SystemVerilog UVM environment and simulate it with VCS. At each step, issues that users will face are explored and suggestions made for practical fixes, showing the relevant pieces of code. Finally, the paper gives a summary of areas where the UVM implementation of TLM-2.0 differs from the SystemC standard and proposes workarounds to ensure correct communication between the SystemVerilog and SystemC domains.

There is an inherent need to enable the horizontal reuse of components created during the architecture and exploration stage. Subhra S Bandyopadhyay and Pavan N M, Intel Technology India Pvt. Ltd, in “Integrating SystemC OSCI TLM 2.0 Models to OVM based System Verilog Verification Environments”, talk about how their architecture team creates SystemC models for early performance analysis and accelerated software development. In the OVM-based verification environment, the objective was to reuse this model as a reference model, thus reducing the overall environment bring-up time. The challenge was not only to integrate the SystemC model in the OVM-based verification environment but also to be able to efficiently send transactions from SV to SystemC and vice versa. This paper explores the successful integration of SystemC TLM2 components in OVM-based verification environments and also highlights how the VCS TLI (Transaction Level Interface) adapters help TLM2.0 sockets in SystemC communicate with those in SV and vice versa.

Truly, I feel overwhelmed by the number of papers and the interesting use of technology across a variety of domains in which users share their experiences across the various SNUG conferences. As we speak, the SNUG events for 2013 have started, and the stage is all set for a new set of very informative and interesting sessions. I am sure most of you will be attending the SNUG conferences in your area. You can find the detailed schedule of those here.

Posted in Announcements, Automation, Callbacks, Coding Style, Communication, Reuse, Structural Components, SystemC/C/C++, SystemVerilog, Transaction Level Modeling (TLM), Tutorial, UVM, VMM | Comments Off

Build your own code generator!

Posted by Amit Sharma on 25th June 2012

One of the critical problems developers cope with during the SoC development process (architecture plan, RTL design, verification, software development) is the constant need to synchronize between different forms of the same data structure: the SoC registers database. The SoC registers database is used by the SoC architecture team (who write the SoC registers description document), design engineers (who implement the registers structure in RTL code), verification engineers (who write the verification infrastructure – such as RAL code – and verification tests – such as exhaustive read/write tests of all registers), and software engineers (who use the registers information to write embedded software).

Since the same information is used in different forms, it is only natural to have a single, central database which holds all the SoC registers data. You would ideally like to generate all the required output files (documentation, UVM or VMM Register Abstraction Model, RTL, C headers, …) from this central database. Different vendors and CAD teams provide different automation solutions for doing this.

The RALF specification contains all of the necessary information to generate RTL and testbench code that implements the specified registers. There are many ways to code and implement RTL, so it is not possible to provide a general-purpose solution. As far as the testbench abstraction model is concerned, there are multiple ways of customizing your model post generation in both UVM and VMM. Callbacks, factories, and configurable RAL model attributes are some of the ways through which the desired customization can be brought in. “The ‘user’ in RALF : get ralgen to generate ‘your’ code” highlights a very convenient way of bringing in SystemVerilog-compatible code which will be copied as-is into the RAL model in the desired scope. When it comes down to generating the RTL and the ‘C’ headers, we cannot leave the customization to such a late stage. Also, different organizations and project groups have their own RTL and C-code coding styles, which means a generated output of a very generic nature might not be very helpful. For RTL generation, engineers would want the generated code to be power- and gate-count-efficient. Similarly, generated C register headers often need to follow coding styles and match the CPU firmware API. How do we bring all these customizations to the end user?

Using the RALF C++ API, you have full access to the parsed RALF data (through C++ routines), which you can use to implement a customized RTL code generator, or any other feature that needs RALF data. So, you can use it to generate your C header files or HTML documentation, to translate the input RALF files to another register description format, or to create custom covergroups and coverage convergence sequences (DAC 2012 User Track poster 6U.8 — Register Verification on a Fast Lane: Using Automation to Converge on UVM REG Coverage Models).

I have seen two instances of the need to generate a different register specification in the recent past, and that is one of the reasons I decided to put this down in a blog. Let me talk about the first instance.

One of the project groups was in the process of migrating from their own SV base classes to UVM. They had their own register description format from which they used to generate their Register Abstraction Model. This was a proven flow.

So, when they migrated to UVM, they wanted to have a flow which would validate the changes that they were making.

Given that they were moving to using RALF and ‘ralgen’, they didn’t want to create the register specification in the legacy format anymore. So, they wanted some automation for generating the legacy format. How did they go about doing this? They took the RALF C++ APIs and, using them, were able to create the necessary automation to generate the legacy format from RALF in no time (from what I remember, it was half a day’s work). Everyone was involved in doing what they were best at, and that helped in the overall scheme of things.

The other customer had their own format from which they were generating RTL, firmware code and HTML. They had the necessary automation to create RALF for generating the UVM register model. They also had a mechanism in place to generate IPXACT from this format, as well as vice versa. So, to complete the traceability matrix, they wanted a RALF to IPXACT conversion. Again, the most logical approach was to take the RALF C++ APIs and get those to iterate through the parsed RALF data and generate IPXACT. Though this effort is not complete, it took just a day or so to be able to generate a valid IPXACT 1.5 schema, and all that is required now is some additional work to smoothen the corners.

How do you start using these APIs and build your own code/HTML generators? You need to include ‘ralf.hpp’ (which is in $VCS_HOME/include) in your ‘generator’ code. Then, to compile the code, you need to pick up the shared library libralf.so from the VCS installation:

$CPP_COMPILER $CFLAGS -I${VCS_HOME}/include -L${VCS_HOME}/lib -lralf your-file.cpp $LDFLAGS

#include <stdio.h>   /* header names were dropped in the original listing;     */
#include <stdlib.h>  /* stdio.h/stdlib.h are needed for fprintf() and exit()   */
#include "ralf.hpp"

int main(int argc, char *argv[])
{
    // Check basic command line usage...
    if (argc < 3) {
        fprintf(stderr, "Error: Wrong Usage.\n");
        // Show correct usage ...
        exit(1);
    }

    /**
     * Parse command line arguments to get the essential
     * constructor arguments. See documentation
     * of class ralf::vmm_ralf's constructor parameters.
     * (The simple assignments below are illustrative; real
     * option parsing would go here.)
     */
    const char *ralf_file = argv[1];
    const char *top       = argv[2];
    const char *rtl_dir   = ".";
    const char *inc_dir   = ".";
    {
        /**
         * Create a ralf::vmm_ralf object by passing in proper
         * constructor arguments.
         */
        ralf::vmm_ralf ralf_data(ralf_file, top, rtl_dir,
                                 inc_dir);
        /**
         * Get the top level object storing the parsed RALF
         * block/system data and traverse that, top-down, to get
         * access to complete RALF data.
         */
        const ralf::vmm_ralf_blk_or_sys_typ *top_lvl_blk_or_sys
            = ralf_data.getTopLevelBlockOrSys();
#ifndef GEN_RTL_IN_SNPS_STYLE
        /*
         * TODO - Traverse the parsed RALF data structure top-down
         * using/starting-from 'top_lvl_blk_or_sys' for getting
         * complete access to the RALF data and then do whatever
         * you would want to do with the parsed RALF data. One
         * typical usage of parsed RALF data could be to generate
         * RTL code in your own style.
         */
        //
        // TODO - Add your RTL generator code here.
        //
#else
        /*
         * As part of this library, Synopsys also provides a
         * default RTL generator, which can be invoked by
         * calling the 'generateRTL()' method of the
         * 'ralf::vmm_ralf' class, as demonstrated below.
         */
        ralf_data.generateRTL();
#endif
    }
    return 0;
}

Essentially, you have a handle to the parsed database, and with the available APIs you can do whatever you want with it :) The documentation of the APIs is shipped with the VCS installation. Also, if you are like me and would rather hack away at existing code than start from scratch, you can check with Synopsys support for existing templates that dump out code in a specific format, and start modifying those for your requirements.

Posted in Automation, Coverage, Metrics, Customization, Interoperability, Register Abstraction Model with RAL, Tools & 3rd Party interfaces, UVM, VMM infrastructure | Comments Off

Auto-Generation of Performance Charts with the VMM Performance Analyzer

Posted by Amit Sharma on 30th September 2011

Amit Sharma, Synopsys

Quoting from one of Janick’s earlier blogs on the VMM Performance Analyzer, Analyzing results of the Performance Analyzer with Excel:
“The VMM Performance Analyzer stores its performance-related data in a SQL database. SQL was chosen because it is an ANSI/ISO standard with a multiplicity of implementations, from powerful enterprise systems like Oracle, to open source versions like MySQL, to simple file-based ones like SQLite. SQL offers the power of a rich query and analysis language to generate the reports that are relevant to your application.”

And given that not everyone understands SQL, he goes on to show how one can get VMM Performance Analyzer data from a SQLite database into an Excel spreadsheet and then analyze the data by doing any additional computation and creating the appropriate graphs. This involves a set of steps leveraging the SQLite ODBC (Open Database Connectivity) driver and thus requires its installation.
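To give a flavor of the kind of query involved, the snippet below computes a per-initiator average latency in plain SQL (the table and column names here are invented for illustration; the actual schema depends on the data the Performance Analyzer dumps):

```sql
-- Average tenure latency per initiator, straight from the SQLite database
SELECT initiator_id,
       AVG(end_time - start_time) AS avg_latency
FROM   perf_tenure
GROUP  BY initiator_id
ORDER  BY avg_latency DESC;
```

A query like this is exactly what the automation described below issues on your behalf, so you never have to write it by hand.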

This article presents a mechanism in which Tcl scripts are used to bring in the next level of automation, so that users can retrieve the required data from the SQL DB and even automate the results analysis by auto-generating the relevant performance charts for statistical analysis. Also, as users migrate to DVE as a single platform for their design debug, coverage analysis, and verification planning, it is shown how these scripts can be integrated into DVE, so that the generation process is a matter of using the relevant menus and clicking the appropriate buttons in DVE.

For generating the SQL databases with the VMM Performance Analyzer, an SQLite installation is required, which can be obtained from www.sqlite.org. Once you have installed it, you need to set the SQLITE3_HOME environment variable to the path where it is installed. Once that is done, these are the steps you need to follow to generate the appropriate graphs from the data generated by your batch regression runs.

First, you need to download the utility from the link provided in the article DVE TCL Utility for Auto-Generation of Performance Charts

Once it is extracted, you can try it on the tl_bus example that ships with the utility. You would need to go to the directory vmm_perf_utility/tl_bus.

Use make to run the tl_bus, which will generate sql_data.db and sql_data.sql. Now, go to the ‘chart_utility’ directory:

(cd vmm_perf_utility/chart_utility/)

The TCL scripts which are involved in the automation of the performance charts are in this directory.

The script vmm_perf_utility/chart_utility/chart.tcl can then be executed from inside DVE as shown below.

dve_an

Once that is done, it will add a button “Generate Chart” in the View menu. BTW, adding a button is fairly simple:

eg:    gui_set_hotkey -menu "View->Generate Chart" -hot_key "G"

is how the button gets added..

Now, click on “Generate Chart” to select the SQL database.

dve_an2

This will bring up the dialog box to select the SQL database..

dve_an3

Once the appropriate database is selected, the user can select which table to work with and then generate the appropriate charts. The options provided to the user are based on the data dumped into the SQL database. From the combinations of charts shown, select the graph that you want to generate, and the required graphs will be generated for you. This is what you see when you use the SQL DB generated for the TL bus example:

dve_an4

dve_an5

Once you have made the selections, you will see the following chart generated:

dve_an6

Now, obviously, you as a user would not just want the graphs to be generated; you would also want the underlying values to be available to you.

Thus, once you use this chart generation mechanism, the relevant .csv files corresponding to the graphs that you have generated are also dumped for you.

These are generated in the perfReports directory, which is created as well. So, you can do any additional custom computation in Excel or by running your own scripts. To generate the graphs for any other example, you just need to pick up the appropriate SQL DB that was generated from your simulation runs and then generate the reports and graphs of your interest.

So whether you use the Performance Analyzer in VMM (Performance and statistical analysis from HDL simulations using the VMM Performance Analyzer) or in UVM (Using the VMM Performance Analyzer in a UVM Environment), and even while you are doing your own PA customizations (Performance appraisal time – Getting the analyzer to give more feedback), you can easily generate whatever charts you require, which will help you analyze all the different performance aspects of the design you are verifying.

Posted in Automation, Coverage, Metrics, Customization, Performance Analyzer, Tools & 3rd Party interfaces | Comments Off

Closed Loop Register Verification using IDesignSpec and the Register Abstraction Layer

Posted by Amit Sharma on 26th September 2011

Nitin Ahuja, Agnisys Technology Pvt. Ltd

In the previous article, titled “Automatic generation of Register Model for VMM using IDesignSpecTM”, we discussed how it is advantageous to use a register model generator such as IDesignSpecTM to automate the process of RALF model generation. Taking it forward, in this article we will discuss how to close the loop on register verification.

Various forms of coverage are used to ensure that registers are functioning properly. There are three coverage models in VMM. They are:

1. reg_bits coverage: this model is used to make sure that all the bits in the register are covered. This model works by writing and reading both 1 and 0 on every register bit, hence the name. This is specified using “cover +b” in the RALF model.

2. field_vals coverage: the field value coverage model is implemented at the register level and supports value coverage of all fields and cross coverage between fields and other cross coverage points within the same register. This is specified using “cover +f” in the RALF model. The user can specify the cross coverage depending on the functionality.

3. Address map: this coverage model is implemented at block level and ensures that all registers and the memories in the block have been read from and written to. This is specified using “cover +a” in the RALF model.
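To make this concrete, here is a small hand-written sketch of how these switches sit in a RALF description (the block, register, and field names are invented, and the exact RALF syntax should be checked against the ralgen documentation):

```
block my_blk {
  bytes 4;
  register ctrl_reg @'h00 {
    bytes 4;
    cover +b+f;
    field mode   { bits 2; access rw; }
    field enable { bits 1; access rw; }
  }
  cover +a;
}
```

Here reg_bits and field_vals coverage are requested on the register, while address map coverage is requested at the block level, mirroring the three models described above.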

We will discuss how coverage can be switched on/off and how the type of coverage can be controlled for each field directly from the register specification.

Once the RALF model is generated, the next step in verification is to generate the RTL and the SystemVerilog RAL model using ‘ralgen’. The generated RAL model along with the RTL can be compiled and simulated in the VMM environment to generate the coverage database. This database is used for the report generation and analysis.

Reports can be generated using IDesignSpecTM (IDS). IDS-generated reports have an advantage over other reports in that they present the coverage in a much more concise way, showing all of it at a glance.

Turning Coverage ON or OFF

IDesignSpecTM enables the users to turn ON/OFF all the three types of coverage from within the MS Word specification itself.

Coverage can be specified and controlled using the “coverage” property in IDesignSpecTM which has the following possible values:

image

The hierarchical “coverage” property enables users to control the coverage of a whole block or at the chip level.

Here is a sample of how coverage can be specified in IDesignSpecTM:

image

image

This would be the corresponding RALF file:

agnisys_ralf

image

The coverage bins for each CoverPoint along with the cross for the various CoverPoints can also be defined in the specification as shown below:

image

This would translate to the following RALF:

image

Now, the next step after RALF generation would be to generate the RAL Model from the IDS generated RALF.

RAL MODEL AND RTL GENERATION FROM RALF:

The IDS generated RALF can be used with the Synopsys ‘ralgen’ to generate the RAL  (VMM or UVM) model as well as the RTL.

RAL model can be generated by using the following command:

image
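From memory, the invocation has roughly the following shape (treat the switches as assumptions to be verified against the ralgen documentation; my_blk and my_blk.ralf are placeholders):

```
# VMM RAL model, with the three coverage models enabled
ralgen -l sv -c b -c f -c a -t my_blk my_blk.ralf
```

The -c switches correspond to the reg_bits, field_vals, and address map coverage models discussed earlier.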

If you specify -uvm in the first ralgen invocation above, a UVM register model is generated.

COMPILATION AND REPORT GENERATION:

Once the RTL and the RAL model are generated using the ‘ralgen’, the complete model can be compiled and simulated in the VMM environment using VCS.

To compile the model use the following command on the command line:

vcs -R +plusarg_save -sverilog -o "simv1" -ntb_opts rvm+dtm +incdir+<directories to search `defines> <files to be compiled> +define+RAL_COVERAGE

The compilation and simulation generates the simulation database which is used for the generation of the coverage reports.

Coverage reports can be generated in various forms, but the most concise form is graphical, showing all the coverage at a glance. For this, a Tcl script, “ivs_simif.tcl”, takes the simulation database and generates a text-based report on execution of the following command:

% ivs_simif.tcl -in simv.vdb -svg

Before running the above command, set the environment variable “IDS_SIM_DIR”; the text reports are generated at this location. This also tells IDS where to look for the simulation data file.

A detailed graphical view of the report can be generated from IDS with the help of this text report. To generate the graphical report in the form of “scalable vector graphics” (SVG) select the “SVG” output from the IDS config and regenerate.

Another way of generating the SVG is to use the IDS-XML or the Doc/Docx specification of the model as the input to IDS in batch mode, generating the graphical report of the simulation with the following command:

% idsbatch <IDS_generated_XML or doc/docx specification> -out “svg” -dir output_directory

Coverage Reports

IDesignSpec generates two types of reports from the input database.

They are:

1. Field_vals report

2. Reg_bits report

Field_vals report:

Field_vals report gives the graphical view of the field_vals coverage and the address coverage of the various registers and their respective fields.

The amount of coverage for a field (CoverPoint) is depicted by the level of green color in the field, while that for the complete register (CoverGroup) is shown by the color of the name of the register.

The address coverage for the individual register (CoverPoint) is shown by the color of the address of the register (green if addressed; black if not addressed), while that of the entire block (CoverGroup) is shown by the color of the name of the block.

The coloring scheme for all the CoverGroups, i.e. the register name in the case of field_vals coverage and the block name in the case of address coverage, is:

1. If the overall coverage is greater than or equal to 80% then the name appears in GREEN color

2. If the coverage is greater than 70% but less than 80% then it appears in YELLOW

3. For coverage less than 70% name appears in RED color

Figure 1 shows the field_vals and address coverage.

image

Figure:  Closed loop register verification using RALF and IDS

The above sample gives the following coverage information:

a. 2 registers, T and resetvalue, are not addressed out of a total of 9 registers. Thus the overall coverage of the block falls in the range >70% & <80%, which is depicted by the color of the Stopwatch (the name of the block).

b. All the fields of the registers are filled with some amount of green color, which shows the amount of coverage. As an example, field T1 of register arr is covered 100%, thus it is completely filled, while FLD4 of register X is covered only about 10%. The exact value of coverage can be obtained by hovering over the field to get the tooltip showing the exact coverage value.

c. The color of the name of the register (for example, X is red) shows the overall coverage of the whole register, which is less than 70% for X.

Reg_bits report:

Reg_bits report gives the detailed graphical view of the reg_bits coverage and address coverage.

Address coverage for reg_bits is shown in the same way as the address coverage in field_vals. Reg_bits coverage has 4 components, that is:

1. Written as 1

2. Read as 1

3. Written as 0

4. Read as 0

Each of the 4 components is allocated a specific region inside a bit. If that component of the coverage is hit then the corresponding region is shown as green else it is blank. The overall coverage of the entire register is shown by the color of the name of the register as in the case of the field_vals.

image

The above sample report shows that there is no issue in “Read as 1” for the ‘resetvalue’ register, while the other types of read/write have not been hit completely.

Thus, in this article we described what the various coverage models for a register are and how to generate the RALF coverage model of the registers automatically with minimum effort. An intuitive visualization of the register coverage data eases the effort involved in deciphering coverage reports from lengthy simulation log files. This type of closed-loop register verification ensures better coverage and high-quality results in less time. Hope you found this useful. Do share your feedback with me, and also let me know if you want any additional details to get the maximum benefits from this flow.

Posted in Automation, Coverage, Metrics, Organization, Register Abstraction Model with RAL, Tools & 3rd Party interfaces, Verification Planning & Management | 1 Comment »

Automatic generation of Register Model for VMM using IDesignSpec

Posted by Amit Sharma on 5th August 2011

Nitin Ahuja, Verification Engineer, Agnisys Technology Pvt Ltd

Generating a register model by hand can take up a lot of time in the design process and may introduce serious bugs. Generating the register model with a register model generator such as IDesignSpecTM reduces the coding effort and produces better code by avoiding those bugs in the first place, making the process more efficient and significantly reducing time to market.

A register model generator is efficient in the following ways:

1. Error-free code from the start: being automatically generated, the register model code is free from typical human and logical errors.

2. When the register model specification changes, it is easy to modify the spec and regenerate the code in no time.

3. All kinds of hardware, software and industry-standard specifications, as well as verification code, are generated from a single source specification.

IDesignSpecTM (IDS) is capable of generating the RTL as well as verification code such as VMM (RALF) from a register specification defined in Word, Excel, OpenOffice or IDS-XML.

Getting Started

A simple register can be defined inside a block in IDesignSpecTM as:

The above specification is translated into the following RALF code by IDS.

image

By convention, ralgen generates backdoor access for every register whose hdl_path is mentioned in the RALF file. Special properties on a register, such as hdl_path and coverage, can therefore be specified inside the IDS specification itself and are translated appropriately into the RALF file.
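For illustration, here is a hand-written sketch of a RALF fragment carrying hdl_path information; the parenthesized names next to the block, register and field are the hdl_path values, and all names here are hypothetical:

```
block my_blk (my_blk_rtl) @'h0 {
  bytes 4;
  register ctrl (u_regs.ctrl_q) @'h0 {
    bytes 4;
    field enable (enable_q) @0 {
      bits 1;
      access rw;
      reset 'h0;
    }
  }
}
```

ralgen uses these hdl_path values to generate backdoor access routines for the registers concerned.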

The properties can be defined as below:

For Block:

image

As with the block, hdl_path, coverage or any other such property can be specified for other IDS elements, such as a register or a field.

For register/field:

clip_image002OR

clip_image004

Note: The coverage property can take the following three possible values:

1. ON/on: This enables all the coverage types, i.e. address coverage for a block or memory, and REG_BITS and FIELD_VALS coverage for registers and fields.

2. OFF/off: By default all coverage is off. This option is meaningful only when coverage has been turned ON at a higher level of the hierarchy (i.e. in a parent); specifying ‘coverage=off’ on a particular register or field then turns coverage off for that child, the inverse of its parent’s setting.

3. abf: Any combination of these three characters can be used to turn ON a particular type of coverage. The characters stand for:

· a : address coverage

· b : reg_bits

· f : field_vals

For example, to turn on the reg_bits and field_vals coverage, specify “coverage=bf”.

In addition to these properties, a few more can be specified in a similar way. Some of them are:

1. bins: various bins for the coverpoints can be specified using this property.

Syntax: {bins=”bin_name = {bin} ; <bin_name>…”}

2. constraint: constraints can be specified for a register, a field or any other element.

Syntax: {constraint=”constraint_name {constraint};<constraint_name> …”}

3. [vmm]<code>[/vmm]: this tag gives users the ability to embed their own SystemVerilog code in any element.

4. cross: a cross of the registers’ coverpoints can be specified using the cross property:

Syntax: {cross = “coverpoint_1 <{label label_name}>;< coverpoint_2>…”}

Different types of registers in IDS:

1. Register Arrays:

Register arrays in RALF can be defined in IDS using register groups. To define a register array of size ‘n’, place a register inside a register group with the repeat count equal to the size of the array (n).

For example, a register array named “reg_array” with size 10 can be defined in IDS as follows:

clip_image006

The above specification is translated into the following VMM code by IDS:

clip_image008

2. Memory:

Similar to register arrays, memories can also be defined in IDS using register groups. The only difference between a memory definition and a register array definition is that for a memory the ‘external’ property is set to “true”. The size of the memory is calculated as ((End_Address – Start_Address) * Repeat_Count).

As an example, a memory named “RAM” can be defined in IDS as follows:

clip_image010

The above memory specification is translated into the following VMM code:

clip_image012
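Since the images may not render in every archive copy, here is a rough, hand-written sketch of what a RALF memory declaration can look like; the exact keywords depend on your RALF version, and the names, offset and sizes are hypothetical:

```
block my_blk {
  bytes 4;
  memory RAM @'h0 {
    size 1024;
    bits 32;
    access rw;
  }
}
```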

3. Regfile:

A regfile in RALF can be specified in IDS using a register group containing multiple registers (more than one).

One such regfile, with 3 registers repeated 16 times, is shown below:

clip_image014

Following is the IDS-generated VMM code for the above regfile:

clip_image016

RAL MODEL AND RTL GENERATION FROM RALF:

The IDS-generated RALF can be used with Synopsys ralgen to generate the RAL model as well as the RTL.

To generate the RAL model use the following command:

clip_image018
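The exact command is in the image above; as a hedged sketch (switch names can vary across ralgen versions), a typical invocation to produce a RAL model from a RALF file looks something like the following, where my_blk is the top-level block name in a hypothetical my_blk.ralf:

```
% ralgen -l sv -t my_blk my_blk.ralf
```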

And for the RTL generation use the following command:

clip_image020

SUMMARY:

It is beneficial to generate the RALF using the register model generator IDesignSpecTM: it guarantees bug-free code and reduces time and effort, and when the register model specification changes, it enables users to regenerate the code in no time.

Note:

We will extend this automation further in the next article where we will cover details about how you can “close the loop” on register verification. The “Closed Loop Register Verification” article will be available on VMM Central soon. Meanwhile if you have any questions/comments you can reach me at nitin[at]agnisys[dot]com .

Posted in Automation, Organization, Register Abstraction Model with RAL, Tools & 3rd Party interfaces, VMM infrastructure | 1 Comment »

The One stop shop: get done with everything you need to do with your registers

Posted by Amit Sharma on 14th July 2011

Ballori Bannerjee, Design Engineer, LSI India

Processes are created, refined and improved upon. The resulting productivity gain starts with a big leap and subsequently slows down, and as the complexity of tasks increases, the existing processes can no longer scale up. This drives the next paradigm shift: moving towards new processes and automation. As in all realms of technology, this is true in the context of the register development and validation flow as well. So, let’s look at how we changed our process to get the boost in productivity that we wanted.

The following flowchart represents our legacy register design and validation process. This was a closed process and served us well initially, when the number of registers, their properties, etc. were limited. However, with the complex chips we are designing and validating today, does this scale up?

register_verif

As an example, in a module that we are implementing, there are four thousand registers. Translating into number of fields, for 4000 32-bit registers we have 128,000 fields, with different hardware and software properties!

Coding the RTL with address decoding for 4000 registers, with fields having different properties is a week’s effort by a designer. Developing a re-usable randomized verification environment with tests like reset value check, read-write is another 2 weeks, at the least. Closure on bugs requires several feedbacks from verification to update design or document. So overall, there is at least a month’s effort plus maintenance overhead anytime the address mapping is modified or a register updated/added.

This flow is susceptible to errors where there could be a disconnect between the document, design, verification and software.

So, what do we do? We redefine the process! And this is what I will be talking about, our automated register design and verification (DV) flow which streamlines this process.

AUTOMATED REGISTER DESIGN AND VERIFICATION FLOW

The flow starts with the designer modeling the registers using a high-level register description language. In our case, we use SystemRDL, and then leverage third-party tools to generate the various downstream components from the RDL file:

· RTL in Verilog/VHDL

· C/C++ code for firmware

· Documentation ( different formats)

· High level verification environment code (HVL) in VMM

This is shown in the figure below. The RDL file serves as a one-stop point for any register update required following a requirement change.

register2

Automated Register DV Flow

Given that it is critical to create an efficient object-oriented abstraction layer to model registers and memories in a design under test, we exploit VMM RAL for this purpose. How do we generate the VMM RAL model? It is generated from RALF. Many third-party tools are available to generate RALF from various input formats, and we use one of them to generate RALF from SystemRDL.

Thus, a complete VMM compliant randomized, coverage driven register verification environment can be created by extending the flow such that:

i. Using a 3rd-party tool, RALF – Synopsys’ Register Abstraction Layer File – is generated from the SystemRDL description.

ii. The RALF is passed through ralgen, a Synopsys utility which converts the RALF information into a complete VMM-based register verification environment. This includes automatic generation of pre-defined tests such as reset value checks and bit bash tests, plus a complete functional coverage model, which would otherwise have taken considerable staff-days of effort to write.

The flowchart below elucidates the process.

register3

Adopting the automated flow, it took 2 days to write the RDL; the rest of the components were generated from this source. A small amount of manual effort may be required for items like back-door path definition, but it is minimal and a one-time effort. The overall benefits are much more than the number of staff-days saved, and we see this as something which gives us perpetual returns. I am sure a lot of you are already bringing some amount of automation into your register design and verification setup, and if you aren’t, it’s time you did!

While we are talking about abstraction and automation, let’s look at another aspect of register verification.

Multiple Interfaces/Views for a register

In today’s complex SoC designs, it is possible to have registers which need to be connected to two or more different buses and accessed differently. The register address may be different for each of the physical interfaces it is shared between. So, how do we model this?

This can be defined in SystemRDL by using a parent addressmap with bridge property, which contains sub addressmaps representing the different views.

For example:

addrmap dma_blk_bridge {
  bridge; // top-level address map
  reg commoncontrol_reg {
    shared; // register will be shared by multiple address maps
    field {
      hw=rw;
      sw=rw;
      reset=32'h0;
    } f1[32];
  };

  addrmap { // define the map for the AHB side of the bridge
    commoncontrol_reg cmn_ctl_ahb @0x0; // at address=0
  } ahb;

  addrmap { // define the map for the AXI side of the bridge
    commoncontrol_reg cmn_ctl_axi @0x40; // at address=0x40
  } axi;
};

The equivalent of the multiple-view addressmap in RALF is the domain.

This allows one definition of the shared register while allowing access to it from each domain, where the register address associated with each domain may be different. The following code is the RALF, with a domain implementation, for the above RDL.

register commoncontrol_reg {
  shared;
  field f1 {
    bits 32;
    access rw;
    reset 'h0;
  }
}

block dma_blk_bridge {
  domain ahb {
    bytes 4;
    register commoncontrol_reg = cmn_ctl_ahb @'h00;
  }

  domain axi {
    bytes 4;
    register commoncontrol_reg = cmn_ctl_axi @'h40;
  }
}

Each physical interface is a domain in RALF. Only blocks and systems have domains; registers live inside the block. To access a register from one interface/domain, RAL provides read/write methods which can be called with the domain name as an argument, as shown below.

ral_model.STATUS.write(status, data, "pci");

ral_model.STATUS.read(status, data, "ahb");

This considerably simplifies the verification environment code for shared register accesses. For more on this, see: Shared Register Access in RAL through multiple physical interfaces

Unfortunately, in our case the tools we used did not support multiple interfaces, and the automated flow created a RALF with effectively two or more top-level systems re-defining the registers. This can blow up the RALF file size as well as the verification environment code.

system dma_blk_bridge {
  bytes 4;
  block ahb (ahb) @0x0 {
    bytes 4;
    register cmn_ctl_ahb @0x0 {
      bytes 4;
      field cmn_ctl_ahb_f1 (cmn_ctl_ahb_f1) @0 {
        bits 32;
        access rw;
        reset 0x0;
      }
    }
  }

  block axi (axi) @0x0 {
    bytes 4;
    register cmn_ctl_axi @0x40 {
      bytes 4;
      field cmn_ctl_axi_f1 (cmn_ctl_axi_f1) @0 {
        bits 32;
        access rw;
        reset 0x0;
      }
    }
  }
}

Thus, as seen above, the tool generates two blocks, ‘ahb’ and ‘axi’, re-defining the register in each block. With multiple shared registers, the resulting verification code will be much bigger than if domains had been used.

Also, without the domain-aware read/write methods shown above, accessing a shared register from a given domain/interface takes at least a few extra lines of code per register. This makes writing test scenarios complicated and wordy.

Using ‘domain’ in RALF and VMM RAL makes shared-register implementation and access in the verification environment easy. We hope to have our automated flow leverage this capability effectively soon.

If you are interested to go through more details about our automation setup and register verification experiences, you might want to look at: http://www10.edacafe.com/link/DVCon-2011-Automated-approach-Register-Design-Verification-complex-SOC/34568/view.html

Posted in Automation, Modeling, Register Abstraction Model with RAL, Tools & 3rd Party interfaces | 4 Comments »

Building & Configuring Coverage Model – VMM Style – Part-III

Posted by paragg on 25th June 2011

Parag Goel, Senior Corporate Application Engineer, Synopsys

In the final blog of this coverage modeling with VMM series, we focus on error coverage. Negative scenario testing is an integral part of verification, but again we face the question: have I covered all negative scenarios?

So it is important to ensure that the generic coverage model tracks all the error scenarios.

Let’s see how a specific mechanism provided in VMM, in the form of vmm_log_catcher, helps to track error coverage efficiently and effectively. The VMM log catcher is able to identify/catch a specific string in any type of message issued through the VMM reporting mechanism.

Typically, the verification environment issues messages to STDOUT when the DUT responds to an error scenario. These messages can be ‘caught’ by the log catcher to update the appropriate coverage groups. Let’s see how this is done in detail.

The verification environment responds to each negative scenario by issuing a message with a unique text, specific to that error.

In the context of the AXI framework, we can introduce a wide range of error scenarios and test whether the DUT responds correctly. A few possible AXI error scenarios are listed below for your reference.

clip_image001

However, not all scenarios are always applicable, hence configurability is required to enable only the required set of coverpoints tied to the relevant negative scenarios. Thus, we should have similar configurability for error coverage as discussed in the earlier blogs.

Let’s see how we can catch the relevant responses and sample the appropriate covergroups.

As mentioned earlier, in the example below, we make use of the unique message issued as a result of a negative scenario.

This is how we use the VMM Log catcher.

1. The error coverage class is extended from vmm_log_catcher – a VMM base class.

2. The vmm_log_catcher::caught() API is utilized as a means to qualify the covergroup sampling.

clip_image001[11]

In the code above, whenever a message with the text “AXI_WRITE_RESPONSE_SLVERR” is issued from anywhere in the verification environment, the ‘caught’ method is invoked, which in turn samples the appropriate covergroup. Additionally, you can specify more parameters in the catching API to restrict which ‘scenarios’ should be caught.
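The actual code is in the image above; as a hand-written approximation (the exact vmm_log_catcher API differs slightly across VMM versions, and the class, covergroup and variable names here are assumptions), such a catcher could be sketched as:

```systemverilog
// Hedged sketch, not the original code from the image above.
// Check your VMM version for the exact signature of
// vmm_log_catcher::caught() and the pass-through mechanism.
class axi_err_coverage extends vmm_log_catcher;

   bit slverr_seen; // sampled variable for the negative scenario

   covergroup slverr_cg;
      cp_slverr: coverpoint slverr_seen { bins hit = {1}; }
   endgroup

   function new();
      slverr_cg = new();
   endfunction

   // Invoked for every message this catcher is registered for
   virtual function void caught(vmm_log_msg msg);
      slverr_seen = 1'b1;
      slverr_cg.sample();     // update the error covergroup
      this.throw(msg);        // re-issue so the message still reaches the log
   endfunction

endclass
```

The catcher is then installed on the environment’s message service, restricted to the text “AXI_WRITE_RESPONSE_SLVERR” via the arguments of the catching API.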

vmm_log_catcher::caught(

string name = “”,

string inst = “”,

bit recurse = 0,

int typs = ALL_TYPS,

int severity = ALL_SEVS,

string text = “”);

The above API installs the specified message handler to catch any message of the specified type and severity, issued by the message service interface instances specified by the name and instance arguments, and containing the specified text. By default, this method catches all messages issued by the message service interface instance.

Hope this set of articles is relevant and useful to you. I have attempted to leverage some of the built-in capabilities of the SystemVerilog language and the VMM base classes to address some of the challenges in creating configurable coverage models. These techniques can be improved further to make them more efficient and scalable. I would love to hear any inputs you have in this area.

Posted in Automation, Configuration, Coverage, Metrics, Reuse, Structural Components, VMM, VMM infrastructure | Comments Off

Cool Things You Can Do With DVE – Part 4

Posted by Yaron Ilani on 26th April 2011

Yaron Ilani, Apps. Consultant, Synopsys

If you liked part 2 where I explained how Interactive Rewind could save you precious time during debug, then here’s another one for you. Obviously one of the most powerful methods of debugging interactively is by adding breakpoints at interesting points. You could have a single breakpoint as a starting point and then go on step by step. But in most cases it would be wiser to put multiple breakpoints in your code so that you could have more control over your simulation or even jump from one interesting point to another (remember you can always go backwards in time).

So the process of adding breakpoints and refining them might take some time and ideally you wouldn’t want to repeat that process all over again when you start a new interactive debug session. Wouldn’t it be nice to be able to save your breakpoints so that you or someone else from your team could reuse them in a different simulation? Well, DVE lets you do that! Simply launch the Breakpoints window from the Simulator menu:

In the example above I’ve added 3 breakpoints. In the source code window they are marked in red, but they are also listed in the Breakpoints window where each breakpoint can be enabled or disabled individually. In the bottom left you can see the “Save” button. Clicking on it will save all your breakpoints to a TCL file. You may use this file later on in any other DVE session by clicking on the “Load” button.

Once your test bench code is more or less stable, with this new feature you can actually create a number of useful breakpoints files (a breakpoints library if you will…). Each breakpoints file could be designed to help debugging a different part of your test bench. Or if you’re debugging some unfamiliar verification IP, you can create a breakpoints file and send it to its owner for help.

Happy debugging!

Check out the previous parts of this series to learn more about more cool features available today in DVE.

Posted in Automation, Debug | 4 Comments »

Cool Things You Can Do With DVE – Part 3

Posted by Yaron Ilani on 13th April 2011

Yaron Ilani, Apps. Consultant, Synopsys

If you missed part 1 or part 2 of this series don’t worry, you can go on reading and catch up with the previous parts later on. Today I’m going to show you a small, yet very powerful feature in DVE that you may not be aware of.

Remember the last time you had to count clock cycles in the waveform window? Sometimes this is a quick way to verify that an internal counter behaves correctly or that a signal goes up just at the right clock edge. Remember how frustrating it is when you lose count for some reason and have to start over? Remember how you’re never 100% sure about the result even if you calculated the time difference between the left and right cursors and divided by the clock period? If your answer was yes to any of those questions then you’re going to love the Grid feature. What Grid simply does, as its name suggests, is draw a grid on top of the waveform. Here’s what it looks like:

If you click on the Grid button (in the red circle) the Grid will show up as dotted lines. As you can see, the Grid can count clock cycles for you. You can set it up to count falling edges, rising edges or any edge. You can also set the range either by entering the start time and end time, or simply by placing the cursors at the desired points and clicking on the “Get Time From C1/C2” button in the Grid Properties window:

The Grid Properties window lets you have even more control over the grid. For example – you can set its cycle time to a custom value. This could be very useful if you want to be able to visually inspect drifting clocks or duty-cycle issues, etc.

In short, the Grid is one of those little things that make a big difference when it comes to efficient debugging and you should definitely become familiar with it. If you ever have to count clock cycles again, remember that you no longer have to do this manually. You don’t even have to make any calculations. Simply launch the Grid and voila!

If you’d like to learn more about DVE’s advanced debug features check out part 1 and part 2 of this series.

Posted in Automation, Debug | 3 Comments »

Cool Things You Can Do With DVE – Part 2

Posted by Yaron Ilani on 7th April 2011

Yaron Ilani, Apps. Consultant, Synopsys

In Part 1 of this series we discussed how SystemVerilog macros might add complexity when it comes to debugging your test bench and how DVE can make your life much easier in that area. Today we’re going to show you another cool feature in DVE that if used wisely, could save you a significant amount of time when debugging. Let’s recall for a moment the two main use models of DVE – Post Processing and Interactive. The former is where you’re debugging your simulation results after it has completed. The latter is where you’re running and debugging your simulation simultaneously, trading off performance for enhanced debug capabilities. Today we shall focus on the interactive mode. We’re about to see how the Interactive Rewind feature will help you minimize your debug turnaround time.

So during a typical interactive session you put a breakpoint somewhere in your code and let the simulation run until it reaches your breakpoint. From that point on you take control and advance the simulation step by step, which allows you to inspect your signals or variables very closely. You may assign different values to signals along the way to try out potential workarounds or fixes. Now here’s the tricky part: simulation can only advance forward! So if your breakpoint occurs late in the simulation, every time you want to restart your debug trail you’re bound to wait for the simulation to rerun from the beginning. Why would you want to start over? Well, you might want to change a signal value (remember you can force signals interactively in DVE). But more often than not, stepping through your code gets very complicated and you might miss the interesting point where the bug occurs, or you lose track of the debug trail and need to start fresh. In certain cases your breakpoints occur periodically (e.g. every time a packet is transmitted) and you just wish you could go back in time to the previous occurrence to step through the code again.

The good news is that DVE allows you to navigate backwards in simulation! We call it Interactive Rewind. All you have to do is set up one or more checkpoints along the simulation. At each checkpoint a snapshot of the simulation is saved, and you may use it later to literally go back in time. To give you a sense of how easy it is to work with checkpoints, here’s what it looks like – the left button adds a new checkpoint, and the right button rewinds to any of the previously added checkpoints.

Selecting a checkpoint to rewind to is easy: simply select one from the drop-down menu:

You can also control your checkpoints via UCLI, where you’ll find many advanced features such as the ability to add periodic checkpoints automatically.
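From the UCLI prompt, the checkpoint flow can be sketched roughly as follows (a hedged illustration; exact command spellings and options may vary by VCS version):

```
ucli% run -relative 100ns    ;# advance the simulation
ucli% save cp_100ns          ;# take a checkpoint (snapshot) here
ucli% run -relative 500ns    ;# continue debugging further on
ucli% restore cp_100ns       ;# rewind to the saved checkpoint
```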

To sum up, interactive debugging with DVE becomes much more efficient with Interactive Rewind. Simply add checkpoints at strategic points along the debug trail. Then use Step/Next to advance simulation time and Rewind to go back. This will keep your debug turnaround time to minimum, thus enabling you to focus on debugging and not waiting.

Posted in Automation, Debug | 5 Comments »

Using VMM template Generator to ramp up your testbench development

Posted by Amit Sharma on 25th October 2010

Amit Sharma, Synopsys
‘vmmgen’, the template generator for creating robust, extensible VMM-compliant environments, has been available for a long time with VMM and was upgraded significantly with VMM 1.2. Though the primary purpose of ‘vmmgen’ is to minimize the VIP and environment development cycle by providing detailed templates for developing VMM-compliant verification environments, a lot of folks also use it to quickly understand how the different VMM base classes can be used in different contexts. This works because the templates use a rich set of the latest VMM features, ensuring the appropriate base classes and their features are picked up optimally.

Given its extensive user interaction mechanism, which presents the available features and options, the user can pick the modes most relevant to his or her requirements. It also provides the option to supply your own templates, giving a rich layer of customization. Based on the need, one can generate individual templates for different verification components, or generate a complete verification environment which comes with a ‘Makefile’ and an intuitive directory structure, thus propelling you on your way to catching the first set of bugs in your DUT. I am sure all of you know where to pick up ‘vmmgen’ from: it is available in the <VMM_HOME>/Shared/bin area or in $VCS_HOME/bin.

Some of the rich set of features available now includes:

• Template Options:

– Complete environment generation

– Individual templates generation

• Options to create Explicitly phased environments or Implicitly phased environment or to mix Implicitly phased components and Explicitly phased components

• Usage of VMM Shorthand macros

• Creating RAL based environment, and providing Multiplexed domain support if required

• Hooking up VMM Performance Analyzer at the appropriate interfaces

• Hooking up the DS Scoreboard at the relevant interfaces (with options to chose from a range of integration options, e.g. : through callbacks, through TLM2.0 analysis ports, connect directly through to transactors, channels or notifications)

• Ability to hook up different generators (atomic, scenario, Multistream generators) at different interfaces

• Creating a scenario library and Multistream scenario creation

• Multi-driver generator support for different kinds of transactions in the same environment

• Factory support for transactions, scenarios and multi stream scenarios. Sample factory testcase which can explain the usage of transaction override from a testcase.

• ‘RTL config’ support for drivers and receivers.

• Various types of unidirectional and bi-directional TLM connections between generator and driver.

• Analysis ports/exports OR parameterized notify observers to broadcast the information from monitor to scoreboard and coverage collectors.

• Multi test concatenation support and management to run the tests

• Creating portable Interface wrapper object, and setting up interface connections to the testbench components using vmm_opts::set_object/get_object_obj

• Creating a Generic slave component

• Option to use default names or user provided names for different components

As you can see, the above list is quite comprehensive, and it is still not exhaustive – there are many more features in vmmgen.

With respect to usage as well, there are multiple flavors. In the default mode, the user is taken through multiple detailed choices/options while creating/connecting the different components of the verification environment. However, some folks might want to use ‘vmmgen’ within their own wrapper script/environment, and for them there are options to generate the environments by providing all required options on the command line or through a configuration file. Some of these switches include:

-SE [y/n] : Generates a complete environment with sub-environments

-RAL [y/n] : Create RAL based verification environments

-RTL [y/n] : Generates RTL configuration for the environment

-ENV <name>, -TR <name> : Provide the names for the environment class and the transaction classes. Multiple transaction class names can be provided as well:

vmmgen –l sv –TR tr1+tr2

-cfg_file <file_name> : Option to provide a configuration file for the options

There is an option to generate an environment quickly by taking the user through the minimum number of questions (-q).

Additionally, the user can provide his or her own templates through the –L <template directory> option.
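Putting a few of the switches listed above together, a non-interactive invocation might look like the following; the environment and transaction class names are hypothetical:

```
% vmmgen -l sv -SE y -RAL y -ENV my_env -TR pkt_tr+cfg_tr
```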

As far as individual template generation goes, you have the complete list. Here, I am outlining this down for reference:

image

I am sure a lot of you have already been using ‘vmmgen’. For those, who haven’t, I encourage you to try out the different options with it. I am sure you will find this immensely useful and it will not only help you create verification components and environments quickly but will also make sure they are optimal and appropriate based on your requirements.

Posted in Automation, Coding Style, Customization, Modeling Transactions, Organization, Performance Analyzer, Scoreboarding, Tools & 3rd Party interfaces, VMM infrastructure | Comments Off

Sharing RTL Configuration with the Testbench

Posted by JL Gray on 3rd May 2010

by Jason Sprott, CTO, Verilab

Oftentimes we find ourselves with configurable RTL to verify. The amount of configuration can vary from a few bus-width parameters, to a configurable IP block with optional features and performance-related control parameters, to a whole chip with optional interfaces. This can make our life as verification engineers that bit more complicated. We not only have to verify the multitude of scenarios for a specific implementation, we have to somehow handle building and running a testbench for the various RTL configurations. Especially in the case of standalone IP, it’s often the case that different configurations are included in our verification space.

Configurations may affect the physical interface between the testbench and RTL, for example:
•    Bus widths
•    Number of interrupts
•    Number of instances of external interfaces such as USB or Ethernet

Or, the configurations might control internal behavior, which doesn’t affect the physical interface, but does affect the way the test bench has to interact with the DUT functionally. Examples might include:
•    FIFO Depth – which may affect data pattern generation to hit corner cases, or performance related water marks
•    QoS algorithms – where different algorithm implementations are selected depending on requirements, which could affect traffic generation, checking, or functional coverage

The VMM has a new RTL configuration feature which can make life a bit easier. RTL configurations can now be encapsulated in a testbench object that can be used to generate an output file. This output file can be shared between the RTL and testbench. The steps in the process go something like this:

STEP 1: Compile the RTL (with some default configuration) and the testbench. This step is needed only to run the configuration-file generation; no tests are run in this simulation.

STEP 2: Run a simulation to generate the configuration file:

./simv +vmm_rtl_config=<PREFIX> +vmm_gen_rtl_config …

The format of the output file can be customized by extending a companion format class. This class determines both the output format written and how the file is parsed back into the testbench. The (simple) out-of-the-box format looks like this:

num_of_mems : 4
has_buffer  : 1

This obviously isn’t RTL code, so if we want our Verilog to understand it (for example, by converting it into parameter assignments), we have to perform the next step.

STEP 3: Convert the output configuration file to a Verilog parameters file (or a format of your choice, such as IP-XACT).

A demonstration script is provided in the memsys_cntrlr example in the VMM std_lib:

./cfg2param <config_file>
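For example, assuming the configuration file shown earlier, the generated parameters file might look something like this (the exact output depends on your cfg2param script, so treat this as illustrative):

parameter num_of_mems = 4;
parameter has_buffer  = 1;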

STEP 4: Re-compile the RTL using the new parameters generated from the configuration, e.g. using the -parameters switch in VCS.

STEP 5: Run the simulation executing tests, passing the configuration into the testbench

./simv +vmm_rtl_config=<PREFIX> …

Configuration parameters are encapsulated in the testbench using the vmm_rtl_config class. Writing each variable out to the configuration file is implemented using macros. The example below shows the implementation for the class member variables.

class my_cfg extends vmm_rtl_config;
  rand int num_of_mems;
  rand bit has_buffer;

  constraint valid_c {
    num_of_mems inside {1, 2, 4, 8};
  }

  `vmm_rtl_config_begin(my_cfg)
    `vmm_rtl_config_int(num_of_mems, num_of_mems)
    `vmm_rtl_config_boolean(has_buffer, has_buffer)
  `vmm_rtl_config_end(my_cfg)

endclass : my_cfg

On the testbench side, the configuration is set using vmm_opts::set_object() in the initial block of your testbench’s program block, and can be retrieved through vmm_opts::get_object().
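As a rough sketch of this wiring (the exact vmm_opts::set_object()/get_object() argument lists vary between VMM releases, and the name “rtl_cfg” is just an illustrative choice — check the VMM user guide for the signatures in your release):

// Illustrative sketch only; argument lists may differ in your VMM version.
// In the program block of the testbench:
initial begin
  my_cfg cfg = new();
  vmm_opts::set_object("rtl_cfg", cfg);  // publish under an agreed name
end

// Later, e.g. in the environment's build step:
// bit    is_set;
// my_cfg cfg;
// $cast(cfg, vmm_opts::get_object(is_set, "rtl_cfg"));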

The beauty of using this method for RTL configuration is that constrained randomization and functional coverage collection can be used for RTL configurations. In projects where highly configurable IP has to be verified, it’s a convenient way to control and observe progress of verification across, not only the modes of operation, but also those modes relating to specific RTL configurations.
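To sketch that idea, one might randomize the configuration object and sample it in a covergroup. This is purely illustrative code, not part of the VMM example; the class and coverpoint names are invented here:

// Illustrative only: coverage of RTL configuration values for my_cfg.
class cfg_coverage;
  my_cfg cfg;

  covergroup cfg_cg;
    mems_cp    : coverpoint cfg.num_of_mems { bins mems[] = {1, 2, 4, 8}; }
    buf_cp     : coverpoint cfg.has_buffer;
    mems_x_buf : cross mems_cp, buf_cp;
  endgroup

  function new(my_cfg cfg);
    this.cfg = cfg;
    cfg_cg = new();
  endfunction

  // Randomize a configuration and record it in the coverage model.
  function void sample();
    void'(cfg.randomize());
    cfg_cg.sample();
  endfunction
endclass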

Posted in Automation, Configuration, Reuse | 3 Comments »

Transactor generator with VMM technology for efficient usage of CPU resources

Posted by Oded Edelstein on 5th April 2010


Oded Edelstein – Founder and CEO of SolidVer

Background:

Many network designs require an efficient transaction generator to cover DUT functionality. In a random test we would like to cover all scenarios, but also to spend CPU time mostly on the cases which push the design to its edge.

In this VMM example, I will demonstrate three such cases, and solutions that make better use of CPU resources.

Cases:

Case A – In network designs, packet size can vary between 40 bytes for small packets and 10 KBytes for large-MTU packets. A test whose length is based on the number of packets (transactions) might therefore be very short or very long, depending on the total size of the packets. The long scenarios can be covered in a separate random test.

Case B – Some network designs forward packets to different channels (queues) with different levels of bandwidth support. Random generation of the channel number does not cover many cases (e.g. filling a certain queue with packets), since in a system with many channels the probability that the same channel is chosen repeatedly is very low.

Case C – In some projects, the transactor generates many packets for all queues while some queues are randomly configured to a low bandwidth. This causes the test to run very long until all packets have been forwarded. At the beginning of the test the DUT is very busy: it receives data almost every cycle. But once the high-bandwidth queues have received all their packets, the low-bandwidth queues continue slowly receiving packets until all packets have been forwarded. For most of that time the DUT’s queues are nearly empty and the DUT uses only a small portion of its performance capability, which adds no value for coverage.

Solutions:

The following code example shows a simple solution for the above cases. The solution is based on the following techniques:

1. Test length is defined by the sum of packet sizes, instead of the number of packets (transactions).

2. Add to the random test a basic case where a number of packets is sent to the same channel one after the other. Low-probability cases need to be identified and added as cases inside the random test (cases inside directed tests are not good enough for good coverage).

3. No packets are generated in advance for all queues. Packets are randomly generated and driven, on the fly, only to available queues. In my experience, random tests can generate all parameters and achieve good coverage. At the same time, even better coverage can be achieved if idle periods, which consume CPU during the test, are identified and handled correctly.

Code Example:

//----------------------------------------------------------------------
//
//  packet.sv
//
//----------------------------------------------------------------------
class packet extends vmm_data;

rand byte payload[];
rand int packet_size;
rand int channel_num;

static int cnt;

constraint c_payload_size { payload.size == packet_size; }

constraint c_pkt_size_dist { packet_size dist { 40          := 20,
                                                [41:200]    := 50,
                                                [201:2000]  := 5,
                                                [2001:9999] := 1,
                                                10000       := 1 }; }

`vmm_data_member_begin(packet)
`vmm_data_member_scalar_array(payload, DO_ALL)
`vmm_data_member_scalar(packet_size, DO_ALL)
`vmm_data_member_scalar(channel_num, DO_ALL)
`vmm_data_member_end(packet)

endclass : packet

//-----------------------------------------------------------------------------
// VMM Macros - Channel and Atomic Generator
//-----------------------------------------------------------------------------
`vmm_channel(packet)
`vmm_atomic_gen(packet, "Packet atomic generator")

//----------------------------------------------------------------------
//  End file packet.sv
//----------------------------------------------------------------------

//----------------------------------------------------------------------
//
//  bfm_master.sv
//
//----------------------------------------------------------------------

//
// SUM_PACKETS_SIZE_IN_TEST : The total size, in bytes, of the packets that the BFM will drive
// into the DUT. We use the sum of packet sizes to define test length, instead of the number of
// packets, since the packet-size distribution can randomly vary between 40 bytes (small packets)
// and 10 KBytes (large MTU packets). Counting packets could cause some seeds to run very long
// with no significant added value for coverage. These long scenarios were tested in a separate
// random test.
//
`define SUM_PACKETS_SIZE_IN_TEST 10000000 // 10MB

//
// In this example the DUT gets a packet with a channel number.
// The DUT holds a separate FIFO for every channel.
//
`define MAX_NUMBER_OF_CHANNELS   16
class bfm_master extends vmm_xactor;

vmm_log log;

// Packet transaction channel
//
packet_channel    packet_chan;

//
// The DUT sends the BFM a back-pressure signal, separately for every channel,
// when that channel's FIFO inside the DUT is full.
// avail_channel_list - Holds a bit for every channel. The BFM can drive packets only on channels
//                      which are not back-pressured.
//
bit [(`MAX_NUMBER_OF_CHANNELS-1):0] avail_channel_list;
int done = 0;

extern function new (string instance,
integer stream_id,
packet_channel packet_chan);
extern function int generate_stream_size();
extern function int generate_avail_channel();

extern virtual task main();
extern virtual task drive_packet(packet packet_trans);

endclass: bfm_master

function bfm_master::new(string instance,
                         integer stream_id,
                         packet_channel packet_chan);

super.new("BFM MASTER", instance, stream_id);
log = new("BFM MASTER", "BFM MASTER");
if (packet_chan == null) packet_chan = new("BFM MASTER INPUT CHANNEL", instance);
this.packet_chan = packet_chan;

endfunction: new

//-----------------------------------------------------------------------------
// drive_packet() - Drive one packet into the DUT.
//-----------------------------------------------------------------------------

task bfm_master::drive_packet(packet packet_trans);
// drive the packet …
endtask: drive_packet

function int bfm_master::generate_avail_channel();
….
endfunction

function int bfm_master::generate_stream_size();
….
endfunction

//-----------------------------------------------------------------------------
// main() - Main task for driving packets.
//-----------------------------------------------------------------------------
task bfm_master::main();

//
// Running total, in bytes, of packet data the BFM has driven into the DUT.
// When this value exceeds SUM_PACKETS_SIZE_IN_TEST the BFM stops driving packets.
//
int sum_packet_data_sent = 0;

//
// The channel on which the BFM will drive packets. This channel must not be
// back-pressured while packets are being driven.
//
int channel_num;

//
// packet_stream_size
// The number of packets that will be sent one after the other to the same channel.
// The idea behind this variable is to get good coverage of cases where a number
// of packets are sent to the same channel back-to-back, quickly filling its FIFO.
// Otherwise the BFM would statistically generate a different channel each time,
// and the probability of the same channel being generated repeatedly is very low.
// E.g. with 16 channels, the probability of picking the same channel 5, 10, or
// 20 times in a row is (1/16)^5, (1/16)^10, or (1/16)^20 - vanishingly small.
//

int packet_stream_size;

// Counter for the number of packets that were driven in the test.
//
int packet_cnt = 0;
int i;
packet packet_trans;
super.main();
while(sum_packet_data_sent < `SUM_PACKETS_SIZE_IN_TEST) begin
// gen random channel from avail_channel_list;
channel_num = generate_avail_channel();
packet_stream_size =  generate_stream_size();

for(i = 0; i < packet_stream_size; i++ ) begin
this.wait_if_stopped_or_empty(this.packet_chan);
if(avail_channel_list[channel_num] == 1) begin
packet_chan.get(packet_trans);
packet_trans.channel_num = channel_num;
drive_packet(packet_trans);
sum_packet_data_sent = sum_packet_data_sent + packet_trans.packet_size;
packet_cnt++;
`vmm_note(log, $psprintf("drive packet = %0d  size = %0d  channel = %0d  stream index = %0d  sum = %0d",
packet_cnt, packet_trans.packet_size, channel_num, i, sum_packet_data_sent));
end
else begin
break;
end
end // for loop
end // while loop
done = 1;

endtask: main

//----------------------------------------------------------------------
//
//  End file bfm_master.sv
//
//----------------------------------------------------------------------
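The two generator functions are left elided in the original listing. Possible implementations could look something like the following sketches; these bodies are purely illustrative (the burst-size weights are invented here), not taken from the original example:

// Illustrative sketches only: the original example elides these bodies.

// Pick a random channel among those not currently back-pressured.
function int bfm_master::generate_avail_channel();
  int candidates[$];
  for (int ch = 0; ch < $bits(avail_channel_list); ch++)
    if (avail_channel_list[ch]) candidates.push_back(ch);
  if (candidates.size() == 0) return 0; // no channel available right now
  return candidates[$urandom_range(candidates.size() - 1)];
endfunction

// Mostly single packets, with occasional longer back-to-back streams
// to the same channel so that its FIFO fills quickly.
function int bfm_master::generate_stream_size();
  int pick = $urandom_range(99);
  if (pick < 60)      return 1;                     // single packet
  else if (pick < 90) return $urandom_range(2, 5);  // short burst
  else                return $urandom_range(6, 20); // long stream
endfunction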

Posted in Automation, Modeling, Optimization/Performance | 1 Comment »

Shorthand macros with user defined implementation

Posted by Vidyashankar Ramaswamy on 3rd February 2010

Transaction class objects are created by extending the base class vmm_data. The vmm_data class has many virtual methods which need to be implemented by the extended class. This can become a laborious process, as it has to be done for every extended class. However, a set of shorthand macros is available to help minimize the amount of code required to create these data-class extensions. These shorthand macros are used on a per-data-member basis and provide a default implementation of all the required methods. Following is an example.

. . .
1    class apb_trans extends vmm_data;
2       `vmm_typename(apb_trans)

3       rand enum {READ, WRITE} kind;
4       rand bit [31:0] addr;
5       rand logic [31:0] data;

6       `vmm_data_member_begin(apb_trans)
7            `vmm_data_member_scalar(addr, DO_ALL)
8            `vmm_data_member_scalar(data, DO_ALL)
9            `vmm_data_member_enum(kind, DO_ALL)
10     `vmm_data_member_end(apb_trans)
11     …
12  endclass: apb_trans
. . .

The class properties are declared as shown on lines 3 to 5. Lines 6 and 10 mark the start and end of the shorthand-macro section. Based on the variable type, the appropriate macro is used (lines 7 to 9). As the name suggests, “DO_ALL” means the variable is used in all of the virtual method implementations. If you want to exclude the “kind” property from printing, you can use “DO_ALL – DO_PRINT”. Please refer to the VMM user guide for more details on this.

User defined implementation

Shorthand macros provide the default implementation of all the vmm_data virtual methods. If you want to override the default implementation of a method, you implement the corresponding do_* method. For example, say you want to change the implementation of byte_size: you can still use the shorthand macros, but you explicitly implement the apb_trans::do_byte_size() method, which tells VMM not to provide the default implementation. The example code is shown below.

1   virtual function int unsigned do_byte_size(int kind = -1);
2       . . .
3       . . .
4   endfunction
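To make that concrete, a do_byte_size() override for the apb_trans above might simply report the fixed size of its address and data fields. This body is an illustrative sketch (the original leaves it elided), and the 8-byte figure is just the sum of the 32-bit addr and data fields:

// Illustrative sketch: apb_trans carries a 32-bit addr and 32-bit data,
// so a custom byte_size could report 8 bytes regardless of kind.
virtual function int unsigned do_byte_size(int kind = -1);
  return 8; // 4 bytes of addr + 4 bytes of data
endfunction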

Constructor replacement

In some cases a transaction class might need a custom constructor with different arguments. Note that an explicit constructor implementation is enabled using the shorthand macro `vmm_data_new() as shown below (line 1). The new() implementation should follow the macro (lines 2 to 5). It is also important to provide default values for the arguments, to keep the transaction class factory-enabled (line 2).

1   `vmm_data_new(apb_trans)
2   function new (vmm_log log=null, vmm_object parent=null, string name=””);
3       super.new(. . ., . . .) ;
4       . . .
5   endfunction

The shorthand macros are also available for messaging service (vmm_log), vmm_unit configuration, RTL configuration (vmm_rtl_config) and TLM ports. For the complete list, please refer to the VMM user guide.

Posted in Automation, Customization, Modeling Transactions | Comments Off

VMM data macros are cool, but how do I customize the constructor?

Posted by Shankar Hemmady on 27th August 2009

Srinivasan Venkataramanan, CVC

Pawan Bellamkonda, Brocade

During our recent VMM training at CVC, we learned about VMM data member macros, and our engineers liked it. Some of our teams at Brocade have started adopting it in their projects right away! We see that we can avoid much of the lengthy code and increase readability with these new macros. We will surely avoid making silly mistakes which might be hard to debug later.

However, as with any built-in automation, there are always scenarios wherein user-level customization of some or all of the methods is required. VMM provides this flexibility by allowing the default behavior of vmm_data’s virtual methods to be overridden. In one of our blocks we needed to tweak the constructor of the transaction. One question that perplexed us was:

“I have built a transaction class extending from vmm_data. We have used the shorthand macro `vmm_data_member….. to get all the functions automatically. But while creating an object of this transaction, we want to pass a configuration class object as an argument to the new function. How should we override the new() function alone when we use the shorthand macros? When we tried using do_new() (like when overriding other functions), it did not work.”

As we explored a bit, we found another macro specifically meant for this:

`vmm_data_new(<class_name>)

This macro should be used before the data-member macros begin. It lets the succeeding macros do all the work except the new() function implementation.

class s2p_xactn extends vmm_data;
  rand bit [7:0] pkt_len, pkt_pld;

  `vmm_data_new(s2p_xactn)
  function new(int my_own_arg = 2);
    `vmm_note(log, $psprintf("my val is %0d", my_own_arg));
  endfunction : new

  `vmm_data_member_begin(s2p_xactn)
    `vmm_data_member_scalar(pkt_len, DO_ALL)
    `vmm_data_member_scalar(pkt_pld, DO_ALL)
  `vmm_data_member_end(s2p_xactn)
endclass : s2p_xactn

As with traditional martial arts, functional verification too has some slightly different styles/requirements that makes each project interesting and unique. To its credit, we feel that VMM is as proven as traditional martial arts: it can be tailored to different requirements while providing a standardized means of combat.

Posted in Automation, Coding Style, Customization, Modeling | 2 Comments »
