Verification Martial Arts: A Verification Methodology Blog

Archive for the 'Register Abstraction Model with RAL' Category

SNUG-2012 Verification Round Up – Language & Methodologies – II

Posted by paragg on 3rd March 2013

In my previous post, we discussed papers that leveraged the SystemVerilog language and its constructs, as well as those that covered broad methodology topics. In this post, I will summarize papers that focus on the industry-standard methodologies: the Universal Verification Methodology (UVM) and the Verification Methodology Manual (VMM).

Papers on Universal Verification Methodology (UVM)

Some users prefer not to use the base classes of a methodology directly. Adding a custom layer enables them to add capabilities specific to their requirements. This layer consists of a set of generic classes that extend the classes of the original methodology, and it provides a convenient place to develop and share the processes an organization wants to re-use across different projects. Pierre Girodias of IDT (Canada), in the paper "Developing a re-use base layer with UVM", focuses on the recommendations that adopters of these methodologies should follow while developing the desired 'base' layer. The paper also identifies typical problems and possible solutions encountered while developing this layer, including dealing with the lack of multiple inheritance and foraging through class templates.

UVM provides many features but does not define a reset methodology, forcing users to develop their own approach within the UVM framework to test the 'reset' of their DUT. Timothy Kramer of The MITRE Corporation, in the paper "Implementing Reset Testing", outlines several different reset strategies and enumerates the merits and disadvantages of each. As with all engineering challenges, there are several competing factors to consider, and in this paper the different strategies are compared on flexibility, scalability, code complexity, efficiency, and how easily they can be integrated into existing testbenches. The paper concludes by presenting the reset strategy that proved most suitable for their application.
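One widely used strategy in the family the paper compares (sketched here purely as an illustration; the component, interface, and helper names below are assumptions, not taken from the paper) is to watch for reset in a parallel thread and restart stimulus when it asserts:

```systemverilog
// Hypothetical reset-aware run_phase for a UVM component.
// tb_if (with active-low rst_n), run_traffic() and
// reset_testbench_state() are illustrative names.
task run_phase(uvm_phase phase);
  forever begin
    fork
      run_traffic();                  // normal stimulus
      @(negedge tb_if.rst_n);         // reset asserted mid-test
    join_any
    disable fork;                     // kill whichever thread is still running
    if (!tb_if.rst_n) begin
      reset_testbench_state();        // flush queues, clear scoreboards
      @(posedge tb_if.rst_n);         // wait for reset de-assertion
    end
  end
endtask
```

The trade-off the paper weighs, flexibility versus code complexity, shows up even in this small sketch: every component needs its own notion of what "reset my state" means.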

The 'Factory' concept in advanced OOP-based verification methodologies like UVM is something that has baffled many verification engineers. But is it all that complicated? Not necessarily, and this is what is explained by Clifford E. Cummings of Sunburst Design, Inc. in his paper "The OVM/UVM Factory & Factory Overrides – How They Work – Why They Are Important". This paper explains the fundamental details of the OVM/UVM factory, how it works, and how overrides facilitate simple modification of testbench component and transaction structures on a test-by-test basis. The paper not only explains why the factory should be used but also demonstrates how users can create configurable UVM/OVM-based environments without it.
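As a flavor of what such an override looks like in practice, the example below is a minimal sketch (the transaction class and field names are invented for illustration, not taken from the paper):

```systemverilog
// Hypothetical transaction extension: err_tx behaves like my_tx
// everywhere, but forces a bad CRC. Assumes my_tx has a crc_ok
// field and is registered with the factory via `uvm_object_utils.
class err_tx extends my_tx;
  `uvm_object_utils(err_tx)
  function new(string name = "err_tx");
    super.new(name);
  endfunction
  constraint bad_crc_c { crc_ok == 0; }
endclass

// In a test's build_phase: every factory-created my_tx now comes
// back as an err_tx, with no change to sequences or drivers.
my_tx::type_id::set_type_override(err_tx::get_type());
```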

The Register Abstraction Layer has always been an integral component of most HVL methodologies defined so far. Doug Smith of Doulos, in his paper "Easier RAL: All You Need to Know to Use the UVM Register Abstraction Layer", presents a simple introduction to RAL. He distills the adoption of UVM RAL into a few easy and salient steps, which is adequate for most cases. The paper describes the industry-standard automation tools for generating the register model. Additionally, the integration of the generated model, along with the front-door and back-door access mechanisms, is explained in a lucid manner.
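Once a generated model is integrated, typical front-door and back-door accesses look like the following (the register model hierarchy and register names here are assumed for illustration):

```systemverilog
// Inside a sequence or test task, with a handle to the register model:
uvm_status_e   status;
uvm_reg_data_t data;

regmodel.blk.ctrl_reg.write(status, 'h3, UVM_FRONTDOOR);  // via the bus agent
regmodel.blk.ctrl_reg.read (status, data, UVM_BACKDOOR);  // via hdl_path, in zero sim time
```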

The SystemVerilog language features, coupled with the DPI and VPI language extensions, can enable the testbench to react generically to value changes on arbitrary DUT signals (which might or might not be part of a standard interface protocol). Jonathan Bromley of Verilab, in "I Spy with My VPI: Monitoring signals by name, for the UVM register package and more", presents a package which supports both value probing and value-change detection for signals identified at runtime by their hierarchical name, represented as a string. This provides a useful enhancement to the UVM register package, allowing the same string to be used for backdoor register access.
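For comparison, the UVM register package's stock backdoor relies on the same by-name idea through its DPI utility routines (the hierarchical path below is illustrative):

```systemverilog
// uvm_hdl_read() takes the signal's hierarchical path as a string
// and returns its current value; the path shown is hypothetical.
uvm_hdl_data_t val;
if (uvm_hdl_read("top.dut.csr_block.status_reg", val))
  `uvm_info("PROBE", $sformatf("status_reg = 'h%0h", val), UVM_LOW)
else
  `uvm_error("PROBE", "signal not found")
```

What the paper adds on top of this style of access is value-change detection for such string-named signals, which plain uvm_hdl_read() polling does not give you.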

Proper testing of most digital designs requires that error conditions be stimulated to verify that the design either handles them in the expected fashion or ignores them, but in all cases recovers gracefully. How to do this efficiently and effectively is presented in "UVM Sequence Item Based Error Injection" by Jeffrey Montesano and Mark Litterick of Verilab. A self-checking constrained-random environment is put to the test when injecting errors because, unlike the device-under-test (DUT), which can potentially ignore an error, the testbench is required to recognize it, potentially classify it, and determine an appropriate response from the design. This paper presents an error injection strategy using UVM that meets all of these requirements. The strategy encompasses both active and reactive components, with code examples provided to illustrate the implementation details.
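The usual shape of such a strategy (a sketch under assumed class and field names, not the authors' code) is an error knob on the sequence item that is constrained off by default:

```systemverilog
// Hypothetical sequence item with an error-injection knob.
class pkt_item extends uvm_sequence_item;
  rand bit        inject_crc_err;
  rand bit [31:0] payload;

  // Normal tests leave this constraint enabled, so no errors are injected.
  constraint no_err_c { inject_crc_err == 0; }

  `uvm_object_utils(pkt_item)
  function new(string name = "pkt_item");
    super.new(name);
  endfunction
endclass

// An error test disables the constraint and forces the knob on:
//   item.no_err_c.constraint_mode(0);
//   assert(item.randomize() with { inject_crc_err == 1; });
// The driver corrupts the CRC when the flag is set, and the monitor
// publishes the same flag so the scoreboard expects an error response.
```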

The Universal Verification Methodology is a huge win for the hardware verification community, but does it have anything to offer Electronic System Level design? David C. Black from Doulos Inc. explores UVM on the ESL front in the paper "Does UVM make sense for ESL?". The paper considers UVM and SystemVerilog enhancements that could make the methodology even more enticing.

Papers on Verification Methodology Manual (VMM)

Joseph Manzella of LSI Corp, in "Snooping to Enhance Verification in a VMM Environment", discusses situations in which a verification environment may have to peek at internal RTL states and signals to enhance results, and provides guidelines on what is acceptable practice. The paper explains how the combination of vmm_log (the logger class for VMM) and +vmm_opts (the command-line utility to change different configurable values) helps in creating a configurable message wrapper for internal grey-box testing. The techniques show how different assertion failures can be re-routed through the VMM messaging interface. An effective and reusable snooping technique for robust checking is also covered.

In "Mechanism to allow easy writing of test cases in a SystemVerilog Verification environment, then auto-expand coverage of the test case", presented at SNUG Silicon Valley, Ninad Huilgol of VerifySys addresses designers' apprehension about using a class-based environment through a tool that leverages the VMM base classes. It automatically expands the scope of the original test case to cover a larger verification space around it, based on a user-friendly API that looks more like Verilog, hiding the complexity underneath.

Andrew Elms of Huawei, in "Verification of a Custom RISC Processor", presents the successful application of VMM to the verification of a custom RISC processor. The challenges in verifying a programmable design, and the solutions to address them, are presented. Three topics are explored in detail: use of the Verification Planner, constrained-random generation of instructions, and coverage closure. The importance of the verification plan as the foundation for the verification effort is explored, as are enhancements to the VMM generators. By default, VMM data generation is independent of the current design state, such as register values and outstanding requests; RAL and generator callbacks are used to address this. Finally, experiences with coverage closure are presented.

We will keep you covered on varied verification topics in upcoming blogs. Enjoy reading!

Posted in Announcements, Register Abstraction Model with RAL, UVM, VMM, VMM infrastructure | 1 Comment »

Build your own code generator!

Posted by Amit Sharma on 25th June 2012

One of the critical problems developers cope with during the SoC development process (architecture planning, RTL design, verification, software development) is the constant need to synchronize different forms of the same data structure: the SoC register database. This database is found with the SoC architecture team (who write the SoC register description document), design engineers (who implement the register structure in RTL code), verification engineers (who write the verification infrastructure, such as RAL code, and verification tests, such as exhaustive read/write tests of all registers), and software engineers (who use the register information to write embedded software).

Since the same information is used in different forms, it is only natural to have a single, central database holding all SoC register data. Ideally, you would generate all the required output files (documentation, the UVM or VMM Register Abstraction Model, RTL, C headers, …) from this central database. Different vendors and CAD teams provide different automation solutions for doing this.

The RALF specification contains all of the information necessary to generate RTL and testbench code that implements the specified registers. There are many ways to code and implement RTL, so it is not possible to provide a general-purpose solution. As far as the testbench abstraction model is concerned, there are multiple ways of customizing your model post-generation in both UVM and VMM: callbacks, factories, and configurable RAL model attributes are some of the ways the desired customization can be brought in. "The 'user' in RALF: get ralgen to generate 'your' code" highlights a very convenient way of bringing in SystemVerilog-compatible code which is copied as-is into the RAL model in the desired scope. When it comes to generating the RTL and the C headers, however, we cannot leave the customization to such a late stage. Also, different organizations and project groups have their own RTL and C coding styles, which means a generated output of a very generic nature might not be very helpful. For RTL generation, engineers want the generated code to be power- and gate-count-efficient; similarly, generated C register headers often need to follow coding styles and match the CPU firmware API. How do we bring all these customizations to the end user?

Using the RALF C++ API, you have full access to the parsed RALF data (through C++ routines), which you can use to implement a customized RTL code generator, or any other feature that needs RALF data. For example, you can use it to generate your C header files or HTML documentation, translate the input RALF files to another register description format, or create custom covergroups and coverage convergence sequences (DAC 2012 User Track poster 6U.8 — Register Verification on a Fast Lane: Using Automation to Converge on UVM REG Coverage Models).

I have seen two instances of the need to generate a different register specification in the recent past, and that is one of the reasons I decided to put this down in a blog. Let me talk about the first instance.

One of the project groups was in the process of migrating from their own SystemVerilog base classes to UVM. They had their own register description format from which they generated their register abstraction model. This was a proven flow.

So, when they migrated to UVM, they wanted a flow which would validate the changes they were making.

Given that they were moving to RALF and 'ralgen', they didn't want to create register specifications in the legacy format anymore, so they wanted some automation for generating the legacy format. How did they go about doing this? They took the RALF C++ APIs and used them to create the necessary automation to generate the legacy format from RALF in no time (from what I remember, it was half a day's work). Everyone was involved in doing what they were best at, and that helped in the overall scheme of things.

The other customer had their own format from which they were generating RTL, firmware code and HTML. They had the necessary automation to create RALF for generating the UVM register model, and they also had a mechanism in place to generate IPXACT from this format, as well as vice versa. So, to complete the traceability matrix, they wanted a RALF-to-IPXACT conversion. Again, the most logical approach was to take the RALF C++ APIs and use them to iterate through the parsed RALF data and generate IPXACT. Though this effort is not complete, it took only a day or so to generate a valid IPXACT 1.5 schema, and all that is required now is some additional work to smooth out the corners.

How do you start using these APIs and build your own code/HTML generators? You need to include "ralf.hpp" (which is in $VCS_HOME/include) in your 'generator' code. Then, to compile the code, you need to pick up the shared library from the VCS installation:

$CPP_COMPILER $CFLAGS -I${VCS_HOME}/include your-file.cpp -L${VCS_HOME}/lib -lralf $LDFLAGS

#include "ralf.hpp"

int main(int argc, char *argv[])
{
  // Check basic command line usage
  if (argc < 3) {
    fprintf(stderr, "Error: Wrong Usage.\n");
    // Show correct usage ...
    return 1;
  }

  /* Parse command line arguments to get the essential
   * constructor arguments. See the documentation of
   * class ralf::vmm_ralf's constructor parameters. */

  /* Create a ralf::vmm_ralf object by passing in the proper
   * constructor arguments. */
  ralf::vmm_ralf ralf_data(ralf_file, top, rtl_dir /* , ... */);

  /* Get the top-level object storing the parsed RALF
   * block/system data and traverse that, top-down, to get
   * access to the complete RALF data. */
  const ralf::vmm_ralf_blk_or_sys_typ * top_lvl_blk_or_sys
    = ralf_data.getTopLevelBlockOrSys();

  /* TODO: Traverse the parsed RALF data structure top-down,
   * starting from 'top_lvl_blk_or_sys', to get complete
   * access to the RALF data, and then do whatever you want
   * with the parsed data. One typical usage is to generate
   * RTL code in your own style. */
  // TODO: Add your RTL generator code here.

  /* As part of this library, Synopsys also provides a
   * default RTL generator, which can be invoked by calling
   * the 'generateRTL()' method of the 'ralf::vmm_ralf'
   * class. */

  return 0;
}

Essentially, you have a handle to the parsed database, and with the available APIs you can do whatever you want with it :) The documentation of the APIs is shipped with the VCS installation. Also, if you are like me and would rather hack away at existing code than start from scratch, you can ask Synopsys support for existing templates that dump out code in a specific format, and start modifying those for your requirements.

Posted in Automation, Coverage, Metrics, Customization, Interoperability, Register Abstraction Model with RAL, Tools & 3rd Party interfaces, UVM, VMM infrastructure | Comments Off

Closed Loop Register Verification using IDesignSpec and the Register Abstraction Layer

Posted by Amit Sharma on 26th September 2011

Nitin Ahuja, Agnisys Technology Pvt. Ltd

In the previous article, "Automatic generation of Register Model for VMM using IDesignSpecTM", we discussed how it is advantageous to use a register model generator such as IDesignSpecTM to automate RALF model generation. Taking that forward, in this article we will discuss how to close the loop on register verification.

Various forms of coverage are used to ensure that registers are functioning properly. There are three coverage models in VMM. They are:

1. reg_bits coverage: this model is used to make sure that all the bits in the register are covered. This model works by writing and reading both 1 and 0 on every register bit, hence the name. This is specified using “cover +b” in the RALF model.

2. field_vals coverage: the field value coverage model is implemented at the register level and supports value coverage of all fields and cross coverage between fields and other cross coverage points within the same register. This is specified using "cover +f" in the RALF model. Users can specify the cross coverage depending on the functionality.

3. Address map: this coverage model is implemented at block level and ensures that all registers and the memories in the block have been read from and written to. This is specified using “cover +a” in the RALF model.

We will discuss how coverage can be switched on/off and how the type of coverage can be controlled for each field directly from the register specification.

Once the RALF model is generated, the next step in verification is to generate the RTL and the SystemVerilog RAL model using ‘ralgen’. The generated RAL model along with the RTL can be compiled and simulated in the VMM environment to generate the coverage database. This database is used for the report generation and analysis.

Reports can be generated using IDesignSpecTM (IDS). IDS-generated reports have an advantage over others in that they present all the coverage information concisely, at a glance.

Turning Coverage ON or OFF

IDesignSpecTM enables the users to turn ON/OFF all the three types of coverage from within the MS Word specification itself.

Coverage can be specified and controlled using the “coverage” property in IDesignSpecTM which has the following possible values:


The hierarchical “coverage” property enables users to control the coverage of the whole block or at the chip level.

Here is a sample of how coverage can be specified in IDesignSpecTM:



This would be the corresponding RALF file :



The coverage bins for each CoverPoint along with the cross for the various CoverPoints can also be defined in the specification as shown below:


This would translate to the following RALF:


Now, the next step after RALF generation would be to generate the RAL Model from the IDS generated RALF.


The IDS generated RALF can be used with the Synopsys ‘ralgen’ to generate the RAL  (VMM or UVM) model as well as the RTL.

RAL model can be generated by using the following command:


If you specify -uvm in the first ralgen invocation above, a UVM register model is generated.


Once the RTL and the RAL model are generated using the ‘ralgen’, the complete model can be compiled and simulated in the VMM environment using VCS.

To compile the model use the following command on the command line:

vcs -R +plusarg_save -sverilog -o simv1 -ntb_opts rvm+dtm +incdir+<directories to search `defines> <files to be compiled> +define+RAL_COVERAGE

The compilation and simulation generates the simulation database which is used for the generation of the coverage reports.

Coverage reports can be generated in various forms, but the most concise is a graphical view showing all the coverage at a glance. For this, a Tcl script, "ivs_simif.tcl", takes the simulation database and generates the text-based report on execution of the following command:

% ivs_simif.tcl -in simv.vdb -svg

Before running the above command, set the environment variable "IDS_SIM_DIR"; the text reports are generated at this location, and it also tells IDS where to look for the simulation data file.

A detailed graphical view of the report can be generated from IDS with the help of this text report. To generate the graphical report in the form of “scalable vector graphics” (SVG) select the “SVG” output from the IDS config and regenerate.

Another way of generating the SVG is to use the IDS-XML or the Doc/Docx specification of the model as input to IDS in batch mode, generating the graphical report of the simulation with the following command:

% idsbatch <IDS_generated_XML or doc/docx specification> -out “svg” -dir output_directory

Coverage Reports

IDesignSpec generates two types of reports from the input database.

They are:

1. Field_vals report

2. Reg_bits report

Field_vals report:

Field_vals report gives the graphical view of the field_vals coverage and the address coverage of the various registers and their respective fields.

The amount of coverage for a field (CoverPoint) is depicted by the level of green color in the field, while that for the complete register (CoverGroup) is shown by the color of the register's name.

The address coverage for the individual register (CoverPoint) is shown by the color of the address of the register (green if addressed; black if not addressed), while that of the entire block (CoverGroup) is shown by the color of the name of the block.

The coloring scheme for all the CoverGroups (i.e. the register name in the case of field_vals coverage and the block name in the case of address coverage) is:

1. If the overall coverage is greater than or equal to 80% then the name appears in GREEN color

2. If the coverage is greater than 70% but less than 80% then it appears in YELLOW

3. For coverage less than 70% name appears in RED color

Figure 1 shows the field_vals and address coverage.


Figure:  Closed loop register verification using RALF and IDS

The above sample gives the following coverage information:

a. Two of the nine registers, T and resetvalue, are not addressed. Thus the overall coverage of the block falls in the 70%–80% range, which is depicted by the color of Stopwatch (the name of the block).

b. All the fields of the registers are filled with some amount of green, showing the amount of coverage. For example, field T1 of register arr is covered 100% and thus completely filled, while FLD4 of register X is covered only about 10%. The exact coverage value can be obtained by hovering over a field to see a tooltip.

c. The color of the register name shows the overall coverage of the whole register; for example, X appears in red because its coverage is less than 70%.

Reg_bits report:

Reg_bits report gives the detailed graphical view of the reg_bits coverage and address coverage.

Address coverage for reg_bits is shown in the same way as in the field_vals report. Reg_bits coverage has 4 components:

1. Written as 1

2. Read as 1

3. Written as 0

4. Read as 0

Each of the 4 components is allocated a specific region inside a bit. If that component of the coverage is hit, the corresponding region is shown in green; otherwise it is blank. The overall coverage of the entire register is shown by the color of the register's name, as in the case of field_vals.


The above sample report shows that there is no issue with "Read as 1" for the 'resetvalue' register, while the other read/write components have not been hit completely.

Thus, in this article we described the various coverage models for registers and how to generate the RALF coverage model of the registers automatically with minimum effort. An intuitive visualization of the register coverage data eases the effort involved in deciphering coverage reports from lengthy simulation log files. This type of closed-loop register verification ensures better coverage and high-quality results in less time. Hope you found this useful. Do share your feedback, and let me know if you want any additional details to get the maximum benefit from this flow.

Posted in Automation, Coverage, Metrics, Organization, Register Abstraction Model with RAL, Tools & 3rd Party interfaces, Verification Planning & Management | 1 Comment »

The ‘user’ in RALF : get ralgen to generate ‘your’ code

Posted by S. Varun on 11th August 2011

Often, registers in a device are associated with configuration fields that may not exist physically inside the DUT. For example, there could be a register field meant for enabling the scrambler, a field that would need to be set to "1" only when the protocol is PCIE. As this protocol mode is not a physical field, one cannot write it as a memory-mapped register. For such cases, ralgen reserves a "user area" wherein users can write SystemVerilog-compatible code which is copied as-is into the RAL model. This gives users the flexibility to add variables and constraints that are not physical registers/fields while maintaining the automated flow, and it ensures that the additional parameters are part of the 'spec', in this case RALF, from which the model generation happens. Thus, it creates a more seamless sharing of variables across the register model and the testbench.

Let's look at how it works. If I have a requirement to randomize the register values based on additional testbench parameters, this is what can be done:

block mdio {
  bytes 2;
  register mdio_reg_1_0 @'h0000000 {
    field bit_31_0 {
      bits 32;
      access rw;
      reset 'h00000000;
      constraint c_bit_31_0 {
        value inside {[0:15]};
      }
    }
  }
  user_code lang=SV {
    rand enum {PCIE,XAUI} protocol;
    constraint protocol_reg1 {
      if (protocol == PCIE) mdio_reg_1_0.bit_31_0.value == 16'hFF;
    }
  }
}
As shown above, the "user_code" RALF construct enables users to add user code inside the generated RAL model. Note that this construct allows you to weave in custom code without having to modify the generated code. It can also be used to generate custom coverage. In the context of the above example, the protocol mode will not be a coverpoint in the coverage generated by ralgen, as it is not a physical field in the DUT; the user can instead fill in a separate covergroup using "user_code". The new RALF spec and the generated RAL model with the added coverage are shown below:

block mdio {
  bytes 2;
  register mdio_reg_1_0 @'h0000000 {
    field bit_15_0 {
      bits 16;
      access rw;
      reset 'h00000000;
      constraint c_bit_15_0 {
        value inside {[0:15]};
      }
    }
  }
  user_code lang=SV {
    rand enum {PCIE,XAUI} protocol;
    constraint protocol_reg1 {
      if (protocol == XAUI)
        mdio_reg_1_0.bit_15_0.value == 16'hff;
    }
  }
  user_code lang=sv {
    covergroup protocol_mode;
      mode : coverpoint protocol {
        bins pcie = {PCIE};
      }
      mdio_reg : coverpoint mdio_reg_1_0.bit_15_0.value {
        bins set = {'hff};
      }
      cross mode, mdio_reg;
    endgroup
    protocol_mode = new();
  }
}

Figure: Generated model snippet

User code gets embedded in the generated RAL classes, but there is no way to embed user code in the "sample" method that exists inside each block. So, for any user-embedded covergroups, sampling needs to be done manually within the user testbench using <covergroup>.sample(), perhaps inside the post_write callback of the registers/fields. The construct can also be used to embed additional data members and user-defined methods, such as a sampling method that samples all the newly defined covergroups. Thus "user_code" as a RALF construct is a very handy solution for embedding user code in the automated model generation flow.
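For instance, the manual sampling could be driven from a register callback. The snippet below is a UVM-flavored sketch of that idea; the block class, handle wiring, and covergroup name are assumptions for illustration:

```systemverilog
// Hypothetical callback that samples a user-defined covergroup on
// every write to the register it is attached to.
class sample_on_write_cbs extends uvm_reg_cbs;
  my_ral_block blk;  // assumed handle to the block holding the covergroup

  function new(string name = "sample_on_write_cbs", my_ral_block blk = null);
    super.new(name);
    this.blk = blk;
  endfunction

  virtual task post_write(uvm_reg_item rw);
    if (blk != null)
      blk.protocol_mode.sample();  // the user_code covergroup
  endtask
endclass

// Attached in the environment with:
//   uvm_reg_cb::add(regmodel.mdio.mdio_reg_1_0, cbs);
```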

Posted in Register Abstraction Model with RAL, Stimulus Generation | 1 Comment »

Automatic generation of Register Model for VMM using IDesignSpec

Posted by Amit Sharma on 5th August 2011

Nitin Ahuja, Verification Engineer, Agnisys Technology Pvt Ltd

Generating a register model by hand can take up a lot of time in the design process and may introduce serious bugs. Generating the register model with a register model generator such as IDesignSpecTM reduces the coding effort and produces more robust code by avoiding such bugs in the first place, making the process more efficient and significantly reducing time to market.

A register model generator is beneficial in the following ways:

1. Error-free code from the start: being automatically generated, the register model code avoids manual coding errors.

2. When the register model specification changes, it is easy to modify the spec and regenerate the code in no time.

3. All kinds of hardware, software, and industry-standard specifications, as well as verification code, are generated from a single source specification.

IDesignSpecTM (IDS) is capable of generating all the RTL as well as the verification codes such as VMM(RALF) from the register specification defined in Word, Excel, Open-office or IDS-XML.

Getting Started

A simple register can be defined inside a block in IDesignSpecTM as:

The above specification is translated into the following RALF code by IDS.


For all registers whose hdl_path is mentioned in the RALF file, ralgen generates backdoor access. Special properties on a register, such as hdl_path and coverage, can thus be mentioned inside the IDS specification itself and will be appropriately translated into the RALF file.

The properties can be defined as below:

For Block:


As with the block, hdl_path, coverage, or any other such property can be mentioned for other IDS elements, such as a register or field.

For register/field:



Note: Coverage property can take up the following three possible values:

1. ON/on: this enables all the coverage types, i.e. address coverage for blocks or memories, and REG_BITS and FIELD_VALS coverage for registers and fields.

2. OFF/off: by default, all coverage is off. This option is valid only when coverage has been turned ON at the top of the hierarchy or at the parent; to turn off coverage for a particular register or field, specify 'coverage=off' for that register or field. The coverage for that specific child will be the inverse of what its parent has.

3. abf: Any combination of these three characters can be used to turn ON the particular type of the coverage. These characters stand for:

· a : address coverage

· b : reg_bits

· f : field_vals

For example to turn on the reg_bits and field_vals coverage, it can be mentioned as:


In addition to these properties, there are few more properties that can be mentioned in a similar way as above. Some of them are:

1. bins: various bins for the coverpoints can be specified using this property.

Syntax: {bins=”bin_name = {bin} ; <bin_name>…”}

2. constraint : constraints can also be specified for the register or field or for any element.

Syntax : {constraint=”constraint_name {constraint};<constraint_name> …”}

3. [vmm]<code>[/vmm]: This tag gives the users the ability to specify their own piece of system-verilog code in any element.

4. cross: cross for the coverpoints of the registers can be specified using cross property in the syntax:

Syntax: {cross = “coverpoint_1 <{label label_name}>;< coverpoint_2>…”}

Different types of registers in IDS:

1. Register Arrays:

Register arrays in RALF can be defined in IDS using register groups. To define a register array of size 'n', place a register inside a register group with the repeat count equal to the size of the array (n).

For example a register array of the name ”reg_array” with size equal to 10 can be defined in the IDS as follows:


The above specification is translated into the following VMM code by IDS:


2. Memory:

Similar to register arrays, memories can also be defined in IDS using register groups. The only difference between the memory and register array definitions is that for a memory, external is set to "true". The size of the memory is calculated as ((End_Address - Start_Address) * Repeat_Count).

As an example a memory of name “RAM” can be defined in IDS as follows:


The above memory specification will be translated into following VMM code:



A regfile in RALF can be specified in IDS using a register group containing multiple registers (> 1).

One such regfile with 3 registers, repeated 16 times is shown below:


Following is the IDS generated VMM code for the above reg file:



The IDS-generated RALF can be used with Synopsys 'ralgen' to generate the RAL model as well as the RTL.

To generate the RAL model use the following command:


And for the RTL generation use the following command:



It is beneficial to generate the RALF using the register model generator IDesignSpecTM, as it produces consistent, error-free code and reduces time and effort. When the register model specification is modified, it enables users to regenerate the code in no time.


We will extend this automation further in the next article where we will cover details about how you can “close the loop” on register verification. The “Closed Loop Register Verification” article will be available on VMM Central soon. Meanwhile if you have any questions/comments you can reach me at nitin[at]agnisys[dot]com .

Posted in Automation, Organization, Register Abstraction Model with RAL, Tools & 3rd Party interfaces, VMM infrastructure | 1 Comment »

The One stop shop: get done with everything you need to do with your registers

Posted by Amit Sharma on 14th July 2011

Ballori Bannerjee, Design Engineer, LSI India

Processes are created, refined and improved upon. The productivity gain, which starts with a big leap, subsequently slows down, and as the complexity of tasks increases, the existing processes can no longer scale. This drives the next paradigm shift towards new processes and automation. As in all realms of technology, this is true in the context of the register development and validation flow as well. So, let's look at how we changed our process to get the productivity boost we wanted.

The following flowchart represents our legacy register design and validation process. This was a closed process and served us well initially, when the number of registers and their properties were limited. However, does it scale up to the complex chips that we are designing and validating today?


As an example, in a module that we are implementing, there are four thousand registers. Translating into number of fields, for 4000 32-bit registers we have 128,000 fields, with different hardware and software properties!

Coding the RTL with address decoding for 4000 registers, with fields having different properties is a week’s effort by a designer. Developing a re-usable randomized verification environment with tests like reset value check, read-write is another 2 weeks, at the least. Closure on bugs requires several feedbacks from verification to update design or document. So overall, there is at least a month’s effort plus maintenance overhead anytime the address mapping is modified or a register updated/added.

This flow is susceptible to errors, with possible disconnects between the document, design, verification and software.

So, what do we do? We redefine the process! And this is what I will be talking about, our automated register design and verification (DV) flow which streamlines this process.


The flow starts with the designer modeling the registers using a high-level register description language. In our case, we use SystemRDL, and then leverage third-party tools to generate the various downstream components from the RDL file:

· RTL in Verilog/VHDL

· C/C++ code for firmware

· Documentation ( different formats)

· High level verification environment code (HVL) in VMM

This is shown below. The RDL file serves as a one-stop point for any register update required following a requirement change.


Automated Register DV Flow

Given that it is critical to create an efficient object-oriented abstraction layer to model registers and memories in a design under test, we exploit VMM RAL for this purpose. How do we generate the VMM RAL model? It is generated from RALF. Many third-party tools are available to generate RALF from various input formats, and we use one of them to generate RALF from SystemRDL.

Thus, a complete VMM compliant randomized, coverage driven register verification environment can be created by extending the flow such that:

i. Using a third-party tool, the verification component generated from SystemRDL is RALF, Synopsys' Register Abstraction Layer File.

ii. RALF is passed through RALGEN, a Synopsys utility which converts the RALF information to a complete VMM based register verification environment. This includes automatic generation of pre-defined tests like reset value check, bit bash tests etc of registers and complete functional coverage model, which would have taken considerable staff-days of effort to write.

The flowchart below elucidates the process.


Adopting the automated flow, it took 2 days to write the RDL. The rest of the components were generated from this source. A small amount of manual effort may be required for items like back-door path definition, but it is minimal and a one-time effort. The overall benefits are much more than the number of staff-days saved, and we see this as something which gives us perpetual returns. I am sure a lot of you are already bringing some amount of automation into your register design and verification setup, and if you aren't, it's time you do it!

While we are talking about abstraction and automation, let's look at another aspect of register verification.

Multiple Interfaces/Views for a register

In today's complex SoC designs, it is possible to have registers which need to be connected to two or more different buses and accessed differently. The register address will be different for the different physical interfaces it is shared between. So, how do we model this?

This can be defined in SystemRDL by using a parent addressmap with bridge property, which contains sub addressmaps representing the different views.

For example:

addrmap dma_blk_bridge {
    bridge; // top-level address map

    reg commoncontrol_reg {
        shared; // register will be shared by multiple address maps
        field {
        } f1[32];
    };

    addrmap { // Define the Map for the AHB Side of the bridge
        commoncontrol_reg cmn_ctl_ahb @0x0; // at address=0
    } ahb;

    addrmap { // Define the Map for the AXI Side of the bridge
        commoncontrol_reg cmn_ctl_axi @0x40; // at address=0x40
    } axi;
};

The equivalent of multiple view addressmap, in RALF is domain.

This allows one definition of the shared register while allowing access to it from each domain, where the register address associated with each domain may be different. The following code is the RALF with the domain implementation for the above RDL.

register commoncontrol_reg {
    field f1 {
        bits 32;
        access rw;
        reset 'h0;
    }
}

block dma_blk_bridge {
    domain ahb {
        bytes 4;
        register commoncontrol_reg=cmn_ctl_ahb @'h00;
    }

    domain axi {
        bytes 4;
        register commoncontrol_reg=cmn_ctl_axi @'h40;
    }
}

Each physical interface is a domain in RALF. Only blocks and systems have domains; registers are in the block. For access to a register from one interface/domain, RAL provides read/write methods which can be called with the domain name as an argument. This is shown below:

ral_model.STATUS.write(status, data, "pci");, data, "ahb");

This considerably simplifies the verification environment code for the shared register accesses. For more on the same, you can look at : Shared Register Access in RAL though multiple physical interfaces

Unfortunately, in our case, the tools we used did not support multiple interfaces, and the automated flow created a RALF having effectively two or more top-level systems re-defining the registers. This can blow up the RALF file size and also the verification environment code.

system dma_blk_bridge {
    bytes 4;
    block ahb (ahb) @0x0 {
        bytes 4;
        register cmn_ctl_ahb @0x0 {
            bytes 4;
            field cmn_ctl_ahb_f1 (cmn_ctl_ahb_f1) @0 {
                bits 32;
                access rw;
                reset 0x0;
            }
        }
    }

    block axi (axi) @0x0 {
        bytes 4;
        register cmn_ctl_axi @0x40 {
            bytes 4;
            field cmn_ctl_axi_f1 (cmn_ctl_axi_f1) @0 {
                bits 32;
                access rw;
                reset 0x0;
            }
        }
    }
}

Thus, as seen above, the tool is generating two blocks ‘ahb’ and ‘axi’ and re-defining the register in each block. For multiple shared registers, the resulting verification code will be much bigger than if domain had been used.

Also, without the domain-associated read/write methods shown above, accessing a shared register from a domain/interface takes at least a few extra lines of code per register. This makes writing the test scenarios complicated and wordy.

Using ‘domain’ in RALF and VMM RAL makes shared register implementation and access in the verification environment easy. We hope to have our automated flow leverage this effectively soon.

If you are interested in more details about our automation setup and register verification experiences, you might want to look at:

Posted in Automation, Modeling, Register Abstraction Model with RAL, Tools & 3rd Party interfaces | 4 Comments »

Pipelined RAL Access

Posted by Amit Sharma on 12th May 2011

Ashok Chandran, Analog Devices

Many times, we come across scenarios where a register can be accessed from multiple physical interfaces in a system. An example would be a homogenous multi-core system. Here, each core may be able to access registers within the design through its own interfaces. In such scenarios, defining a “domain” (a testbench abstraction for physical interfaces) for each interface may be an overhead.

· From a system verification point of view, it does not make any difference as to which core accesses the registers since they are identical. The flexibility to bring in ‘random selection of interfaces’ can provide additional value.

· Defining a ‘domain’ for each interface in such a scenario requires duplication of registers or of their instantiation.

· Also, the usage of multiple “domains” for homogeneous multi-core systems would prevent us from seamlessly reusing our code from block level to system level. This is because we would have to incorporate the domain definition into the testbench RAL access code when we migrate to system level, while it was not needed in the block-level register abstraction code.

Another related scenario is where we need to support multiple outstanding transactions at a time. Different threads could initiate distinct transactions which can return data out of order (as in AXI protocol). The default implementation of RAL allows only one transaction at a time for each domain in consideration.

VMM pipelined RAL comes to our rescue in such cases. This mechanism allows multiple RAL accesses to be simultaneously processed by the RAL access layer. This feature of VMM can be enabled with `define VMM_RAL_PIPELINED_ACCESS. This define adds a new state to vmm_rw::status_e – vmm_rw::PENDING. When vmm_rw::PENDING is returned as the status from execute_single()/execute_burst(), the transaction-initiating thread is kept blocked till the vmm_data::ENDED notification is received for the vmm_rw_access. New transactions can now be initiated from other testbench threads, and pending transactions are cleared in parallel when the response is received from the system.


As shown in the figure above, transactions initiated by thread A (A0 and A1) can be processed / queued even while transactions from thread B (B0 and B1) are in progress. Here A can be processed by one interface and B by the other. Alternately, A and B can be driven together from same interface in case the protocol supports multiple outstanding accesses.
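To make the threading concrete, here is a minimal sketch of two testbench threads issuing accesses concurrently once the define is enabled; the register names CTRL_A/CTRL_B and the ral_model handle are hypothetical, not from the original example:

```systemverilog
`define VMM_RAL_PIPELINED_ACCESS

// Inside a test scenario; 'ral_model' is the generated RAL model instance.
vmm_rw::status_e status_a, status_b;

fork
   begin : thread_A
      // A0 and A1 are issued back to back; thread A blocks on each
      // access only until vmm_data::ENDED is indicated for it.
      ral_model.CTRL_A.write(status_a, 'h1);   // A0
      ral_model.CTRL_A.write(status_a, 'h2);   // A1
   end
   begin : thread_B
      // B0 can be processed while A0/A1 are still in flight.
      ral_model.CTRL_B.write(status_b, 'h3);   // B0
   end
join
```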

The code below shows how the user can create his execute_single() functionality to use pipelined RAL for a simple protocol like APB. For protocols like AXI which allow multiple outstanding transactions from same interface, the physical layer transactor can control the sequence further using the vmm_data::ENDED notification of the physical layer transaction.

virtual task execute_single(vmm_rw_access tr);

   apb_trans apb = new; // Physical layer transaction

   apb.randomize() with {
      addr == tr.addr;
      if (tr.kind == vmm_rw::READ) {
         dir == READ;
      } else {
         dir == WRITE;
      }
      resp == OKAY;
      interface_id inside {0,1}; // the interface_id property in the physical layer transaction maps to the different physical interface instances
   };

   if (tr.kind == vmm_rw::WRITE) =;

   // Fork out the access in parallel
   fork begin
      // Get copies for this thread
      automatic apb_trans pend = apb;
      automatic vmm_rw_access rw = tr;

      // Push into the physical layer BFM

      // Wait for transaction completion from the physical layer BFM

      // Get the response and read data
      if (pend.resp == apb_trans::OKAY) begin
         rw.status = vmm_rw::IS_OK;
      end else begin
         rw.status = vmm_rw::ERROR;
      end

      if (rw.kind == vmm_rw::READ) begin =;
      end

      // End of this transaction - indicate to RAL
   end join_none

   // Return pending status to RAL access layer
   tr.status = vmm_rw::PENDING;
endtask


For more details on creating “Pipelined Accesses”, you might want to go through the section “Concurrently Executing Generic Transactions” in the VMM RAL User Guide.

Posted in Register Abstraction Model with RAL | 3 Comments »

A RAL example with Designware VIP

Posted by S. Varun on 17th March 2011

I often get asked how best RAL ought to be used with DesignWare VIP. Since several of these VIPs provide a mechanism to
program registers across different DUTs, I felt it would be useful to create an example with the DesignWare AMBA AHB VIP and RAL.

The example has a structure as shown in the block diagram below,

    |                                             |
    |         Register Abstraction Layer          |
    |                                             |
 |                |
 |    RAL2AHB     |
 |  ------------  |     ------------------       -----------      -----------
 | | AHB MASTER | |----| HDL Interconnect |-----| AHB SLAVE |----| Resp Gen  |
 |  ------------  |     ------------------       -----------      -----------
 |                |

It uses a dummy HDL interconnect that has two AHB interfaces to connect a VIP AHB master component with a VIP AHB slave
component. I have created a dummy register specification in "ahb_advanced_ral_slave.ralf". 

This example lacks a real AHB based DUT with real registers and hence the AHB slave VIP components' internal memory is
used to model the register space of the system. The RALF specification is then used to generate the System Verilog RAL
model using the "ralgen" utility as shown by the command-line below.

% ralgen -l sv -t ahb_advanced_slave ahb_advanced_ral_slave.ralf -c b -c a -c f

The above command will dump an SV-based RAL model in a file named "". This model is instantiated
within the top-level environment class, which is an extension of the "vmm_ral_env" class. You may already be
aware that a "vmm_ral_env"-based environment has to be used for RAL verification. Once instantiated, the model is registered
using the vmm_ral_access::set_model() method. This completes the RAL model instantiation and registration, leaving only the
translation logic, which is the crux of the task.

Note: "vmm_ral_env" is extended from "vmm_env". Check the RAL userguide to see the additional members.

In the block diagram above, RAL2AHB is the block that translates a generic RAL access into a command, in this
case an AHB transfer. The functional logic that translates a generic READ/WRITE into an AHB master
transfer is within "vmm_rw_xactor::execute_single()". The code snippet below shows how the translation is done.

// --------------------------------------------------------------------
task ral2ahb_xlate::execute_single(vmm_rw_access tr);

  // The generic read/write being translated into a AHB transfer.
  ahb_xact_fact.data_id        = tr.data_id;
  ahb_xact_fact.scenario_id    = tr.scenario_id;
  ahb_xact_fact.stream_id      = tr.stream_id;

  // Copying over the data width from the system configuration
  ahb_xact_fact.m_enHdataWidth = vip_cfg.m_oSystemCfg.m_enHdataWidth;

  // Setting the burst type to SINGLE & number of beats to 1
  ahb_xact_fact.m_enBurstType  = dw_vip_ahb_transaction::SINGLE;
  ahb_xact_fact.m_nNumBeats    = 1;

  // Copying the address generated by RAL into the AHB address of the AHB transfer
  ahb_xact_fact.m_bvAddress    = tr.addr;

  // Copying the size information over to the AHB transaction. RAL provides
  // the size in terms of bits and the dw_vip_ahb_master_transaction class takes
  // it in terms of bytes.
  ahb_xact_fact.m_nNumBytes    = tr.n_bits/8;
  ahb_xact_fact.m_enXferSize   = dw_vip_ahb_transaction::xfer_size_enum'(func_log(tr.n_bits) - 3);
  ahb_xact_fact.m_bvvData      = new[ahb_xact_fact.m_nNumBytes];

  // Setting the transfer type WRITE/READ based on the kind value generated by RAL
  if (tr.kind == vmm_rw::WRITE) begin
    ahb_xact_fact.m_enXactType = dw_vip_ahb_transaction::WRITE;
  end else begin
    ahb_xact_fact.m_enXactType = dw_vip_ahb_transaction::READ;
  end

  // Unpacking the RAL write data element into the byte-sized data queue available
  // within the dw_vip_ahb_master_transaction class
  if (tr.kind == vmm_rw::WRITE) begin
    for (int i = 0; i < tr.n_bits/8; i++) begin
       ahb_xact_fact.m_bvvData[i] = >> 8*i;
    end
  end

  // Put the AHB transaction object into the AHB master's input channel

  // Wait for the transfer to END

  // Pack the READ data from the AHB transfer back into the RAL transactions data
  // member
  if (tr.kind == vmm_rw::READ) begin = 0;
     for (int i = 0; i < xact.m_nNumBytes; i++) begin += xact.m_bvvData[i] << 8*i;
     end
  end

  // Collecting the status of the transfer and returning it to RAL
  if (xact.m_nvRespLast[0] == 0)
     tr.status = vmm_rw::IS_OK;
     tr.status = vmm_rw::ERROR;

endtask: execute_single
// --------------------------------------------------------------------

The created translator class also has to be instantiated within the environment class and has to be registered using
"vmm_rw_access::add_xactor()" as shown below,

// --------------------------------------------------------------------
class ahb_advanced_ral_env extends vmm_ral_env;

  ral2ahb_xlate    ral_to_ahb;

  virtual function void build();
    ral_to_ahb = new("AHB RAL MASTER XACTOR", master_mp, cfg.cfg_master);
    this.ral.add_xactor(ral_to_ahb);
  endfunction

endclass
// --------------------------------------------------------------------

At this juncture, we are set to run the RAL tests and easily program the registers using the abstracted model
with simple APIs, as shown below:

     env.ral_model.slave_block.REGA.set( ... );
     env.ral_model.slave_block.REGB.set( ... );

The above section shows how to set up single transfers. For burst transfers, there is a slight variation
wherein you provide the burst translation within the vmm_rw_xactor::execute_burst() task. The code snippet
below shows this.

// --------------------------------------------------------------------
task ral2ahb_xlate::execute_burst(vmm_rw_burst tr);

  ahb_xact_fact.data_id        = tr.data_id;
  ahb_xact_fact.scenario_id    = tr.scenario_id;
  ahb_xact_fact.stream_id      = tr.stream_id;
  ahb_xact_fact.m_enHdataWidth = vip_cfg.m_oSystemCfg.m_enHdataWidth;
  ahb_xact_fact.m_nNumBeats    = tr.n_beats;
  ahb_xact_fact.m_bvAddress    = tr.addr;
  ahb_xact_fact.m_nNumBytes    = tr.n_bits/8*tr.n_beats;
  ahb_xact_fact.m_enXferSize   = dw_vip_ahb_transaction::xfer_size_enum'(func_log(tr.n_bits) - 3);

  if (tr.kind == vmm_rw::WRITE) begin
    ahb_xact_fact.m_bvvData    = new[ahb_xact_fact.m_nNumBytes];
    ahb_xact_fact.m_enXactType = dw_vip_ahb_transaction::WRITE;
  end else begin
    ahb_xact_fact.m_enXactType = dw_vip_ahb_transaction::READ;
  end

  // Mapping the number of beats to the appropriate AHB burst type
  case (tr.n_beats)
    4       : ahb_xact_fact.m_enBurstType = dw_vip_ahb_transaction::INCR4;
    8       : ahb_xact_fact.m_enBurstType = dw_vip_ahb_transaction::INCR8;
    16      : ahb_xact_fact.m_enBurstType = dw_vip_ahb_transaction::INCR16;
    default : ahb_xact_fact.m_enBurstType = dw_vip_ahb_transaction::INCR;
  endcase

  // Write cycle
  if (tr.kind == vmm_rw::WRITE) begin
     for (int i = 0; i < tr.n_beats; i++) begin
        for (int j = 0; j < tr.n_bits/8; j++) begin
           ahb_xact_fact.m_bvvData[i*(tr.n_bits/8) + j] =[i] >> 8*j;
        end
     end
  end

  // Put the AHB burst transaction into the AHB master's input channel

  // Read cycle
  if (tr.kind == vmm_rw::READ) begin = new[tr.n_beats];
     for (int i = 0; i < tr.n_beats; i++) begin[i] = 0;
        for (int j = 0; j < tr.n_bits/8; j++) begin
 [i] += xact.m_bvvData[i*tr.n_bits/8 + j] << 8*j;
        end
     end
  end

  // Collecting the status of the transfer and returning it to RAL
  if (xact.m_nvRespLast[0] == 0)
     tr.status = vmm_rw::IS_OK;
     tr.status = vmm_rw::ERROR;

endtask: execute_burst
// --------------------------------------------------------------------

For invoking burst transfers in RAL, the vmm_ral_access::burst_write() & vmm_ral_access::burst_read() tasks
have to be used as shown below;

     env.ral.burst_write(status, 8'h00, 4, 8'h0f, exp_data, , 32);
     env.ral.burst_read(status, 8'h00, 4, 8'h0f, 4, act_data, , 32);

The complete example, "tb_ahb_vmm_10_advanced_ral_sys.tar.gz", is downloadable from SolvNet. You will need DesignWare
licenses to compile and run the example. Please follow the instructions in the README to compile and run it.
The example can be used as a reference for creating a RAL environment with other DesignWare VIP titles as well.
Do write to me if you have any questions on the example.

Posted in Register Abstraction Model with RAL, VMM | 1 Comment »

Register Programming using RAL package

Posted by Vidyashankar Ramaswamy on 15th February 2011

There are many methods available to program (read/write) registers in a design using RAL.

1.    ral_model::read()/write(): This is the old-fashioned method where you specify the address and the data. There is no need to know the register by name.

              EX:, addr, data, . . .);

 2.    ral_model::read_by_name()/write_by_name(): You have to specify the register name and data to execute this method. Here the register name is hard-coded.

              EX: ral_model.read_by_name(status, “reg_name”, data, . . .);

3.    ral_reg::read()/write(): Here you specify the hierarchical path to the register to execute the task. Only the value needs to be supplied.


Option 1 works fine at all levels of the test bench (block, core or system) if a portable addressing scheme is used for register programming. The only issue is that it is not self-documenting. For instance, a sequence of writes/reads to program a DDR memory controller makes it hard to understand and debug the code when the controller does not behave as expected.

Option 2 uses the register name to perform the read/write. This code is self-documenting and works great in a block-level test bench. However, this option breaks when used at the core or system level. Why? Let us analyze the following situation:

A unit-level test bench using read_by_name()/write_by_name() works because the register names are unique. However, when multiple instances of the same block exist at the system level, multiple registers exist with the same name, creating name conflicts. Because read_by_name()/write_by_name() uses a flat name space, it cannot guarantee a properly scoped name that stays the same from block to top, so it should not be used. To reuse the same code across different levels of the test bench, one should use option 3, which is scalable. The following code segment demonstrates this concept:

task init_ddr_controller (vmm_ral_block ddr_block);

     vmm_ral_reg  reg_in_use;

     reg_in_use = ddr_block.mode_reg_0;

      . . .

endtask


Note: You can  use,  reg_in_use = ddr_block.get_reg_by_name(“reg_name”); if the task is written to take in register name as well.

Then in core/integration or in SoC  level:

   init_ddr_controller (system.blk2);

Please do share your comments/ideas on this.

Posted in Register Abstraction Model with RAL | 1 Comment »

You can also “C” with RAL.

Posted by S. Varun on 18th November 2010

The RAL C interface provides a convenient way for verification engineers to develop firmware code which can be debugged on an RTL simulation of the design. The interface provides a rich set of C APIs through which one can access the fields, registers and memories included within the RAL model. The developed firmware code can interface with the RAL model running on a SystemVerilog simulator using DPI (Direct Programming Interface) and can also be re-used as application-level code compiled on the target processor in the final application, as illustrated in the figure below.


The “-gen_c” option available with “ralgen” has to be used for this; it causes the generator to produce the files containing the APIs that interface to the C domain. The generated APIs can be used in one of two forms:

1. To interface to the SystemVerilog RAL model running on a SystemVerilog simulator using DPI, and

2. As pure standalone C code designed to compile on the target processor.

Typically firmware runs a set of pre-defined sequences of writes/reads to the registers on a device performing functions for boot-up, servicing interrupts etc. You generally have these functions coded in C, and these would need to access the registers in the DUT. Using the RAL C model, these functions can be generated so that the firmware can now perform the register access through the SystemVerilog RAL model. Thus, this allows firmware and application-level code to be developed and debugged on a simulation of the design and the same functions can later be used as part of the device drivers to perform the same tasks on the hardware.

In the first scenario, when executing the C code within a simulation, it is necessary for the C code to be called by the simulation to be executed and hence the application software’s main() routine must be replaced by one or more entry points known to the simulation. All entry points must take at least one argument that will receive the base address of the RAL model to be used by the C code. The C-side reference is then used by the RAL C API to access required fields, registers or memories.


Consider a sequence of register accesses performed at the boot time of a router as defined in function system_bootup() shown below,

File : router_test.c


/*when used as a pure C model for the target processor */

int main() {

    void *blk = calloc(410024, 1);

    system_bootup((unsigned int)((((size_t)blk) >> 2) + 1));

    return 0;
}

void system_bootup (unsigned int blk) {

    unsigned int dev_id  = 0xABCD;
    unsigned int ver_num = 0x1234;

    /* Invoking the generated RAL C APIs */

    ral_write_DEV_ID_in_ROUTER_BLK(blk, &dev_id);
    ral_write_VER_NUM_in_ROUTER_BLK(blk, &ver_num);
}


In the above example, “ral_write_DEV_ID_in_ROUTER_BLK()” and “ral_write_VER_NUM_in_ROUTER_BLK()” are C APIs generated by “ralgen” to access the registers named “DEV_ID” and “VER_NUM” located in a block named “ROUTER_BLK”. In general, registers can be accessed using the “ral_read_<reg>_in_<blk>” and “ral_write_<reg>_in_<blk>” macros present within the RAL C interface.

The above C function can also be called from within a SystemVerilog testbench via DPI:

File :

import "DPI-C" context task system_bootup(int unsigned ral_model_ID);

program boot_seq_test();

   router_blk_env env = new();

   initial begin

      // Configuring the DUT

      // Calling the C function using DPI
      system_bootup(env.ral_model.get_block_ID()); // 'ral_model' is the instance of the generated SV model in the env class

   end

endprogram



In the example above, system_bootup is the service entry point which is called from the SV simulation and is passed the RAL model reference. The ‘C’ code then executes, and the simulation freezes until one of the register accesses is made, which shifts execution to the SV side.


The entire execution in ‘C’ happens in ‘0’ time on the simulation timeline. The RAL C API hides the physical addresses of registers and the position and size of fields. The hiding is performed by functions and macros rather than an object-oriented structure like the native RAL model in SystemVerilog. This eliminates the need to compile a complete object-oriented model into embedded processor object code with a limited amount of memory.

Thus, by using the RAL C interface one can develop compact and efficient firmware code while preserving as much as possible of the abstraction offered by RAL. I hope this was useful; do let me know your thoughts if you plan to use this flow to have your firmware and application-level code developed and debugged on a simulation of the design.

Please refer to the RAL userguide for more information. You can also refer to the example present within VCS installation located at $VCS_HOME/doc/examples/vmm/applications/vmm_ral/C_api_ex.

Posted in Register Abstraction Model with RAL, SystemC/C/C++, SystemVerilog, VMM | Comments Off

Modeling ISRs with VMM RAL

Posted by Amit Sharma on 4th November 2010

In a verification environment, different components may be trying to access the DUT registers and memories. For example, the BFM might be programming some registers while the bus monitor might be sampling the values of these registers. In specific cases, there may be an interrupt monitor which triggers an Interrupt Service Routine (ISR) whenever it sees an interrupt pin toggling in the interface. The ISR might then have to read the interrupt registers and clear the interrupt bit(s) through a front-door access.

To ensure that different components in a verification environment can access the DUT registers at any given point in time, the RAL model instantiated in the environment can be passed to different VMM components. These different components whose methods are executing in separate parallel threads can now access the same set of registers in the DUT through the RAL model. A question many folks ask is: when there are multiple parallel register accesses, how do they get scheduled through the RAL layer?  Here is an explanation of how threads are scheduled in RAL:

A register read/write from different threads is comparable to an ‘atomic’ channel.put() from different threads. Hence accesses get scheduled in the order in which the threads issue them.

A write/read would basically consist of the following atomic operations:

- A generic vmm_rw_access transaction with its fields (addr, data, kind etc) being populated and posted onto the execute_single()   task of the Translate Transactor

- The transaction being translated in the execute_single() task and pushed into the input channel of the user BFM

- The transaction being retrieved through get/activate in the User BFM main thread and then subsequently driven to the DUT interface

Thus the ‘posting’ of RAL accesses, whenever a Read/Write/Mirror/Update is invoked, happens in the same order they are issued. Subsequently, the execute_single() task just translates the generic RW RAL transaction into a transaction the user BFM understands, and does not change the order.
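As a minimal sketch of this ordering (the register names CTRL/STATUS and the ral_model handle are hypothetical), two parallel threads each post one access; whichever thread executes its call first has its transaction translated and driven first:

```systemverilog
vmm_rw::status_e wr_status, rd_status;
bit [63:0] rdata;

fork
   // Each call below performs one atomic 'post' into the RAL access
   // layer; the accesses are driven in the order the calls execute.
   ral_model.CTRL.write(wr_status, 'hA5);
   ral_model.STATUS.read(rd_status, rdata);
join
```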

Now, how do we handle a scenario where specific register accesses, like those coming from an ISR, need to be given higher priority than accesses from other threads in a verification environment? VMM and RAL methods give us specific hooks to achieve this requirement. Here is one option for how this can be done.

If we look at the vmm_ral_reg::write method, we have the given signature:

virtual task write( output vmm_rw::status_e status,

    input bit [63:0] value,

    input vmm_ral::path_e path = vmm_ral::DEFAULT,

    input string domain = "",

    input int data_id = -1,

    input int scenario_id = -1,

    input int stream_id = -1);

Now, a generic RAL transaction that gets created through any register access carries the same data_id, scenario_id and stream_id arguments, which get passed on from the read/write call. These arguments help us tag and track the transactions if desired, and they can be used in the execute_single() task to ensure that accesses from ISRs have the highest priority. But first, if we go back to the earlier post by Varun, Issuing concurrent READ/WRITE accesses to the same register on two physical interfaces using RAL, we note that if the RAL model is processing a ‘register access’, it will not initiate the next one until the earlier one is completed. So, this is what we need to get done: we first use the RAL proxy transactor to schedule the ISR register access simultaneously. After that, we flush out any existing accesses and prevent any new register access through RAL until the access from the ISR is completed.

This is how it will be done:


For a normal register access, a read/write method will be invoked as follows:

          ral_model.<reg_name>.write(stat, wdata, , "AHB"); // 'wdata' is the value to be driven, "AHB" is the domain/physical interface

For a register access in an ISR modeled in a MS Scenario, we have:

         env.bfm.to_ahb.grab(this); // grabs the channel

, env.ral_model.<reg_name>.get_address_in_system("AHB"), data, 32, "AHB", , , 1); // the last argument is again the "stream_id" argument

         env.bfm.to_ahb.ungrab(this); // allows the channel to be accessed from other threads once the ISR is completed

Once this is done, the execute_single() task inside the translating transactor will know whether an access comes from an ISR and can ensure that ISR accesses are processed with priority, through a combination of the functionality provided by the VMM channel methods and a semaphore:

virtual task execute_single(vmm_rw_access tr);

        AHB_tr cyc;
        AHB_tr cyc_active; // to hold any transaction residing in the channel's active slot
        // 'sem' is assumed to be a class member of this transactor, constructed
        // once with a single key (semaphore sem = new(1);) so that regular
        // accesses are blocked while a register access from an ISR is processed

       // Translate the generic RW into a simple RW
       cyc = new;
       cyc.addr = tr.addr;
       if (tr.kind == vmm_rw::WRITE) begin
          cyc.cycle = simple_tr::WRITE;
 =;
       end
       else begin
          cyc.cycle = simple_tr::READ;
       end

       if (tr.stream_id != 1) begin
          sem.get(1); // regular register access gets blocked here while a reg access from an ISR is processed
          this.bfm.to_ahb.put(cyc);
          sem.put(1); // return the key once the regular access has been posted
       end
       else begin
          sem.get(1); // the ISR access takes the key so that no new regular access can start
          if (this.bfm.to_ahb.is_full()) begin
             this.bfm.to_ahb.activate(cyc_active); // removes any existing transaction in the channel's active slot so that the current access can be pushed through
             this.bfm.to_ahb.put(cyc);             // pushes the ISR access
             this.bfm.to_ahb.put(cyc_active);      // restores the original transaction back into the channel
          end
          else this.bfm.to_ahb.put(cyc);
          sem.put(1); // the ISR access puts back the key for normal accesses to resume
       end

       // Send the result back to the RAL
       if (tr.kind == vmm_rw::READ) =;
       tr.status = vmm_rw::IS_OK;

      endtask: execute_single

Posted in Register Abstraction Model with RAL, VMM infrastructure | Comments Off

Shared Register Access in RAL through multiple physical interfaces

Posted by Amit Sharma on 19th September 2010

Amit Sharma, Synopsys

Usually, designs have a single interface but some designs may have more than one physical interface, each with accessible registers or memories. RAL supports designs with multiple physical interfaces, as well as registers and memories shared across multiple interfaces.

In RAL, a physical interface is called a domain. Only blocks and systems can have domains. Domains can contain registers and memories.


Now, how do you enable a register or a memory to be shared across multiple physical interfaces or ‘domains’ in RAL? This is done by declaring the register/memory as ‘shared’ in the RALF file and instantiating it in more than one domain. Once this is done, the functionality will be enabled in the generated RAL model.

Let us take an example. Suppose the design block has two domains or interfaces (say pci/ahb) which have shared access to the register STATUS. This can be specified through the ‘shared’ keyword in the register description in the RALF file, generating a RAL block model, as follows.
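The figure referenced here is not reproduced in this archive; as a hedged sketch (register, field and offset values assumed, and the exact RALF domain syntax may vary with your ralgen version), the specification looks roughly like this:

```
# dut.ralf -- sketch: STATUS declared 'shared' and instantiated in two domains
register STATUS {
   shared;
   bytes 4;
   field status {
      bits 32;
      access rw;
   }
}

block dut {
   bytes 4;
   domain pci {
      register STATUS @'h10;
   }
   domain ahb {
      register STATUS @'h20;
   }
}
```

Running ralgen on such a file produces a block model in which STATUS is reachable from both domains.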


In the example above, the text box on the left shows a snippet of the generated code. You can see that domains do not create an additional level of hierarchy: the registers belonging to the two domains are available in the block’s flat namespace and can be accessed directly, without specifying the domain name in the access hierarchy.

Once a register is defined as ‘shared’, you can go ahead and access that register through the different domains. A typical usage scenario for a ‘shared’ register is that it is read from one interface and written from another. The two physical BFMs are easily mapped to the corresponding domains in the SV environment.

The following snippet of code shows how this is done:

this.ahb = new(…); // AHB BFM instantiation

this.ral.add_xactor(this.ahb, “ahb”); //tying the AHB BFM to the “ahb” domain

this.pci = new(…); // PCI BFM instantiation

this.ral.add_xactor(this.pci, “pci”); //tying the PCI BFM to the “pci” domain

Once the mapping is done, accessing a specific ‘shared’ register is quite simple: you just additionally specify the ‘domain’ name in your read/write task.
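For instance (register and model names assumed, following the vmm_ral_reg::write signature shown in the earlier post), you might write through one domain and read back through the other:

```
// Write the shared STATUS register through the "pci" domain...
ral_model.dut.STATUS.write(status, 'haa, , "pci");

// ...and read it back through the "ahb" domain, rdata, , "ahb");
```

The empty positional argument leaves the access path at vmm_ral::DEFAULT, so only the domain differs between the two calls.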


When you make the writes/reads as described above, the accesses would be routed through the different BFMs corresponding to the different domains as specified through the ‘domain’ argument in the read/write access tasks as seen in the code specified in the textbox above.

In the test environment, if the simulation is run with ‘trace’ or ‘debug’ verbosity, details of the accesses to the shared register through the different domains can be observed.

The following lines show a snippet of the messages going to STDOUT:


Trace[DEBUG] on RVM RAL Access(Main) at 135:

Writing ‘h00000000000000aa at ‘h0000000000000A004 via domain “pci”…

Trace[DEBUG] on RAL(register) at 245:

Wrote register “ral_model.INT_MASK” via frontdoor with: ‘h00000000000000aa

Trace[DEBUG] on RVM RAL Access(Main) at 355:

Read ‘h00000000000000aa from ‘h0000000000000002 via domain “ahb”…

Trace[DEBUG] on RAL(register) at 355:

Read register “ral_model.INT_MASK” via frontdoor: ‘h00000000000000aa


By the way, one of the pre-defined tests that ships with VMM RAL is the “shared_access” test. It exercises all shared registers and memories using the following process for each domain (it requires at least one domain with READ capability, or backdoor access):

- Write a random value via one domain

- Check the content via all other domains.

Hence, without writing a single line of test code, you can verify the basic functionality of all shared registers/memories of your DUT through this test.

Posted in Register Abstraction Model with RAL | 1 Comment »

RAL: Using TCL to conditionally generate registers

Posted by Vidyashankar Ramaswamy on 12th September 2010


Let us say you have a DMA RTL block and its registers are specified in RALF format. Now you want to instantiate it 4 times at the system level. The most common approach is to use array notation. What if you want to include one (or more) extra registers for only one of the instances? The trivial approach is to manually create two block definitions with the appropriate registers. What if I want to create 5 different flavors, each with a very small addition or deletion of a register? Do I have to create 5 of them by duplicating the same content? No! This can be done automatically using the ralgen tool and its built-in Tcl interpreter. Let us take a look at the following simple example.

# File: dma_registers.ralf

# Register definitions

register device_id {
   bytes 4;
   field device_id {
      bits 32;
      access ro;
      soft_reset 32'h0;
   }
}

register command_status {
   field cmd_status {
      bits 31;
      access rw;
      reset 32'h0;
   }
}

# DMA block code generation

for {set i 0} {$i < 2} {incr i} {
   block dma_$i {
      bytes 4;
      register device_id @'h100;
      if {$i == 1} {
         register command_status @'h124;
      }
   }
}

# System top level

system SYSTEM_TOP {
   bytes 4;
   block dma_0 @'h0000;
   block dma_1[3] @'h1000 + 'h1000;
}

This example shows a DMA block which contains 2 registers. Say I want to create two different DMA blocks, of which only one instance has the command status register. Let us call them dma_0 and dma_1. This can be easily achieved using a for-loop construct to conditionally include the register, as shown above. The system top instantiates one dma_0 type and three dma_1 type blocks. The following is the output from the ralgen tool.

class ral_sys_SYSTEM_TOP extends vmm_ral_sys;

   rand ral_block_dma_0 dma_0;
   rand ral_block_dma_1 dma_1[3];
   . . .

endclass : ral_sys_SYSTEM_TOP

Can I use the loop construct at the system level too? Yes, you can replace the array notation with the loop construct. Though this is a bit more complex, you get better control over the instantiation names generated by the tool. The following code demonstrates the concept.

 system SYSTEM_TOP {
   bytes 4;
   for {set i 0} {$i < 2} {incr i} {
      set OFFSET [expr $i * 2048];
      if {$i == 1} {
         for {set j 0} {$j < 3} {incr j} {
            set OFFSET_1 [expr $OFFSET + ($j+1) * 2048];
            block dma_$i=dma_$i$j @$OFFSET_1;
         }
      } else {
         block dma_$i @$OFFSET;
      }
   }
}

The tool output is shown below.

 class ral_sys_SYSTEM_TOP extends vmm_ral_sys;

   rand ral_block_dma_0 dma_0;
   rand ral_block_dma_1 dma_10;
   rand ral_block_dma_1 dma_11;
   rand ral_block_dma_1 dma_12;
   . . .

endclass : ral_sys_SYSTEM_TOP

 Please consider creating separate RALF files for blocks and systems; this helps with block-to-top reuse of the RALF code.



Posted in Register Abstraction Model with RAL | Comments Off

Issuing concurrent READ/WRITE accesses to the same register on two physical interfaces using RAL.

Posted by S. Varun on 9th August 2010

Currently, RAL does not schedule multiple concurrent READ/WRITE accesses to the same register at the same time. RAL assumes all physical transactions to be atomic, i.e. a transaction completes before the next cycle is started. Even if we try to do a parallel READ, RAL will block one of the two accesses and schedule it sequentially. This ensures that the accesses are not corrupted, and also reflects the fact that the main idea behind RAL is abstracting accesses to the registers.

However, in specific cases, there might be a verification requirement to create specific corner cases on the buses.

A typical corner case to check parallel access to a shared register would be as shown in the code snippet below,

Sample code :

       fork
          /* READ the CONTROL_REG via the APB interface */
, data, , "APB");

          /* READ the CONTROL_REG via the AHB interface */
, data, , "AHB");
       join

Our intention here is to query the control register from two different domains, "APB" and "AHB", in parallel, or to write
through one interface and do a read access through the other. But as already mentioned, a controlling semaphore
mechanism tied to each register will sequentially schedule the above two accesses. To create such corner cases we
can use the RAL proxy transactor with "vmm_ral_access::read()/write()", which is not going to be blocked, since the semaphore
resides in the register. The code below illustrates how we can do this.

Sample code :

       fork
          /* READ the CONTROL_REG via the APB interface */
, env.ral_model.blk_inst.CONTROL_REG.get_address_in_system("APB"), data, 32, "APB");

          /* READ the CONTROL_REG via the AHB interface */
, env.ral_model.blk_inst.CONTROL_REG.get_address_in_system("AHB"), data, 32, "AHB");
       join

The "vmm_ral_reg::get_address_in_system()" method can be used here to generate the correct address corresponding
to the domain of interest and the same can be passed with the "vmm_ral_access::read()/write()" task. 

We can also do the following:

1)	Read CONTROL_REG through one of the interfaces using "", and
2)	read the same register via vmm_ral_access::read() through the other interface, as illustrated in the
        sample code below.

Sample code :

       fork
          /* READ the CONTROL_REG via the APB interface */
, data, , "APB");

          /* READ the CONTROL_REG via the AHB interface */
, env.ral_model.blk_inst.CONTROL_REG.get_address_in_system("AHB"), data, 32, "AHB");
       join

Because the semaphore is in the register, the access through the proxy transactor is not going to be blocked. 

Also, if the DUT and the physical transactor support pipelined read/write accesses, the generic transactions can
be executed concurrently as well. You must first enable this capability by defining the corresponding compile-time macro.
Posted in Register Abstraction Model with RAL | Comments Off

Accessing Virtual Registers in RAL

Posted by Amit Sharma on 5th August 2010

Amit Sharma, Synopsys

In one of my previous posts on Virtual Registers, I talked about how you can use RAL to model virtual registers and fields, which are an efficient means of implementing a large number of registers in a memory or RAM instead of in individual flip-flops. I also mentioned that they are implemented as arrays associated with a memory.

In this post, I will talk about how you access these registers through RAL. Normal registers can be accessed using their hierarchical name in RAL, and RAL generates the required offsets and addresses. For virtual registers, however, you need to provide the index ID along with the virtual register name for read and write operations. RAL then generates the offset address based on the index ID of the virtual register.

Consider the following example:
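The original figure is not reproduced in this archive; as a hedged sketch (sizes, offsets and field names are assumptions, chosen to be consistent with the address arithmetic 4 * 0x0004 + 0x1000 shown below), the RALF specification would declare DMA_BFRS as a virtual register array overlaid on a RAM:

```
memory ram {
   size 8192; bits 8; access rw;
}

block dut {
   bytes 4;
   memory ram @0x0000;
   # 64 virtual registers, 4 bytes each, starting at offset 0x1000 of 'ram'
   virtual register DMA_BFRS[64] ram @0x1000 +4 {
      bytes 4;
      field dma_bfr { bits 32; }
   }
}
```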


The above RALF specification would translate into the following SV classes in the RAL model



‘DMA_BFRS’, the instance of the virtual register class ‘ral_vreg_dut_DMA_BFRS’, is not an array in the RAL model, and is thus different from a typical register array modeled in RAL.

Now, how do you access the individual registers mapped to the memory?  You have to specify the index as an argument to the read/write methods. For example, to access the 4th index of this array, the access would look like:

blk.DMA_BFRS.write(4, status, 'hFF);

RAL would ultimately access the RAM to enable this access. Hence, the following are functionally equivalent:

blk.DMA_BFRS.write(4, status, 'hFF);
blk.ram.write(4 * 0x0004 + 0x1000, status, 'hFF);

The following illustration explains this point:


You can see that you now have an option to access these registers in different modes, but both will eventually go through the RAM. You can leverage the virtual register callbacks or the RAL memory callbacks for additional customization and extensibility, in addition to the other capabilities that you get when you are using VMM RAL.

Hope this was useful. Do comment on any other specific functionality that you would look for when modeling these kinds of registers.

Posted in Register Abstraction Model with RAL | Comments Off

Low Power Verification for telecom designs with VMM-LP

Posted by Paul Kaunds on 2nd August 2010

Paul Kaunds, founder, Kacper Technologies

Today power has become a dominant factor for most applications including hand-held portable devices, consumer electronics, communications and computing devices. Market requirements are shifting focus from traditional constraints like area, cost, performance and reliability to power consumption.

For efficient power management, power intent has to be reviewed at each phase along the design flow — starting from RTL, during synthesis, and in place and route in the physical design. Power specifications of the design will be common throughout the flow. In order to implement uniform power flow, we need a common format which is easily understood by all tools used along the flow with reusability and that can be declared independent of the RTL. The creation of the UPF (unified power format) as the IEEE standard has been the best solution for specifying power intent for design implementation as well as verification. UPF sits beside design and verification, and develops relationship between the design specifications and low power specifications.

Understanding the driving factors, designers have opted for advanced low power design techniques like Power Gating, Multi Supply Multi Voltage (MSMV), Power-Retention, Power-Isolation etc. Such low power constraints have increased design complexity, which have a direct impact on verification. This makes the verification engineers’ job more challenging as they have to verify the power intent bugs along with functionality.

Some of the low-power design verification challenges are:

  • Power switch off/on duration
  • Transitions between different power modes and states
  • Interfaces between power domains
  • Missing level shifters, isolation and retention
  • Legal and illegal transitions
  • Clock enabling/toggling
  • Verifying retention registers, isolation cells and level-shifter strategies

To tackle these challenges, we need a structured and reusable verification environment which is power-aware and encapsulates the best practices for verifying such complex low power designs. To address the low power requirements of one of our complex telecom designs, we made use of the Verification Methodology Manual for Low Power (VMM-LP) for SONET/SDH verification.

We developed a framework for accelerating the verification of low power designs. Our verification environment included VMM along with UPF, RAL, and a power state manager controlled by a power management unit.

Generation of Power control signals

Using extensive features provided by VMM, we developed a VMM based power generator to enable the user to generate the power control signals as random, constrained random and directed to verify power management blocks, and implement low power strategies like Retention Registers and Isolation Cells. This power generator was designed with a creed that it can be hooked into any VMM based environment. It provides the user the ability to generate power signals without any hassles in generating the complex power control sequencing of such signals.


Power Scenario generation

Power scenario generation eased the verification of power-aware designs. To generate power signals, a user simply enters the ranges in a scenario file, based on the power specifications. The same generator can be used with any other VMM environment, without modification, for conventional verification. Some of the capabilities that can be made configurable in this scenario include varying retention, isolation and power widths. Additionally, other key capabilities were enabled:

  • Adheres to the standard power sequence as specified in VMM-LP
  • All power signals and power domains are parameterizable
  • The power generator can be hooked into any VMM environment
  • Gives a well-defined structure to define power sequences
  • Avoids overlapping of control signals
  • Allows multiple save/restore operations in each sequence

The Power state manager takes care of state transition operations by defining power domains, states, modes and their dependencies. It is the key holder for all power transitions that can monitor all transitions dynamically.

Automated Power Aware Assertion Library

Another mechanism to address the challenges in low power verification is the use of powerful LP assertions. Low power assertions are dealt with in conjunction with the design’s logic data checkers. VMM-LP assertion rules and coverage techniques helped us achieve a comprehensive low power verification solution, and offered handy recommendations to ensure a consistent and portable implementation of the methodology.

To increase our efficiency in low power telecom design verification, we also developed an automated power-aware assertion library which is generic to any domain and can be hooked to any design. The assertion library is built on top of the VMM-LP rules and guidelines with standard power sequencing. Key features of our assertion library include:

  • Assertions to verify access of software-addressable registers in on/off conditions
  • Clock toggling during power on/off
  • Reset during power off
  • Power control signal sequencing, etc.

More details on our flow and usage can be found online in the paper we recently presented at the Synopsys Users Group meeting (SNUG) in Bangalore.

Posted in Low Power, Register Abstraction Model with RAL | 3 Comments »

Modeling Virtual Registers and Fields in RAL

Posted by Amit Sharma on 27th July 2010

Amit Sharma, Synopsys

Typically, fields and registers are assumed to be implemented in individual, dedicated hardware structures with a constant and permanent physical location, such as a set of D flip-flops. However, for some register sets that are typically large in number, implementing them in a memory or RAM instead of in individual flip-flops is more efficient. These are virtual fields and virtual registers; as they are implemented in a RAM, their physical location and layout are defined by an agreement between the hardware and the software, not by their physical implementation.

Virtual fields and registers can be modeled using RAL by creating a logical overlay on a RAL memory model that can be accessed as if they were real physical fields and registers. Virtual fields are contained in virtual registers.



Virtual registers are defined as contiguous locations within a memory and can span multiple memory locations. They are always composed of entire memory locations, not fractions of them, and are modeled as arrays associated with a memory.

The association of a virtual register array with a memory can be static or dynamic, but the structure of these registers should be specified in the RALF file.


Static virtual registers are associated with a specific memory and are located at specific offsets within this memory. This association should be specified in the RALF file as shown in the following example. This association is permanent and cannot be broken at runtime.

Static Virtual Register Array:

Example 1 :

            memory ram1 {
               size 32; bits 8; access rw; initial 0;
            }

            block dut {
               bytes 2;
               memory ram1 @0x0000;
               virtual register vreg[3] ram1 @0x5 +2 {
                  bytes 2;
                  field f1 { bits 4; }
                  field f2 { bits 4; }
                  field f3 { bits 8; }
               }
            }

        An array of 3 virtual registers is associated with the memory ‘ram1’, starting at offset 0x5. The increment value is specified as 2 because a minimum of 2 memory locations are required to implement one virtual register. The 3 virtual registers are associated with ram1 as shown:

    vreg[0] is associated at offset 0x5 of ram1.
    vreg[1] is associated at offset 0x7 (0x5 + 2) of ram1.
    vreg[2] is associated at offset 0x9 (0x7 + 2) of ram1.


Dynamic virtual registers are associated with a user-specified memory and are located at user-specified offsets within that memory. The association is done at runtime. The dynamic allocation of virtual register arrays can also be performed randomly by a Memory Allocation Manager instance.

The number of virtual registers in the array and its association with the memory are specified in the SystemVerilog code and must be correctly implemented by the user. Dynamic virtual register arrays can be relocated or resized at runtime.

Dynamic Virtual Register Array:

Example 1:

            memory ram1 {
               size 64; bits 8; access rw; initial 0;
            }

            virtual register vreg {
               bytes 2;
               field f1 { bits 4; }
               field f2 { bits 4; }
               field f3 { bits 8; }
            }

            block dut {
               bytes 2;
               memory ram1 @0x0000;
               virtual register vreg=vreg1;
               virtual register vreg=vreg2;
            }

           Here, the virtual register structure is specified in the RALF file, but the association is done during runtime (in the env or testcase) in one of the following ways:

a. Dynamic specification:


   A virtual register array of size 4 is implemented starting at offset 'h20 of the memory ram1, with 2 as the increment value. The virtual registers will be associated as:

    vreg1[0] is associated at offset 0x20 of ram1.
    vreg1[1] is associated at offset 0x22 (0x20 + 2) of ram1.
    vreg1[2] is associated at offset 0x24 (0x22 + 2) of ram1.
    vreg1[3] is associated at offset 0x26 (0x24 + 2) of ram1.
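The original code figure is not reproduced here; assuming the vmm_ral_vreg::implement() signature (array size, memory handle, offset, increment), the dynamic association described above might be written as:

```
// Hedged sketch (hierarchy names assumed): implement 4 copies of vreg1
// in ram1, starting at offset 'h20, with an increment of 2
void'(env.ral_model.dut.vreg1.implement(4, env.ral_model.dut.ram1, 'h20, 2));
```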

b. Randomly implemented dynamic specification:  


   A virtual register array of size 5 is allocated randomly by the memory allocation manager (MAM). The allocated region is randomly selected within the address space managed by the specified MAM.
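Again as a hedged sketch, assuming the vmm_ral_vreg::allocate() signature and a MAM instance named 'mam' (both hypothetical names here):

```
// Randomly place 5 copies of vreg1 in the address space managed by 'mam'
void'(env.ral_model.dut.vreg1.allocate(5, mam));
```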


Hope this post helps you model and verify the registers and fields implemented in the DUT more efficiently. Please look out for the next blog post, where I will talk about the different modes through which you can access these registers and fields.

Posted in Register Abstraction Model with RAL | Comments Off

Learn about VMM adoption from customers – straight from SNUG India 2010

Posted by Srinivasan Venkataramanan on 1st July 2010

…Reflections from the core engineering team of CVC, fresh from SNUG India 2010

Jijo PS, Thirumalai Prabhu, Kumar Shivam, Avit Kori, Praveen & Nikhil – TeamCVC

Here is a quick collection of various VMM updates from SNUG India 2010, as seen by TeamCVC. Expect to hear more on VMM 1.2 from us soon, as I now have a young team all charged up with VMM 1.2 (thanks to Amit @SNPS). All the papers and presentations can be accessed through:

TI’s usage of VMM 1.2 & RAL

In one of the well-received papers, TI Bangalore talked about “Pragmatic Approach for Reuse with VMM1.2 and RAL”. The design is a complex digital display subsystem involving numerous register configurations. Not only is handling the register configurations a challenge, but so is reusing block-level subenvs at the system level with ease, with minimal rework and reduced verification time. The author presented their success with VMM 1.2 and RAL in addressing these challenges.

The key advanced VMM elements touched upon are:

· TLM 2.0 communication mechanism

· VMM Multi-Stream Scenario gen (MSS)


Automated Coverage Closure with ECHO

It is real and live: automated coverage closure is slowly becoming reality, at least in select designs and projects. Having been attempted by various vendors for a while, it is now available in VCS as the ECHO technology. At SNUG, TI presented their experience with ECHO and VMM-RAL. In their paper titled “Automating Functional Coverage Convergence and Avoiding Coverage Triage with ECHO Technology”, TI described how an ECHO-based methodology in a VMM RAL based environment can, in an automated manner, close the feedback loop in targeting coverage groups involving register configuration combinations, resulting in a significant reduction in verification time.

WRED Verification with VMM

In her paper on “WRED verification with VMM”, Puja shared her usage of advanced VMM capabilities for a challenging verification task. Specifically she touched upon:

· VMM Multi-Stream Scenario gen

· VMM Datastream Scoreboard with its powerful “with_loss” predictor engine

· VMM RAL to access direct & indirect RAMs & registers

What we really liked was seeing real application of some of these advanced VMM features – we were taught all of these during our regular CVC trainings and have even tried many of them on our own designs. It feels great to hear from peers about similar usage and to appreciate the value we derive out of VMM @CVC and the vibrant ecosystem that CVC creates around it.

System-Level verification with VMM

Ashok Chandran of Analog Devices presented their use of specialized VMM components in a system-level verification project. Specifically, he touched upon specialized VMM base classes like vmm_broadcast and vmm_scheduler.

At the end, the audience learnt about some of the unique challenges an SoC verification project can present. Even more interesting was the fact that the ever-growing VMM seems to have a solution for a wide variety of such problems, well thought out upfront – kudos to the VMM developers!

Ashok also briefed us on his team’s usage of relatively new VMM features such as vmm_record and vmm_playback, and how they help to quickly replay streams.

On the tool side, a very useful feature for regressions is the separate compile option in VCS.

VMM 1.2 for VMM users

Amit from SNPS gave a very useful and to-the-point update on VMM 1.2 for long-time VMM users. It was rejuvenating to listen to the VMM 1.2 run_tests feature and the implicit phasing techniques. Though they look like a little “magic”, these features are bound to improve our productivity, as there are fewer things to code and debug before moving on.

Amit also touched upon the use of TLM 2.0 ports and how they can be nicely used for integrating functional coverage, instead of using the vmm_callbacks.

The hierarchical component creation and configuration in VMM 1.2 puts us on track for the emerging UVM, and it is very pleasing to see how the industry keeps moving to more and more automation.

A truly vibrant ecosystem enabled by CVC -VMM Catalyst member

A significant addition to this year’s SNUG India was the DCE – Designer Community Expo – a genuine initiative by Synopsys to bring in partners to serve the larger customer base better, all under one roof. CVC, being the most active VMM Catalyst member in this region, was invited to set up a booth showcasing its offerings. We gave away several books, including our popular VMM adoption book and the new SVA Handbook, 2nd edition.

Here is a snapshot of CVC’s booth with our VMM and other offerings.


Posted in Coverage, Metrics, Register Abstraction Model with RAL, VMM | 1 Comment »

VMM 1.2.1 is now available

Posted by Janick Bergeron on 15th June 2010

We, in the VMM team, have been so busy improving VMM that we only recently noticed it has been almost a full year since we released an Open Source distribution of VMM. With the release of VCS 2010.06, we took the opportunity to release an updated Open Source distribution that contains all of the new features and capabilities of VMM now available in the VCS distribution.

I am not going to repeat in detail what changed (you can refer to the RELEASE.txt file for that), but I will point out two of the most important highlights…

First, this version supports the VMM/UVM interoperability package (also available for download). This interoperability package allows you to use UVM verification assets in your VMM verification environment (and vice versa). Note that the VMM/UVM interoperability package is also included in the VCS distribution (along with the UVM-1.0EA release) in VCS 2010.06.

Second, many new features were added to the VMM Register Abstraction Layer (RAL). For example, RAL now supports automatic mirroring of registers, either by passively observing read/write transactions or by actively monitoring changes in the RTL code itself. Another important addition is the ability to perform sub-register accesses when fields are located in individual byte lanes.

The Open Source distribution is the exact same source code as the VMM distribution included in VCS. Therefore, you can trust the robustness it has acquired in the field through many hundreds of successful verification projects.

Posted in Announcements, Interoperability, Register Abstraction Model with RAL | 1 Comment »

Simplifying test writing with MSSG and constraint switching feature of System Verilog

Posted by Tushar Mattu on 25th May 2010

Pratish Kumar KT  & Sachin Sohale, Texas Instruments
Tushar Mattu,  Synopsys

In an ideal world, one would want maximum automation and minimal effort. The same holds true when you want to uncover bugs in your DUV. You would want to provide your team members with an environment where they can focus their efforts on debugging issues in the design rather than spend time writing hundreds of lines of testcase code. Here we want to share some simple techniques to make the test writer’s life easier, enabling him to achieve the required objectives with minimal test code, and with more automation and reuse. Along with automation, it is important to ensure that the test logic remains easy to understand. In one recent project, where the DUV was an image processing block, the following were some of the requirements:

- The relevant configuration for each block had to be driven first on a configuration port before driving the frame data on another port

- The configuration had to be written to the registers in individual blocks and each block had its own unique configuration.

- Several such blocks together constituted the overall sub-system which required an integrated testbench

- Because all these blocks had to be verified in parallel, the requirement was to have a generic Register Model which could work not only with the blocks in parallel but also be reused at the sub-system level

- Another aspect was to provide as much reuse in the testcases as possible from block to system level

Given the requirements, we decided to proceed in the manner described below, using a clever mix of multi-stream scenarios (MSS), named constraint blocks, and RAL:


Here is a brief description of the different components and the flow represented in the diagram above:

Custom RAL model

As the RAL register model forms the configuration stimulus space, we decided to put all test-specific constraints for register configuration in the top level RAL block itself and had them switched off by default. As seen below, the custom RAL block extended from the generated model has all the test-specific constraints which are switched off.
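The code from the original post appeared as an image; a minimal sketch of such a custom block might look like this. The generated model name (blk_ral_model) and the register, field, and constraint names are all hypothetical placeholders:

```systemverilog
// Hypothetical custom layer over the generated RAL model. Every
// test-specific constraint block is switched off by default; tests
// enable them selectively via constraint_mode(1).
class custom_ral_model extends blk_ral_model;

   // test-specific constraints, one named block per test intent
   constraint small_frames_c { cfg.frame_size.value inside {[1:64]}; }
   constraint rgb_mode_c     { cfg.pix_fmt.value == 2'b01; }

   function new();
      super.new(); // constructor arguments of the generated model omitted
      // all test-specific constraints start out disabled
      this.small_frames_c.constraint_mode(0);
      this.rgb_mode_c.constraint_mode(0);
   endfunction

endclass
```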


Basic Multistream Scenario

For each block, one basic scenario (“BASIC SCENARIO”) was always registered with the MSSG. This procedural scenario governs the test flow, which is:

- Randomize the RAL configuration with default constraints,

- Drive the configuration to the configuration port

- Put the desired number of frame data into the driver BFM input channel

The following snippet shows the actual “BASIC SCENARIO”:
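The snippet appeared as an image in the original post; a hedged reconstruction of such a basic scenario, with placeholder names (custom_ral_model, frame_data, frame_data_channel, num_frames), could look like:

```systemverilog
// Sketch of the basic multi-stream scenario governing the test flow.
class basic_scenario extends vmm_ms_scenario;

   custom_ral_model   ral;         // handle to the custom RAL model
   frame_data_channel frame_chan;  // driver BFM input channel
   int                num_frames = 4;

   virtual task execute(ref int n);
      vmm_rw::status_e status;

      // 1. Randomize the RAL configuration with the default constraints
      void'(this.ral.randomize());

      // 2. Drive the configuration to the configuration port
      this.ral.update(status);

      // 3. Put the desired number of frames into the driver BFM channel
      repeat (num_frames) begin
         frame_data fr = new();
         void'(fr.randomize());
         this.frame_chan.put(fr);
      end
      n++;
   endtask

endclass
```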


Scenario Library

With the basic infrastructure in place, the strategy for creating a scenario library for the test-specific requirements was simple. Each extended scenario class was meant to change the configuration generation through the RAL model of the different blocks. Thus, in the scenario library for each block, each independent scenario extended the basic scenario and overrode only the required virtual method for the configuration change, as shown below:
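The listing appeared as an image in the original post; a minimal sketch of one such extended scenario, assuming the basic scenario exposes a virtual hook (hypothetically named set_config_constraints) called before configuration randomization, might be:

```systemverilog
// Each derived scenario only enables the constraint block it needs;
// everything else is inherited from the basic scenario. The names
// basic_scenario, small_frames_c, and set_config_constraints are
// placeholders, not the actual project code.
class small_frames_scenario extends basic_scenario;
   virtual function void set_config_constraints();
      // switch on just this test's constraint in the custom RAL model
      this.ral.small_frames_c.constraint_mode(1);
   endfunction
endclass
```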



Each test would then select the scenario corresponding to the test and register it with the MSSG.

At the block level, we were able to reuse this approach across all block level tests consistently and effectively.

Later, we were able to reuse all block-level tests in the sub-system level testbench, as tests were written in terms of constraints in the RAL model itself. At the sub-system level, we would then turn on the specific constraints per block using the corresponding block-level scenarios. This ensures maximum reuse and a test flow that is consistent across levels.


At higher levels, the scenarios extend the “basic_system_scenario”. The test flow managed by the system-level scenarios has a slightly different execution flow than the block-level tests, but the ‘configuration generation’ is reused consistently and efficiently from block to system. That means modifications of block-level test constraints do not require any modification of the subsystem-level tests using that configuration.

And voila!! Scenario and test generation were automated at the block and sub-system levels using simple Perl scripting. The test-specific constraints per block were captured in a spreadsheet, which a script takes as input to generate the scenario and the test at the block/sub-system level. The scenario execution flow is predetermined and defined in the block/sub-system basic scenario. Furthermore, within this automation the user has the flexibility to create multiple such scenarios and to reuse existing test constraints.

Posted in Creating tests, Register Abstraction Model with RAL, Tutorial, VMM | 3 Comments »

Using RAL callbacks to model latencies within complex systems

Posted by Amit Sharma on 20th May 2010

Varun S, CAE, Synopsys

In very complex systems there may be cases where a write to a register is delayed by blocks internal to the system, while the block sitting on the periphery signals a write complete. The RAL BFM is generally connected to the peripheral block, and as soon as it sees the write complete from the peripheral block it updates RAL’s shadow registers. But due to latencies within the internal blocks, the state of the physical register may not have changed yet. The value in the real register and the shadow register will therefore not match for some duration. This can cause problems if another component happens to read from the register: there would be a mismatch, as the shadow register reflects the new value, which hasn’t reached the register yet, whereas the DUT register still holds the old value.

To prevent such mismatches, one can use the callback methods within the vmm_rw_xactor_callbacks class and model the delay within the relevant callback task. The following methods are present within the callback class:

1. pre_single()  – This task is invoked before execution of a single cycle i.e., before execute_single().
2. pre_burst()   – This task is invoked before execution of a burst cycle i.e., before execute_burst().
3. post_single() – This task is invoked after execution of a single cycle i.e., after execute_single().
4. post_burst()  – This task is invoked after execution of a burst cycle i.e., after execute_burst().

In the scenario described above, it would be ideal to introduce a delay such that the update of RAL’s shadow register is simultaneous with the state change of the real register. For this we need a monitor that sits very close to the register and correctly indicates its state changes. Within the post_single() method of the callback we can wait until the monitor senses a change and then exit the task. This delays the shadow register update until the real register changes its state. The sample code below illustrates how this is done.


class my_rw_xactor_callbacks extends vmm_rw_xactor_callbacks;

   my_monitor_channel mon_activity_chan;

   function new(my_monitor_channel mon_activity_chan);
      this.mon_activity_chan = mon_activity_chan;
   endfunction

   virtual task post_single(vmm_rw_xactor xact, vmm_rw_access tr);
      my_monitor_transaction mon_tr;

      if (tr.kind == vmm_rw::WRITE) begin
         // Block until the monitor observes this write reaching the register
         forever begin
            this.mon_activity_chan.get(mon_tr);
            if (tr.addr == mon_tr.addr) break;
         end
      end
   endtask

endclass : my_rw_xactor_callbacks


In the example code above, the callback class receives a handle to the monitor’s activity channel, which is then used within the post_single() task to wait until the monitor observes a state change on the register. A check is made using the address of the register to confirm that the state change was for the register in question. Real scenarios can get much more complex, with multiple domains, drivers, etc.; any additional complexity can be handled within the post_single() callback task using the monitor(s) observing the interfaces through which the register is written.

RAL also has a built-in mechanism known as auto mirroring to automatically update the mirror whenever the register changes inside the DUT through a different interface. One can refer to the latest RAL User Guide to get more details on its usage.

Posted in Register Abstraction Model with RAL, Reuse | Comments Off

Using a RAL test to go from Block to Top quickly and effectively….

Posted by Srivatsa Vasudevan on 9th February 2010

Often, I’ve been in situations where the chip lead came up to me asking for a test to make sure that the block quickly integrates at the top, and I had to pull some tricks out of a hat in fairly short order. I’m sure this has happened to many of you more times than you care to count.

In order for the top-level integration to run, I recommend a simple sanity register read/write test across all the blocks to make sure the integration went OK and that the available RTL can be debugged further.
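As a sketch, such a sanity test can sweep the registers of the RAL model, write a pattern, and read it back against the mirror. Method signatures are abbreviated and the task name is a placeholder; consult the RAL User Guide for the exact prototypes:

```systemverilog
// Hedged sketch of a register read/write sanity sweep over a RAL model.
task sanity_reg_test(vmm_ral_block ral_model);
   vmm_rw::status_e status;
   vmm_ral_reg      rgs[$];

   ral_model.get_registers(rgs);           // all registers in the block
   foreach (rgs[i]) begin
      rgs[i].write(status, 64'hA5A5_A5A5); // write a known pattern
      // read back through the bus and compare against the RAL mirror
      rgs[i].mirror(status, vmm_ral::VERIFY);
   end
endtask
```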


The point being that if the CPU cannot read/write the configuration registers of any one block on the SoC, the core will fail the integration test anyway.

Many folks typically wind up writing such a test using Verilog tasks. One of the challenges is that the test has to be continually kept up to date as the memory map changes, and the environment has to be updated as well.

However, using RAL allows both the block owner and the chip-top owner to tag off the same RAL file and get their work done. The core-level engineer can continue on his path of verifying his tests, while the chip-top owner can continue on a separate path of writing top-level tests.


All the core owner has to do is share the one RALF file for each release of the core while he works on other integration tests.

In the coming posts, we’ll see how to cut the amount of work you do using these generated models.

My mantra has always been “work less/verify more”. We’ll explore sequences and user-generated code, and how to kill the documentation issues.

Till then, Stay tuned.

Posted in Register Abstraction Model with RAL, Reuse | Comments Off

A generic functional coverage solution based on vmm_notify

Posted by Wei-Hua Han on 15th June 2009

Weihua Han, CAE, Synopsys

Functional coverage plays an essential role in Coverage Driven Verification. In this blog, I’ll explain a modular way of modeling and implementing  functional coverage models.

SystemVerilog users can take advantage of the “covergroup” construct to implement functional coverage. However, this is not enough. The VMM methodology provides some important design-independent guidelines on how to implement functional coverage in a reusable verification environment.

Here are a few guidelines  from the VMM book that I consider very important for implementing a functional coverage model:

“Stimulus coverage shall be sampled after submission to the design” because in some cases it is possible that not all generated transactions will be applied to DUT.

“Stimulus coverage should be sampled via a passive transactor stack” for the purpose of reuse so that the same implementation can be used in a different stimulus generation structure. For example in another verification environment, the stimulus may be generated by a design block instead of testbench component.

“Functional coverage should be associated with testbench components with a static lifetime” to avoid creating a large number of coverage group instances. So “Coverage groups should not be added to vmm_data class extensions”. These static testbench components include monitors, generators, self-checking structure, etc.

“The comment option in covergroup, coverpoint, and cross should be used to document the corresponding verification requirements”.
You can find some more details on these and other functional coverage related guidelines in the VMM book, pages 263-277.

In an earlier post “Did you notice vmm_notify?”, Janick showed how vmm_notify can be used to connect a transactor to a passive observer like a functional coverage model. Here, let me borrow his idea to implement the following VMM-based functional coverage example.

1.    Transaction class

1. class eth_frame extends vmm_data;
2.    typedef enum {UNTAGGED, TAGGED, CONTROL} frame_formats_e;
3.    rand frame_formats_e format;
4.    rand bit[3:0] src;
5.     …
6. endclass

There are some random properties defined in the transaction class. We would like to collect coverage information for these properties in our coverage class, for example “format” and “src”.

2.    A generic subscribe class

1.  class subscribe #(type T = int) extends vmm_notify_callbacks;
2.     local T obs;
3.     function new(T obs, vmm_notify ntfy, int id);
4.        this.obs = obs;
5.        ntfy.append_callback(id, this);
6.     endfunction
7.     virtual function void indicated(vmm_data status);
8.          this.obs.observe(status);
9.       endfunction
10. endclass

The generic subscribe class implements the “indicated” method, which is called when the vmm_notify “indicate” method is invoked. Using this generic subscribe class, functional coverage and other observer models only need to implement the desired behavior in their “observe” method. Please note that on line 8 the vmm_data object is passed to the “observe” method through the vmm notification status information.

3.    Coverage class

1.   class eth_cov;
2.      eth_frame tr;

3.      covergroup cg_eth_frame(string cg_name, string cg_comment, int cg_at_least);
4.         type_option.comment = "eth frame coverage";
5.         option.at_least = cg_at_least;
6.         option.comment = cg_comment;
7.         option.per_instance = 1;
8.         cp_format: coverpoint tr.format {
9.            type_option.weight = 10;
10.        }
11.        cp_src: coverpoint tr.src {
12.           illegal_bins ilg = {4'b0000};
13.           wildcard bins src0[] = {4'b0???};
14.           wildcard bins src1[] = {4'b1???};
15.           wildcard bins src2[] = (4'b0??? => 4'b1???);
16.        }
17.        format_src_crs: cross cp_format, cp_src {
18.           bins c1 = !binsof(cp_src) intersect {4'b0000, 4'b1111};
19.        }
20.     endgroup

21.     function new(string name = "eth_cov", vmm_notify notify, int id);
22.        subscribe #(eth_cov) cb = new(this, notify, id);
23.        cg_eth_frame = new("cg_eth_frame", "eth frame coverage", 1);
24.     endfunction
25.     function void observe(vmm_data tr);
26.        $cast(this.tr, tr);
27.        cg_eth_frame.sample();
28.     endfunction
29.  endclass

The coverage group is implemented in the coverage class eth_cov, and this coverage class is registered with a vmm_notify service through the subscribe class. The coverage group is sampled in the “observe” method, so when the notification is indicated the properties of interest are sampled.

4.    Monitor

1.   class eth_mon extends vmm_xactor;
2.      int OBSERVED;
3.      eth_frame tr; // Contains reassembled eth_frame transaction

4.      function new(…)
5.         OBSERVED = this.notify.configure();
6.         …
7.      endfunction
8.      protected virtual task main();
9.         forever begin
10.          tr = new;
11.          //catch transaction from interface
12.         …
13.         this.notify.indicate(OBSERVED,tr);
14.         …
15.       end
16.    endtask
17. endclass

The monitor extracts the transaction from the interface then it indicates the notification with the transaction as the status information (line 13).

5.    Connect monitor and coverage object

1.   class eth_env extends vmm_env;
2.      eth_mon mon;
3.      eth_cov cov;
4.      …
5.      virtual function void build();
6.         …
7.         mon = new(…);
8.         cov = new(“eth_cov”,mon.notify,mon.OBSERVED);
9.         …
10.    endfunction
11.     …
12. endclass

In the verification environment, the coverage object is created with the monitor notification service interface and the proper notification ID.

There are other ways to implement functional coverage in VMM-based environments. For example, a callback-based implementation is used in the example located under sv/examples/std_lib/vmm_test in the VMM package.

I haven’t discussed assertion coverage in this post, which is another important type of “functional coverage”. If you are interested in using assertions for functional coverage, please check out chapters 3 and 6 in the VMM book.

Posted in Communication, Coverage, Metrics, Register Abstraction Model with RAL, Reuse, SystemVerilog, VMM | 4 Comments »

VMM VIPs on multiple buses

Posted by Adiel Khan on 27th May 2009


Adiel Khan, Synopsys CAE

Increasingly, more design-oriented engineers are writing VMM code. Some are trying to map typically good design architecture practices to verification development.

A dangerous mapping is parameterization, from modules to classes.

In my old Verilog testbenches I would develop reusable modules and use #parameters extensively to control the settings of the modules I was instantiating. (It was a sad day when I heard IEEE was deprecating my friend the defparam).

1. module vip #(parameter int data_width = 16,
2.             parameter int addr_width = 16)
3.    (addr, data);
4.    output [addr_width-1:0] addr;
5.    inout  [data_width-1:0] data;
6.    …
7. endmodule

This would allow me to instantiate this VIP for many bus variants.

8. vip #(64, 32)  vip_inst1(…);
9. vip #(32, 128) vip_inst2(…);

Mapping the approach from modules to classes, I could end up with:

1. class pkt_c #(parameter int data_size = 16,
2.              parameter int addr_size = 16)
3.    extends vmm_data;
4.    rand bit [addr_size-1:0] addr;
5.    rand bit [data_size-1:0] data;
6.    …
7. endclass

8.  // specialized class with 64 & 32 sizes
9.  pkt_c #(64, 32) pkt1 = new();
10. // specialized class with 32 & 128 sizes
11. pkt_c #(32, 128) pkt2 = new();

Be warned: in the SystemVerilog testbench-centric view of VIP reusability, parameterization of classes leads down a dead-end path. Moving one layer of abstraction up, I really don’t care whether it is a 32-, 64- or 128-bit-wide interface. What I want to do is use pkt_c around the verification environment. The simplest case is creating a reusable driver using pkt_c to drive an interface of any bus width.

However, if I try to use a generic class instantiation, I will get a specialization with parameters 16 & 16. I cannot perform the $cast() to put the right pkt_c type onto the bus.

1. class pkt_driver_c extends vmm_xactor;
2.    virtual protected task main();
3.       forever begin : GET_OBJ_TO_SEND
4.          pkt_c pkt_to_send;          // default class instance
5.          pkt_c #(64, 32) pkt_created;
6.          randomize(pkt_created);     // generator code
7.          $cast(pkt_to_send, pkt_created); // FAILS !!!!!

If you are using VMM channels, they must similarly be specialized and cannot carry generic parameterized classes:

8.  `vmm_channel(pkt_c)
9.  class pkt_driver_c extends vmm_xactor;
10.    pkt_c_channel in_chan; // can only carry pkt_c #(16,16) !!!

Or you must select upfront which specialization you want, for use with a parameterized channel.

8. class pkt_driver_c extends vmm_xactor;
9.    vmm_channel_typed #(pkt_c #(64, 32)) in_chan;

Hence, for the driver to operate on the correct object type, I need to instantiate the exact specialization throughout my entire environment and make the driver itself parameterized. Now you can clearly see that instantiating a specific specialization in the driver (or monitor, scoreboard, etc.) stops the code from being truly reusable for other bus widths.

1. pkt_c #(64, 32) pkt_to_send;
2. pkt_driver_c #(64, 32) driver;

A better approach is the one described by Janick in the “Size Does Matter” blog post: using `define. Let’s expand on this and see how it works for reusable VIPs. The first thing that comes to mind is that a `define is a global-namespace macro with a single value, whereas I am using my VIP with two different bus architectures. Therefore, the `define alone is not enough: you also need a local constant to be able to exclude unwanted bits when the VIP is instantiated for various bus widths.

1.  // default define values
2.  `define MAX_DATA_SIZE 16
3.  `define MAX_ADDR_SIZE 16
4.  class pkt_c extends vmm_data;
5.     static vmm_log log = new("Pkt", "class");
6.     // instance constant to control actual bus sizes
7.     const int addr_size;
8.     logic [`MAX_ADDR_SIZE-1:0] addr;
9.     logic [`MAX_DATA_SIZE-1:0] data;
10.    // pass a_size as arg to coverage
11.    // ensuring valid coverage ranges.
12.    covergroup cg (int a_size);
13.       coverpoint addr
14.          {bins ad_bin[] = {[0:a_size]};}
15.    endgroup
16.    // sizes specialized at construction for pkts
17.    // on buses less than MAX bus widths
18.    function new(int a_s = `MAX_ADDR_SIZE);
19.       addr_size = a_s;
20.       cg = new(addr_size);
21.       `vmm_note(log, $psprintf("\nADDR_TYPE: %s\nDATA_TYPE: %s\nMAX_BUS_SIZE: %0d",
22.                                $typename(addr), $typename(data),
23.                                addr_size));
24.    endfunction
25. endclass

The code above allows for a default implementation; all the user needs to do is set the `MAX_ADDR_SIZE and `MAX_DATA_SIZE symbols, and the code is fully reusable across drivers, monitors, subenvs, SoCs, etc.

For situations where two VIPs with differing bus architectures are used, the compiler symbols need to be set to the biggest bus architecture in the system; smaller bus widths are set using addr_size. It is not strictly necessary for addr_size to be an instance constant set during construction, but using an instance constant ensures the bus widths are not changed at runtime by users, while setting its value during construction gives users the flexibility to set up the object as they want. For pseudo-static objects such as drivers, monitors, subenvs, masters, slaves, scoreboards, etc., you should check the construction of verification modules for your particular design architecture during the vmm_env::start phase.
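For example, assuming the pkt_c class above, the compile-time symbols pick the widest buses in the system while each instance is sized at construction:

```systemverilog
// Compile with the system's widest buses, e.g. under VCS:
//   vcs +define+MAX_ADDR_SIZE=64 +define+MAX_DATA_SIZE=128 ...
pkt_c wide_pkt   = new();    // uses the full `MAX_ADDR_SIZE address bus
pkt_c narrow_pkt = new(32);  // same class, sized for a 32-bit bus
```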

N.B.: not shown above, but assumed, is that the addr_size variable would be used to ensure correct masking when performing do_pack(), do_unpack(), compare(), etc.

Just to wrap up some loose ends…

I’m not totally discounting the merits of parameterized classes, just ensuring people look at all the options. For instance, you could parameterize everything, set the SIZE at the vmm_subenv level, and map the SIZE parameters to all other objects. At some point, though, you will want to monitor or scoreboard across different bus widths, and then parameterized-class casting will bite you, reducing you to manually mapping the members within the comparison objects. There is a time and place for everything, so we probably need another blog showing the merits of parameterized classes and where to use them.

The vmm_data class is not the only place you might need to know the size of the bus; the same `define and instance-constant technique can be used throughout your VIP classes.

This blog does not discuss the pros and cons of putting coverage groups in your data object class. I merely used the covergroup in the data-object as a vehicle to demonstrate how you can make your classes more reusable. I think a separate blog about where best to put coverage will clarify the usage models.

All the code snippets can be run with VCS 2009.06 and VMM 1.1. Contact me for more complete code examples, or with any bugs or issues you find.

Posted in Coding Style, Configuration, Register Abstraction Model with RAL, Reuse, Structural Components, VMM | 8 Comments »

Garbage collection

Posted by Janick Bergeron on 18th July 2008

In SystemVerilog, unlike C, you don’t have to explicitly free dynamically allocated class instances. Like most modern programming languages, SystemVerilog includes a garbage collector that frees memory that is no longer needed.

Although there are a few garbage collection approaches out there, the lack of a root object (like the sys object in e) almost guarantees that Reference Counting is used1.

To illustrate how reference counting works, consider the following code segment:

   while (...) begin
      eth_frame pkt = new;         // Line 1
      scoreboard.push_back(pkt);   // Line 2
   end                             // Line 3
   forever begin
      eth_frame pkt;
      pkt = rx();
      if (!scoreboard[0].compare(pkt)) ...
      scoreboard.pop_front();      // Line 4
   end

When the eth_frame instance is first created on line 1, its reference count is set to “1”. When it is later added to the scoreboard on line 2, its reference count is incremented to “2” (one for the “pkt” variable, one for the “scoreboard” queue). When the while loop iterates on line 3, the dynamic “pkt” variable disappears, reducing the reference count to “1”. When the class instance is finally popped out of the scoreboard on line 4, its reference count is decremented to “0” and its memory is garbage collected.

This usually works well until you create circular references, i.e. objects that refer to each other.

Circular references are always created whenever you introduce the concept of a “parent” object. Take a RAL model, for example: an instance of the vmm_ral_reg class has a reference to its parent vmm_ral_block class instance, which is returned by the vmm_ral_reg::get_block() method. But the vmm_ral_block class instance has a reference to all of the vmm_ral_reg instances it contains, which are returned by the vmm_ral_block::get_registers() method.
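Stripped down to its essence, the cycle looks like this (illustrative classes, not the actual RAL source):

```systemverilog
typedef class my_reg;

// The block holds its registers and each register holds its parent,
// so both reference counts stay above zero forever.
class my_block;
   my_reg regs[$];      // block -> registers
endclass

class my_reg;
   my_block parent;     // register -> parent block
endclass
```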

That creates a circular reference for every register in a block.

That means that a RAL model cannot be garbage collected. Ever.

In the case of a RAL model, that’s not a problem, because it corresponds to the DUT, and the DUT, being modelled using modules and interfaces, is static in nature and can never be dynamically modified.

But in the case of objects that are created in large numbers and should live only as long as needed (such as packets or transactions that only need to live while they are processed by the DUT), you have to be very careful to break the circular reference to enable garbage collection. Otherwise you will end up consuming an ever-increasing amount of memory.

1Turns out not to be accurate. See follow-up comment.

Posted in Optimization/Performance, Register Abstraction Model with RAL | 5 Comments »