Verification Martial Arts: A Verification Methodology Blog

Archive for the 'Customization' Category

Avoiding Redundant Simulation Cycles with your UVM VIP based simulation with a Simple Save-Restore Strategy

Posted by paragg on 6th March 2014

In many verification environments, the same configuration cycles are reused across different testcases. These cycles might involve writing to and reading from different configuration and status registers, loading program memories, and similar tasks to set up the DUT for the targeted stimulus. In many of these environments, the time taken by these configuration cycles is very long. There is also a lot of redundancy, as verification engineers have to run the same set of verified configuration cycles for different testcases, leading to a loss in productivity. This is especially true for complex verification environments with multiple interfaces, which require different components to be configured.

Verilog simulators provide an option to save the state of the design and the testbench at a particular point in time. You can then restore the simulation to the same state and continue from there. This can be done by adding the appropriate built-in system calls in the Verilog code. VCS provides the same options from the Unified Command-line Interpreter (UCLI).

However, it is not enough for you to restore simulation from the saved state. For different simulations, you may want to apply different random stimulus to the DUT. In the context of UVM, you would want to run different sequences from a saved state as shown below.

In the above example, apart from the last step, which varies to a large extent, the rest of the steps need no iteration once established.

Here we explain how to achieve the above strategy with the simple existing UBUS example available in the standard UVM installation. Simple changes are made in the environment to show what needs to be done to bring in this additional capability. Within the existing set of tests, the two tests “test_read_modify_write” and “test_r8_w8_r4_w4” differ only with respect to the master sequence being executed – i.e. “read_modify_write_seq” and “r8_w8_r4_w4_seq” respectively.

Let’s say that we have a scenario where we want to save a simulation once the reset_phase is done, and then start executing different sequences after the reset_phase in the restored simulations. To demonstrate a similar scenario through the UBUS tests, we introduced a delay in the reset_phase of the base test (in a real test, this may correspond to PLL lock, DDR initialization, or basic DUT configuration).

The following snippet shows how the existing tests are modified to bring in the capability of running different tests in different ‘restored’ simulations.

As evident in the code, we made two major modifications:

  • Shifted the setting of the phase default_sequence from the build phase to the start of the main phase.
  • Get the name of the sequence as an argument from the command-line and process the string appropriately in the code to execute the sequence on the relevant sequencer.
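The second modification can be sketched as follows. This is a minimal, hedged sketch only: the +SEQ_NAME plusarg, the class name, and the sequencer path are assumptions modeled on the UBUS example, not the blog’s actual code.

```systemverilog
class ubus_restore_test extends ubus_example_base_test;
  `uvm_component_utils(ubus_restore_test)

  function new(string name = "ubus_restore_test", uvm_component parent = null);
    super.new(name, parent);
  endfunction

  task main_phase(uvm_phase phase);
    string            seq_name;
    uvm_object        obj;
    uvm_sequence_base seq;

    // Pick the sequence to run from the command line, e.g.
    //   simv +SEQ_NAME=read_modify_write_seq
    if (!$value$plusargs("SEQ_NAME=%s", seq_name))
      `uvm_fatal("NOSEQ", "No +SEQ_NAME=<sequence> specified")

    // Create the sequence by name through the factory and start it
    // on the relevant sequencer (path assumed from the UBUS example).
    obj = uvm_factory::get().create_object_by_name(seq_name);
    if (obj == null || !$cast(seq, obj))
      `uvm_fatal("BADSEQ", {"Cannot create sequence ", seq_name})

    phase.raise_objection(this);
    seq.start(ubus_example_tb0.ubus0.masters[0].sequencer);
    phase.drop_objection(this);
  endtask
endclass
```

Because the sequence is chosen at the start of the main_phase rather than in the build phase, the same saved snapshot can be restored multiple times with a different +SEQ_NAME each time.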

As you can see, the changes are kept to a minimum. With this, the above generic framework is ready to be simulated. In VCS, the save/restore flow can be enabled, for example, as follows.
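One possible command sequence is sketched below. This is an illustration only: the checkpoint name, script names, and plusargs are hypothetical, and the exact UCLI save/restore command syntax varies across VCS versions, so please consult the VCS UCLI documentation for your release.

```
# Run the test up to the end of configuration and save a checkpoint
./simv +UVM_TESTNAME=ubus_restore_test -ucli -do save_cfg.ucli
#   where save_cfg.ucli contains, e.g.:
#     run 10000 ns
#     save cfg_done.chk
#     quit

# Restore from the checkpoint and run different sequences
./simv -ucli -do restore.ucli +SEQ_NAME=read_modify_write_seq
./simv -ucli -do restore.ucli +SEQ_NAME=r8_w8_r4_w4_seq
```

The configuration cycles are paid for once, in the saving run; every restored run starts directly at the point where the interesting stimulus begins.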

Thus, the above strategy helps in optimal utilization of compute resources with simple changes in your verification flow. Hope this was useful and that you can easily make the changes in your verification environment to adopt this flow and avoid redundant simulation cycles.

Posted in Automation, Coding Style, Configuration, Creating tests, Customization, Optimization/Performance, Organization, Reuse, Stimulus Generation, SystemVerilog, Tutorial, UVM, Uncategorized, Verification Planning & Management | 1 Comment »

Coverage coding: simple tips and gotchas

Posted by paragg on 28th March 2013

Author - Bhushan Safi (E-Infochips)

Functional coverage has been the most widely accepted way to track the completeness of any constrained-random testbench. However, does achieving 100% functional coverage mean that the DUV is bug free? Certainly not, but it boosts the confidence of the verification engineer and the management team.

Based on my experience of defining functional covergroups across different projects, I have realized that the coverage constructs and options in the SystemVerilog language have their own nuances that one needs to keep an eye out for. These “gotchas” have to be understood so that coverage can be used optimally and the results align with the intent desired. Let me talk about some of these finer aspects of coverage so that you can use the constructs more productively.

Usage of ignore_bins

The ‘ignore_bins’ construct is meant to exclude a collection of bins from coverage. While using this particular construct, you might end up with multiple ‘shapes’ issues (by ‘shapes’ I mean “Guard_OFF” and “Guard_ON”, which appear in the report whenever ‘ignore_bins’ is used). Let’s look at a simple usage of ignore_bins, as shown in figure 1.

Looking at the code in figure 1, we would assume that since we have set “cfg.disable = 1”, the bin with value 1 would be excluded from the generated coverage report. Here we use the ‘iff’ condition to try to express our intent of not creating a bin for the variable under the said condition. However, in simulations where sample_event is not triggered, we see that we end up with an instance of our covergroup which still expects both bins to be hit (see the generated report in figure 1). Why does this happen? If you dig deep into the semantics, you will understand that the “iff” condition comes into action only when the event sample_event is triggered. So if we are writing ‘ignore_bins’ for a covergroup which may or may not be sampled in each run, then we need to look for an alternative. Indeed, there is a way to address this requirement, and that is through the usage of the ternary operator. Look at the code in figure 2 to see how the ternary operator is used to model the same intent.

Now the report is as you expect!!!

Using the above mentioned coding style we make sure that the bin which is not desired in specific conditions is ignored irrespective of the condition of whether or not the covergroup is being sampled.  Also, we use the value “2’b11” to make sure that we don’t end up in ignoring a valid value for the variable concerned.
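Since the figures are not reproduced here, the ternary-operator style can be sketched as follows. The covergroup, signal, and bin names are illustrative, not the blog’s actual code:

```systemverilog
covergroup mode_cg @(sample_event);
  // Map the value to an unused encoding (2'b11) when disabled,
  // then ignore that encoding unconditionally. This works whether
  // or not the covergroup is ever sampled in a given run.
  MODE : coverpoint (cfg.disable ? 2'b11 : mode) {
    bins mode_0 = {2'b00};
    bins mode_1 = {2'b01};
    ignore_bins disabled = {2'b11};
  }
endgroup
```

Unlike the ‘iff’ guard, the exclusion here is decided by the expression itself at sampling time, so no stray bins appear in runs where the covergroup is never sampled.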

Using detect_overlap

The coverage option “detect_overlap” issues a warning if there is an overlap between the range lists (or transition lists) of two bins of a coverpoint. Whenever we have plenty of ranges to be covered and there is a possibility of overlap, it is important to use this option.

Why is it important and how can you be impacted if you don’t use it? You might actually end up with incorrect and unwanted coverage results!

Let’s look at an example. In the above scenario, if a value of 25 is generated, the coverage score reported would be 50% when the desired outcome would ideally have been 25%. This is because the value ‘25’ contributes to two bins out of four when that was probably not intended. The usage of ‘detect_overlap’ would have warned you about this, and you could have fixed the bins to make sure that such a scenario doesn’t occur.
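The example itself is not reproduced above, so here is a minimal sketch of the kind of overlap that causes this; the coverpoint and bin ranges are illustrative:

```systemverilog
covergroup addr_cg;
  ADDR : coverpoint addr {
    option.detect_overlap = 1; // warns that 25 falls into two bins
    bins low   = {[0:25]};
    bins mid_1 = {[25:50]};    // overlaps 'low' at value 25
    bins mid_2 = {[51:75]};
    bins high  = {[76:100]};
  }
endgroup
```

With the option set, the simulator flags the shared value 25 at elaboration/sampling time instead of silently crediting two bins for one sample.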

Coverage coding for crosses and assigning weight

What does the LRM (Table 19-1—Instance-specific coverage options) say about the ’weight’ attribute?  

“If set at the covergroup syntactic level, it specifies the weight of this covergroup instance for computing the overall instance coverage of the simulation. If set at the coverpoint (or cross) syntactic level, it specifies the weight of a coverpoint (or cross) for computing the instance coverage of the enclosing covergroup. The specified weight shall be a non-negative integral value.”

What kinds of surprises can a combination of cross and option.weight create?

The SystemVerilog LRM shows a very simple way of writing a cross. Let’s look at the code below.

The expectation here is that for a single simulation (expecting one of the bins to be hit), we will end up with 25% coverage, as we have specified the weight of the individual coverpoints as zero. However, what essentially happens is the following: two internal coverpoints for check_4_a and check_4_b are generated, which are used to compute the coverage score of the ‘crossed’ coverpoint. So you end up having a total of four coverpoints, two of which have option.weight set to 0 (i.e. CHECK_A and CHECK_B) and two of which have option.weight as 1 (i.e. check_4_a and check_4_b). Thus, for a single simulation, you will not get the desired 25% coverage.

Now with this report we see the following issues:

  • We see four coverpoints while the expectation was only two.
  • The weights of the individual coverpoints are expected to be zero, since option.weight is set to ‘0’, but they are not.
  • The overall coverage numbers are not as desired.

In order to avoid the above undesired results, we need to take care of the following aspects:

  • Use type_option.weight = 0 instead of option.weight = 0.
  • Use the coverpoint labels instead of the coverpoint names to specify the cross.
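Both fixes can be sketched together as follows; the variable names follow the check_4_a/check_4_b and CHECK_A/CHECK_B names mentioned above, but the bin contents are illustrative:

```systemverilog
covergroup check_cg;
  CHECK_A : coverpoint check_4_a {
    type_option.weight = 0;  // not option.weight
    bins a_vals[] = {[0:3]};
  }
  CHECK_B : coverpoint check_4_b {
    type_option.weight = 0;
    bins b_vals[] = {[0:3]};
  }
  // Cross using the labels, not the variable names, so that no
  // additional implicit coverpoints are created for check_4_a
  // and check_4_b behind your back.
  A_X_B : cross CHECK_A, CHECK_B;
endgroup
```

With labels in the cross and type_option.weight on the coverpoints, only the cross contributes to the covergroup score, which is what the original intent was.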

Hope my findings will be useful for you and you will use these options/attributes appropriately to get the best value out of your coverage metrics (without losing any sleep or debug cycles to figure out why they didn’t behave as you expected them to)!

Posted in Coding Style, Coverage, Metrics, Customization, Reuse, SystemVerilog, Tutorial | 9 Comments »

SNUG-2012 Verification Round Up – Language & Methodologies – I

Posted by paragg on 25th February 2013

As in the previous couple of years, last year’s SNUG – Synopsys Users Group – showcased an amazing number of useful user papers leveraging the capabilities of the SystemVerilog language and the verification methodologies centered on it.

I am always excited when I see this plethora of useful papers, and I try to ensure that I set aside some time to go through all these user experiences. Now, as we wait for SNUG Silicon Valley to kick-start the SNUG events for this year, I want to look back at some of the very interesting and useful papers from the different SNUGs of 2012. Let me start by talking about a few papers in the area of the SystemVerilog language and SV methodologies.

Papers leveraging the SystemVerilog language and constructs

Hillel Miller of Freescale, in the paper “Using covergroups and covergroup filters for effective functional coverage”, uncovers the mechanisms available for carving out coverage goals. In the P1800-2012 version of the SystemVerilog LRM, new constructs are provided just for doing this. The construct focused on is the “with” construct, which provides the ability to carve a sub-range of goals out of a multidimensional range of possibilities. This is very relevant in a “working” or under-development setup that requires frequent reprioritization to meet tape-out goals.

The paper “Taming Testbench Timing: Time’s Up for Clocking Block Confusions” by Jonathan Bromley and Kevin Johnston of Verilab reviews the key features and purpose of clocking blocks and then examines why they continue to be a source of confusion and unexpected behavior for many verification engineers. Drawing from the authors’ project and mentoring experience, it highlights typical usage errors and how to avoid them. They clarify the internal behavior of clocking blocks to help engineers understand the reasons behind common problems, and show techniques that allow clocking blocks to be used productively and with confidence. Finally, they consider some areas that may cause portability problems across simulators and indicate how to avoid them.

Inference of latches and flops based on coding styles has always been a topic that creates multiple viewpoints. There are other such scenarios of synthesis and simulation mismatches that one typically comes across. To address all such ambiguity, language developers have provided different constructs for an explicit resolution based on the intent. To help us gain a deeper understanding of the topic, Don Mills of Microchip Technology Inc. presented the related concepts in the paper “Yet Another Latch and Gotchas Paper” @ SNUG Silicon Valley. This paper discusses and provides solutions to issues that designers using SystemVerilog for design come across, such as: case expression issues for casez and casex, latches generated when using unique case or priority case, SRFF coding style problems with synthesis, and the SystemVerilog 2009 definition of logic.

Gabi Glasser from Intel presented the paper “Utilizing SystemVerilog for Mixed-Signal Validation” @ SNUG Israel, where he proposed a mechanism for simplifying analysis and increasing coverage for mixed-signal simulations. The method proposed is to take advantage of SystemVerilog capabilities that enable defining a hash (associative) array of unlimited size. During the simulation, vectors are created for the required analog signals, allowing them to be analyzed within the testbench during or at the end of the simulation, without saving these signals to a file. The flow change makes it possible to launch a large-scale mixed-signal regression while allowing easier analysis of coverage data.

A design pattern is a general reusable solution to a commonly recurring problem within a given context. The benefit of using design patterns is clear: they give designers a common language when approaching a problem, and a widely used set of tools to solve issues as they come up. The paper “Design Patterns In Verification” by Guy Levenbroun of Qualcomm explores several common problems which might arise during the development of a testbench, and how design patterns can be used to solve them. The patterns covered in the paper fall into the following major categories: creational (e.g. factory), structural (e.g. composite) and behavioral (e.g. template).

Arik Shmayovitsh, Avishay Tvila and Guy Lidor of Sigma Designs, in their paper “Truly reusable Testbench-to-RTL connection for SystemVerilog”, present a novel approach of connecting the DUT and testbench using consistent semantics while reusing the testbench. This is achieved by abstracting the connection layer of each testbench using the SystemVerilog ‘bind’ construct. This ensures that the only thing required to reuse the testbench for a new DUT is to identify the instance of the corresponding DUT.
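The paper itself is not reproduced here, but the basic ‘bind’ idea it builds on can be sketched as follows; the module, interface, and path names are illustrative, not the authors’ actual code:

```systemverilog
// A connection-layer interface that the testbench talks to.
interface dut_conn_if (input logic clk);
  logic [7:0] data;
  logic       valid;
endinterface

// Bind an instance of the interface into every instance of the DUT
// module; ports are hooked to DUT signals at the bind site.
bind my_dut dut_conn_if conn (.clk(clk));

// The testbench reaches the bound instance through a virtual
// interface, so only this one hierarchical path changes per DUT:
//   virtual dut_conn_if vif = tb_top.dut_inst.conn;
```

Because the connection layer lives inside the bound interface, the testbench code above it stays untouched when the DUT changes.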

In the paper “A Mechanism for Hierarchical Reuse of Interface Bindings”, Thomas Zboril of Qualcomm (Canada) explores another method to instantiate SV interfaces, connect them to the DUT, and wrap the virtual interfaces for use in the test environment. This method allows the reuse of all the code when the original block-level DUT becomes a lower-level instance in a larger subsystem or chip. The method involves three key mechanisms: hierarchical virtual interface wrappers, a novel approach of using hierarchical instantiation of SV interfaces, and another novel approach of automatic management of hierarchical references via SV macros.

Thinh Ngo & Sakar Jain of Freescale Semiconductor, in their paper “100% Functional Coverage-Driven Verification Flow”, propose a coverage-driven verification flow that can efficiently achieve 100% functional coverage during simulation. The flow targets varied functionality, focuses at the transaction level, measures coverage during simulation, and fails a test if 100% of the expected coverage is not achieved. This flow maps stimulus coverage to functional coverage, with every stimulus transaction being associated with an event in the coverage model and vice versa. This association is derived from the DUT specification and/or the DUT model. Expected events generated along with stimulus transactions are compared against actual events triggered in the DUT. The comparison results are used to pass or fail the test. 100% functional coverage is achieved via 100% stimulus coverage. The flow enables every test with its targeted functionality to meet 100% functional coverage provided that it passes.

Papers on Verification Methodology

In the paper “Top-down vs. bottom-up verification methodology for complex ASICs”, Paul Lungu & Zygmunt Pasturczyk of Ciena (Canada) cover the simulation methodology used for two large ASICs requiring block-level simulations. A top-down verification methodology was used for one of the ASICs, while a larger version needed an expanded bottom-up approach using extended simulation capabilities. Some techniques and verification methods, such as chaining of sub-environments from block to top level, are highlighted along with challenges and solutions found by the verification team. The paper presents a useful technique of passing a RAL (Register Abstraction Layer) mirror to the C models which are used as scoreboards in the environment. It also presents a method of generating stable clocks inside the “program” block.

The paper “Integration of Legacy Verilog BFMs and VMM VIP in UVM using Abstract Classes” by Santosh Sarma of Wipro Technologies (India) presents an alternative approach where legacy BFMs written in Verilog and not implemented using classes are hooked up to higher-level class-based components to create a standard UVM VIP structure. The paper also discusses an approach where existing VMM transactors that are tied to such legacy BFMs can be reused inside the UVM VIP with the help of the VCS-provided UVM-VMM Interoperability Library. The implementation makes use of abstract classes to define functions that invoke the BFM APIs. The abstract class is then concretized using derived classes which give the actual implementation of the functions in the abstract class. The concrete class is then bound to the Verilog instance of the BFM using the SystemVerilog “bind” concept. The concrete class handle is then used by the UVM VIP and the VMM transactor to interact with the underlying Verilog BFM. Using this approach, the UVM VIP can be made truly reusable by using run-time binding of the Verilog BFM instance to the VIP instead of using hardcoded macro names or procedural calls.

“A Unified Self-Check Infrastructure - A Standardized Approach for Creating the Self-Check Block of Any Verification Environment” by John Sotiropoulos, Matt Muresa and Massi Corba of Draper Laboratories (Cambridge, MA, USA) presents a structured approach for developing a centralized “Self-Check” block for a verification environment. The approach is flexible enough to work with various testbench architectures and is portable across different verification methodologies. Here, all of the design’s responses are encapsulated under a common base class, providing a single “Self-Check” interface for any checking that needs to be performed. This abstraction, combined with a single centralized scoreboard and a standardized set of components, provides the consistency needed for faster development and easier code maintenance. It expands the concept of ‘self-check’ to incorporate white-box monitors (tracking internal DUT state changes, etc.) and temporal models (reacting to wire changes), along with traditional methodologies for enabling self-checking.

For VMM users looking at migrating to UVM, there is another paper, “Transitioning to UVM from VMM” by Courtney Schmitt of Analog Devices, Inc., which discusses the process of transitioning to a UVM-based environment from VMM. Differences and parallels between the two verification methodologies are presented to show that updating to UVM is mostly a matter of getting acquainted with a new set of base classes. Topics include UVM phases, agents, TLM ports, configuration, sequences, and register models. Best practices and reference resources are highlighted to make the transition from VMM to UVM as painless as possible.

Posted in Announcements, Coverage, Metrics, Creating tests, Customization, Modeling, Optimization/Performance, Reuse, SystemVerilog, UVM, Uncategorized, VMM, VMM infrastructure | 3 Comments »

Build your own code generator!

Posted by Amit Sharma on 25th June 2012

One of the critical problems developers cope with during the SoC development process (architecture plan, RTL design, verification, software development) is the constant need to synchronize between different forms of the same data structure: the SoC registers database. The SoC registers database can be found at the SoC architecture team (who write the SoC registers description document), design engineers (who implement the registers structure in RTL code), verification engineers (who write the verification infrastructure – such as RAL code – and write verification tests – such as exhaustive read/write tests from all registers), and software engineers (who use the registers information to write embedded software).

Since the same information is used in different forms, it is only natural to have a single, central database which holds all the SoC registers data. You would ideally like to generate all the required output files (documentation, UVM or VMM Register Abstraction Model, RTL, C headers, …) from this central database. Different vendors and CAD teams provide different automation solutions for doing this.

The RALF specification contains all of the necessary information to generate RTL and testbench code that implements the specified registers. There are many ways to code and implement RTL, so it is not possible to provide a general-purpose solution. As far as the testbench abstraction model is concerned, there are multiple ways of customizing your model post generation in both UVM and VMM. Callbacks, factories, and configurable RAL model attributes are some of the ways through which the desired customization can be brought in. “The ‘user’ in RALF: get ralgen to generate ‘your’ code” highlights a very convenient way of bringing in SystemVerilog-compatible code which will be copied as-is into the RAL model in the desired scope. When it comes to generating the RTL and the C headers, we cannot leave the customization to such a late stage. Also, different organizations and project groups have their own RTL and C-code coding styles, which means that a generated output of a very generic nature might not be very helpful. For RTL generation, engineers want the generated code to be power- and gate-count-efficient. Similarly, C register header generation often needs to follow coding styles and match the CPU firmware API. How do we bring all these customizations to the end user?

Using the RALF C++ API, you have full access to the parsed RALF data (through C++ routines), which you can use to implement a customized RTL code generator, or any other feature that needs RALF data. For example, you can use it to generate your C header files or HTML documentation, translate the input RALF files to another register description format, or create custom covergroups and coverage convergence sequences (DAC 2012 User Track poster 6U.8 — Register Verification on a Fast Lane: Using Automation to Converge on UVM REG Coverage Models).

I have seen two instances of the need to generate a different register specification in the recent past, and that is one of the reasons I decided to put this down in a blog. Let me talk about the first instance.

One of the project groups was in the process of migrating from their own SV base classes to UVM. They had their own register description format from which they used to generate their register abstraction model. This was a proven flow.

So, when they migrated to UVM, they wanted a flow which would validate the changes that they were making.

Given that they were moving to RALF and ‘ralgen’, they didn’t want to create register specifications in the legacy format anymore, so they wanted some automation for generating the specification in the earlier format. How did they go about doing this? They took the RALF C++ APIs and used them to create the necessary automation to generate the legacy format from RALF in no time (from what I remember, it was half a day’s work). Everyone was involved in doing what they were best at, and that helped in the overall scheme of things.

The other customer had their own format from which they were generating RTL, firmware code and HTML. They had the necessary automation to create RALF for generating the UVM register model. They also had a mechanism in place to generate IPXACT from this format, as well as vice versa. So, to complete the traceability matrix, they wanted a RALF-to-IPXACT conversion. Again, the most logical approach was to take the RALF C++ APIs and get those to iterate through the parsed RALF data and generate IPXACT. Though this effort is not complete, it took only a day or so to be able to generate a valid IPXACT 1.5 schema, and all that is required now is some additional work to smoothen the corners.

How do you start using these APIs to build your own code/HTML generators? You need to include “ralf.hpp” (which is in $VCS_HOME/include) in your generator code. Then, to compile the code, you need to pick up the shared library from the VCS installation:

$CPP_COMPILER $CFLAGS -I${VCS_HOME}/include -L${VCS_HOME}/lib -lralf your-file.cpp $LDFLAGS

#include <cstdio>
#include "ralf.hpp"

int main(int argc, char *argv[])
{
   /* Check basic command line usage. */
   if (argc < 3) {
      fprintf(stderr, "Error: Wrong Usage.\n");
      /* Show correct usage ... */
      return 1;
   }

   /* Parse the command line arguments to get the essential
    * constructor arguments (e.g. ralf_file, top, rtl_dir used
    * below). See the documentation of class ralf::vmm_ralf's
    * constructor parameters. */

   /* Create a ralf::vmm_ralf object by passing in the proper
    * constructor arguments. */
   ralf::vmm_ralf ralf_data(ralf_file, top, rtl_dir /* , ... */);

   /* Get the top-level object storing the parsed RALF
    * block/system data and traverse that, top-down, to get
    * access to the complete RALF data. */
   const ralf::vmm_ralf_blk_or_sys_typ *top_lvl_blk_or_sys
      = ralf_data.getTopLevelBlockOrSys();

   /* TODO: Traverse the parsed RALF data structure top-down,
    * using/starting from 'top_lvl_blk_or_sys', for complete
    * access to the RALF data, and then do whatever you want
    * with it. One typical usage of the parsed RALF data is to
    * generate RTL code in your own style. */

   /* TODO: Add your RTL generator code here. */

   /* As part of this library, Synopsys also provides a default
    * RTL generator, which can be invoked by calling the
    * generateRTL() method of the ralf::vmm_ralf class. */

   return 0;
}

Essentially, you have a handle to the parsed database, and with the available APIs you can do whatever you want with it :) The documentation of the APIs is shipped with the VCS installation. Also, if you are like me and would rather hack away at existing code than start from scratch, you can check with Synopsys support for existing templates that dump out code in a specific format, and start modifying those for your requirements.

Posted in Automation, Coverage, Metrics, Customization, Interoperability, Register Abstraction Model with RAL, Tools & 3rd Party interfaces, UVM, VMM infrastructure | Comments Off

Auto-Generation of Performance Charts with the VMM Performance Analyzer

Posted by Amit Sharma on 30th September 2011

Amit Sharma, Synopsys

Quoting from one of Janick’s earlier blogs on the VMM Performance Analyzer, Analyzing results of the Performance Analyzer with Excel:
“The VMM Performance Analyzer stores its performance-related data in a SQL database. SQL was chosen because it is an ANSI/ISO standard with a multiplicity of implementations, from powerful enterprise systems like Oracle, to open-source versions like MySQL, to simple file-based ones like SQLite. SQL offers the power of a rich query and analysis language to generate the reports that are relevant to your application.”

And given that not everyone understands SQL, he goes on to show how one can get VMM Performance Analyzer data from a SQLite database into an Excel spreadsheet and then analyze the data further by doing additional computation and creating the appropriate graphs. This involves a set of steps leveraging the SQLite ODBC (Open Database Connectivity) driver, and thus requires its installation.

This article presents a mechanism whereby TCL scripts are used to bring in the next level of automation, so that users can retrieve the required data from the SQL DB and even automate the results analysis by auto-generating the relevant performance charts for statistical analysis. Also, as users migrate to DVE as a single platform for their design debug, coverage analysis and verification planning, it is shown how these scripts can be integrated into DVE, so that the generation process is a matter of using the relevant menus and clicking the appropriate buttons in DVE.

For generating the SQL databases with the VMM Performance Analyzer, an SQLite installation is required, which can be obtained from the SQLite website. Once you have installed it, you need to set the SQLITE3_HOME environment variable to the path where it is installed. Once that is done, these are the steps to follow to generate the appropriate graphs out of the data produced by your batch regression runs.

First, you need to download the utility from the link provided in the article DVE TCL Utility for Auto-Generation of Performance Charts

Once it is extracted, you can try it on the tl_bus example that ships with the utility. You need to go to the directory vmm_perf_utility/tl_bus.

Use make to run the tl_bus example; this will generate sql_data.db and sql_data.sql. Now, go to the ‘chart_utility’ directory:

(cd vmm_perf_utility/chart_utility/)

The TCL scripts which are involved in the automation of the performance charts are in this directory.

The script vmm_perf_utility/chart_utility/chart.tcl can then be executed from inside DVE as shown below.


Once that is done, it will add a “Generate Chart” button in the View menu. BTW, adding a button is fairly simple:

eg:    gui_set_hotkey -menu "View->Generate Chart" -hot_key "G"

is how the button gets added.

Now, click on “Generate Chart” to select the SQL database.


This will bring up the dialog box to select the SQL database.


Once the appropriate database is selected, the user can select which table to work with and then generate the appropriate charts. The options provided to the user are based on the data that was dumped into the SQL database. From the combinations of charts shown, select the graph that you want to generate, and the required graphs will be generated for you. This is what you see when you use the SQL DB generated for the TL bus example.



Once you have made the selections, you will see the following chart generated.


Now, obviously, as a user you would not just want the graphs to be generated; you would also want the underlying values to be available to you.

Thus, once you use this chart generation mechanism, the relevant .csv files corresponding to the graphs you have generated are also dumped for you.

These will be generated in a perfReports directory that is created as well, so you can do any additional custom computation in Excel or by running your own scripts. To generate the graphs for any other example, you just need to pick up the appropriate SQL DB that was generated from your simulation runs and then generate the reports and graphs of your interest.

So whether you use the Performance Analyzer in VMM (Performance and statistical analysis from HDL simulations using the VMM Performance Analyzer) or in UVM (Using the VMM Performance Analyzer in a UVM Environment), and even while you are doing your own PA customizations (Performance appraisal time – Getting the analyzer to give more feedback), you can easily generate whatever charts you require, which will help you analyze all the different performance aspects of the design you are verifying.

Posted in Automation, Coverage, Metrics, Customization, Performance Analyzer, Tools & 3rd Party interfaces | Comments Off

Extending Hierarchical Options in VMM to work with all data types

Posted by Amit Sharma on 2nd September 2011

Abhisek Verma, CAE, Synopsys

Tyler Bennet, Senior Application Consultant, Synopsys

Traditionally, to pass a custom data type like a struct or a virtual interface using vmm_opts, the recommended approach is to wrap it in a class and then use set_object/get_object_obj on the wrapper. This approach has been explained in another blog here. But wouldn't you prefer to have the same usage for these data types as the simple use model you have for integers, strings and objects? This blog describes how to create a simple helper package around vmm_opts that uses parameterization to pass user-defined types. It will work with any user-defined type that can be assigned with a simple "=", including virtual interfaces.

Such a package can be created as follows:-

STEP1:: Create the parameterized wrapper class inside the package


The above vmm_opts_p class is used to encapsulate any custom data type, which it takes as a parameter "t".

STEP2:: Define the ‘get’ methods inside the package.

Analogous to vmm_opts::get_obj()/get_object_obj(), we define get_type and get_object_type. These static functions allow the user to get an option of a non-standard type. The only restriction is that the datatype must work with the assignment operator. Also note that since this uses vmm_opts::get_obj, these options cannot be set via the command-line or options file.


STEP3:: Define the ‘set’ methods inside the package.

Similarly, analogous to vmm_opts::set_object(), the custom package needs to declare set_type. This static function allows the user to set an option of a non-standard type.
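Putting the three steps together, a minimal sketch of the helper classes might look like the following. This is illustrative only: the wrapper class name is made up, the package wrapping is omitted for brevity, and the exact vmm_opts::set_object()/get_object_obj() signatures should be checked against your VMM release.

```systemverilog
// STEP 1: parameterized wrapper carrying a value of any type "t"
class vmm_opts_wrapper #(type t = int) extends vmm_object;
   t opt_val;
endclass

class vmm_opts_p #(type t = int);

   // STEP 3: analogous to vmm_opts::set_object()
   static function void set_type(string name, t value, vmm_object root = null);
      vmm_opts_wrapper #(t) w = new;
      w.opt_val = value;
      vmm_opts::set_object(name, w, root);
   endfunction

   // STEP 2: analogous to vmm_opts::get_object_obj()
   static function t get_object_type(output bit is_set, input vmm_object caller,
                                     input string name, input t dflt,
                                     input string doc = "", input bit verbose = 0);
      vmm_object            obj;
      vmm_opts_wrapper #(t) w;
      obj = vmm_opts::get_object_obj(is_set, caller, name, null, doc, verbose);
      if (obj != null && $cast(w, obj)) return w.opt_val;
      return dflt;   // works for any type assignable with "="
   endfunction

endclass
```

The only machinery involved is a `$cast` back from the stored vmm_object to the typed wrapper, which is why any type that supports plain assignment (including virtual interfaces) works.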



The above package can be imported and used to set/get virtual interfaces as follows :-

vmm_opts_p#(virtual dut_if)::set_type("@BAR", top.intf, null); // to set the virtual interface of type dut_if

tb_intf = vmm_opts_p#(virtual dut_if)::get_object_type(is_set, this, "BAR", null, "SET testbench interface", 0); // to get the virtual interface of type dut_if, set by the above operation

The following template example shows the usage of the package in complete detail in the context of passing virtual interfaces

1. Define the interface, Your DUT


2. Instantiate the DUT, Interface and make the connections


3.  Leverage the Hierarchical options and the package in your Testbench


So, there you go. Now, whether you are using your own user-defined types, structs, or queues, you can go ahead and use this package, and thus have your TB components communicate and pass data structures elegantly and efficiently.

Posted in Communication, Configuration, Customization, Organization | Comments Off

Using VMM template Generator to ramp up your testbench development

Posted by Amit Sharma on 25th October 2010

Amit Sharma, Synopsys
'vmmgen', the template generator for creating robust, extensible VMM-compliant environments, has been available for a long time with VMM, and it was upgraded significantly in VMM 1.2. Though the primary purpose of 'vmmgen' is to minimize the VIP and environment development cycle by providing detailed templates for developing VMM-compliant verification environments, a lot of folks also use it to quickly understand how the different VMM base classes can be used in different contexts. This works because the templates use a rich set of the latest VMM features, ensuring the appropriate base classes and their features are picked up optimally.

Given its extensive user interaction mechanism, which presents the available features and options, you can pick the modes most relevant to your requirements. It also provides the option to supply your own templates, adding a rich layer of customization. Based on the need, you can generate individual templates for different verification components, or a complete verification environment which comes with a 'Makefile' and an intuitive directory structure, propelling you on your way to catching the first set of bugs in your DUT. I am sure all of you know where to pick up 'vmmgen' from: it is available in the <VMM_HOME>/Shared/bin area or in $VCS_HOME/bin.

Some of the rich set of features available now includes:

• Template Options:

– Complete environment generation

– Individual templates generation

• Options to create explicitly phased environments, implicitly phased environments, or a mix of implicitly and explicitly phased components

• Usage of VMM Shorthand macros

• Creating RAL based environment, and providing Multiplexed domain support if required

• Hooking up VMM Performance Analyzer at the appropriate interfaces

• Hooking up the DS Scoreboard at the relevant interfaces (with options to choose from a range of integration options, e.g. through callbacks, through TLM 2.0 analysis ports, or connecting directly to transactors, channels or notifications)

• Ability to hook up different generators (atomic, scenario, Multistream generators) at different interfaces

• Creating a scenario library and Multistream scenario creation

• Multi-driver generator support for different kinds of transactions in the same environment

• Factory support for transactions, scenarios and multi stream scenarios. Sample factory testcase which can explain the usage of transaction override from a testcase.

• ‘RTL config’ support for drivers and receivers.

• Various types of unidirectional and bi-directional TLM connections between generator and driver.

• Analysis ports/exports OR parameterized notify observers to broadcast the information from monitor to scoreboard and coverage collectors.

• Multi test concatenation support and management to run the tests

• Creating portable Interface wrapper object, and setting up interface connections to the testbench components using vmm_opts::set_object/get_object_obj

• Creating a Generic slave component

• Option to use default names or user provided names for different components

As you can see, the above list is itself quite comprehensive, and let me tell you that it is not exhaustive, as there are many more features in vmmgen.

With respect to usage as well, there are multiple flavors. In the default mode, the user is taken through multiple detailed choices/options while creating/connecting the different components of the verification environment. However, some folks might want to use 'vmmgen' within their own wrapper script/environment, and for them there are options to generate the environments by providing all required options on the command line or through a configuration file. Some of these switches include:

-SE [y/n] : Generates a complete environment with sub-environments

-RAL [y/n] : Create RAL based verification environments

-RTL [y/n] : Generates RTL configuration for the environment

-ENV <name>, -TR <name> : Provide the name for the environment class and transaction classes. Names for multiple transaction classes can be provided as well:

vmmgen -l sv -TR tr1+tr2

-cfg_file <file_name> : Option to provide a configuration file for the options

There is an option to generate an environment quickly by taking the user through the minimum number of questions (-q).

Additionally, the user can provide his or her own templates through the -L <template directory> option.

As far as individual template generation goes, here is the complete list, outlined for reference:


I am sure a lot of you have already been using 'vmmgen'. For those who haven't, I encourage you to try out its different options. I am sure you will find it immensely useful; it will not only help you create verification components and environments quickly but will also make sure they are optimal and appropriate for your requirements.

Posted in Automation, Coding Style, Customization, Modeling Transactions, Organization, Performance Analyzer, Scoreboarding, Tools & 3rd Party interfaces, VMM infrastructure | Comments Off

Customizing VMM Memory Allocation Manager to cater to your requirements

Posted by Amit Sharma on 11th February 2010

Rakshit Singhal, Nvidia, Amit Sharma, Synopsys

The VMM MAM (Memory Allocation Manager) package offers the capability to dynamically manage memory shared across multiple clients. It helps to simulate HW memory usage patterns and guarantees memory block allocation based on constraints.

This utility is centered on a set of four base classes with configurable memory address ranges and allocation schemes.

  • vmm_mam_cfg: This class is used to specify the memory managed by an instance of a "vmm_mam" memory allocation manager class.
  • vmm_mam: This class is a memory allocation management utility similar to C’s malloc() and free(). A single instance of this class is used to manage a single, contiguous address space.
  • vmm_mam_allocator: An instance of this class is randomized to determine the starting offset of a randomly allocated memory region. This class can be extended to provide additional constraints on the starting offset.
  • vmm_mam_region: This class is used by the memory allocation manager to describe allocated memory regions. Instances of this class should not be created directly.

The mode of operation is as follows. An instance of the "vmm_mam" class uses a "vmm_mam_cfg" class handle to determine the minimum and maximum addresses and the different allocation schemes (mode and locality) of a memory space. A memory space can be reconfigured with a new min, max and allocation scheme at run time through the "vmm_mam_cfg" class, with two exceptions: the number of bytes per memory location cannot be modified once a "vmm_mam" instance has been constructed, and the currently allocated regions must fall within the new address space. On every call of the request_region() function, "vmm_mam" randomizes its "vmm_mam_allocator" instance to determine the start_offset of the randomly allocated memory region. This region is represented by an instance of the "vmm_mam_region" class. Now, if this "vmm_mam" is associated with a "vmm_ral_mem" instance, this "vmm_mam_region" can be used to perform read/write operations on the actual memory block through RAL. Thus we have a simple and easy-to-use framework to constrain different kinds of allocation requests based on specific requirements.
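The flow just described can be sketched as follows. Address values and the manager name are illustrative, and the reconfigure() call should be checked against your VMM release's MAM reference:

```systemverilog
vmm_mam_cfg    cfg;
vmm_mam        mam;
vmm_mam_region region;

initial begin
   cfg = new;
   cfg.n_bytes      = 1;        // cannot be changed once vmm_mam is constructed
   cfg.start_offset = 'h0000;
   cfg.end_offset   = 'hFFFF;
   cfg.mode         = vmm_mam::GREEDY;
   cfg.locality     = vmm_mam::BROAD;

   mam = new("Sys Mem", cfg);

   // request_region() randomizes the internal vmm_mam_allocator
   // to pick the start_offset of the new region
   region = mam.request_region(64);
   $display("%s", region.psdisplay());

   // reconfigure at run time: already-allocated regions must still
   // fall within the new address space
   cfg.end_offset = 'h1FFFF;
   mam.reconfigure(cfg);
end
```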

However, different verification environments generally have additional requirements. For example, a memory map can have a non-contiguous space allocated as device memory, and various devices can simply request a portion of this region. For a PC memory model, each memory region needs a clear ownership, so that we can verify that the memory accesses from the various clients in the system follow the design rules. Moreover, some systems work in multiple address modes, such as 32-bit or 64-bit mode, and depending on which addressing mode is selected, the complete memory management and allocation changes. Another requirement was to give memory owners some control over the alignment of the base addresses of the requested regions. For example, a USB driver in a system may ask for a memory region with its base address aligned to the cacheline size or page size. It may also be required to make memory allocation and access management self-checking for ease of verification, e.g., checking that a cacheable region is accessed only by its designated owners, or building backdoor polling on specific memory locations to trap memory accesses. Portability and extensibility of such an environment were also big concerns.

To sum it up, we needed three primary extensions to VMM MAM: one, the ability to independently constrain each allocation/de-allocation request so that memory attributes could be set on the memory map; two, the flexibility to hook up any memory model with VMM MAM to achieve centralized memory allocation management that could handle multiple memory implementations; three, a placeholder to build self-checking routines, polling routines and front-door/back-door memory accesses so that we could automate the whole memory verification.

To give individual clients control over the attributes set on the memory regions, and to let them choose a specific type of memory, we extended the following VMM MAM classes.

1. vmm_mam_allocator: This has been extended to include features like addressing mode, memory type and address alignment by adding new variables. The addressing mode specifies the address size, i.e., whether the unit/system requires 32-bit, 64-bit, etc. addressing. The memory type attribute confines allocations of a similar type to a specified range; for example, all the MMIO, write-cache, write-back, etc. addresses can be restricted to specific regions in memory, and allocation for each type can be done from within its sub-region. The address alignment specifies whether the required allocation has to be cache-aligned, word-aligned, and so on.

  typedef enum {R, RW, RSVD} mem_acc_t;
  typedef enum {NONE, Above4GB, Below4GB, PreFetch, Cacheable} mem_attr_t;
  typedef enum {UNKNOWN, PCIE, SATA, MAC, USB, USB3} mem_owner_t; // UNKNOWN used as the default owner in nv_mam below
  typedef enum {BYTE_ALIGN, WORD_ALIGN, CACHE_ALIGN, PAGE_ALIGN} address_align_t;
  typedef enum {BIT32, BIT40, BIT64} address_mode_t;

  class nv_mam_allocator extends vmm_mam_allocator;
     address_align_t addr_align;
     ...
     // system memory allocation
     constraint sys_mem_alloc_cons {
        this.start_offset > 64'hFFFF; // reserved memory area below FFFF
        (addr_align == WORD_ALIGN)  -> {this.start_offset[1:0] == 2'b0}; // allocate only word (4B) aligned addresses
        (addr_align == CACHE_ALIGN) -> {this.start_offset[7:0] == 8'b0}; // allocate only cacheline (256B) aligned addresses
     }
  endclass : nv_mam_allocator

2. vmm_mam_region: This class has been extended to apply various user-defined attributes to memory regions and sub-regions. It works with a customized vmm_mam::request_region() function, which can take user variables as input arguments to constrain the "vmm_mam_region" allocation. Additionally, the psdisplay() of "vmm_mam" and "vmm_mam_region" is overridden to print out the values of these variables for debugging purposes. The ownership of a region can also be queried from the testbench using the built-in function get_owner().

  class nv_mam_region extends vmm_mam_region;
     mem_owner_t mem_owner;
     function new(mem_addr_t start_offset, end_offset, offset_range, int unsigned n_bytes, nv_mam parent);
        super.new(start_offset, end_offset, offset_range, n_bytes, parent);
        this.n_bytes = n_bytes;
     endfunction
     function mem_owner_t get_owner();
        return this.mem_owner;
     endfunction
  endclass : nv_mam_region

3. vmm_mam: This class has been extended so that the features added to the above two classes are used in the allocate/de-allocate methods. The handles of the allocated regions are stored and used in the release-region methods. There is also a provision to release the regions belonging to a particular owner or a group of owners.

  class nv_mam extends vmm_mam;
     nv_mam_allocator default_nv_alloc;
     vmm_mam_region   in_use[$];
     nv_mam_region    nv_in_use[$];
     address_mode_t   amode;

     // request region
     function nv_mam_region nv_request_region(int unsigned n_bytes,
                                              mem_owner_t mem_owner = UNKNOWN,
                                              address_align_t addr_align = WORD_ALIGN,
                                              mem_attr_t mem_type = NONE);
        vmm_mam_region   region;
        nv_mam_allocator alloc;
        alloc  = new(addr_align, this.amode, mem_type);
        region = super.request_region(n_bytes, alloc);
        nv_request_region = new(region.get_start_offset(), region.get_end_offset(),
                                region.get_len(), region.get_n_bytes(), this);
        nv_request_region.mem_owner = mem_owner;
        this.in_use.push_back(region);
        this.nv_in_use.push_back(nv_request_region);
     endfunction

     // release OWNER-specific regions
     function void release_mem_region(mem_owner_t mem_owner = UNKNOWN);
        `vmm_note(this.log, $psprintf("Releasing all regions owned by mem owner %s",;
        // iterate in reverse so deleting entries does not disturb the remaining indices
        for (int i = this.nv_in_use.size() - 1; i >= 0; i--) begin
           `vmm_verbose(this.log, $psprintf("REGION OWNER %s", this.nv_in_use[i]));
           if (this.nv_in_use[i].mem_owner == mem_owner) begin
              super.release_region(this.in_use[i]);
              this.in_use.delete(i);
              this.nv_in_use.delete(i);
           end
        end
     endfunction
     ...
  endclass

Memory Model:

The above customized MAM can be wrapped in a class in order to bind the memory configuration class with a specific HW/SW memory implementation, memory monitors, checkers, fw_load functions and various read/write tasks. In the code below, sys_mam_trace can be called any time an actual read/write operation is performed on the hw/sw memory. This function provides notifications to indicate a read/write to a certain address location. Two race-free blocking tasks, wait_for_rd and wait_for_wr, are implemented around these notifications and can be used anywhere in the verification system to trap memory accesses. Based on these traps, a cache protocol checker or any other checker can be implemented easily. The function load_fw can read an image file and preload firmware code at the desired location in system memory. It also reserves the memory region so as to prevent any reads/writes or allocation of the same address range by some other client in the system. Thus, with the customizations we made to the off-the-shelf application class, we were able to efficiently meet our verification requirements!

  class sys_mam_c extends vmm_subenv;
     rand vmm_mam_cfg cfg;
     nv_mam_allocator sys_malloc;
     nv_mam mam;
     rand address_mode_t amode; // selected addressing mode, referenced by the constraint below
     // setting up memory traps
     event indicate_rd, indicate_wr;
     bit rd_access[*];
     bit wr_access[*];
     // memory model - the memory is implemented as an associative array of class objects
     mem_loc sys_mem[*];
     // constrain the memory configuration as per the selected addressing mode
     constraint sys_mam_valid_cons {
        cfg.n_bytes == 1;      // no. of bytes each address points to
        cfg.start_offset == 0;
        (amode == BIT32) -> {cfg.end_offset == 64'hFFFFFFFF};
        (amode == BIT64) -> {cfg.end_offset == 64'hFFFFFFFFFFFFFF};
        cfg.mode == vmm_mam::GREEDY;
        cfg.locality == vmm_mam::BROAD;
     }
     extern function void sys_mam_trace(bit rw, bit [63:0] addr, bit [31:0] data, bit [3:0] byte_en, string requester);
     extern task wait_for_rd(bit [63:0] mem_addr);
     extern task wait_for_wr(bit [63:0] mem_addr);
     extern function void load_fw(string fw_file_name);
     ...
  endclass

  // debug/monitor/notification function
  function void sys_mam_c::sys_mam_trace(bit rw, bit [63:0] addr, bit [31:0] data, bit [3:0] byte_en, string requester);
     // indicate mem read/write notifications
     if (rw) begin
        rd_access[addr] = 1;
        -> indicate_rd;
     end else begin
        wr_access[addr] = 1;
        -> indicate_wr;
     end
     // printing a range of addresses
     if (mem_trace_en | wr_trace_en | rd_trace_en) begin
        if (trace_sa < trace_ea) begin
           if ((addr >= trace_sa) && (addr < trace_ea + 1)) begin
              // run-time print of memory reads/writes within the trace window
              if (!rw && (wr_trace_en | mem_trace_en))
                 `vmm_note(this.log, $psprintf("MEM_WRITE: addr %h, data :%h byte_en :%b requester %s:", addr, data, byte_en, requester));
              else if (rw && (rd_trace_en | mem_trace_en))
                 `vmm_note(this.log, $psprintf("MEM_READ: addr %h, data :%h, requester %s:", addr, data, requester));
           end
        end else begin
           // trace all memory reads/writes
           if (!rw && (wr_trace_en | mem_trace_en))
              `vmm_note(this.log, $psprintf("MEM_WRITE: addr %h, data :%h byte_en :%b requester %s:", addr, data, byte_en, requester));
           else if (rw && (rd_trace_en | mem_trace_en))
              `vmm_note(this.log, $psprintf("MEM_READ: addr %h, data :%h, requester %s:", addr, data, requester));
        end
     end
  endfunction : sys_mam_trace

  // trap memory read accesses
  task sys_mam_c::wait_for_rd(bit [63:0] mem_addr);
     `vmm_note(this.log, $psprintf("Waiting For a Read Access to Mem Addr #%0h", mem_addr));
     while (!this.rd_access.exists(mem_addr)) begin
        @(indicate_rd);
     end
     this.rd_access.delete(mem_addr);
  endtask : wait_for_rd

  // trap memory write accesses
  task sys_mam_c::wait_for_wr(bit [63:0] mem_addr);
     `vmm_note(this.log, $psprintf("Waiting For a Write Access to Mem Addr #%0h", mem_addr));
     while (!this.wr_access.exists(mem_addr)) begin
        @(indicate_wr);
     end
     this.wr_access.delete(mem_addr);
  endtask : wait_for_wr

  // load fw into memory from an img file
  function void sys_mam_c::load_fw(string fw_file_name);
     bit [31:0] mem [*];
     bit [63:0] addr;
     static bit addr_reserved[*];
     mem_loc load_loc;
     $readmemh(fw_file_name, mem);
     foreach (mem[i]) begin
        // i is a word-aligned address; change it to byte-aligned
        addr = i << 2;
        // reserve the address if not already reserved
        if (!addr_reserved[addr]) begin
           this.mam.nv_reserve_region(addr, 4);
           addr_reserved[addr] = 1;
        end
        write_dw(addr, mem[i], 4'b1111, "load_fw");
        `vmm_note(this.log, $psprintf("MEM i=%0h and ADDR=%0h Contains Data %0h", i, addr, mem[i]));
     end
  endfunction : load_fw
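As an illustration of how the traps and firmware loading described above might be used from a test, consider the following sketch. The handle name sys_mam, the trapped address and the image file name are all made up for illustration:

```systemverilog
// sys_mam is a constructed sys_mam_c handle; "fw_boot.img" is illustrative
fork
   begin
      sys_mam.wait_for_wr(64'h0000_1000);  // blocks until a write hits 0x1000
      `vmm_note(sys_mam.log, "Trapped the expected FW write");
   end
join_none

sys_mam.load_fw("fw_boot.img");            // preload and reserve the FW image
```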

Posted in Customization, Reuse | 1 Comment »

Shorthand macros with user defined implementation

Posted by Vidyashankar Ramaswamy on 3rd February 2010

Transaction class objects are created by extending the base class vmm_data. The vmm_data class has many virtual methods which need to be implemented by the extended class. This can become a laborious process, as it has to be done for each extended class. However, a set of shorthand macros is available to help minimize the amount of code required to create these data class extensions. These shorthand macros are used on a per-data-member basis and provide a default implementation of all the required methods. Following is an example.

. . .
1    class apb_trans extends vmm_data;
2       `vmm_typename(apb_trans)

3       rand enum {READ, WRITE} kind;
4       rand bit [31:0] addr;
5       rand logic [31:0] data;

6       `vmm_data_member_begin(apb_trans)
7            `vmm_data_member_scalar(addr, DO_ALL)
8            `vmm_data_member_scalar(data, DO_ALL)
9            `vmm_data_member_enum(kind, DO_ALL)
10     `vmm_data_member_end(apb_trans)
11     …
12  endclass: apb_trans
. . .

The class properties are declared as shown in line numbers 3 to 5. Line numbers 6 and 10 mark the start and end of the shorthand macro section. Based on the variable type, the appropriate macros are called (line numbers 7 to 9). As the name says, "DO_ALL" means use this variable in all the virtual method implementations. If you want to exclude the "kind" property from printing, you can use "DO_ALL - DO_PRINT". Please refer to the VMM user guide for more details on this.
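For instance, to keep "kind" in all the generated methods except psdisplay(), the member section of the apb_trans listing above would become:

```systemverilog
`vmm_data_member_begin(apb_trans)
   `vmm_data_member_scalar(addr, DO_ALL)
   `vmm_data_member_scalar(data, DO_ALL)
   `vmm_data_member_enum(kind, DO_ALL - DO_PRINT)  // excluded from printing only
`vmm_data_member_end(apb_trans)
```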

User defined implementation

Shorthand macros provide the default implementation for all the vmm_data virtual methods. If you want to override the default implementation of a method, you have to implement the corresponding do_* method. For example, say you want to change the implementation of byte_size: you can still use the shorthand macros, but you need to explicitly implement the apb_trans::do_byte_size() method, which forces VMM not to provide the default implementation. The example code is shown below.

virtual function int unsigned do_byte_size (int kind = -1);
    . . .
    . . .
endfunction
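A completed override for apb_trans might look like this. The byte count is an assumption based on the fields declared in the earlier listing, not a value from the original post:

```systemverilog
virtual function int unsigned do_byte_size(int kind = -1);
   // addr (4 bytes) + data (4 bytes) + kind (1 byte) on the wire
   return 9;
endfunction
```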

Constructor replacement

In some cases a transaction class might need a custom constructor with different arguments. Please note that the explicit constructor implementation is preceded by the shorthand macro `vmm_data_new() as shown below (line 1). The new implementation should follow the macro (line numbers 2 to 5). It is also important to provide default values for the arguments to keep the transaction class factory-enabled (line 2).

1   `vmm_data_new(apb_trans)
2   function new (vmm_log log = null, vmm_object parent = null, string name = "");
3, . . .);
4       . . .
5   endfunction

The shorthand macros are also available for messaging service (vmm_log), vmm_unit configuration, RTL configuration (vmm_rtl_config) and TLM ports. For the complete list, please refer to the VMM user guide.

Posted in Automation, Customization, Modeling Transactions | Comments Off

VMM data macros are cool, but how do I customize the constructor?

Posted by Shankar Hemmady on 27th August 2009

Srinivasan Venkataramanan, CVC

Pawan Bellamkonda, Brocade

During our recent VMM training at CVC, we learned about VMM data member macros, and our engineers liked it. Some of our teams at Brocade have started adopting it in their projects right away! We see that we can avoid much of the lengthy code and increase readability with these new macros. We will surely avoid making silly mistakes which might be hard to debug later.

However, as with any built-in automation, there are always scenarios wherein user-level customization of some or all of the methods is required. VMM provides this flexibility by allowing the default behavior of the virtual methods of vmm_data to be overridden. In one of our blocks we needed to tweak the constructor of the transaction. One question that perplexed us was:

"I have built a transaction class extending from vmm_data. We have used the shorthand macros `vmm_data_member… to get all the functions automatically. But while creating an object of this transaction, we want to pass a configuration class object as an argument to the new function. How should we override the new() function alone when we use the shorthand macros? When we tried using do_new() (like overriding other functions), it did not work."

As we explored a bit, we found another macro specifically meant for this: `vmm_data_new()


This macro should be used before the beginning of the data-member macros. This lets the succeeding macros do all the work except the "new" function implementation.

class s2p_xactn extends vmm_data;
  rand bit [7:0] pkt_len, pkt_pld;

  `vmm_data_new(s2p_xactn)
  function new(int my_own_arg = 2);
    `vmm_note(log, $psprintf("my val is %0d", my_own_arg));
  endfunction : new

  `vmm_data_member_begin(s2p_xactn)
    `vmm_data_member_scalar(pkt_len, DO_ALL)
    `vmm_data_member_scalar(pkt_pld, DO_ALL)
  `vmm_data_member_end(s2p_xactn)
endclass : s2p_xactn

As with traditional martial arts, functional verification too has slightly different styles/requirements that make each project interesting and unique. To its credit, we feel that VMM is as proven as the traditional martial arts: it can be tailored to different requirements while providing a standardized means of combat.

Posted in Automation, Coding Style, Customization, Modeling | 2 Comments »

Address alignment in Memory Allocation Manager

Posted by Janick Bergeron on 10th August 2009


Janick Bergeron, Synopsys Fellow

The VMM Memory Allocation Manager (vmm_mam) can be used to manage a shared address space. It does not physically allocate memory but dishes out address ranges that are guaranteed to be previously unallocated.

By default, the only constraints on the Memory Allocation Manager are: 1) the entire address range must not already be allocated, 2) the starting offset must be greater than or equal to the base offset of the memory, and 3) the ending offset must be less than or equal to the offset limit.

That’s it.

So randomly allocating memory will result in memory regions starting at random positions. For example, the following code allocates ten 4-byte regions in a 256-byte memory:

program test;

`include ""
`include ""

vmm_mam_cfg    cfg;
vmm_mam        mgr;
vmm_mam_region bfr;

initial begin
   cfg = new;
   cfg.n_bytes      = 1;
   cfg.start_offset = 8'h00;
   cfg.end_offset   = 8'hFF;

   mgr = new("Mem Mgr", cfg);

   repeat (10) begin
      bfr = mgr.request_region(4);
      $write("%s\n", bfr.psdisplay());
   end
end
endprogram

And produces the following results:


But what if you needed the regions to be aligned on quad-word boundaries?


You can add constraints by extending the vmm_mam_allocator object and supplying the new allocator to the request_region() call, or by making it the default allocation policy on the Memory Allocation Manager instance:

program test;

`include ""
`include ""

class qword_aligned_allocator extends vmm_mam_allocator;
   constraint qword_aligned {
      start_offset[1:0] == 0;
   }
endclass

vmm_mam_cfg    cfg;
vmm_mam        mgr;
vmm_mam_region bfr;
qword_aligned_allocator alloc;

initial begin
   cfg = new;
   cfg.n_bytes      = 1;
   cfg.start_offset = 8'h00;
   cfg.end_offset   = 8'hFF;

   mgr = new("Mem Mgr", cfg);

   alloc = new;

   repeat (5) begin
      bfr = mgr.request_region(4, alloc);
      $write("%s\n", bfr.psdisplay());
   end

   mgr.default_alloc = alloc;

   repeat (5) begin
      bfr = mgr.request_region(4);
      $write("%s\n", bfr.psdisplay());
   end
end
endprogram


The resulting regions are now QWORD aligned:


Other kinds of constraints you can add via a customized allocator include:

  • Region must be within a 1-k address page
  • 75% of regions must be in the upper half of the address space
  • Regions must be allocated from a pool of pre-allocated regions

Remember that regions are allocated by randomizing the allocator object. That means that a highly directed and procedural allocation policy (such as the last one mentioned above) can be implemented using the post_randomize() method.
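For example, the first two policies listed above could be sketched as custom allocators. The class names are made up, and this assumes the allocator exposes the requested length as len, which should be checked against your VMM release's class reference:

```systemverilog
class page_bound_allocator extends vmm_mam_allocator;
   // region must lie entirely within a single 1-kB address page
   constraint in_one_page {
      start_offset[9:0] + len <= 'd1024;
   }
endclass

class upper_half_allocator extends vmm_mam_allocator;
   // bias 75% of the regions into the upper half of the 256-byte space
   constraint mostly_upper {
      start_offset dist { ['h00 : 'h7F] := 25, ['h80 : 'hFF] := 75 };
   }
endclass
```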

Once you have a region, if you have associated the Memory Allocation Manager with a RAL memory abstraction class, you may access it as if it were a tiny memory in and of itself. The integration of MAM and RAL effectively implements a mini virtual-memory system.
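For instance, assuming the manager was constructed with a handle to a vmm_ral_mem, an allocated region can then be read and written directly. The exact region read/write signatures vary by VMM release, so treat this as a sketch:

```systemverilog
vmm_rw::status_e status;
bit [63:0]       rdata;

bfr = mgr.request_region(4);
bfr.write(status, 0, 64'hDEAD_BEEF);  // offset 0 is relative to the region base
bfr.read (status, 0, rdata);
```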

Posted in Customization, Tutorial | Comments Off