Verification Martial Arts: A Verification Methodology Blog

Archive for the 'Configuration' Category

Avoiding Redundant Simulation Cycles with your UVM VIP based simulation with a Simple Save-Restore Strategy

Posted by paragg on 6th March 2014

In many verification environments, you reuse the same configuration cycles across different testcases. These cycles might involve writing to and reading from different configuration and status registers, loading program memories, and other similar tasks to set up a DUT for the targeted stimulus. In many of these environments, the time taken by these configuration cycles is very long. Also, there is a lot of redundancy, as verification engineers have to run the same set of verified configuration cycles for different testcases, leading to a loss in productivity. This is especially true for complex verification environments with multiple interfaces which require different components to be configured.

The Verilog language provides an option of saving the state of the design and the testbench at a particular point in time, so that you can restore the simulation to the same state and continue from there. This can be done by adding the appropriate built-in system calls to the Verilog code. VCS provides the same options from the Unified Command Line Interface (UCLI).

However, it is not enough to simply restore the simulation from the saved state. For different simulations, you may want to apply different random stimulus to the DUT. In the context of UVM, you would want to run different sequences from a saved state as shown below.
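In outline:

1. Run the common configuration cycles (reset, register configuration, program memory loads, etc.) once.
2. Save the simulation state once this configuration is complete.
3. Restore the saved state in each subsequent run and execute a different sequence for the targeted stimulus.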

In the flow above, only the last step varies to a large extent from run to run; the earlier steps, once established, need no iteration.

Here we explain how to achieve the above strategy with the simple UBUS example available in the standard UVM installation. Simple changes are made in the environment to show what needs to be done to bring in this additional capability. Within the existing set of tests, the two tests “test_read_modify_write” and “test_r8_w8_r4_w4” differ only with respect to the master sequence being executed – i.e. “read_modify_write_seq” and “r8_w8_r4_w4_seq” respectively.

Let’s say that we have a scenario where we want to save a simulation once the reset_phase is done and then start executing different sequences after the reset_phase in the restored simulations. To demonstrate a similar scenario through the UBUS tests, we introduced a delay in the reset_phase of the base test (in a real test, this may correspond to PLL lock, DDR initialization, or basic DUT configuration).
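A minimal sketch of such a stretched reset_phase in the base test is shown below; the delay value is just a placeholder for the real configuration/initialization time:

// Sketch only: an artificially long reset_phase standing in for PLL lock,
// DDR initialization or basic DUT configuration in a real test.
virtual task reset_phase(uvm_phase phase);
  phase.raise_objection(this);
  super.reset_phase(phase);
  #100us; // placeholder for the long, testcase-independent configuration activity
  phase.drop_objection(this);
endtask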

The following snippet shows how the existing tests are modified to bring in the capability of running different tests in different ‘restored’ simulations.

As evident in the code, we made two major modifications:

  • Shifted the setting of the phase default_sequence from the build phase to the start of the main phase.
  • Retrieved the name of the sequence as an argument from the command line and processed the string in the code to execute the corresponding sequence on the relevant sequencer, as sketched below.
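A minimal sketch of such a modified test is shown here; the +SEQ_NAME plusarg and the sequencer path follow the standard UBUS example and are assumptions rather than the exact code from the original snippet:

class ubus_restore_test extends ubus_example_base_test;
  `uvm_component_utils(ubus_restore_test)

  function new(string name = "ubus_restore_test", uvm_component parent = null);
    super.new(name, parent);
  endfunction

  // The default sequence is no longer set in the build phase; instead, the
  // sequence is selected and started at the beginning of the main phase,
  // i.e. after the restore point.
  virtual task main_phase(uvm_phase phase);
    string seq_name;
    uvm_sequence_base seq;
    phase.raise_objection(this);
    if (!$value$plusargs("SEQ_NAME=%s", seq_name))
      seq_name = "read_modify_write_seq";
    case (seq_name)
      "read_modify_write_seq": seq = read_modify_write_seq::type_id::create("seq");
      "r8_w8_r4_w4_seq"      : seq = r8_w8_r4_w4_seq::type_id::create("seq");
      default                : `uvm_fatal("SEQ", {"Unknown sequence name: ", seq_name})
    endcase
    seq.start(ubus_example_tb0.ubus0.masters[0].sequencer);
    phase.drop_objection(this);
  endtask
endclass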

As you can see, the changes are kept to a minimum. With this, the above generic framework is ready to be simulated. In VCS, the save/restore flow can be enabled from UCLI as follows.
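The sketch below shows one possible flow; the UCLI command spellings, script options and the +SEQ_NAME plusarg are illustrative, so check the VCS/UCLI documentation for the exact syntax in your version:

# First run: execute the configuration cycles, then save a checkpoint from UCLI
./simv -ucli -do save.do
#   save.do:    run             (run until the testbench stops after the reset_phase, e.g. via $stop)
#               save cfg_done   (save the simulation state)
#               quit

# Restored runs: reload the checkpoint and execute different sequences
./simv -ucli -do restore.do +SEQ_NAME=read_modify_write_seq
./simv -ucli -do restore.do +SEQ_NAME=r8_w8_r4_w4_seq
#   restore.do: restore cfg_done
#               run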

Thus, the above strategy helps in optimal utilization of compute resources with simple changes in your verification flow. Hope this was useful and that you can easily make the changes in your verification environment to adopt this flow and avoid redundant simulation cycles.

Posted in Automation, Coding Style, Configuration, Creating tests, Customization, Optimization/Performance, Organization, Reuse, Stimulus Generation, SystemVerilog, Tutorial, UVM, Uncategorized, Verification Planning & Management | 1 Comment »

The right name at the right space: using ‘namespace’ in VMM to set virtual interfaces

Posted by Amit Sharma on 7th September 2011

Abhisek Verma, CAE, Synopsys

A ‘namespace’ is an abstract container or environment created to hold a logical grouping of unique identifiers or names. The same identifier can thus be independently defined in multiple namespaces, and the meaning associated with an identifier defined in one namespace may or may not be the same as that of the same identifier defined in another namespace. ‘Namespace’ in VMM is used to group or tag different VMM objects, resources and transactions with a meaningful namespace across the testbench environment. This allows the user to identify and access them efficiently. For example, a benefit of this approach is that it relieves the user from making cross-module references to access the various resources. This can be seen in the context of accessing the interfaces associated with a driver or a monitor in the environment, and goes a long way in making the code more scalable.

Accessing and assigning interface handles to a particular transactor can be done in various ways in VMM, as discussed in the following blogs: Transactors and Virtual Interface and Extending Hierarchical Options in VMM to work with all data types. In addition to these, one can leverage ‘namespaces’ in VMM to achieve this fairly elegantly. The idea here is to put the Virtual Interface instances in the appropriate namespace in the object hierarchy to be retrieved by the verification environment wherever required through simple APIs as shown in the following steps:

STEP 1:: Define a parameterized class extending from vmm_object to act as a wrapper for the interface handle.

STEP 2:: Instantiate the interface wrapper in the top-level module and put it in the “VIF” namespace.

STEP 3:: In the environment, access the interface wrapper by querying for it in the “VIF” namespace and use the retrieved handle to set the virtual interface in the transactor.
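A skeleton of the wrapper class from STEP 1 might look like the following. This is a minimal sketch: vif_wrapper and dut_if are assumed names, and the namespace registration/lookup in STEPs 2 and 3 uses the vmm_object namespace APIs referred to above.

// Sketch only: parameterized wrapper holding a virtual interface handle.
class vif_wrapper #(type IF) extends vmm_object;
  IF v_if; // the virtual interface being wrapped, e.g. virtual dut_if

  function new(IF v_if, string name, vmm_object parent = null);
    super.new(parent, name);
    this.v_if = v_if;
  endfunction
endclass

// Usage in the top-level module: vif_wrapper#(virtual dut_if) wrp = new(dut_if_inst, "dut_vif");
// The instance is then placed in, and later retrieved from, the "VIF" namespace
// as described in STEPs 2 and 3.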

The example below demonstrates the implementation of the above

The Interface and DUT templates..

[image]

Step1: Parameterized wrapper class for the interface-

[image]

The Testbench Top:

[image]

The Program Block:

[image]

Posted in Configuration, Structural Components, VMM infrastructure | Comments Off

Extending Hierarchical Options in VMM to work with all data types

Posted by Amit Sharma on 2nd September 2011

Abhisek Verma, CAE, Synopsys

Tyler Bennet, Senior Application Consultant, Synopsys

Traditionally, to pass a custom data type like a struct or a virtual interface using vmm_opts, it is recommended to wrap it in a class and then use the set/get_obj/get_object_obj calls on it. This approach has been explained in another blog here. But wouldn’t you prefer to have the same usage for these data types as the simple use model you have for integers, strings and objects? This blog describes how to create a simple helper package around vmm_opts that uses parameterization to pass user-defined types. It will work with any user-defined type that can be assigned with a simple “=”, including virtual interfaces.

Such a package can be created as follows:-

STEP1:: Create the parameterized wrapper class inside the package

[image]

The above vmm_opts_p class is used to encapsulate any custom data type which it takes as a parameter “t”.

STEP2:: Define the ‘get’ methods inside the package.

Analogous to vmm_opts::get_obj()/get_object_obj(), we define get_type and get_object_type. These static functions allow the user to get an option of a non-standard type. The only restriction is that the datatype must work with the assignment operator. Also note that since this uses vmm_opts::get_obj, these options cannot be set via the command-line or options file.

[image]

STEP3:: Define the ‘set’ methods inside the package.

Similarly, analogous to vmm_opts::set_object(), the custom package needs to declare set_type. This static function allows the user to set an option of a non-standard type.

[image]
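Putting the three steps together, the heart of such a package might look like the sketch below. This is illustrative only: the vmm_opts call signatures are inferred from the use model that follows, and the exact argument lists may differ in your VMM version.

// Sketch of the parameterized helper class around vmm_opts.
class vmm_opts_p #(type t = int) extends vmm_object;
  t opt_value; // the wrapped user-defined value

  function new(string name, vmm_object parent = null);
    super.new(parent, name);
  endfunction

  // Analogous to vmm_opts::set_object(): wrap the value and register it.
  static function void set_type(string name, t value, vmm_object root = null);
    vmm_opts_p #(t) wrapper = new(name);
    wrapper.opt_value = value;
    vmm_opts::set_object(name, wrapper, root);
  endfunction

  // Analogous to vmm_opts::get_object_obj(): retrieve the wrapper and unwrap it.
  // The doc/verbosity arguments are kept only to mirror the built-in use model.
  static function t get_object_type(output bit        is_set,
                                    input  vmm_object caller,
                                    input  string     name,
                                    input  t          dflt,
                                    input  string     doc = "",
                                    input  int        verbosity = 0);
    vmm_opts_p #(t) wrapper;
    if ($cast(wrapper, vmm_opts::get_object_obj(is_set, caller, name)) &&
        is_set && wrapper != null)
      return wrapper.opt_value;
    return dflt;
  endfunction
endclass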

USE-MODEL

The above package can be imported and used to set/get virtual interfaces as follows :-

vmm_opts_p#(virtual dut_if)::set_type("@BAR", top.intf, null); // to set the virtual interface of type dut_if

tb_intf = vmm_opts_p#(virtual dut_if)::get_object_type(is_set, this, "BAR", null, "SET testbench interface", 0); // to get the virtual interface of type dut_if, set by the above operation

The following template example shows the usage of the package in complete detail in the context of passing virtual interfaces

1. Define the interface, Your DUT

[image]

2. Instantiate the DUT, Interface and make the connections

[image]

3.  Leverage the Hierarchical options and the package in your Testbench

[image]

So, there you go.. Now, whether you are using your own user-defined types, structs or queues, you can go ahead and use this package and thus have your TB components communicate and pass data structures elegantly and efficiently..

Posted in Communication, Configuration, Customization, Organization | Comments Off

Building & Configuring Coverage Model – VMM Style – Part-III

Posted by paragg on 25th June 2011

Parag Goel, Senior Corporate Application Engineer, Synopsys

In the final blog of this coverage modeling with VMM series, we focus on error coverage. Negative scenario testing is an integral part of verification. But again, we have this question – have I covered all the negative scenarios?

So it is important to ensure that the generic coverage model tracks all the error scenarios.

Let’s see how a specific mechanism provided in VMM, in the form of vmm_log_catcher, helps to track error coverage efficiently and effectively. The VMM log catcher is able to identify/catch a specific string in any of the messages issued through the VMM reporting mechanism.

Typically, the verification environment issues messages to STDOUT when the DUT responds to an error scenario. These messages can be ‘caught’ by the log catcher to update the appropriate coverage groups. Let’s see how this is done in detail.

The verification environment would respond to each negative scenario by issuing a message with a unique text specific to that error scenario.

In the context of the AXI framework, we can introduce a wide range of error scenarios and test whether the DUT responds correctly or not. A few possible error scenarios in AXI are listed below for your reference.

[image: possible AXI error scenarios]

However, not all scenarios may be applicable at all times, and hence configurability is required to enable only the required set of coverpoints tied to the relevant negative scenarios. Thus, we should have the same kind of configurability for error coverage as discussed in the earlier blogs.

Let’s see how we can catch the relevant responses and sample the appropriate covergroups.

As mentioned earlier, in the example below, we make use of the unique message issued as a result of a negative scenario.

This is how we use the VMM Log catcher.

1. The error coverage class is extended from vmm_log_catcher, a VMM base class.

2. The vmm_log_catcher::caught() API is utilized as a means to qualify the covergroup sampling.

[image]
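A minimal sketch of what such a catcher might look like is shown here; the class name, the covergroup and the exact vmm_log_msg field used for the match are assumptions to be checked against the VMM reference:

// Sketch only: extend vmm_log_catcher and sample error coverage in caught().
class axi_err_coverage extends vmm_log_catcher;
  bit wr_resp_slverr; // set when the SLVERR response message is caught

  covergroup err_cg;
    coverpoint wr_resp_slverr { bins seen = {1}; }
  endgroup

  function new();
    err_cg = new();
  endfunction

  virtual function void caught(vmm_log_msg msg);
    // msg.text is assumed to hold the message line(s); a real model may need a
    // substring match rather than full-string equality.
    foreach (msg.text[i]) begin
      if (msg.text[i] == "AXI_WRITE_RESPONSE_SLVERR") begin
        wr_resp_slverr = 1;
        err_cg.sample();
      end
    end
    // Re-issue the caught message via the vmm_log_catcher API if it should
    // still be printed through the regular reporting path.
  endfunction
endclass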

In the code above, whenever a message with the text “AXI_WRITE_RESPONSE_SLVERR” is issued from anywhere in the verification environment, the ‘caught’ method is invoked, which in turn samples the appropriate covergroup. Additionally, you can specify more parameters in the caught API to restrict which ‘scenarios’ should be caught.

vmm_log_catcher::caught(
  string name = "",
  string inst = "",
  bit recurse = 0,
  int typs = ALL_TYPS,
  int severity = ALL_SEVS,
  string text = "");

The above API installs the specified message handler to catch any message of the specified type and severity, issued by the message service interface instances specified by the name and instance arguments, which contains the specified text. By default, this method catches all messages issued by this message service interface instance.

Hope this set of articles is relevant and useful to you. I have made an attempt to leverage some of the built-in capabilities of the SystemVerilog language and the VMM base classes to target some of the challenges in creating configurable coverage models. These techniques can be improved further to make them more efficient and scalable. I look forward to hearing whatever inputs you have in this area.

Posted in Automation, Configuration, Coverage, Metrics, Reuse, Structural Components, VMM, VMM infrastructure | Comments Off

Building & Configuring Coverage Model – VMM Style – Part-II

Posted by paragg on 25th June 2011

Parag Goel, Senior Corporate Application Engineer, Synopsys

In the previous post, we looked at how you can enable/disable different types of coverage encapsulated in the coverage model wrapper class. In this post, let’s look at how we can easily create an infrastructure to pass different inputs to the wrapper class so as to be able to configure the coverage collection based on the user’s requirements. The infrastructure ensures that these values percolate down to the sub-coverage model groups.

The following are some of the key inputs that need to be passed to the different coverage component classes:

1. SV Virtual Interfaces so that different signal activity can be accessed

2. The Transactions observed and collected by the physical level monitors

3. The ‘Configuration’ information

[image]

Let’s look at how we can easily pass the signal-level information to the coverage model.

Step I: Encapsulation of the interface in the class wrapper.

class intf_wrapper extends vmm_object;

  virtual axi_if v_if;

  function new (string name, virtual axi_if mst_if);
    super.new(null, name);
    this.v_if = mst_if;
  endfunction

endclass: intf_wrapper

Step II: In the top-level class/environment, set this object using the vmm_opts API.

class axi_env extends vmm_env;
`vmm_typename(axi_env)
intf_wrapper mc_intf;

function void build_ph();
mc_intf = new("Master_Port", tb_top.master_if_p0);
// Set the master port interface
vmm_opts::set_object("VIP_MSTR:vip_mstr_port", mc_intf, env);
endfunction:build_ph
endclass: axi_env

Step III: Connecting in the coverage class.

A. Get the object containing interface in the coverage model class using vmm_opts.

assert($cast(this.mst_port_obj, vmm_opts::get_object_obj(is_set, this, "vip_mstr_port")));

B. Connecting local virtual interface to one contained in the object.

this.cov_vif = this.mst_port_obj.v_if;

Now, the transaction object collected by the monitor needs to be passed to the coverage collector. This can be conveniently done in VMM using TLM communication, specifically through the vmm_tlm_analysis_port, which establishes the communication between the producer and its subscribers.

class axi_transfer extends vmm_data;
  . . .
endclass

class axi_bus_monitor extends vmm_xactor;

  vmm_tlm_analysis_port#(axi_bus_monitor, axi_transfer) m_ap;

  task collect_trans();
    . . .
    // Write the observed transaction to the analysis port.
    m_ap.write(trans);
  endtask
endclass

class axi_coverage_model extends vmm_object;

  vmm_tlm_analysis_export #(axi_coverage_model, axi_transfer) m_export;

  function new (string inst, vmm_object parent = null);
    super.new(parent, inst);
    m_export = new(this, "m_export");
  endfunction

  function void write(int id, axi_transfer trans);
    // Sample the appropriate covergroup once the transaction is received
    // in the write function.
  endfunction

endclass

To set up the TLM Connections in the agent/environment, we need to do the following:

class axi_subenv extends vmm_group;

  // Instantiate the model classes and create them.
  axi_bus_monitor mon;
  axi_coverage_model cov;
  . . .

  virtual function void build_ph;
    mon = new("mon", this);
    cov = new("cov", this);
  endfunction

  virtual function void connect_ph;
    // Bind the TLM ports via VMM - tlm_bind
    mon.m_ap.tlm_bind(cov.m_export);
  endfunction

endclass

To make the coverage model truly configurable, we need to look at some of the other key requirements as well, at different levels of granularity. These can be summarized as the ability to do the following.

1. Enable/disable coverage collection for each covergroup defined. Every covergroup should be created only if the user wishes to do so, so there should be a configuration parameter which restricts the creation of the covergroup altogether. This parameter should also be used to control the sampling of the covergroup.

2. The user must be able to configure limits on the individual values being covered in the coverage model, within a legal set of values. For example, for the transaction field BurstLength, the user should be able to tell the model the limits on this field for which coverage is desired, within the legal range of ‘1’ to ‘16’ as per the AXI spec. Providing lower and upper limits for transaction parameters like burst size, burst length, address, etc. makes the model reusable. These limits should be modeled as variables which can be overridden dynamically.

3. The user should be able to control the number of bins to be created, for example in fields like address. The auto_bin_max option can be exploited to achieve this in case the user hasn’t explicitly defined bins.

4. The user must be able to control the number of hits for which a bin can be considered covered. option.at_least can be used for this purpose, and its input can be a user-defined parameter.

5. The user should also have control over the coverage goal, i.e. when the coverage collector should show the covergroup as “covered” even though the coverage is not 100%. This can be achieved by using option.goal, where the goal is again a user-defined parameter. A sketch of a covergroup using these controls is shown below.
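The following sketch is illustrative only; the covergroup name, its arguments, and the configuration fields referred to in the comments are assumptions following the coverage configuration class described next:

// Sketch: a covergroup whose shape is driven by user-defined parameters.
covergroup burst_len_cg (int min_len, int max_len, int min_hits, int goal_pct)
  with function sample (int burst_length);
  option.per_instance = 1;
  option.at_least     = min_hits; // hits needed before a bin counts as covered
  option.goal         = goal_pct; // coverage goal in percent
  cp_burst_len : coverpoint burst_length {
    bins legal[] = {[min_len:max_len]}; // user-configured legal range, e.g. 1 to 16
  }
endgroup

// Guarded creation and sampling, driven by the configuration object (cfg fields
// such as enable_txn_cov, min_burst_len, etc. are assumed names):
//   if (cfg.enable_txn_cov)
//     burst_cg = new(cfg.min_burst_len, cfg.max_burst_len, cfg.bin_at_least, cfg.cov_goal);
//   ...
//   if (burst_cg != null) burst_cg.sample(tr.burst_length);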

All the parameters required to meet the above requirements can be encapsulated in a coverage configuration class, and this class can be set and retrieved in a fashion similar to that described for setting and getting the interface wrapper class using the vmm_opts APIs.

class coverage_cfg extends vmm_object;
  int disable_wr_burst_len;
  . . .
  function new(vmm_object parent = null, string name);
    super.new(parent, name);
  endfunction
endclass

// In the coverage model class, the configuration object is then retrieved via vmm_opts:
coverage_cfg cfg;
function new(vmm_object parent = null, string name);
  bit is_set;
  super.new(parent, name);
  $cast(cfg, vmm_opts::get_object_obj(is_set, this, "COV_CFG_OBJ"));
endfunction

Wei Hua presents another cool mechanism for collecting these parameters, using the vmm_notify mechanism, in this earlier blog:

A Generic Functional Coverage Solution Based On vmm_notify

Hope you found this useful. I will be talking about how to track Error Coverage in my next blog, so stay tuned!

Posted in Communication, Configuration, Coverage, Metrics, Reuse, Structural Components, VMM, VMM infrastructure | Comments Off

Building & Configuring Coverage Model – VMM Style – Part-I

Posted by paragg on 24th June 2011

Parag Goel, Senior Corporate Application Engineer, Synopsys

“To minimize wasted effort, coverage is used as a guide for directing verification resources by identifying tested and untested portions of the design.”

- IEEE Standard for SystemVerilog (IEEE Std. 1800-2009)

Configurability and reusability are the buzzwords in the verification of chips, and these are enabled to a large extent by present-day verification methodologies. Through a set of blogs, I plan to show how we can create configurable coverage models in VMM-based environments. Given that AMBA AXI is one of the most commonly used protocols in the industry for communication amongst SoC peripherals, I chose an AXI-based framework for my case study.

The idea here is to create a configurable coverage model leveraging some of the base classes provided in the methodology, so that we can make it completely reusable as we move from block to system level or across projects. Once we enable that, we can move the coverage model inside the sub-environment modeled by vmm_group or vmm_subenv, which are the units of reuse.

[image]

Primary Requirements of Configuration Control:

Two important requirements that need to be met to ensure that the coverage model becomes part of reusable components are:

1. Ability to enable/disable the coverage model whenever required.

2. Ability to turn ON/OFF different subgroups at the desired granularity. For example, a user may not always want the error coverage to be enabled, unless under specific circumstances.

To meet the above requirements, we make use of the VMM global and hierarchical configurations.

Through the vmm_opts base classes, VMM provides a mechanism to control the configuration parameters of a verification environment. This can be done in a hierarchical as well as in a global manner. These options are summarized below:

[image: summary of the vmm_opts options]
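For quick reference, a sketch of the calls used later in this series is given below; see the VMM reference for the full list and exact signatures:

// Global options (name, default/value):
//   value = vmm_opts::get_int("NAME", default);          vmm_opts::set_int("%*:NAME", value);
// Hierarchical / object options (is_set, caller, name, ...):
//   b = vmm_opts::get_object_bit(is_set, this, "NAME");
//   i = vmm_opts::get_object_int(is_set, this, "NAME", default);
//   vmm_opts::get_object_range(is_set, this, "NAME", lo, hi, min, max);
//   vmm_opts::set_object("PREFIX:name", obj, root);      obj = vmm_opts::get_object_obj(is_set, this, "name");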

In the environment, the coverage_enable is by default set to 0, i.e. disabled.

coverage_enable = vmm_opts::get_int("coverage_enable", 0);

Now, the user can enable the coverage via either of the two mechanisms.

1. From user code using vmm_opts.

The basic rule is that you need to ‘set’ it *before* the ‘get’ is invoked, and during the time when the construction of the components takes place. As a general recommendation, the build phase is the most appropriate place for setting up structural configuration.

function void axi_test::build_ph();
  // Enable coverage.
  vmm_opts::set_int("@%*:axi_subenv:enable_coverage", 1);
endfunction

2. From command line or external option file. The option is specified using the command-line +vmm_name or +vmm_opts+name.
./simv +vmm_opts+enable_coverage=1@axi_env.axi_subenv

The command line supersedes the option set within code as shown in 1.

User can also specify options for specific instances or hierarchically using regular expressions.

[image]

Now let’s look at the typical classification of a coverage model.

From the perspective of the AXI protocol, we can look at four sub-sections.

Transaction coverage: coverage definition on the user-controlled parameters usually defined in the transaction class & controlled through sequences.

Error coverage: coverage definition on the pre-defined error injection scenarios

Protocol coverage: This is protocol specific (AXI handshake coverage). In the case of AXI, it is mainly coverage on the handshake signals, i.e. READY and VALID, on all five channels.

Flow coverage: This is again protocol specific, and for AXI it covers various features like outstanding transactions, interleaving, write data before write address, etc.

[image]

At this point, let’s look at how these different sub-groups within the complete coverage model can be enabled or disabled. Once the coverage configuration class is built and passed on to the main coverage model, we need fine-grained control to enable/disable the individual coverage models. The code below shows how the user can control all the coverage models in the build phase of the main coverage class.

Here too, we can see how vmm_opts is used to meet the requirement of controlling individual parameters.

vmm_opts::set_int("@%*:disable_transaction_coverage", 0);
vmm_opts::set_int("@%*:disable_error_coverage", 0);
vmm_opts::set_int("@%*:disable_axi_handshake_coverage", 0);
vmm_opts::set_int("@%*:disable_flow_coverage", 0);

In my next blog, I will show how hierarchical VMM configurations are used to dynamically pass signal-level and other configuration-related information to the coverage model. We shall also discuss the usage of the VMM TLM features towards fulfilling the goal of a configurable coverage model. Stay tuned!

Posted in Configuration, Coverage, Metrics, Organization, Reuse, SystemVerilog, VMM, VMM infrastructure | Comments Off

How you can figure how you configure (a VMM testbench): Part 2

Posted by JL Gray on 13th December 2010

Jonathan Bromley, Verilab Inc, Austin, Tx

Part 2: An interesting way to use VMM 1.2’s configuration features

Part 1 examined the use of descriptor objects to pass configuration information around the testbench.  In VMM 1.2, though, the available toolkit has grown dramatically, inviting us to reconsider our approach.

Configuration in VMM 1.2

VMM 1.2 provides powerful additional configuration features.  Other articles in this series, Verification in the trenches: Configuring your environment using VMM1.2 and Sharing RTL Configuration with the Testbench, offer a great overview of the new mechanisms and how to use them.  The vmm_opts facilities make it easy to control configuration values deep in the test environment from top-level code, or even from scripts thanks to the built-in command-line and file reading features.  Within any vmm_unit you can use the vmm_unit_config macros to grab configuration values that were set up from the top level, picking up default values if the values were not set.  Within any vmm_object you can use vmm_opts::get_object_***() function calls to discover whether a given option has been explicitly set, and set a variable appropriately.

Given this new ability to set and grab arbitrary configuration values by name, is there still a role for the trusty configuration object?  I believe there is, for at least two reasons.

 

Legacy components

First, you will surely have some traditional transactors and subenvs in your VMM1.2 environments.  They will use config objects.  The two mechanisms can happily coexist, using VMM1.2 options to populate fields of the config objects before passing them to the transactors that need them:
class MagicToEth_gp extends vmm_group;
  `vmm_typename(MagicToEth_gp)
  MagicPacketXactor magic_xactor;
  EthernetXactor eth_xactor;
  bit has_voodoo;
  …
  function void build_ph();
    …
    MagicPacketXactor_cfg magic_cfg = new;
    …
    magic_cfg.SupportsVoodooMode = has_voodoo;
    magic_xactor = new(…., magic_cfg);
  endfunction
  …
  `vmm_unit_config_boolean(has_voodoo,
         "turn this on to enable MagicVoodoo",
         0, MagicToEth_gp)
endclass

 

Configure just the variables you care about

Configuration using the new mechanisms is wonderfully flexible, but it needs to be thought through.  Used carelessly it can turn your testbench configuration into a grab-bag of hundreds of individual configuration values with no obvious relationships among them.  Using objects to group together a bunch of related configuration values allows you to use constrained randomization to enforce relationships among the values that were not specified from above, so that you can specify just those values you want to nail down and allow the remaining values to be randomized.  Making that work requires just a little bit of extra effort, and just a tiny bit of guru-level SystemVerilog randomization.
 

 

Selective configuration and randomization

For example, let’s go back to the configuration of our MagicPacket transactor.  We already have a configuration object for it – but now we must take care to derive from vmm_object so that the hierarchical configuration mechanism knows about it:
class MagicPacketXactor_cfg extends vmm_object;
  rand shortint unsigned MaxLength;
  rand bit SupportsVoodooMode;
    constraint c_valid_packet {
      if (SupportsVoodooMode) { MaxLength inside {[64:16384]}; }
      else                    { MaxLength == 1024; }
    }

Now we can play some interesting games with VMM configuration and SystemVerilog constrained randomization to make the configuration as flexible as possible:

function new(string name, vmm_object parent = null);
  bit is_set;
  super.new(parent, name);  // as usual for any vmm_object
  SupportsVoodooMode = vmm_opts::get_object_int(
    is_set,   // reports whether the option was set from above
    this,     // we're interested in options applying to this object
    "SupportsVoodooMode",  // identifying name of the option
    0,        // default value
    "some helpful documentation text" );
  if (is_set)  // This variable was explicitly configured.
               // Disable randomization of this variable.
    SupportsVoodooMode.rand_mode(0);
  MaxLength = vmm_opts::get_object_int(
    is_set,   // reports whether the option was set from above
    this,     // we're interested in options applying to this object
    "MaxLength",  // identifying name of the option
    1024,     // default value
    "some helpful documentation text" );
  if (is_set)  // This variable was explicitly configured.
               // Disable randomization of this variable.
    MaxLength.rand_mode(0);
endfunction

Why all this complexity?  Because of the huge flexibility it gives us.  The key is in understanding what happens later, when we randomize() this object.  If neither of the options was set, then rand_mode is true (its default) on both variables and randomization proceeds as normal.  If just one of the variables was configured, that variable now has rand_mode(0) and so it will not be altered by randomization – you have fixed it from the options.  The other variable is randomized, but respecting all constraints imposed by the variable that is already configured.  For example, suppose we configure MaxLength=5000 from the command line.  Thanks to our ingenious constructor code, that variable is fixed and won’t be randomized.  Other variables in the object are randomized, though, so the SystemVerilog constraint solver tries to find a value for SupportsVoodooMode that will satisfy the constraints.  Of course, that value must be true because of the if…else constraint – a false value is possible only if MaxLength is equal to 1024.

Finally, suppose we configure both variables from option values.  Now, any attempt to randomize() the configuration object will have no effect on the variables, because randomization has been disabled for both.  But randomization is still useful to us, because it will give a constraint violation error if the two options have been given contradictory values.
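As a usage sketch (names such as magic_cfg are illustrative, and log is assumed to be the enclosing component's vmm_log), the enclosing build code can simply randomize the configuration object and let the solver flag contradictory option values:

  MagicPacketXactor_cfg magic_cfg = new("magic_cfg", this);
  if (!magic_cfg.randomize())
    `vmm_fatal(log, "MagicPacketXactor_cfg: configured option values are inconsistent");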

Getting the balance right is tricky – if you try to enforce too tight a structure on your collections of configuration data it will prove inflexible as verification requirements change, but leaving the configuration as a bunch of scattered, unrelated variables will soon become a maintenance nightmare.  Creative use of get_object_* options methods and randomization can offer a useful new set of compromises between flexibility and ease of deployment, by allowing you to specify just a few of an object’s configuration values and allowing randomization to choose a consistent set of values for the remaining variables.

Posted in Configuration | Comments Off

How you can figure how you configure (a VMM testbench): Part 1

Posted by JL Gray on 7th December 2010

Jonathan Bromley, Verilab Inc, Austin, Tx

Part 1: Using configuration objects


Each VMM transactor class should have a companion config class, where you create data members for all critical “vital statistics” of the transactor.  Just before constructing the transactor object, an enclosing subenv would create one of these config objects and then pass it to the transactor as a constructor argument.  Every transactor should also provide a reconfigure method allowing other parts of the environment to set up a new configuration for it at any time.

 

Objects to control objects
Here’s an imaginary example of a transactor configuration class:

class MagicPacketXactor_cfg;
  rand shortint unsigned MaxLength;
  rand bit SupportsVoodooMode;
    constraint c_valid_packet {
      if (SupportsVoodooMode) { MaxLength inside {[64:16384]}; }
      else { MaxLength == 1024; }
    }
endclass

Note how the class contains rand data members for all the transactor’s configurable attributes, and also has constraints so that it will yield a meaningful and interesting set of configuration values when randomized.  Of course, further constraints can easily be added to enforce specific configurations.

The corresponding transactor code might be something like this:

class MagicPacketXactor extends vmm_xactor;
  …
  local MagicPacketXactor_cfg cfg;
  function new( … // usual VMM constructor arguments, and then…
                MagicPacketXactor_cfg cfg = null);
    …
    if (cfg == null) begin
      cfg = new;
      cfg.randomize();
    end
    this.cfg = cfg;
    …
  endfunction
  function void reconfigure(MagicPacketXactor_cfg cfg);
    this.cfg = cfg;
  endfunction

In the transactor’s main() task, or elsewhere, we can now do things like

if (this.cfg.SupportsVoodooMode)
  castSpell(…);

It’s really important to note that the transactor takes no responsibility for setting up its own configuration.  It merely assumes that a configuration object has been supplied from outside, and then makes use of the values within that configuration object.  If the enclosing environment fails to provide a configuration object, as a last resort the transactor constructs its own randomized configuration.

This is a good example of the notion of encapsulating configuration in a single object.  By gathering all important attributes of a component into a class:
•    we can pass the entire configuration around as a single object or reference;
•    it becomes easy to write randomization constraints that establish relationships among configuration attributes;
but most important of all:
•    it is straightforward to assemble several configuration objects into a single, larger configuration object suitable for use at the next level up the hierarchy – typically at the vmm_subenv level.

Environment configuration
A verification environment for a MagicPacket-to-Ethernet bridge will of course contain both MagicPacket and Ethernet transactors.  The environment or subenv gets its own configuration class, containing references to configuration objects for both its transactors:

class MagicToEthernet_subenv_cfg;
  rand MagicPacketXactor_cfg magic_cfg;
  rand Ethernet_cfg eth_cfg;
endclass

Configuration proceeds in much the same way as for the transactors.  The environment’s constructor takes one of these objects as an argument, and then passes on its inner configuration objects to the corresponding transactors as it constructs them.  Reconfiguring the environment is equally straightforward:

class MagicToEthernet_subenv extends vmm_subenv;
  MagicPacketXactor magic_xactor;
  EthernetXactor eth_xactor;
  …
  function void reconfigure(MagicToEthernet_subenv_cfg cfg);
    magic_xactor.reconfigure(cfg.magic_cfg);
    eth_xactor.reconfigure(cfg.eth_cfg);
  endfunction

Finally, the code that launches your VMM test can create and populate all the necessary configuration objects, assemble them into environment-wide configuration objects, and pass them into constructors as needed.  It is also very easy to configure multiple transactors to match a common configuration simply by passing the same configuration object to all of them.

Testbench configuration vs. test configuration
The configuration mechanism we’ve explored is neat, powerful and straightforward.  However, it doesn’t offer very much help in separating the two rather different concerns of configuring the test environment and configuring the test.

Test environment configuration gets you the right structure – the correct number of transactors, the correct choice of active vs. passive transactors, the choice of reference model or scoreboard – to match the chosen device-under-test.  Using that test environment, though, you probably wish to run a large and ever-growing battery of testcases as you develop new stimulus to hit elusive coverage objectives.  From a programming point of view, configuring the test is not very different from configuring the testbench – a matter of passing appropriate configuration objects to the constructors of various components.  From an organizational point of view, though, there are big differences.  Testbench configuration generally needs to be at least partly driven by parameters of the DUT and test harness, whereas testcase configuration is likely to be much more flexible and dynamic, and will vary from run to run.

When using configuration objects in this style, I’ve learnt the value of keeping a strict separation between these two kinds of configuration to simplify those organizational concerns.

In the second part of this article we’ll look at the impact of the extensive new configuration facilities in VMM 1.2, and explore a novel way to manage groups of inter-related configuration options.

 

Posted in Configuration | 1 Comment »

Using vmm_opts to create a configurable environment

Posted by S. Prashanth on 19th July 2010

S. Prashanth, Verification & Design Engineer, LSI Logic

To accommodate changing specifications and to support different clusters/subsystems (which could have multiple processors/memories connected through a bridge), I am building a reusable environment which can support any number of masters and slaves of any standard bus protocol.  The environment requires a high level of configurability since it should work for different DUTs. So, I decided to use vmm_opts not just to set switches globally/hierarchically, but also to specify ranges (like setting address ranges in scenarios) from the test cases/command line.

I created a list of controls/switches that need to be provided as global options (like simulation timeout, scoreboard/coverage enable, etc.) and hierarchical options to control instance-specific behaviors (like the number of transactions to be generated, burst enables, transaction speed, and address ranges to be generated by a specific instance or set of instances).  Let’s see, through examples, how these options can be used and what is required to use them.

Global Options

Step 1:
Declare an integer, say simulation_timeout, in the environment class or wherever it is required and use the vmm_opts::get_int(..) method as shown below. Specify a default value as well, just in case the variable is not set anywhere.

simulation_timeout = vmm_opts::get_int("TIMEOUT", 10000);

Step 2:
Then, either in the test case and/or on the command line, I can override the default value using vmm_opts::set_int or the +vmm_opts+ runtime option.

Override from the test case.

vmm_opts::set_int("%*:TIMEOUT", 50000);

Override from the command line.

./simv +vmm_opts+TIMEOUT=80000

Options can also be overridden from a command file, if it is difficult to specify all options in the command line. Also, options can be a string or Boolean as well.

Hierarchical Options

In order to use hierarchical options, I am building the environment with a parent/child hierarchy, which is a very useful feature of vmm_object. This is required since the hierarchy specified externally (from the test case/command line) will be used to map to the hierarchy in the environment.

Step 1:
Declare options, say burst_enable and num_of_trans, in a subenv class and use vmm_opts::get_object_bit(..) and vmm_opts::get_object_int(..) to retrieve the appropriate values, passing the current hierarchy and default values as arguments. Also, for setting ranges, declare min_addr and max_addr, and use get_object_range().

class master_subenv extends vmm_subenv;
  bit burst_enable;
  int num_of_trans;
  int min_addr;
  int max_addr;

  function void configure();
    bit is_set; // to determine if the default value is overridden
    burst_enable = vmm_opts::get_object_bit(is_set, this, "BURST_ENABLE");
    num_of_trans = vmm_opts::get_object_int(is_set, this, "NUM_TRANS", 100);
    vmm_opts::get_object_range(is_set, this, "ADDR_RANGE", min_addr, max_addr, 0, 32'hFFFF_FFFF);
    ....
  endfunction

endclass

Step 2:
Build the environment with parent/child hierarchy either by passing the parent handle through the constructor to every child, or using vmm_object::set_parent_object() method. This is easy as almost all base classes are extended from vmm_object by default.

class dut_env extends vmm_env;
  virtual function void build();
    .....
    mst0 = new("MST0", ....);
    mst0.set_parent_object(this);
    mst1 = new("MST1", ....);
    mst1.set_parent_object(this);
    ....
  endfunction
endclass

Step 3:
From the test case, I can override the default value using vmm_opts::set_bit or vmm_opts::set_int(..), specifying the hierarchy. I can use pattern matching as well to avoid specifying options for every instance individually.

vmm_opts::set_int("%*:MST0:NUM_TRANS", 50);
vmm_opts::set_int("%*:MST1:NUM_TRANS", 100);
vmm_opts::set_bit("%*:burst_enable");
vmm_opts::set_range("%*:MST1:ADDR_RANGE", 32'h1000_0000, 32'h1000_FFFF);

I can also override the default value from the command line as well.
./simv +vmm_opts+NUM_TRANS=50@%*:MST0+NUM_TRANS=100@%*:MST1+burst_enable@%*


In summary, vmm_opts provides a powerful and user-friendly way of configuring the environment from the command line or the test case. Users can provide an explanation of each option while calling the get_object_*() and get_*() methods, and these explanations will be displayed when vmm_opts::get_help() is called.

Posted in Coding Style, Configuration, Tutorial | Comments Off

Verification in the trenches: Configuring your environment using VMM1.2

Posted by Ambar Sarkar on 7th May 2010

Dr. Ambar Sarkar, Chief Verification Technologist, Paradigm Works Inc.

Let’s start with a quick and easy quiz.
Imagine you are verifying an NxN switch where N is configurable, and it supports any configuration from 1×1 to NxN. Assume that for each port, you instantiate one set of verification components such as monitors, BFMs and scoreboards. For example, if this was a PCIe switch with 2 ingress and 3 egress ports, you would instantiate 2 ingress and 3 egress instances of PCIe verification components. So, for N = 5, how many environments should you actually write code for and maintain?

The answer had better be 1. Maintaining NxN = 25 distinct environments would be quixotic at best. What you want is to write code that creates the desired number and configuration of component instances based on some runtime parameters. This is an example of what is known as “structural” configuration, where you are configuring the structure of your environment.

In VMM1.2 parlance, this means that you want to make sure you can communicate to your build_ph() phase the required number of ingress and egress verification component instances. The build phase can then construct the configured number of components. This way, you get to reuse your verification environment code in multiple topologies, and reduce the number of unique environments that need to be separately created and maintained.

Hopefully, this establishes why structural configuration is important. So the questions that arise next are:

How do you declare the structural configuration parameters?

How do you specify the structural configuration values?

How do you use the structural configuration values in your code?

Declaring the configurable parameters

Structural configuration declarations should sit in a class that derives from vmm_unit. A set of convenient macros are provided, with the prefix `vmm_unit_config_xxx where xxx stands for the data type of the parameter. For now, xxx can be int, boolean, or string.

For example, for integer parameters, you have:

`vmm_unit_config_int    (
<name of the int data member>,
<describe the purpose of this parameter>,
<default value>,
<name of the enclosing class derived from vmm unit>
);

If you want to declare the number of ports as configurable, you can declare a variable num_ports as shown below:

class switch_env extends vmm_group;
`vmm_typename(switch_env)
int     num_ports; // Will be configured at runtime
vip bfm[$]; // Will need to create num_ports instances

function new(string inst, vmm_unit parent = null);
super.new(get_typename(), inst, parent); // NEVER FORGET THIS!!
endfunction

// Declare the number of ingress ports. Default is 1.
`vmm_unit_config_int(num_ports, "Number of ingress ports", 1, switch_env);

endclass

Specifying the configuration values

There are two main options:

1.  From code using vmm_opts.

The basic rule is that you need to specify it *before* the build phase gets called, where the construction of the components takes place.  A good place to do so is in vmm_test::set_config().
function void my_test::set_config();
  ....
  // Override the default with 4. my_env is an instance of the switch_env class.
  vmm_opts::set_int("my_env:num_ports", 4);
endfunction

2.  From command line or external option file. Here is how the number of ingress ports could be set to 5.
./simv +vmm_opts+num_ports=5@my_env

The command line supersedes the option set within code as shown in 1.
Do note that one can specify these options for specific instances, or hierarchically, using regular expressions.

Using the configuration values

This is the easy part. As long as you declare the parameter using `vmm_unit_config_xxx , you are all set. It will be set to the correct value when the containing object derived from vmm_unit is created.

function void switch_env::build_ph();
  // Just use the value of num_ports
  for (int i = 0; i < num_ports; i++) begin
    bfm[i] = new($psprintf("bfm%0d", i), this);
  end
endfunction

This article is the 6th in the Verification in the trenches series. Hope you found this article useful. If you would like to hear about any other related topic, please comment or drop me a line at ambar.sarkar@paradigm-works.com. Also, if you are starting out fresh, please check out the free VMM1.2 environment generator.

Posted in Configuration, Tutorial | Comments Off

Sharing RTL Configuration with the Testbench

Posted by JL Gray on 3rd May 2010

by Jason Sprott, CTO, Verilab

Often times we find ourselves with some configurable RTL to verify. The amount of configuration can vary from a few bus width parameters, or a configurable IP block with optional features and performance related control parameters, to a whole chip with optional interfaces. This can make our life as verification engineers that bit more complicated. We not only have to verify the multitude of scenarios for a specific implementation, we have to somehow handle building and running a testbench for the various RTL configurations. Especially in the case of standalone IP, it’s often the case that different configurations are included in our verification space.

Configurations may affect the physical interface between the testbench and RTL, for example:
•    Bus widths
•    Number of interrupts
•    Number of instances of external interfaces such as USB or Ethernet

Or, the configurations might control internal behavior, which doesn’t affect the physical interface, but does affect the way the test bench has to interact with the DUT functionally. Examples might include:
•    FIFO depth – which may affect data pattern generation to hit corner cases, or performance-related watermarks
•    QoS algorithms – where different algorithm implementations are selected depending on requirements, which could affect traffic generation, checking, or functional coverage

The VMM has a new RTL configuration feature which can make life a bit easier. RTL configurations can now be encapsulated in a testbench object that can be used to generate an output file. This output file can be shared between the RTL and testbench. The steps in the process go something like this:

STEP 1: Compile the RTL (with some default configuration) and testbench. This is needed to run the configuration file generation only. Tests will not be run in the simulation.

STEP 2: Run simulation to generate configuration file

./simv +vmm_rtl_config=<PREFIX> +vmm_gen_rtl_config …

The format of the output file can be customized by extending a companion format class. This determines the output format written and also the parsing of the file back into the testbench. The (simple) format that comes out of the box looks like this:

num_of_mems : 4
has_buffer  : 1

This obviously isn’t RTL code, so if we want our Verilog to understand it, for example converting it to assign parameter values, we have to perform the next step.

STEP 3: Convert output configuration file to verilog params file (or format of your choice, such as IP-XACT)

A script is provided as a demonstration, in the memsys_cntrlr std_lib VMM example.

./cfg2param <config_file>

STEP 4: Re-compile the RTL using the new parameters generated by the configuration, e.g. using -parameters switch in VCS.

STEP 5: Run the simulation executing tests, passing the configuration into the testbench

./simv +vmm_rtl_config=<PREFIX> …

The configuration parameters are encapsulated in the testbench using the vmm_rtl_config class. The output to the configuration file for each variable is implemented using macros. The example below shows the implementation for class member variables.

class my_cfg extends vmm_rtl_config;
rand int num_of_mems;
rand bit has_buffer;
constraint valid_c {
num_of_mems inside {1,2,4,8};
}

`vmm_rtl_config_begin(my_cfg)
`vmm_rtl_config_int(num_of_mems,num_of_mems)
`vmm_rtl_config_boolean(has_buffer,has_buffer)
`vmm_rtl_config_end(my_cfg)

endclass:my_cfg

On the testbench side, the configuration is set using vmm_opts::set_object() in the initial block of the program block of your testbench, and can be retrieved through vmm_opts::get_object().

The beauty of using this method for RTL configuration is that constrained randomization and functional coverage collection can be used for RTL configurations. In projects where highly configurable IP has to be verified, it is a convenient way to control and observe the progress of verification across not only the modes of operation, but also the modes relating to specific RTL configurations.

Posted in Automation, Configuration, Reuse | 3 Comments »

Changing Functionality: The Factory Service #3

Posted by JL Gray on 21st April 2010

by Jason Sprott

Jason Sprott is CTO at Verilab.

In Factory Service #1 and in Factory Service #2 we discussed what a Factory is and what it can do for us. In this post we’ll take a look at the two general types of objects we might want to replace using the Factory. Typically objects in testbenches fall into two categories:

  • Dynamic: These are objects that get created and garbage collected spuriously during normal operation. An example might be a randomized transaction generated by a scenario, or even the scenario itself.
  • Structural: These objects get created once at the beginning and live throughout the simulation. An example might be vmm_xactor or vmm_group.

The location within the VMM phasing scheme where the factory override is performed differs between the two.  A factory override has to be done before any of the objects affected by the override are instantiated (using <class>::create_instance()). Structural components in the testbench are typically instantiated earlier than dynamic ones.

In the case of overriding dynamic objects, we need to perform the override before the test starts running the simulation. A good place to perform dynamic object overrides is in the vmm_test::configure_test_ph() method. This is executed in the test phase (see the “Constructing and Controlling Environments > Composing Implicitly Phased Environments/Sub-Environments” section of the VMM User Guide), before test execution is started by the vmm_test::start_of_sim_ph() method, so no dynamic objects are likely to have been created yet. The following example shows where we override a transaction in a testcase:

class my_dynamic_factory_test extends vmm_test;

virtual function void configure_test_ph();

my_trans::override_with_new("@%*",
  custom_trans::this_type(),
  log, `__FILE__, `__LINE__);
endfunction:configure_test_ph

endclass

Structural object overrides, on the other hand, potentially give us a bit of a problem, because these objects are instantiated in the pre-test “build” phase. Performing the factory override in vmm_test::configure_test_ph() would be too late, as it happens after the objects have already been instantiated. Instead, structural object factory overrides must be performed in vmm_test::set_config(), which is executed before the Pre-Test timeline, as long as test concatenation is not being done. The following is an example where we override a VIP (encapsulated in a vmm_group) in the testcase:

class my_structural_factory_test extends vmm_test;

virtual function void set_config();

my_vip::override_with_new("@%*",
  my_extended_vip::this_type(),
  log, `__FILE__, `__LINE__);
endfunction:set_config

endclass

The figure below illustrates the phases where overrides should be implemented, with respect to the associated object types:

[image: phases for dynamic vs. structural factory overrides]

Another thing to be aware of when using a Factory is test concatenation. In general, if you want to be able to concatenate a test, it’s not a good idea to change components in the environment using factory overrides. Reprogramming the environment, for either structural or dynamic objects, will affect subsequent concatenated tests. If the changes can be undone, for example in the cleanup phase of the test, it may be OK, but changes to structural objects are difficult to undo. Any change to the environment may affect the validity of a test. For this reason, the VMM will not execute vmm_test::set_config() if test concatenation is being performed.  That might not be enough, though. For reuse, test concatenation is an important consideration that test developers and users of the tests need to be aware of. Factory overrides incompatible with test concatenation will not necessarily cause a noticeable side effect or failure, so misuse could go unnoticed. This is not a problem of the Factory; it’s just a consideration when using factory overrides with concatenation of tests.

Posted in Configuration, Reuse, VMM infrastructure | Comments Off

Changing Functionality: The Factory Service #2

Posted by JL Gray on 19th April 2010

by Jason Sprott

Jason Sprott is CTO at Verilab

In a previous post we took a look at what the VMM Factory is and what it could do for us. In this post, we take a look at the problem the Factory solves. In order to understand when we should use a Factory, we really need to understand a bit more about the problem it solves, so we can spot situations where a Factory might be appropriate. Let’s start by looking at how we might do things in SystemVerilog without using VMM Factories. Typically we might instantiate a class of type Foo like this:

Foo my_foo;

my_foo = new(…);

The problem with that is my_foo will always create an object of type Foo, no matter what.  We can’t change that. We might use this class Foo in lots of places around our testbench code. If we want to replace Foo with a new class that maybe adds more variables, or replaces a method, we could have a problem. We’d have to go around our testbench looking for everywhere Foo is used, to see if there was a way it could be replaced. Anywhere Foo is constructed using new(), it is highly likely to involve modifying the original code. This is very undesirable, especially if the code may be part of a tested IP library. In general, modifying the original source is a Bad Thing.

In VMM we can avoid this problem using a different way to create an instance of the class with the factory. It would look something like this:

Foo my_foo;

my_foo = Foo::create_instance(…);

Although this looks similar to an instantiation with new(), something much more sophisticated  is at play. If we factory enable the class Foo, we never call new()to instantiate it again. We now call the Factory’s static method for generating instances of the object create_instance(). Since the method is static, it can be called without having an instance of Foo, therefore it can be used to create itself. What’s the point?

Now that we’ve encapsulated the task of creating an instance in a method, we can change what that method does. We can tell create_instance() to return something different. This might be a new type (with some modifications), derived from Foo, or the same type populated with different values for the variables. What’s more, we can do this easily anywhere Foo is used in the code. We can pick specific instances to be replaced, or replace multiple instances globally.

The two methods used to reprogram the factory are:
•    override_with_new() – tells the factory, when creating new instances, to return a brand new instance of the type specified.
•    override_with_copy() – tells the factory, when creating new instances, to return a copy of the instance specified.

This replacement can be done anywhere in the code, as long as it happens before the instances you want to replace are created. Let’s say a test needs to replace our type Foo, with a new class, FooWithAttitude, derived from Foo. Here’s what that might look like:

class my_factory_test extends vmm_test;

virtual function void configure_test_ph();
Foo::override_with_new(         // Foo's factory is being overridden
  "@%*",                        // instances matching this pattern will be replaced
  FooWithAttitude::this_type(), // they will be replaced with this type of class
  log, `__FILE__, `__LINE__);   // some generic log and debug stuff
endfunction:configure_test_ph

endclass:my_factory_test

As can be seen, the Factory override uses regular expression pattern matching to specify which instances will be targeted. The syntax is expressive enough to identify single instances, multiple instances (e.g. by hierarchy), or all instances (as in this example). More information on the pattern matching syntax can be found in the “Common Infrastructure and Services > Simple Match Patterns” section of the VMM User Guide.

This next example shows how the factory can be programmed to return a copy of the class we’ve modified with some values.

class test_read_back2back extends vmm_test;

  virtual function void configure_test_ph();
    FooBusTrans tr = new();        // create a template for the override copy
    tr.address = 'habcd_1234;      // special value we want to set up for the override
    tr.address.rand_mode(0);       // you might want to protect the value during randomization
    FooBusTrans::override_with_copy(
      "@top:foobus0:*",           // instances matching this pattern will be replaced
      tr,                         // they will be replaced by a copy of this instance
      log, `__FILE__, `__LINE__); // some generic log and debug stuff
  endfunction

endclass

The above example shows a Factory override in the configure test phase of the simulation timeline. Exactly when a Factory replacement is done is quite important. As previously mentioned, the replacement has to be done before any instances of the class have been created, and that point depends on the type of class being replaced. For example, dynamic objects (such as transactions), created many times during normal operation of the testbench, are likely to be created after the test has started. However, structural objects (such as instances of a VIP) are likely to be created once, when the testbench is built. The details of where to put Factory overrides are covered in another post.

Posted in Configuration, Reuse, VMM infrastructure | Comments Off

Changing Functionality: The Factory Service #1

Posted by JL Gray on 16th April 2010

by Jason Sprott

Jason Sprott is CTO at Verilab.

Building a testbench using SystemVerilog, an object-oriented testbench language, does not automatically make the end solution reusable, or easily extensible. In SystemVerilog we can certainly implement the kind of object-oriented principles and design patterns that enable reuse, but this requires significant programming skills and experience in understanding the requirements of building substantial reusable software solutions. Also, when users inevitably need to change the original functionality, the mechanisms in place to allow those changes would have to be well documented and understood.

Two common changes we might need to make are:
•    Swap one type of class with another (which may replace or add new constraints, variables and methods).
•    Add new functionality to an existing method without replacing it

Fortunately the VMM provides some standard solutions for handling these types of changes. This post takes a look at how the Factory Service helps with the first of the two requirements, swapping classes.

There are many cases where we might want to swap one class for another in a testbench. For example, a typical requirement in constrained random testbenches is to replace one randomizable class with a derived version, adding or modifying constraints. We might also have various derived versions of VIP, supporting slightly different versions of a protocol, or injecting different types of errors. When we first develop some VIP, or a testbench, it's almost impossible to predict what people will want and need from our solution in the future. We can, however, provide users with a standard way to replace our implementation with something slightly different. This could be a derived version (an extension with additions or modifications) of what we originally implemented, or a copy with modified values.

Although this sounds quite trivial, testbenches that do not have this capability can be very hard to change without modifying the original source. Changing an original implementation to add new functionality is undesirable and sometimes impossible. Access to original source code is quite often restricted, or at least controlled. The original code may be well tested and proven, and changing the original source could affect other users of the code. Testbench components implementing the Factory are easily replaced at runtime without modifying the original source. Such modification can even be done on a per test basis if required.

However, we don't get a Factory for free; there's a bit of up-front thought required. We have to care enough about this capability to implement a Factory in the first place. That seems obvious, but many times I've come across a class in dire need of a factory that wasn't there, simply because the original developer didn't think to put one in. I try not to judge too harshly, because we've all developed code that didn't quite meet downstream requirements at some point. There's also a bit of effort (a small amount of additional coding) required. So why should we bother?

The VMM Factory Service (Factory) is a standard mechanism for changing functionality by replacing classes. VMM has always recommended the use of factories, but the Factory Service was introduced in VMM 1.2 to ease implementation. The Factory provides the following API and utilities:
•    Short-hand macros to make implementing a Factory simple. The macros implement the Factory API methods for a given class.
•    An API method to create instances of the class – replacing the use of the new() constructor method.
•    An API method to change the instance generated by the Factory to a new derived type.
•    An API method to change the instance generated by the Factory to a copy of a class of the same type.
•    Specific and regular-expression-based selection of component factories to override.

As an example, imagine a Bus VIP. The VIP has quite a lot going on inside. It has a master, a slave, some monitors, coverage and checking. All use a particular transaction type. So if we wanted to replace that transaction with a derived type, to maybe inject some errors, it would affect multiple components in the VIP. If any of the components using the transaction create an instance using new(), there’s a pretty good chance we cannot replace the transaction type without modifying that source. This may not be an option, or at the very least not desirable.

If we use the Factory, we can not only do the replacement, we can choose which instances of the VIP in a testbench we want to perform the replacement on. Maybe not all nodes on the bus have to inject errors. In our test, or new testbench environment, we might decide to say something like:
•    When creating new classes of type Foo, instead instantiate my new class FooWithAttitude. Using the Factory in VMM that might look like:
Foo::override_with_new(         // Foo's factory is being overridden
  "@%*",                        // match all instances
  FooWithAttitude::this_type(), // they will be replaced with this type of class
  log, `__FILE__, `__LINE__);   // some generic log and debug stuff

Or, maybe we want to make sure some variables in a configuration object are replaced with something specific. We can make sure a copy of a class is instantiated, with some specific values set. We might decide to say something like:
•    When creating new classes of type Foo, instead instantiate a copy of that class (in this case we have created our own instance, tr), with some values set and randomization turned off:

FooBusTrans::override_with_copy(
  "@top:foobus0:*",           // instances matching this pattern will be replaced
  tr,                         // they will be replaced by a copy of this instance
  log, `__FILE__, `__LINE__); // some generic log and debug stuff

To understand when we should use a Factory, we really need to understand a bit more about the problem it solves. We’ll discuss this in another post.

Posted in Configuration, Reuse, VMM infrastructure | Comments Off

Transactors and Virtual Interface

Posted by Vidyashankar Ramaswamy on 8th January 2010

In my previous blog post I showed a high-level view of a transactor and its properties. In this article I look in more detail at developing a reusable transactor with a physical interface. There are many ways to connect a transactor to the physical interface. Everybody is aware of how we used to do this in the object constructor; that is similar to hardwiring the connection in the environment. Another way is to make the interface configurable by the environment, thus removing any dependency between the test, the env and the DUT interface. This can be accomplished in two steps. The first step is to create an object wrapper for the virtual interface and make it one of the properties of the transactor. The second step is to set this object using VMM configuration options, either from the enclosing environment or from the top level.


Step 1. Developing the port object

VMM set/get options cannot be used directly to set a virtual interface. To make this work, we have to implement an object wrapper for the virtual interface. Sample code is shown below. Please note that the class name (master_port) and the interface name (vip_if) should be changed appropriately, e.g. axi_master_port, axi_if, etc.

File: master_port.sv

class master_port extends vmm_object;

  vmm_log log = new("master_port", "class"); // message service handle used below

  virtual vip_if.master mstr_if;

  function new (string name, virtual vip_if.master mstr_if);
    super.new(null, name);
    this.mstr_if = mstr_if;
    if (mstr_if != null)
      `vmm_note(log, "\n**** Master I/F created ****\n");
    else
      `vmm_error(log, "\n**** Master Interface is NULL ****\n");
  endfunction

endclass: master_port

Step 2. Configuring the Virtual Interface

In the transactor's connect phase, get a handle to the virtual interface using the vmm_opts get method. This establishes the connection between the transactor and the DUT pin interface. Please do not forget to check the object handles and print debug messages using the VMM messaging service. This will greatly reduce debug time if things are not connected properly in the environment.

File: master.sv

//////////////////// Master Model //////////////
class master extends vmm_xactor;
  `vmm_typename(master)

  // Variables declaration
  virtual vip_if.master mstr_if;

  ////////////// Connect_vitf method ////////////////
  function void connect_vitf(virtual vip_if.master mstr_if);
    if (mstr_if != null)
      this.mstr_if = mstr_if;
    else
      `vmm_fatal(log, "Virtual port [Master] is not available");
  endfunction: connect_vitf

  ////////////// Connect Phase ////////////////
  function void connect_ph();
    master_port mstr_port_obj;
    bit is_set;
    // Interface connection
    if ($cast(mstr_port_obj, vmm_opts::get_object_obj(is_set, this, "vip_mstr_port"))) begin
      if (mstr_port_obj != null)
        this.connect_vitf(mstr_port_obj.mstr_if);
      else
        `vmm_fatal(log, "Virtual port [Master] wrapper not initialized");
    end
  endfunction: connect_ph

endclass: master

Finally, the interface is set using the vmm_opts set method in the environment. In VMM 1.2, the verification environment is created by extending the vmm_group base class. VMM 1.2 supports both explicit and implicit phasing mechanisms. In the implicit phasing mechanism, the connect phase is used to configure the virtual interface; to get similar support in the explicit phasing mechanism, call the connect_vitf() method directly. Sample code is shown below. Please note that the interface (master_if_p0) is instantiated in the top-level testbench (tb_top); a sketch of tb_top follows the environment code below.

File: vip_env.sv

//////////////////// Environment //////////////

`include "vmm.sv"

class vip_env extends vmm_group;
  `vmm_typename(vip_env)

  // Variables declaration
  master_port mstr_p0;

  ////////////// Build Phase ////////////////
  function void build_ph();
    mstr_p0 = new("master_port", tb_top.master_if_p0);
  endfunction: build_ph

  ////////////// Connect Phase ////////////////
  function void connect_ph();
    // Set the master port interface
    vmm_opts::set_object("VIP_MSTR:vip_mstr_port", mstr_p0, this);
  endfunction

endclass: vip_env
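
For reference, a minimal sketch of the top-level testbench mentioned above; the interface port list, the clock and the way the environment is started are assumptions, and only the master_if_p0 instance picked up by the environment comes from the code above.

File: tb_top.sv

module tb_top;
  logic clk = 0;
  always #5 clk = ~clk;       // placeholder clock

  // pin-level interface instance referenced by the environment as tb_top.master_if_p0
  vip_if master_if_p0(clk);   // assumes vip_if takes a clock port

  // DUT instantiation and the program/initial block that builds and runs vip_env
  // are omitted here.
endmodule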

Explicit phasing environment:

Extend vmm_env to create the explicit phasing environment. The connect_vitf() method is called in the build phase of the environment. Sample code is shown below. Please note that one can use a transactor iterator instead of the object hierarchy to call the connect_vitf() method (see the sketch after the code below).

File: vip_env.sv

//////////////////// Environment //////////////
`include "vmm.sv"

class vip_env extends vmm_env;
  `vmm_typename(vip_env)

  // VIP's used ...
  master mstr_drvr;

  ////////////// Build Phase ////////////////
  virtual function void build();
    super.build();

    // Set the master port interface
    // (mstr_drvr is assumed to have been constructed earlier in build())
    this.mstr_drvr.connect_vitf(tb_top.master_if_p0);
  endfunction: build

endclass: vip_env
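
As noted above, the object hierarchy does not have to be hard-coded: the transactor can also be located by name with vmm_xactor_iter (described in a later post in this archive). A rough sketch, assuming the master was constructed with an instance name matching "Master":

// inside vip_env::build(), after the transactors have been constructed
vmm_xactor_iter iter = new("/./", "/Master/");
while (iter.xactor() != null) begin
  master m;
  if ($cast(m, iter.xactor()))
    m.connect_vitf(tb_top.master_if_p0);
  iter.next(); // advance to the next matching transactor
end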

Please feel free to comment and share your opinion on this. In my next article I shall discuss more about the transactor's interfaces and how to develop them using VMM 1.2 features.

Posted in Configuration, Structural Components | 4 Comments »

class factory

Posted by Wei-Hua Han on 26th August 2009

Weihua Han, CAE, Synopsys

As a well-known object-oriented technique, the class factory has actually been applied in VMM since its inception. For instance, in the VMM atomic and scenario generators, by assigning different blueprints to the randomized_obj and scenario_set[] properties, these generators can generate transactions with user-specified patterns. Using the class factory pattern, users create an instance with a pre-defined method (such as allocate() or copy()) instead of the constructor. This pre-defined method creates the instance registered with the factory, which is not necessarily the declared type of the variable being assigned.

VMM 1.2 now simplifies the application of the class factory pattern within the whole verification environment so that users can easily replace any kind of object, transaction, scenario or transactor with a similar object. The steps below show how to apply the class factory pattern within the verification environment.

1. Define the "new", "allocate" and "copy" methods for a class and create the factory for the class.

class vehicle_c extends vmm_object;

  // defines the new function. each argument should have default values
  function new(string name = "", vmm_object parent = null);
    super.new(parent, name);
  endfunction

  // defines allocate and copy methods
  virtual function vehicle_c allocate();
    vehicle_c it;
    it = new(this.get_object_name(), get_parent_object());
    allocate = it;
  endfunction

  virtual function vehicle_c copy();
    vehicle_c it;
    it = new this;
    copy = it;
  endfunction

  // these two macros will define the necessary methods for the class factory
  // and create the factory for the class
  `vmm_typename(vehicle_c)
  `vmm_class_factory(vehicle_c)

endclass

The `vmm_typename and `vmm_class_factory macros implement the necessary methods to support the class factory pattern, such as get_typename(), create_instance(), override_with_new(), override_with_copy(), etc.

Users can also use `vmm_data_member_begin and `vmm_data_member_end to implement the “new”, “copy”, “allocate” methods conveniently.

2. Create an instance using the pre-defined "create_instance()" method

To use the class factory, the class instance should be created with the pre-defined create_instance() method instead of the constructor. For example:

class driver_c extends vmm_object;

  vehicle_c myvehicle;

  function new(string name = "", vmm_object parent = null);
    super.new(parent, name);
  endfunction

  task drive();
    // create an instance from the create_instance method
    myvehicle = vehicle_c::create_instance(this, "myvehicle");
    $display("%s is driving %s(%s)", this.get_object_name(),
                                     myvehicle.get_object_name(),
                                     myvehicle.get_typename());
  endtask

endclass

program p1;
  driver_c Tom = new("Tom", null);

  initial begin
    Tom.drive();
  end
endprogram

For this example, the output is:

Tom is driving myvehicle(class $unit::vehicle_c)

3. Define a new class

Let’s now define the following new class which is derived from the original class vehicle_c:

class sedan_c extends vehicle_c;
  `vmm_typename(sedan_c)

  function new(string name = "", vmm_object parent = null);
    super.new(name, parent);
  endfunction

  virtual function vehicle_c allocate();
    sedan_c it;
    it = new(this.get_object_name(), get_parent_object());
    allocate = it;
  endfunction

  virtual function vehicle_c copy();
    sedan_c it;
    it = new this;
    copy = it;
  endfunction

  `vmm_class_factory(sedan_c)

endclass

And we would like to create the myvehicle instance from this new class without modifying the driver_c class.

4. Override the original instance or type with the new class

VMM1.2 provides two methods for users to override the original instances or type.

  • override_with_new(string name, new_class factory, vmm_log log, string fname = "", int lineno = 0)

With this method, when create_instance() is called, a new instance of new_class will be created through factory.allocate() and returned.

  • override_with_copy(string name, new_class factory, vmm_log log, string fname = "", int lineno = 0)

With this method, when create_instance() is called, a new instance of new_class will be created through factory.copy() and returned.

For both methods, the first argument is the name of the instance, as specified in the create_instance() call, that users want to override with the type of new_class. Users can use the powerful name matching mechanism defined in VMM to make the override apply to one dedicated instance or to all the instances of a class in the whole verification environment.

The code below will override all vehicle_c instances with sedan_c type in the environment:

program p1;
  driver_c Tom = new("Tom", null);
  vmm_log log = new("p1", "program"); // constructed here so the override call gets a valid log handle

  initial begin
    // override all vehicle_c instances with type of sedan_c
    vehicle_c::override_with_new("@%*", sedan_c::this_type(), log);
    Tom.drive();
  end
endprogram

And the output of the above code is:

Tom is driving myvehicle(class $unit::sedan_c)

If users only want to override one dedicated instance with a copy of another instance, they can call override_with_copy using the following code:

vehicle_c::override_with_copy("@%Tom:myvehicle", another_sedan_c_instance, log);
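
For context, the template instance passed in as another_sedan_c_instance would simply be constructed and set up beforehand, along these lines (the instance name "limo" and any field settings are hypothetical):

sedan_c another_sedan_c_instance = new("limo", null);
// ... set any fields here that every instance created through the factory should inherit ...
vehicle_c::override_with_copy("@%Tom:myvehicle", another_sedan_c_instance, log);

After this, the object returned by create_instance() for that instance is a copy of the template rather than a fresh vehicle_c.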

As the above example shows, with the class-factory short-hand macros provided with VMM 1.2, users can easily use the class factory pattern to replace transactors, transactions and other verification components without modifying the testbench code. I find this very useful for increasing the reusability of verification components.

Posted in Configuration, Modeling, SystemVerilog, Tutorial, VMM, VMM infrastructure | 1 Comment »

How can VMM help control transactors easily?

Posted by Fabian Delguste on 29th May 2009

Fabian Delguste / Synopsys Verification Group

Controlling VMM transactors can sometimes be a bit hectic. A typical situation I see is when I have registered a list of transactors for driving some DUT interfaces but only want to start a few of them. Another common situation is when I want to turn off scenario generators and replay transactions directly from a file. Yet another task I often face is registering transactor callbacks without knowing exactly where they are located in the environment.

As you can see, there are many situations where fine-grain functional control of transactors is necessary.

Since VMM 1.1 came out, I have been using a new base class called vmm_xactor_iter that allows accessing any transactor directly by name. All I need to do is construct a vmm_xactor_iter with a regular expression and use the iterator to loop through all matching transactors.

To understand better how this base class works, I'll show you a real-life example. The scope of this example is to show how to start generators only when vmm_channel playback has not been turned on. As you know, vmm_channel can be used to replay transactions directly from files containing transactions that were recorded in a previous session. This can speed up simulation by turning off constraint solving.

1. string match_xactors = (cfg.mode == tb_cfg::PLAYBACK) ? "/Drivers/" : "/./";
2.
3. `foreach_vmm_xactor(vmm_xactor, "/./", match_xactors)
4. begin
5.   `vmm_note(log, $psprintf("Starting %s", xact.get_instance()));
6.   xact.start_xactor();
7. end

  • In line 1, the match_xactors string takes the value "/Drivers/" when playback mode is selected; otherwise it takes "/./". In the first case only the transactors named "Drivers" match; otherwise all transactors, including the generators, match.
  • In line 3, the `foreach_vmm_xactor macro is used to create a vmm_xactor_iter from that regular expression. The macro traverses all matching transactors, making each one available through the xact object, so they can be started one by one.

In case you'd like to have more control over vmm_xactor_iter, it's possible to use its first() / next() / xactor() methods to traverse the matching transactors. It is also possible to check that the regular expression returns at least one transactor. Here is the same example written using these methods.

1.  string match_xactors = (cfg.mode == tb_cfg::PLAYBACK) ? "/Drivers/" : "/./";
2.  vmm_xactor xact;
3.  vmm_xactor_iter iter = new("/./", match_xactors);
4.  if (iter.xactor() == null)
5.    `vmm_fatal(log, $psprintf("No matching transactors for '%s'", match_xactors));
6.  while (iter.xactor() != null) begin
7.    xact = iter.xactor();
8.    xact.start_xactor();
9.    iter.next(); // advance to the next matching transactor
10. end

Should you need to reclaim the memory allocation required to store all transactors, it’s possible to enable garbage collection by invoking vmm_xactor::kill().

The good news is that vmm_xactor_iter allows me to:

  • Configure the transactor without knowing its hierarchy
  • Provide dynamic access to transactors
  • Reduce code for multiple configurations and callback extensions
  • Use powerful regular expressions for name matching
  • Reuse transactors: no need to modify code when changing the environment content

I hope you find vmm_xactor_iter, and all of the other VMM features, as useful as I do.

Posted in Configuration, Structural Components, SystemVerilog, Tutorial | Comments Off

VMM VIPs on multiple buses

Posted by Adiel Khan on 27th May 2009


Adiel Khan, Synopsys CAE

Increasingly, more design-oriented engineers are writing VMM code. Some are trying to map typically good design architecture practices to verification development.

A dangerous mapping is parameterization, from modules to classes.

In my old Verilog testbenches I would develop reusable modules and use #parameters extensively to control the settings of the modules I was instantiating. (It was a sad day when I heard IEEE was deprecating my friend the defparam).

module vip #(parameter int data_width = 16,
             parameter int addr_width = 16)
            (addr, data);
  output [addr_width-1:0] addr;
  inout  [data_width-1:0] data;

endmodule

This would allow me to instantiate this VIP for many bus variants.

vip #(64, 32)  vip_inst1(...);
vip #(32, 128) vip_inst2(...);

Mapping the approach from modules to classes, I could end up with:

class pkt_c #(parameter int data_size = 16,
              parameter int addr_size = 16)
  extends vmm_data;

  rand bit [addr_size-1:0] addr;
  rand bit [data_size-1:0] data;

endclass

// specialized class with 64 & 32 sizes
pkt_c #(64, 32) pkt1 = new();

// specialized class with 32 & 128 sizes
pkt_c #(32, 128) pkt2 = new();


Be warned: in the SystemVerilog-testbench-centric view of VIP reusability, parameterization of classes leads down a dead-end path. Moving one layer of abstraction up, I really don't care whether it is a 32-, 64- or 128-bit wide interface. What I want to do is use pkt_c around the verification environment. The simplest case is creating a reusable driver that uses pkt_c to drive an interface of any bus width.

However, if I try to use a generic class instantiation, I will get a specialization with the default parameters (16 & 16). I cannot perform the $cast() needed to put the right pkt_c type onto the bus.


class pkt_driver_c extends vmm_xactor;
  virtual protected task main();
    forever begin : GET_OBJ_TO_SEND
      pkt_c pkt_to_send;               // default class instance, i.e. pkt_c#(16,16)
      pkt_c #(64, 32) pkt_created;
      randomize(pkt_created);          // generator code
      $cast(pkt_to_send, pkt_created); // FAILS !!!!!
      // ...
    end
  endtask
endclass

If you are using VMM channels, they must similarly be specialized and cannot carry generic parameterized classes:

`vmm_channel(pkt_c)

class pkt_driver_c extends vmm_xactor;
  pkt_c_channel in_chan; // Can only carry pkt_c#(16,16)!!!

Or you must select upfront which specialization you want to use with a parameterized channel:

class pkt_driver_c extends vmm_xactor;
  vmm_channel_typed #(pkt_c#(64, 32)) in_chan;


Hence, for the driver to operate on the correct object type, I need to instantiate the exact specialization throughout my entire environment and make the driver itself parameterized. Now you can clearly see that instantiating a specific specialization in the driver (or monitor, scoreboard, etc.) stops the code from being truly reusable for other bus widths.


pkt_c #(64, 32)        pkt_to_send;
pkt_driver_c #(64, 32) driver;


A better approach is the one described by Janick in the "Size Does Matter" blog post, using `define. Let's expand on this and see how it works for reusable VIPs. The first thing that comes to mind is that a `define is a global-namespace macro with a single value, whereas I am using my VIP with two different bus architectures. Therefore, the `define alone is not enough: you also need a local constant to be able to exclude unwanted bits when you have a VIP instantiated for various bus widths.

// default define values
`define MAX_DATA_SIZE 16
`define MAX_ADDR_SIZE 16

class pkt_c extends vmm_data;
  static vmm_log log = new("Pkt", "class");

  // instance constant to control actual bus sizes
  const int addr_size;

  logic [`MAX_ADDR_SIZE:0] addr;
  logic [`MAX_DATA_SIZE:0] data;

  // pass a_size as an argument to the coverage group,
  // ensuring valid coverage ranges.
  covergroup cg (int a_size);
    coverpoint addr
      {bins ad_bin[] = {[0:a_size]};}
  endgroup

  // sizes specialized at construction for pkts
  // on buses narrower than the MAX bus widths
  function new(int a_s = `MAX_ADDR_SIZE);
    addr_size = a_s;
    cg = new(addr_size);
    `vmm_note(log, $psprintf("\nADDR_TYPE: %s\nDATA_TYPE: %s\nMAX_BUS_SIZE: %0d",
                             $typename(addr), $typename(data), addr_size));
  endfunction
endclass

The code above allows for a default implementation; all the user needs to do is set the `MAX_ADDR_SIZE and `MAX_DATA_SIZE symbols and the code will be fully reusable across drivers, monitors, subenvs, SoCs, etc.
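
As a usage sketch (the +define+ values and the 32-bit packet below are just examples), the defaults can be guarded so the symbols are settable from the compile command line, and narrower buses are then handled per instance through the constructor argument:

// Guarding the defaults lets them be set on the command line, e.g.
//   vcs -sverilog +define+MAX_ADDR_SIZE=64 +define+MAX_DATA_SIZE=128 ...
`ifndef MAX_ADDR_SIZE
  `define MAX_ADDR_SIZE 16
`endif
`ifndef MAX_DATA_SIZE
  `define MAX_DATA_SIZE 16
`endif

// A packet for a narrower, 32-bit address bus in the same compile:
pkt_c pkt32 = new(32);   // addr_size = 32; the covergroup bins are sized to match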

For situations where two VIPs with differing bus architectures are used, the compiler symbols need to be set to the biggest bus architecture in the system; smaller bus widths are then set using addr_size. It is not strictly necessary for the addr_size variable to be an instance constant or to be set during construction. Using an instance constant ensures the bus width is not changed at runtime by users, while setting addr_size during construction gives users the flexibility to set up the object as they want. For pseudo-static objects such as drivers, monitors, subenvs, masters, slaves, scoreboards, etc., you should check the construction of the verification modules for your particular design architecture during the vmm_env::start phase.

N.B. Not shown above, but assumed, is that the addr_size variable would be used to ensure correct masking occurs when performing do_pack(), do_unpack(), compare(), etc.
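
The masking itself might look something like the hypothetical helper below; the method name is made up, and in practice the same mask would be applied inside do_pack(), do_unpack() and compare():

// Hypothetical helper inside pkt_c: compare addresses only up to addr_size bits
function bit addr_equal(pkt_c other);
  logic [`MAX_ADDR_SIZE:0] mask = '1;                // all ones at the declared width
  mask = mask >> ((`MAX_ADDR_SIZE + 1) - addr_size); // keep only the addr_size low bits
  return ((this.addr ^ other.addr) & mask) == '0;
endfunction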

Just to wrap up some loose ends…

I'm not totally discounting the merits of parameterized classes, just ensuring people look at all the options. For instance, you could parameterize everything, set the SIZE at the vmm_subenv level and map the SIZE parameters to all other objects. At some point, though, you will want to monitor or scoreboard across different bus widths, and then the parameterized-class casting will bite you, reducing you to manually mapping the members within the comparison objects. There is a time and place for everything, so we probably need another blog showing the merits of parameterized classes and where to use them.

The vmm_data class is not the only place where you might need to know the size of the bus; the same `define and instance-constant technique can be used throughout your VIP classes.

This blog does not discuss the pros and cons of putting coverage groups in your data object class. I merely used the covergroup in the data-object as a vehicle to demonstrate how you can make your classes more reusable. I think a separate blog about where best to put coverage will clarify the usage models.

All the code snippets can be run with VCS 2009.06 and VMM 1.1. Contact me for more complete code examples, or with any bugs or issues you find.

Posted in Coding Style, Configuration, Register Abstraction Model with RAL, Reuse, Structural Components, VMM | 8 Comments »
