Verification Martial Arts: A Verification Methodology Blog

Archive for the 'Structural Components' Category

SNUG-2012 Verification Round Up – Miscellaneous Topics

Posted by paragg on 29th March 2013

In my final installment of the series of blogs summing up the various SNUG verification papers of 2012, I try to cover the user papers on Design IP/Verification IP and on SystemC and SystemVerilog co-simulation. Please find my earlier blogs on the other domains here: System Verilog Language, Methodologies & VCS technologies

DesignWare core USB3.0 Controller (DWC_usb3) can be configured as a USB3.0 Device Controller. When verifying a system that comprises a DWC_usb3 Device Controller, the verification environment is responsible for bringing up the DWC_usb3 Device Controller to its proper operation mode to communicate with the USB3.0 Host. The paper Integrating DesignWare USB3.0 Device Controller In a UVM-based Testbench from Ning Guo of Paradigm Works describes the process of configuring and driving the DWC_usb3 Device Controller in a UVM based verification environment using the Discovery USB 3.0 Verification IP. This paper describes how the verification environment needs to be created so that it’s highly configurable and reusable.

The AMBA 4 ACE specification enables system-level cache coherency across clusters of multicore processors, such as the ARM Cortex-A15 and Cortex-A7 MPCore™ processors. This ensures optimum performance and power efficiency of complex SoC designs. However, the design complexity associated with these capabilities is also higher, and it throws up new verification challenges. In the paper “Creating AMBA4 ACE Test Environment With Discovery VIP”, Whitney Huang and Sean Chou, MediaTek Inc., demonstrate how they tackled these complex verification challenges and increased their verification productivity by using Synopsys Discovery AMBA ACE VIP.

The paper, “Verification Methodology of Dual NIC SOC Using VIPs” by A.V. Anil Kumar, Mrinal Sarmah, and Sunita Jain of Xilinx India Technology Services Pvt. Ltd., talks about how various features of the Synopsys PCIe and Ethernet Verification IPs can be exploited to help in the efficient verification of the DUT across various traffic configurations. The paper explores how the VIP Application Programming Interfaces (APIs) can be leveraged in the test cases to reach high functional coverage numbers in a very short duration. They also show how a dual NIC verification environment can effectively use Ethernet VIP APIs to test various Media Access Control (MAC) features. Finally, they conclude by showing how the implementation can be reused across future revisions of their design.

The ability to analyze the performance of an SoC at an early stage of the design can make a significant difference to the end product, as it enables an earlier and more accurate estimate of the expected performance. Dayananda Yaraganalu Sadashivappa, Igal Mariasin, and Jayaprakash Naradasi of SanDisk India Device Design Centre Pvt. Ltd., in the paper “Generic MLM environment for SoC Performance Enhancement”, outline the solution that was found by using the Synopsys VIP models. The VIPs were used in conjunction with the interconnect, which in this case is a Multi-Layer Matrix (MLM). The environment was built leveraging the VMM base classes. The VMM multiple-stream scenario (vmm_ms_scenario) base class was used to create the traffic across the matrix, and the performance meters were constructed using the base classes. Callbacks were leveraged appropriately to help collate the statistics. Multiple knobs were used to make the environment generic and configurable. The approach helped in finding multiple performance bugs which could not have been easily found using conventional verification.

In the paper, “User Experience Verifying Ethernet IP Core”, Puneet Rattia of Altera Corporation presents his experience verifying the Altera® 40-100Gbps Ethernet IP core using a VMM environment while integrating the Ethernet VIP from Synopsys. He explains how he created a full suite of system- and block-level regression tests, and then goes on to show how he utilizes the coverage mapping capabilities of VCS to merge the results across these various testbenches and produce meaningful reports. Besides showing how to reuse the verification infrastructure at the SoC level, the paper also demonstrates how they went in for horizontal reuse by integrating the reference SystemC-based models developed and prototyped in the early phase of the project.

UVM 1.x includes support for the communication interfaces defined by the SystemC TLM-2.0 standard. This enables integration of SystemC TLM-2.0 IP into a SystemVerilog UVM verification environment. Dr. David Long, John Aynsley, and Doug Smith of Doulos, in the paper “A Beginner’s Guide to Using SystemC TLM-2.0 IP with UVM”, describe how this is best done. They note that the connection between SystemC and SystemVerilog currently requires a tool-specific interface such as the Synopsys Transaction Level Interface (TLI). The paper begins with a brief overview of TLM-2.0 aimed at novice users. It then discusses the steps required to add a SystemC TLM-2.0 model into a SystemVerilog UVM environment and simulate it with VCS. At each step, issues that users will face are explored and suggestions made for practical fixes, showing the relevant pieces of code. Finally, the paper gives a summary of areas where the UVM implementation of TLM-2.0 differs from the SystemC standard and proposes workarounds to ensure correct communication between the SystemVerilog and SystemC domains.

There is an inherent need to enable the horizontal reuse of components created during the architecture and exploration stage. Subhra S. Bandyopadhyay and Pavan N. M., Intel Technology India Pvt. Ltd., in “Integrating SystemC OSCI TLM 2.0 Models to OVM based System Verilog Verification Environments”, talk about how their architecture team creates SystemC models for early performance analysis and accelerated software development. In the OVM-based verification environment, the objective was to reuse this model as a reference model, thereby reducing the overall environment bring-up time. The challenge was not only to integrate the SystemC model in the OVM-based verification environment but also to be able to efficiently send transactions from SV to SystemC and vice versa. The paper explores the successful integration of SystemC TLM2 components in OVM-based verification environments and also highlights how the VCS TLI (Transaction Level Interface) adapters help TLM2.0 sockets in SystemC communicate with those in SV and vice versa.

Truly, I feel overwhelmed by the number of papers and the interesting uses of technology across the variety of domains on which users share their experiences at the various SNUG conferences. As we speak, the SNUG events for 2013 have started, and the stage is all set for a new set of very informative and interesting sessions. I am sure most of you will be attending the SNUG conference in your area. You can find the detailed schedule of those here.

Posted in Announcements, Automation, Callbacks, Coding Style, Communication, Reuse, Structural Components, SystemC/C/C++, SystemVerilog, Transaction Level Modeling (TLM), Tutorial, UVM, VMM | Comments Off

Namespaces, Build Order, and Chickens

Posted by Brian Hunter on 14th May 2012

As described in the video, vkits are our convenient method of lumping together reusable UVM packages with the interface(s) that they operate on. Because code within packages can only peek or poke wires that are contained by a virtual interface, it is often useful to wrap these together somehow, and vkits are our technique at Cavium for doing that.

What goes in a vkit? Anything that is reusable. From simple agents and the interfaces they work on to complete UVM environments that connect these agents together, including scoreboards, sequence libraries, types, and utility functions.

What does not go in a vkit are items that are bound to a specific testbench, including the tests themselves.

The video describes the wildcard import syntax as an “egregiously bad idea.” First and foremost, doing so can lead to namespace pollution, which comes about when one engineer independently adds types or classes to their package and only later finds out that they conflict with those of another package. Secondly, wildcard imports prevent our shorter naming conventions of having an agent_c, drv_c, env_c, etc., within each package.

Not described in the video are CSR packages that are auto-generated by RAL, IP-XACT, or your script of choice. These packages should be independent of your vkits, such that your vkits refer to them with their explicit scopes (i.e., chx_csr_pkg::PLUCKING_CFG_C).
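As a small illustration of that convention (the package and class names below are hypothetical; chx_csr_pkg::PLUCKING_CFG_C is the auto-generated CSR type mentioned above), a vkit package refers to the CSR package with an explicit scope rather than a wildcard import:

package chx_pkg;
  import uvm_pkg::*;
  `include "uvm_macros.svh"

  // No wildcard import of chx_csr_pkg: its types are referenced with explicit scopes,
  // so this package keeps its own short env_c/agent_c/drv_c names without collisions.
  class env_c extends uvm_env;
    `uvm_component_utils(env_c)

    chx_csr_pkg::PLUCKING_CFG_C csr_cfg;  // explicit scope into the auto-generated CSR package

    function new(string name, uvm_component parent);
      super.new(name, parent);
    endfunction
  endclass
endpackage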

Future posts will go into more detail about how we architect UVM testbenches and some of our other conventions that work within this framework. Until then, I’ve got a lot of pies to eat.

PS. I’ll be at DAC this year! Come see me on Tuesday, June 5, during the “Industry Leaders Verify with Synopsys” lunch. Hopefully they’ll be serving some of my favorite foods!

Posted in Organization, Structural Components, SystemVerilog, Tutorial, UVM | 6 Comments »

The right name at the right space: using ‘namespace’ in VMM to set virtual interfaces

Posted by Amit Sharma on 7th September 2011

Abhisek Verma, CAE, Synopsys

A ‘namespace’ is an abstract container or environment created to hold a logical grouping of unique identifiers or names. Thus the same identifier can be independently defined in multiple namespaces, and the meaning associated with an identifier defined in one namespace may or may not be the same as that of the same identifier defined in another namespace. ‘Namespace’ in VMM is used to group or tag different VMM objects, resources and transactions with a meaningful namespace for the different components across the testbench environment. This allows the user to identify them and access them efficiently. For example, a benefit of this approach is that it relieves the user from making cross-module references to access the various resources. This can be seen in the context of accessing the interfaces associated with a driver or a monitor in the environment, and goes a long way in making the code more scalable.

Accessing and assigning interface handles to a particular transactor can be done in various ways in VMM, as discussed in the following blogs: Transactors and Virtual Interface and Extending Hierarchical Options in VMM to work with all data types. In addition to these, one can leverage ‘namespaces’ in VMM to achieve this fairly elegantly. The idea here is to put the Virtual Interface instances in the appropriate namespace in the object hierarchy to be retrieved by the verification environment wherever required through simple APIs as shown in the following steps:

STEP 1:: Define a parameterized class extending from vmm_object to act as a wrapper for the interface handle.

STEP 2:: Instantiate the interface wrapper in the top-level MODULE and put it in the “VIF” namespace.

STEP 3:: In the environment, retrieve the interface wrapper by querying for it in the “VIF” namespace, and use the retrieved handle to set the interface in the transactor.

The example demonstrating the implementation of the above (shown as code images in the original post) covers the interface and DUT templates, the parameterized wrapper class for the interface, the testbench top, and the program block.
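A minimal sketch of the Step 1 wrapper is shown here (the interface type and names are illustrative); Steps 2 and 3 then register this object in the “VIF” namespace from the top-level module and retrieve it in the environment, as outlined above:

class vif_wrapper #(type IF_T = virtual axi_if) extends vmm_object;
  IF_T v_if;

  function new(string name, IF_T v_if, vmm_object parent = null);
    super.new(parent, name);
    this.v_if = v_if;
  endfunction
endclass: vif_wrapper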

Posted in Configuration, Structural Components, VMM infrastructure | Comments Off

Building & Configuring Coverage Model – VMM Style – Part-III

Posted by paragg on 25th June 2011

Parag Goel, Senior Corporate Application Engineer, Synopsys

In the final blog of this coverage modeling with VMM series, we focus on error coverage. Negative scenario testing is an integral part of verification. But again, we face the question: have I covered all the negative scenarios?

So it is important to ensure that the generic coverage model tracks all the error scenarios.

Let’s see how a specific mechanism provided in VMM, in the form of vmm_log_catcher, helps to track error coverage efficiently and effectively. The VMM log catcher is able to identify/catch a specific string in any of the messages issued through the VMM reporting mechanism.

Typically, the verification environment issues messages to STDOUT when the DUT responds to an error scenario. These messages can be ‘caught’ by the log catcher to update the appropriate coverage groups. Let’s see how this is done in detail.

The verification environment responds to each negative scenario by issuing a message with a unique text specific to that error.

In the context of the AXI framework, we can introduce a wide range of error scenarios and test whether the DUT responds correctly or not. A few possible error scenarios in AXI are listed below for your reference.

(Image: a list of possible AXI error scenarios.)

However, not all of the scenarios are always applicable, and hence configurability is required to enable only the coverpoints tied to the relevant negative scenarios. Thus, we should have the same kind of configurability for error coverage as I talked about in the earlier blogs.

Let’s see how we can catch the relevant responses and sample the appropriate covergroups.

As mentioned earlier, in the example below, we make use of the unique message issued as a result of a negative scenario.

This is how we use the VMM Log catcher.

1. The error coverage class is extended from vmm_log_catcher – VMM base class.

2. The vmm_log_catcher::caught() API is utilized as the means to qualify the covergroup sampling.

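A minimal sketch of such a catcher is shown below (the class and covergroup names are illustrative, and the caught()/throw() prototypes are assumed to follow the vmm_log_catcher base class):

class axi_err_coverage extends vmm_log_catcher;
  bit wr_slverr_seen;

  covergroup cg_axi_errors;
    WR_SLVERR: coverpoint wr_slverr_seen { bins seen = {1'b1}; }
  endgroup

  function new();
    cg_axi_errors = new();
  endfunction

  // Invoked for every message matching the criteria this catcher was installed with,
  // e.g. the text "AXI_WRITE_RESPONSE_SLVERR"
  virtual function void caught(vmm_log_msg msg);
    wr_slverr_seen = 1'b1;
    cg_axi_errors.sample();
    this.throw(msg); // assumption: pass the message on so normal reporting still occurs
  endfunction
endclass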

Whenever a message containing the text “AXI_WRITE_RESPONSE_SLVERR” is issued from anywhere in the verification environment, the ‘caught’ method is invoked, which in turn samples the appropriate covergroup. Additionally, you can specify more parameters in the caught API to restrict which ‘scenarios’ should be caught.

vmm_log_catcher::caught(
    string name     = "",
    string inst     = "",
    bit    recurse  = 0,
    int    typs     = ALL_TYPS,
    int    severity = ALL_SEVS,
    string text     = "");

The above API installs the specified message handler to catch any message of the specified type and severity, issued by the message service interface instances specified by the name and instance arguments, and containing the specified text. By default, this method catches all messages issued by this message service interface instance.

I hope this set of articles has been relevant and useful to you. I have made an attempt to leverage some of the built-in capabilities of the SystemVerilog language and the VMM base classes to target some of the challenges in creating configurable coverage models. These techniques can be improved further to make them more efficient and scalable. I would be waiting to hear any inputs that you have in this area.

Posted in Automation, Configuration, Coverage, Metrics, Reuse, Structural Components, VMM, VMM infrastructure | Comments Off

Building & Configuring Coverage Model – VMM Style – Part-II

Posted by paragg on 25th June 2011

Parag Goel, Senior Corporate Application Engineer, Synopsys

In the previous post, we looked at how you can enable/disable the different types of coverage encapsulated in the Coverage Model wrapper class. In this post, let’s look at how we can easily create an infrastructure to pass different inputs to the wrapper class so as to be able to configure the coverage collection based on user requirements. The infrastructure ensures that these values percolate down to the sub-coverage model groups.

The following are some of the key inputs that need to be passed to the different coverage component classes:

1. SV Virtual Interfaces so that different signal activity can be accessed

2. The Transactions observed and collected by the physical level monitors

3. The ‘Configuration’ information


Let’s look at how we can easily pass the signal-level information to the Coverage Model.

Step I: Encapsulation of the interface in the class wrapper.

class intf_wrapper extends vmm_object;

virtual axi_if v_if;

function new (string name, virtual axi_if mst_if);
super.new(null, name);
this.v_if = mst_if;
endfunction

endclass: intf_wrapper

Step II: In the top-level class/environment, set this object using the vmm_opts API.

class axi_env extends vmm_env;
`vmm_typename(axi_env)
intf_wrapper mc_intf;

function void build_ph();
mc_intf = new("Master_Port", tb_top.master_if_p0);
// Set the master port interface
vmm_opts::set_object("VIP_MSTR:vip_mstr_port", mc_intf, this);
endfunction: build_ph
endclass: axi_env

Step III: Connecting in the coverage class.

A. Get the object containing the interface in the coverage model class using vmm_opts.

assert($cast(this.mst_port_obj, vmm_opts::get_object_obj(is_set, this, "vip_mstr_port")));

B. Connect the local virtual interface to the one contained in the object.

this.cov_vif = mst_port_obj.v_if;

Now, we need to pass the transaction objects collected by the monitor to the coverage collector. This can be conveniently done in VMM using TLM communication, through the vmm_tlm_analysis_port, which establishes the communication between the producer (the monitor) and its subscribers/observers.

class axi_transfer extends vmm_data;

. . .

endclass

class axi_bus_monitor extends vmm_xactor;

vmm_tlm_analysis_port#(axi_bus_monitor, axi_transfer) m_ap;

task collect_trans();
// Write to the analysis port.
m_ap.write(trans);
endtask
endclass

class axi_coverage_model extends vmm_object;
vmm_tlm_analysis_export #(axi_coverage_model, axi_transfer) m_export;

function new (string inst, vmm_object parent = null);
super.new(parent, inst);
m_export = new(this, "m_export");
endfunction

function void write(int id, axi_transfer trans);
// Sample the appropriate covergroup once the transaction
// is received in the write function.
endfunction

endclass

To set up the TLM Connections in the agent/environment, we need to do the following:

class axi_subenv extends vmm_group;

// Instantiate the component classes and create them.

axi_bus_monitor mon;

axi_coverage_model cov;

. . .

virtual function void build_ph;
mon = new("mon", this);
cov = new("cov", this);
endfunction

virtual function void connect_ph;

// Bind the TLM ports via VMM tlm_bind

mon.m_ap.tlm_bind(cov.m_export);

endfunction

endclass: axi_subenv

To make the Coverage Model truly configurable, we need to look at some of the other key requirements as well, at different levels of granularity. These can be summarized as the ability to do the following.

1. Enable/disable coverage collection for each covergroup defined. Every covergroup should be created only if the user wishes to do so, so there should be a configuration parameter which restricts the creation of the covergroup altogether. The same parameter should also be used to control the sampling of a covergroup.

2. The user must be able to configure the limits on the individual values being covered in the coverage model, within the legal set of values. For example, for a transaction field like BurstLength, the user should be able to tell the model which limits to cover within the legal range of ‘1’ to ‘16’ defined by the AXI spec. Providing lower and upper limits for transaction parameters like burst size, burst length, address, etc. makes the model reusable. These limits should be modeled as variables which can be overwritten dynamically.

3. The user should be able to control the number of bins to be created, for example for fields like address. The auto_bin_max option can be exploited to achieve this when the user has not explicitly defined bins.

4. The user must be able to control the number of hits for which a bin is considered covered. option.at_least can be used for this purpose, and its input can be a user-defined parameter.

5. The user should also have the control to specify the coverage goal, i.e. when the coverage collector should show the covergroup as “covered” even though the coverage is not 100%. This can be achieved by using option.goal, where the goal is again a user-defined parameter.

All the parameters required to meet the above requirements can be encapsulated in a class (i.e. a coverage configuration class), and this can be set and retrieved in a similar fashion to that described for setting and getting the interface wrapper class using the vmm_opts APIs.

class coverage_cfg extends vmm_object;
  int disable_wr_burst_len;
  . . .
  function new(vmm_object parent=null, string name);
    super.new(parent, name);
  endfunction
endclass

// In the coverage model class, the configuration object is retrieved in the constructor:

coverage_cfg cfg;

function new(vmm_object parent=null, string name);
  bit is_set;
  super.new(parent, name);
  $cast(cfg, vmm_opts::get_object_obj(is_set, this, "COV_CFG_OBJ"));
endfunction
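To illustrate how such a configuration object can drive the requirements listed above, here is a small sketch (all cfg fields other than disable_wr_burst_len are made up for the example):

class axi_wr_cov extends vmm_object;
  coverage_cfg cfg;
  int unsigned burst_len;
  bit [31:0]   addr;

  covergroup cg_wr_burst;
    option.per_instance = 1;
    option.auto_bin_max = cfg.addr_auto_bin_max; // requirement 3: number of auto bins
    option.at_least     = cfg.min_hits_per_bin;  // requirement 4: hits per bin
    option.goal         = cfg.wr_burst_goal;     // requirement 5: coverage goal
    LEN: coverpoint burst_len {
      bins legal[] = {[cfg.burst_len_min : cfg.burst_len_max]}; // requirement 2: value limits
    }
    ADDR: coverpoint addr; // no explicit bins, so auto_bin_max applies here
  endgroup

  function new(string name, coverage_cfg cfg, vmm_object parent = null);
    super.new(parent, name);
    this.cfg = cfg;
    if (!cfg.disable_wr_burst_len) cg_wr_burst = new(); // requirement 1: guarded creation
  endfunction

  function void sample_wr(int unsigned len, bit [31:0] address);
    burst_len = len;
    addr      = address;
    if (!cfg.disable_wr_burst_len) cg_wr_burst.sample(); // requirement 1: guarded sampling
  endfunction
endclass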

Wei Hua presents another cool mechanism for collecting these parameters, using the vmm_notify mechanism, in this earlier blog:

A Generic Functional Coverage Solution Based On vmm_notify

Hope you found this useful. I will be talking about how to track Error Coverage in my next blog, so stay tuned!

Posted in Communication, Configuration, Coverage, Metrics, Reuse, Structural Components, VMM, VMM infrastructure | Comments Off

Transactor Interfaces [VMM 1.2 style]

Posted by Vidyashankar Ramaswamy on 18th January 2010

In my earlier blog post I showed how to write a configurable physical interface. This time I shall look into the slave transactor and its interfaces. A slave transactor can have all or some of the interfaces shown in the following figure. It solely depends on the nature of the protocol and its surrounding environment. If you are developing verification IP which is used across multiple projects or groups, then you must consider the entire interface requirement and develop the VIP accordingly. Please note that I will be discussing only the analysis port [No 5] and the transport port [No 6] interfaces, which are developed using VMM 1.2 features.

(Image: slave transactor interfaces, including the analysis port [No 5] and the transport port [No 6].)

Analysis Port

Analysis ports are used to share the received transaction with the other testbench components. These components can also be referred to as listeners, subscribers, observers, targets, or sometimes passive components. As the name says, this port is used to distribute the transaction to one or more listeners for analysis. The important feature of the analysis port is that a single port can be connected to multiple subscribers. The component that broadcasts the transaction uses the analysis port, and the observers implement the analysis export. Each subscriber must implement the write method of the vmm_tlm_analysis_export class. The listeners can’t modify the transaction and can only copy its content for analysis. The most common testbench components that use this port are the scoreboard, functional coverage, debug, and reference model.

Analysis port in the producer (Initiator) model

The analysis port is implemented as follows. The port object is constructed in the transactor’s build phase. As this is a slave model, the write method is called after responding to the observed request on the bus.

File: vip_slave.sv

. . .
//////////// SLAVE MODEL ///////////////

class vip_slave extends vmm_xactor;
`vmm_typename(vip_slave)
// Variables declaration
. . .
// Analysis port
vmm_tlm_analysis_port#(vip_slave, vip_trans) analysis_port;
// TLM Blocking port
vmm_tlm_b_transport_port#(vip_slave, vip_trans) b_trans_resp_port;

// Component Phases
. . .
extern virtual function void build_ph();
extern virtual protected task main();
. . .
endclass: vip_slave
. . .
/////////////// Build Phase /////////////////
function void vip_slave::build_ph();
. . .
// Construct the analysis port for Observers
analysis_port = new(this, "analysis_port");
// Construct the transport port for response modification
b_trans_resp_port = new(this, "b_trans_resp_port");
. . .
endfunction: build_ph
. . .

////////////// Main Method////////////////
task vip_slave::main();
super.main();
forever begin
vip_trans tr;
int dly = 0;
. . .
if (tr.kind == vip_trans::READ) begin
// Retrieve the data
. . .
// Provide the handle to the modifying transactor
b_trans_resp_port.b_transport(tr, dly);
. . .
// Finally drive the data onto the bus
. . .
end
else begin
// WRITE
// Assemble the trans properties
. . .
// Provide the handle to the modifying transactor
b_trans_resp_port.b_transport(tr, dly);
. . .
// Store the data
. . .
end
. . .
// notify the observers about this transaction
this.analysis_port.write(tr);
. . .
end
endtask: main

Analysis export port in the target (consumer) model

Following is a very simple implementation of an observer. This observer class has an instance of the TLM analysis export class and overrides the virtual function write() to operate on the received transaction. Here the write method displays the transaction received from the initiator.

File: observer.sv

class observer extends vmm_object;
`vmm_typename(observer)
vmm_tlm_analysis_export#(observer, vip_trans) obsrv ;
vmm_log log = new("log", this.get_object_hiername());

virtual function void write (int id = -1, vip_trans tr);
`vmm_note(log, " ... From Observer: Rcvd Transaction ... ");
tr.display("");
endfunction: write

function new(vmm_object parent = null, string inst="");
super.new(parent, get_typename());
endfunction

/////////////// Build Phase /////////////////
function void build_ph();
. . .
// Construct the analysis export port

obsrv = new(this, "obsrv");
. . .
endfunction

endclass: observer

Transport Interface

The transactions received by a slave can be processed by a higher-layer transactor before responding to the request. In these situations, TLM transport ports are used for passing transactions in a blocking or non-blocking way. This slave model uses a blocking transport port to pass the transaction to another transactor for further processing. Blocking transport completes the transaction within a single method call and uses the forward path from initiator to target. To use this interface, the slave model should implement vmm_tlm_b_transport_port for issuing transactions, and the higher-layer transactor implements vmm_tlm_b_transport_export for receiving transactions. Please refer to the code shown above (vip_slave.sv). Also note that the transaction data modification (the call to the b_transport method) happens before storing the WRITE data or responding to a READ request.

In a transactor implementation, it is OK to construct and call the analysis port’s write() method without binding the port in the environment. This is not true for the transport port’s b_transport() method, which must be bound in the environment. So it is a good practice to make this port configurable (enable/disable) using VMM options.
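A sketch of that switch is shown below (the option and class names are hypothetical, and it assumes the vmm_opts::get_bit(name, doc) convenience method): the enable is sampled once during configuration and used to guard the b_transport() call.

class my_slave extends vmm_xactor;
  `vmm_typename(my_slave)
  vmm_tlm_b_transport_port#(my_slave, vip_trans) b_trans_resp_port;
  protected bit use_resp_port;

  function new(string inst, vmm_unit parent = null);
    super.new(get_typename(), inst, 0, parent);
  endfunction

  virtual function void build_ph();
    b_trans_resp_port = new(this, "b_trans_resp_port");
  endfunction

  virtual function void configure_ph();
    // Read the enable once; it defaults to off, so the port need not be bound
    use_resp_port = vmm_opts::get_bit("use_resp_port", "Enable the response modifier port");
  endfunction

  virtual protected task main();
    super.main();
    forever begin
      vip_trans tr;
      int dly = 0;
      // ... observe the request on the bus and assemble tr ...
      if (use_resp_port) b_trans_resp_port.b_transport(tr, dly);
      // ... drive or store the (possibly modified) response ...
    end
  endtask
endclass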

Response modifier Transactor

This transactor shows how to instantiate the TLM export class object and implement the b_transport method. Following is a very simple implementation of the transport method, where the received transaction is displayed.

File: resp_modifier.sv
class resp_modifier extends vmm_xactor;
`vmm_typename(resp_modifier)
vmm_tlm_b_transport_export#(resp_modifier, vip_trans) b_trans_resp_export ;

virtual task b_transport (int id = -1, vip_trans tr, ref int dly);
`vmm_note(log, " .... Resp Trans Modifier .... ");
tr.display("");
endtask: b_transport

function new(vmm_unit parent = null, string inst="");
super.new(get_typename(), inst, 0, parent);
endfunction

/////////////// Build Phase /////////////////
function void build_ph();
. . .
// Construct the transport export port

b_trans_resp_export = new(this, "b_trans_resp_export");
. . .
endfunction

endclass: resp_modifier

Connecting it all together

The last step is to create the environment. The required components are constructed in the build phase. Connect phase is used to bind the ports appropriately as shown in the code below.  Please refer to the VMM user guide for more information on TLM 2.0 interfaces.

File: tb_env.sv
. . .
`include "vip_slave.sv"
`include "observer.sv"
`include "resp_modifier.sv"
. . .
////////// TB ENVIRONMENT ////////////////
class tb_env extends vmm_group;
`vmm_typename(tb_env)

// VIP Instantiation
vip_slave slv;
. . .

// TLM Port Declaration
observer        obsrv_vip_trans;
resp_modifier trans_mod_xactor;
. . .

// Component Phases
extern virtual function void build_ph();
extern virtual function void connect_ph();
. . .
endclass: tb_env
. . .

////////// Build Phase ////////////////
function void tb_env::build_ph();
. . .
this.slv = vip_slave::create_instance(this, "slv", `__FILE__, `__LINE__);

// Create the observer component
obsrv_vip_trans = new(this, "TRANS_OBSVR");
// Create the response modifier transactor
trans_mod_xactor = new(this, "RESP_MODIFIER");
. . .
endfunction: build_ph

/////////// Connect Phase ////////////
function void tb_env::connect_ph();
. . .
// Bind the analysis Port
this.slv.analysis_port.tlm_bind(obsrv_vip_trans.obsrv);
// Bind the transport port
this.slv.b_trans_resp_port.tlm_bind(trans_mod_xactor.b_trans_resp_export);
. . .
endfunction: connect_ph
. . .

I hope you find this article useful. Please feel free to send me your opinion on this.

Posted in Communication, Structural Components, Transaction Level Modeling (TLM), VMM infrastructure | Comments Off

VMM 1.2 – The Movie

Posted by John Aynsley on 14th January 2010


John Aynsley, CTO, Doulos

To celebrate the release of VMM 1.2 on VMM Central, I thought I would do something a little different and share with you a video giving a brief overview of the new features, including the implicit phasing and TLM-2 communication. So grab some popcorn, sit back, and enjoy…

Posted in Phasing, Structural Components, Transaction Level Modeling (TLM), VMM | Comments Off

Transactors and Virtual Interface

Posted by Vidyashankar Ramaswamy on 8th January 2010

In my previous blog post I showed a high-level view of a transactor and its properties. In this article I look into more detail about developing a reusable transactor with a physical interface. There are many ways to connect a transactor to the physical interface. Everybody is aware of how we used to do this in the object constructor. This is similar to hardwiring the connection in the environment. Another way is to make the interface configurable by the environment, thus removing any dependency between the test, the env, and the DUT interface. This can be accomplished in two steps. The first step is to create an object wrapper for the virtual interface and make it one of the properties of the transactor. The second step is to set this object using VMM configuration options, either from the enclosing environment or from the top level.


Step1. Developing the port object

VMM set/get options cannot be used directly to set a virtual interface. To make this work, we have to implement an object wrapper for the virtual interface. Sample code is shown below. Please note that the class name (master_port) and the interface name (vip_if) should be changed appropriately, e.g. axi_master_port, axi_if, etc.

File: master_port.sv
class master_port extends vmm_object;

virtual vip_if.master mstr_if;
vmm_log log;

function new (string name, virtual vip_if.master mstr_if);
super.new(null, name);
log = new("master_port", name);
this.mstr_if = mstr_if;
if (mstr_if != null)
`vmm_note(log, "\n**** Master I/F created ****\n");
else
`vmm_error(log, "\n**** Master Interface is NULL ****\n");
endfunction

endclass: master_port

Step2. Configuring the Virtual Interface

In the transactor’s connect phase, get a handle to the virtual interface using the vmm_opts get method. This establishes the connection between the transactor and the DUT pin interface. Please do not forget to check the object handles and print debug messages using VMM messaging service. This will greatly reduce the debug time if things are not connected properly in the environment.

File: master.sv

//////////////////// Master Model //////////////
class master extends vmm_xactor;
`vmm_typename(master)
// Variables declaration
virtual vip_if.master mstr_if;

////////////// Connect_vitf method ////////////////
function void connect_vitf(virtual vip_if.master mstr_if);
begin
if (mstr_if != null)
this.mstr_if = mstr_if;
else
`vmm_fatal(log, "Virtual port [Master] is not available");
end
endfunction: connect_vitf

////////////// Connect Phase ////////////////
function void connect_ph();
begin
master_port mstr_port_obj;
bit is_set;
// Interface connection
if ($cast(mstr_port_obj, vmm_opts::get_object_obj(is_set, this, "vip_mstr_port"))) begin
if (mstr_port_obj != null)
this.connect_vitf(mstr_port_obj.mstr_if);
else
`vmm_fatal(log, "Virtual port [Master] wrapper not initialized");
end

end
endfunction: connect_ph

endclass: master

Finally, the interface is set using the vmm_opts set method in the environment. In VMM 1.2, the verification environment is created by extending the vmm_group base class. VMM 1.2 supports both explicit and implicit phasing mechanisms. In the implicit phasing mechanism, the connect phase is used to configure the virtual interface. Use the connect_vitf() method directly to get similar support in the explicit phasing mechanism. Sample code is shown below. Please note that the interface (master_if_p0) is instantiated in the top-level testbench (tb_top).

File: vip_env.sv

//////////////////// Environment  //////////////

`include "vmm.sv"

class vip_env extends vmm_group;
`vmm_typename(vip_env)

// Variables declaration
master_port mstr_p0;

////////////// Build Phase ////////////////
function void build_ph();
begin

mstr_p0 = new("master_port", tb_top.master_if_p0);

end
endfunction: build_ph

////////////// Connect Phase ////////////////
function void connect_ph();
begin
bit is_set;

// Set the master port interface
vmm_opts::set_object("VIP_MSTR:vip_mstr_port", mstr_p0, this);

end
endfunction

endclass: vip_env

Explicit phasing environment:

Extend vmm_env to create the explicit phasing environment. The connect_vitf() method is called in the build phase of the environment. Sample code is shown below. Please note that one can use a transactor iterator, instead of the object hierarchy, to call the connect_vitf() method.

File: vip_env.sv

//////////////////// Environment  //////////////
`include "vmm.sv"

class vip_env extends vmm_env;
`vmm_typename(vip_env)

// VIP’s used …
master mstr_drvr;

////////////// Build Phase ////////////////
function void build();
begin
super.build();

// Set the master port interface
this.mstr_drvr.connect_vitf(tb_top.master_if_p0);

end
endfunction: build

endclass: vip_env

Please feel free to comment and share your opinion on this. In my next article I shall discuss more about the transactor’s interface and how to develop them using VMM1.2 features.

Posted in Configuration, Structural Components | 4 Comments »

Verification in the trenches: Creating your verification components using VMM1.2

Posted by Ambar Sarkar on 25th December 2009

Dr. Ambar Sarkar, Chief Verification Technologist, Paradigm Works Inc.

Ever wonder why it is hard to mix and match verification components from different sources and have them play nicely with the one you created? You want all of these components to execute in sync with each other through the phases of their construction, configuration, shutdown, etc. For example, if the AXI slave transactor is executing its reset phase while the PCIe stimulus generator is sending in DMA read requests to the AXI interface, you have a problem. Often, you end up adding dedicated code or using explicit synchronization objects such as events to get the right coordination. Wouldn’t it be nice if this synchronization came automatically?

This is where the vmm_unit base class introduced in VMM1.2 comes in. The basic idea is to derive your verification component from this predefined class and you are guaranteed that the verification environment will automatically synchronize its execution with the others. While there is much offered by the vmm_unit class, the following statement summarizes its real benefit:

The vmm_unit class comes with a rich set of built-in synchronization points.

These synchronization points are represented as predefined tasks or functions called phases. The verification engineer provides the actual implementation of these phases. The environment makes sure that all the objects derived from the vmm_unit class get their phases called in a well defined order, so that once an object moves into a phase, it is guaranteed that all its siblings have completed the previous phase. For example, once a component enters reset, you know every other associated component is either being reset or about to enter the reset state.

A couple of things to note, however. First, you do not have to provide an implementation for each and every phase. If you do not define a phase, the default action is that the object will wait for the others to finish that phase before moving to the next one. Second, you can override, replace, or even add your own phases to introduce a finer or different synchronization scheme altogether.

So what does the implementation end up looking like? Here is a snippet from something that I coded recently for a reusable module-level verification environment. Note that I defined only a few phases of my own and used the default implementation for the others.

Predefined phase and sample code snippet:

build_ph()
    // Create various functional components of this environment
    pwr_hi = new("subenv", "PWR_HI", this);
    pwr_pi = new("subenv", "_PWR_PI", this);
    // Instantiate a consensus manager
    cm = new(this, {this.get_object_name(), "_CM"}, pwr_port);
    ...

configure_ph()
    // If someone built me as a sub-environment, take appropriate action
    if (is_subenv) begin
        // Disable the host interface driver
        ...
    end

connect_ph()
    // Connect the components as needed
    pwr_hi.chk.ana_port.tlm_bind(sb.pwr_hi_sb_chk_ap);
    if (pwr_pi.has_generator) pwr_pi.gen.ana_port.tlm_bind(sb.pwr_pi_sb_post_ap);

start_of_sim_ph()
    // Put a diagnostic message, otherwise leave empty
    `vmm_verbose(log, "Starting simulation");

reset_ph()
    // Power cycle
    pwr_port.dck.reset <= 0;
    @(pwr_port.dck);
    pwr_port.dck.reset <= 1;
    repeat (10) @(pwr_port.dck);
    pwr_port.dck.reset <= 0;
    repeat (2) @(pwr_port.dck);

training_ph()
    // Leave as default

config_dut_ph()
    // SW initialization sequence
    // ...

start_ph()
    // Leave as default

start_of_test_ph()
    // Leave as default

run()
    // All you need is to wait for the consensus manager to agree to shut down
    cm.wait_for_end_t();

shutdown_ph()
    // Leave as default

cleanup()
    // Leave as default

report()
    // Dump the final scoreboard status
    sb.report();

destruct()
    // Leave as default

The key is to make sure that your code is partitioned into the appropriate phases, as shown above. Also note that a bunch of the phases were left alone to their default implementation.

Okay, one minor detail. You do not directly derive from the vmm_unit base class. Instead, two classes, vmm_xactor and vmm_group, have been provided. Both are derived from vmm_unit, so you have all the support for synchronization. vmm_xactor should be used as the base class for defining your individual transactors, whereas vmm_group should be used as the base class for components that put together several others into a single entity, such as the top-level environment or a top-level interface VIP.
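As a minimal sketch of what that looks like (class names and phase bodies are illustrative, and the constructors assume the usual VMM 1.2 name/instance/parent arguments):

class my_driver extends vmm_xactor;
  `vmm_typename(my_driver)

  function new(string inst, vmm_unit parent = null);
    super.new(get_typename(), inst, 0, parent);
  endfunction

  virtual function void build_ph();
    // create channels, ports, etc.
  endfunction

  virtual task reset_ph();
    // drive the interface to its idle/reset state
  endtask
endclass

class my_env extends vmm_group;
  `vmm_typename(my_env)
  my_driver drv;

  function new(string name, vmm_object parent = null);
    super.new(get_typename(), name, parent);
  endfunction

  virtual function void build_ph();
    drv = new("drv", this);
  endfunction

  virtual function void connect_ph();
    // bind TLM ports and virtual interfaces here
  endfunction
endclass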

Of course, the correctness of your synchronization will still depend on what you end up implementing in the body of the predefined phases. The names of the predefined phases give a hint, but you are still subject to what the other components implemented in their corresponding phases. Do follow the spirit of what each phase is supposed to do. Do not connect your components in the build_ph() phase even if the test passes; do so in connect_ph(). A small price to pay for most cases, IMHO.

Do note that there often are scenarios where some components need to be synchronized separately from others. For example, in a PCIe-based SoC, what if you need the OCP interface to be up and running before you bring your PCIe interface out of reset? In this case, you definitely do not want the OCP and PCIe VIPs to run their configure and reset phases in lock-step with each other. This is where advanced synchronization features such as vmm_timeline come into play, but that’s a topic for the next post. Stay tuned.

This article is the 3rd in the Verification in the trenches series. Hope you found this article useful. If you would like to hear about any other related topic, please comment or drop me a line at ambar.sarkar@paradigm-works.com. Also, if you are starting out fresh, please check out the free VMM1.2 environment generator.

Posted in Phasing, Structural Components | Comments Off

Just in time for the holidays: VMM 1.2!

Posted by Janick Bergeron on 15th December 2009

Janick Bergeron
Synopsys Fellow

I am pleased to see that the OpenSource version of VMM 1.2 is finally released. It is the culmination of six months of hard work by the entire VMM team, the hundreds of customers who provided input on its requirements, and the dozens of teams who contributed their feedback during the beta period.

What is new in VMM 1.2 is a “secret de Polichinelle” (an open secret). Ever since the start of the beta period, several VMM users and Synopsys engineers have published tutorials, seminar presentations and blog articles on many of its powerful aspects. Nonetheless, I would like to take this opportunity to give you the highlights and pointers to where you can find more information.

A new User’s Guide

One of the most important aspects of this release—and one that has not been mentioned so far—is the completely revamped and expanded User’s Guide. We have integrated the content of the VMM for SystemVerilog book, the book’s errata, and the previous User’s Guide into a single User Guide that completely documents all of the features of the class library. Furthermore, the body of this new User’s Guide has been expanded to present the methodology in a style that will be easier to learn, with many examples. Speaking of examples, this latest distribution contains a lot more examples (in $VMM_HOME/sv/examples), illustrating the many application domains of the VMM and all of its new features.

Implicit Hierarchical Phasing

The original VMM used explicit phasing exclusively. With 1.2, VMM now supports implicit hierarchical phasing. With implicit phasing, transactors and environments need not be responsible for the phasing of the components they instantiate: that is taken care of automatically by the new vmm_timeline object. The implicit phasing is also hierarchical, meaning that an environment may contain more than one vmm_timeline instance. Sub-timelines limit the scope and interaction of user-defined phases when block-level environments are reused in a system context. Sub-timelines may also be rolled back if their portion of the verification environment needs to be stalled or restarted, for example because its corresponding functionality in the DUT has been powered down. Furthermore, VMM allows implicit and explicit phasing to be arbitrarily mixed: instead of insisting that it be in control of every aspect of a verification environment, it can import portions of an environment described using an alternative phasing methodology and have them be explicitly phased using a different mechanism by encapsulating them in a vmm_subenv instance. Similarly, any VMM environment can be subjugated to another phasing methodology by allowing vmm_timeline instances to be explicitly phased.

TLM 2.0

In addition to the vmm_channel, VMM 1.2 now offers an alternative transaction-level interface mechanism inspired by OSCI’s Transaction-Level Modeling standard version 2.0. I say “inspired” because it is not a direct translation of the SystemC TLM standard, as the SystemVerilog language does not support multi-inheritance used in the SystemC implementation. The TLM2 standard is radically different from TLM1 because the latter did not live up to its promises of model interoperability and simulation performance. In addition to specifying an interface and transport mechanism, TLM2 specifies clear transaction progress and completion models through phases and the Base Protocol. VMM has always provided similarly well-defined transport mechanism (vmm_channel) and completion models (see pp176-195 of the original VMM book). With the addition of TLM2 sockets, VMM can also be used to implement high-performance virtual prototyping models in SystemVerilog. Of course, we’ve made sure that you can attach a vmm_channel to an initiator or target blocking or nonblocking socket interface for maximum flexibility.

Object Hierarchy

Whereas modules form a strict hierarchy in SystemVerilog, class instances (also known as objects) do not – at least from a language standpoint. However, it is a common mental model even though it is not enforced by the language. VMM 1.2 has the ability to define parent-child relationships between any instances of the vmm_object class. And because that class is the base class for all other VMM classes, any instance of a VMM class or user-defined extensions thereof can have a parent and any number of children. This creates a user-defined hierarchy of objects. And because each object has a name, it implicitly creates a hierarchical naming structure. Furthermore, because this hierarchy and the names of its components are entirely user-defined, VMM 1.2 provides the concept of namespaces to create alternative object hierarchies and names, making it easy to create hierarchical registries or to map an object hierarchy to another one. Objects can easily be found by name or by traversing the hierarchy from parent to child or vice-versa.

Factory API

VMM has always had the concept of class factories (see p217 in the original VMM book). It used the factory pattern in all of its pre-defined generators and recommended that it be used whenever transaction objects were created or randomized (see Rules 4-115 and 5-6 in the original VMM book). It simply did not provide any pre-defined utility to ease the implementation or overriding of class factory instances. VMM 1.2 remedies this situation by introducing a class factory API that makes it easier to replace class factory instances, as well as to build class factories. Furthermore, it provides two factory override mechanisms: a fast one that creates class instances with default values, and a slower one that creates exact copies. And, being strongly typed, the new factory API will detect at compile time if you are attempting to replace a factory instance with an incompatible type.

And many more!

VMM 1.2 provides many more additional features, like hierarchical options, RTL configuration support, and test concatenation.

Learning more

You can download the OpenSource distribution here. You will also find VMM 1.2 in your VCS 2009.12-1 distribution (use the +define+VMM_12 compile-time command-line option to enable it!).

Visit this blog often, as many industry leaders and Synopsys engineers will continue to provide insights on the new features included in VMM 1.2

Also, stay tuned for a series of one-day VMM 1.2 seminars and workshops that will be touring the major semiconductor centers around the globe.

Posted in Announcements, Debug, Phasing, Structural Components, Transaction Level Modeling (TLM), VMM infrastructure | 2 Comments »

Developing transactors using VMM 1.2

Posted by Vidyashankar Ramaswamy on 8th December 2009

There are many ways to design and develop a transactor. The following is the way I visualize it. Typical transactor components are shown in the following figure. Based on their functionality, transactors can be grouped into up-stream, down-stream, pass-through or passive monitor types. I shall explain in brief what I mean by these. An up-stream transactor can be a stimulus generator, and a down-stream transactor can be a bus-functional master/slave model. A pass-through transactor can exist in the testbench to connect a master and the bus-functional model. A passive monitor simply monitors the bus interface to which it is connected and broadcasts the packet information whenever it is available. The dotted line in the figure partitions the transactor based on functionality and shows the different port connections.

The right side of the dotted line in the above figure represents the up-stream side. This can be a producer (master or stimulus generator), in which case it has only an output port. This output port is designed as a TLM transport port.

The left side of the dotted line represents the downstream: this can be a slave transactor in which case the receiving port will be a VMM channel. If the downstream transactor is connected to the DUT, then you need to declare a virtual interface and bind it through a port object to the physical interface. This is done from the enclosing environment. This makes the BFM component reusable across test benches. In my next article I shall show you an example about the port object.

If you are designing a pass-through transactor, then you need both a VMM channel for receiving transactions from the producer and a TLM transport port for sending transactions to the consumer. An analysis port can be used if any observers are hooked up to this transactor. Also note that you do not need any virtual port connection for a pass-through transactor.
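For example, here is a minimal sketch of such a pass-through transactor (it assumes a vmm_data-derived transaction class pkt and its `vmm_channel-generated pkt_channel; all names are illustrative):

class pkt_passthru extends vmm_xactor;
  `vmm_typename(pkt_passthru)
  pkt_channel in_chan;                                    // from the upstream producer
  vmm_tlm_b_transport_port#(pkt_passthru, pkt) out_port;  // to the downstream consumer
  vmm_tlm_analysis_port#(pkt_passthru, pkt)    ana_port;  // to any observers

  function new(string inst, vmm_unit parent = null, pkt_channel in_chan = null);
    super.new(get_typename(), inst, 0, parent);
    if (in_chan == null) in_chan = new("pkt_channel", "in_chan");
    this.in_chan = in_chan;
  endfunction

  virtual function void build_ph();
    out_port = new(this, "out_port");
    ana_port = new(this, "ana_port");
  endfunction

  virtual protected task main();
    super.main();
    forever begin
      pkt tr;
      int dly = 0;
      this.wait_if_stopped_or_empty(this.in_chan);
      this.in_chan.get(tr);            // receive from the producer
      out_port.b_transport(tr, dly);   // hand off to the consumer
      ana_port.write(tr);              // broadcast to observers
    end
  endtask
endclass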

A monitor component will have only an analysis port along with the physical interface connection.

You might be wondering why the analysis port and the callback interface are centered between up-stream and the down-stream. If you have guessed it, yes you’re right. Both master and slave need to broadcast the information/packet which is passing through them to the observers. The observer can be a scoreboard, coverage collector or a simple file write for debug purposes.

To make the master/slave transactor reusable, callback methods are used. A callback method allows the user to extend the behavior of a transactor without having to modify the transactor itself. VMM 1.2 also supports a factory service to replace a transactor. I favor callbacks for transactor extensions. So which one should you use? I shall leave that up to you to decide; it could be a topic of its own.

Please feel free to comment and share your opinion. For more information please refer to the VMM 1.2 user guide.

Posted in Communication, Structural Components, Transaction Level Modeling (TLM), VMM infrastructure | Comments Off

Finding out which vmm callbacks are registered with a particular vmm_xactor instance

Posted by Avinash Agrawal on 13th November 2009

Avinash Agrawal, Corporate Applications, Synopsys

Is it possible to find out which VMM callbacks are registered with a particular vmm_xactor instance?

The answer is yes. Here’s how:

The vmm_xactor base class in VMM instantiates a queue of vmm_xactor_callbacks, callbacks[$]. So, it is possible to use the queue methods to find out details on the callbacks associated with an instance of the vmm_xactor class. To find out the number of callbacks associated with a vmm_xactor instance, we can use the size() function on the callbacks queue. And to list the names of the callbacks associated with the vmm_xactor instance, we can add a new string member (that would carry the name associated with the callback) to the classes derived from vmm_xactor_callbacks, and display this string variable as needed.

Here is an example. Assume that we have extended the vmm_xactor_callbacks class as follows. We also add a string name that will carry the name associated with the callback.

class atm_driver_callbacks extends vmm_xactor_callbacks ;
string name;
// Called before a transaction is executed
virtual task pre_trans_t(atm_driver master, atm_cell tr, ref bit drop); endtask
// Called after a transaction has been executed
virtual task post_trans_t(atm_driver master, atm_cell tr);endtask
endclass

The vmm_xactor instance can have a task that displays the callbacks associated with it, as the following example code shows:

task atm_driver::displaycallbacks;
begin
atm_driver_callbacks mycb = new();
$display("LOG : number of callbacks is %d\n", callbacks.size());
$cast(mycb, callbacks[0]);
$display("LOG : callback[0] is %s\n", mycb.name);
$cast(mycb, callbacks[1]);
$display("LOG : callback[1] %s\n", mycb.name);
end
endtask

And, in the build() method of the environment, we have the following code:

//Instantiate the callback objects
atm_sb_callbacks atm_sb_cb = new();
atm_cov_callbacks atm_cov_cb = new();
atm_driver_callbacks mycb = new();
atm_cov_cb.name = "CBNAME1";
atm_sb_cb.name = "CBNAME2";

//Register the callbacks to the driver instance drv
this.drv.append_callback(atm_cov_cb);
this.drv.append_callback(atm_sb_cb);
//Call the xactor instance class method that displays the callbacks as follows:
this.drv.displaycallbacks;

This will produce the following output:
LOG : number of callbacks is 2
LOG : callback[0] is CBNAME1
LOG : callback[1] CBNAME2

As seen in the output pasted above, the names and the number of VMM callbacks registered with the particular vmm_xactor instance are displayed.

Posted in Callbacks, Debug, Structural Components | Comments Off

Verification in the trenches: an end user’s viewpoint on VMM1.2

Posted by Shankar Hemmady on 11th November 2009

Dr. Ambar Sarkar, Chief Verification Technologist, Paradigm Works Inc.

As a soup-to-nuts functional verification consultant, I always find myself an integral member of my client’s verification team, be it during the project planning stage or during the mad rush at the end of tape-out. The roles include everything: being an individual test writer, a verification architect, or even the verification lead for a large, globally dispersed verification team. And yes, the schedules are invariably aggressive, and the budgets tight.

So how does having a sound verification methodology such as VMM help? Broadly speaking, it offers a framework within which the verification engineer can get the job done efficiently. Instead of spending time on environment and methodology issues such as wondering about how to configure all the verification components to the same setting, he or she can focus more on identifying the application specific scenarios and easily configuring the environment to generate those. The challenge, of course, is being able to understand exactly how the methodology helps within the context of a given project.

Given this challenge, I often find myself explaining to teams, in terms of their existing verification environments, how a methodology or a feature can help and the corresponding trade-offs involved. Of course, not every feature is applicable to the needs of a given team, so I pay extra attention in explaining how a feature helps, its pros-and-cons, and how to best integrate it with the current verification environment.

In this series of blog posts, I will share my opinions on how the features newly introduced with the VMM1.2 release can help or could have helped the projects I have been directly involved with so far. Of course, I will not share the gory details, but I hope to share enough so that anyone looking at these new features can evaluate them from an end-user’s perspective.

So what’s new with VMM1.2? And how does it help? Check out the table below where it identifies some of the key features, and a “one-line” description for each. This table reflects how I see these features potentially benefit the projects I have worked with; your mileage may vary.

Feature and description:

vmm_object: A base class for all object types, making it easy to traverse hierarchies and locate objects by name.

vmm_unit: A base class for all structural elements such as generators, transactors, etc., making it easier to synchronize their actions when executing the phases of a test.

vmm_timeline: Allows users to coordinate and even define custom phases.

vmm_test: Adds support for multi-test management.

`vmm_unit_config_xxx: Macros to configure and build the verification environment structure in a top-down manner.

vmm_opts: Flexible options handling, from the command line or otherwise.

vmm_rtl_config: Facilitates covering cases where a number of structural configurations for the RTL exist.

vmm_tlm: Makes sure your components can really connect to each other and to foreign objects in a “plug and play” manner.

regular expressions: A very convenient way to access objects by name and set specific properties on them.

`vmm_class_factory: Macros to replace and extend object functionality anywhere in the code hierarchy conveniently, and to support a top-down build process.

Table 1. VMM 1.2 Features

While the one-liners above help give an overall idea about each feature, the Verification in the trenches series of blog posts will describe each in further detail, starting with their motivation, pros and cons, and how to incorporate them quickly. The hope is that you will be able to judge for yourself how some of the features described can help your current and future project needs.

Feel free to comment and share your opinions and experiences. I will be very interested in hearing from folks in the trenches. How are these features working out for you? Are you getting the benefits you had hoped for? Which of these features can you use today? Drop me a line! Share!

Posted in Phasing, Structural Components, Transaction Level Modeling (TLM), VMM infrastructure | Comments Off

Protocol Layering Using Transactors

Posted by Janick Bergeron on 9th June 2009

Janick Bergeron
Synopsys Fellow

Bus protocols, such as AHB, are ubiquitous and often used in examples because they are simple to use: some control algorithm decides which address to read or write and what value to expect or to write. Pretty simple.

But data protocols can be a lot more complex because they can often be layered arbitrarily. For example, an Ethernet frame may contain a segment of an IP frame that contains a TCP packet which carries an FTP payload. Some Ethernet frames in that same stream may contain HDLC-encapsulated ATM cells carrying encrypted PPP packets.

How would one generate stimulus for these protocol layers?

One way would be to generate a hierarchy of protocol descriptors representing the layering of the protocol. For example, for an ethernet frame carrying an IP frame, you could do:

class eth_frame extends vmm_data;
   rand bit [47:0] da;
   rand bit [47:0] sa;
   rand bit [15:0] len_typ;
   rand ip_frame   payload;
   rand bit [31:0] fcs;

endclass

class ip_frame extends vmm_data;
   eth_frame transport;
   rand bit [3:0] version;
   rand bit [3:0] IHL;

   rand bit [7:0] data;
endclass

That works if you have exactly one IP frame per Ethernet frame. But what if your IP frame does not fit into the Ethernet frame and needs to be segmented? This approach works when you have a one-to-one layering granularity, but not when you have to deal with one-to-many (i.e. segmentation), many-to-one (i.e. reassembly, concatenation) or plesio-synchronous (e.g. justification) payloads.

This approach also limits the reusability of the protocol transactions: the Ethernet frame above can only carry an IP frame. How could it carry other protocols? Or random bytes? How could the IP frame above be transported by another protocol?

And let’s not even start to think about error injection…

One solution is to use transactors to perform the encapsulation. The encapsulator would have an input channel for the higher layer protocol and an output channel for the lower layer protocol.

class ip_on_ethernet extends vmm_xactor;
   ip_frame_channel  in_chan;
   eth_frame_channel out_chan;

endclass

The protocol transactions are generic and may contain generic references to their payload or transport layers.

class eth_frame extends vmm_data;
   vmm_data transport[$];
   vmm_data payload[$];

   rand bit [47:0] da;
   rand bit [47:0] sa;
   rand bit [15:0] len_typ;
   rand bit [ 7:0] data[];
   rand bit [31:0] fcs;

endclass

class ip_frame extends vmm_data;
   vmm_data transport[$];
   vmm_data payload[$];

   rand bit [3:0] version;
   rand bit [3:0] IHL;

   rand bit [7:0] data;
endclass

The transactor’s main() task simply waits for higher-layer protocol transactions, packs them into a byte stream, then lays the byte stream into the payload portion of new instances of the lower-layer protocol.

virtual task main();
   super.main();

   forever begin
      bit [7:0] bytes[];
      ip_frame  ip;
      eth_frame eth;

      this.wait_if_stopped_or_empty(this.in_chan);
      this.in_chan.activate(ip);

      // Pre-encapsulation callbacks (for delay & error injection)...

      this.in_chan.start();
      ip.byte_pack(bytes, 0);
      if (bytes.size() > 1500) begin
         `vmm_error(log, "IP packet is too large for Ethernet frame");
         continue;
      end

      eth = new(); // Should really use a factory here

      eth.da      = ...;
      eth.sa      = ...;
      eth.len_typ = 'h0800;  // Indicate IP payload
      eth.data    = bytes;
      eth.fcs     = 32'h0000_0000;

      ip.transport.push_back(eth);
      eth.payload.push_back(ip);

      // Pre-tx callbacks (for delay and Ethernet-level error injection)...

      this.out_chan.put(eth);
      eth.notify.wait_for(vmm_data::ENDED);

      this.in_chan.complete();

      // Post-encapsulation callbacks (for functional coverage)...

      this.in_chan.remove();
   end
endtask

When setting the header fields in the lower-layer protocol, you can use values from the higher-layer protocols (like setting the len_typ field to 0x0800 above, indicating an IP payload), you can use values configured in the encapsulator (e.g. a routing table), or they can be randomly generated with appropriate constraints:

if (!route.exists(ip.da)) begin
   bit [47:0] da = {$urandom, $urandom};  // $urandom is only 32-bit

   da[41:40] = 2'b00; // Unicast, global address
   route[ip.da] = da;
end
eth.da = route[ip.da];

The protocol layers observed by your DUT are then defined by the combination and order of these encapsulation transactors.
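
For example, here is a minimal sketch of how two such encapsulators might be stacked. The tcp_on_ip transactor, the tcp_segment transaction and the encapsulator constructor arguments are hypothetical (only vmm_channel and the ip_on_ethernet idea come from the text above), assuming each encapsulator takes its input and output channels at construction:

// Hypothetical channel types, assumed to be generated by `vmm_channel(tcp_segment),
// `vmm_channel(ip_frame) and `vmm_channel(eth_frame)
tcp_segment_channel tcp_chan = new("TCP chan", "layering");
ip_frame_channel    ip_chan  = new("IP chan",  "layering");
eth_frame_channel   eth_chan = new("ETH chan", "layering");

// Hypothetical encapsulators: each consumes the higher-layer stream and
// produces the lower-layer one
tcp_on_ip      tcp_encap = new("TCP over IP", tcp_chan, ip_chan);
ip_on_ethernet ip_encap  = new("IP over ETH", ip_chan,  eth_chan);

// eth_chan then feeds the Ethernet driver; adding, removing or reordering
// encapsulators changes the protocol stack seen by the DUT without touching
// the transaction descriptors themselves.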

vmm_scheduler instances may also be used at various points in the layering to combine multiple streams (perhaps carrying different protocol stacks and layers) into a single stream.

Posted in Modeling, Modeling Transactions, Phasing, Structural Components, SystemVerilog, Tutorial | 3 Comments »

How can VMM help control transactors easily?

Posted by Fabian Delguste on 29th May 2009

Fabian Delguste / Synopsys Verification Group

Controlling VMM transactors can sometimes be a bit hectic. A typical situation I see is when I have registered a list of transactors for driving some DUT interfaces but only want to start a few of them. Another common situation is when I want to turn off scenario generators and replay transactions directly from a file. Yet another task I often face is registering transactor callbacks without knowing where they are exactly located in the environment.

As you can see, there are many situations where fine-grain functional control of transactors is necessary.

Since VMM 1.1 came out, I have been using a new base class called vmm_xactor_iter that allows accessing any transactor directly by name. All I need to do is construct a vmm_xactor_iter with a regular expression and use the iterator to loop through all matching transactors.

To better understand how this base class works, I’ll show you a real-life example. The scope of this example is to show how to start generators only when vmm_channel playback has not been turned on. As you know, vmm_channel can be used to replay transactions directly from files containing transactions recorded in a previous session. This can speed up simulation by turning off constraint solving.

1. string match_xactors = (cfg.mode == tb_cfg::PLAYBACK) ? "/Drivers/" : "/./";
2.
3. `foreach_vmm_xactor(vmm_xactor, "/./", match_xactors)
4. begin
5.    `vmm_note(log, $psprintf("Starting %s", xact.get_instance()));
6.    xact.start_xactor();
7. end

  • In line 1, the match_xactors string takes the value "/Drivers/" when playback mode is selected; otherwise it takes "/./". In the first case, only the transactors whose name matches "Drivers" are selected; otherwise all transactors, including the generators, match
  • In line 3, the `foreach_vmm_xactor macro is used to create a vmm_xactor_iter from the previous regular expression. The macro traverses all matching transactors, and each one is accessed through the xact object and started

In case you’d like more control over vmm_xactor_iter, it’s possible to use its first() / next() / xactor() methods to traverse the matching transactors. It is also possible to check that the regular expression returns at least one transactor. Here is the same example written using these methods.

string match_xactors = (cfg.mode == tb_cfg::PLAYBACK) ? "/Drivers/" : "/./";
vmm_xactor_iter iter = new("/./", match_xactors);

if (iter.xactor() == null)
   `vmm_fatal(log, $psprintf("No matching transactors for '%s'", match_xactors));

for (vmm_xactor xact = iter.first(); xact != null; xact = iter.next()) begin
   xact.start_xactor();
end

Should you need to reclaim the memory used to store these transactors, it’s possible to enable their garbage collection by invoking vmm_xactor::kill().

The good news is that vmm_xactor_iter allows me to:

  • Configure the transactor without knowing its hierarchy
  • Provide dynamic access to transactors
  • Reduce code for multiple configurations and callback extensions (see the sketch after this list)
  • Use powerful regular expressions for name matching
  • Reuse transactors: no need to modify code when changing the environment content
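
To illustrate the callback-extension point above, here is a minimal sketch of registering the same callback on every matching transactor purely by name. The my_driver transactor and the my_driver_cbs facade (assumed to extend vmm_xactor_callbacks) are hypothetical; only the `foreach_vmm_xactor macro and vmm_xactor::append_callback() come from VMM:

// Hypothetical my_driver transactor and my_driver_cbs callback facade
my_driver_cbs cov_cb = new();

// Hook the callback into every driver instance, wherever it lives in the
// environment, without knowing its hierarchical location
`foreach_vmm_xactor(my_driver, "/./", "/Drivers/")
begin
   xact.append_callback(cov_cb);
end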

I hope you find vmm_xactor_iter, and all of the other VMM features, as useful as I do.

Posted in Configuration, Structural Components, SystemVerilog, Tutorial | Comments Off

VMM VIPs on multiple buses

Posted by Adiel Khan on 27th May 2009


Adiel Khan, Synopsys CAE

Increasingly, more design-oriented engineers are writing VMM code. Some are trying to map typically good design architecture practices to verification development.

A dangerous mapping is parameterization, from modules to classes.

In my old Verilog testbenches I would develop reusable modules and use #parameters extensively to control the settings of the modules I was instantiating. (It was a sad day when I heard IEEE was deprecating my friend the defparam).

module vip #(parameter int data_width = 16,
             parameter int addr_width = 16)
            (addr, data);

   output [addr_width-1:0] addr;
   inout  [data_width-1:0] data;

endmodule

This would allow me to instantiate this VIP for many bus variants.

vip #(64, 32)  vip_inst1(...);
vip #(32, 128) vip_inst2(...);

Mapping the approach from modules to classes, I could end up with:

class pkt_c #(parameter int data_size = 16,
              parameter int addr_size = 16)
   extends vmm_data;

   rand bit [addr_size-1:0] addr;
   rand bit [data_size-1:0] data;

endclass

// specialized class with 64 & 32 sizes
pkt_c #(64, 32) pkt1 = new();

// specialized class with 32 & 128 sizes
pkt_c #(32, 128) pkt2 = new();


Be warned: in the SystemVerilog testbench-centric view of VIP reusability, parameterization of classes leads to a dead end. Moving one layer of abstraction up, I really don’t care whether the interface is 32, 64 or 128 bits wide. What I want to do is use pkt_c throughout the verification environment. The simplest case is creating a reusable driver that uses pkt_c to drive an interface of any bus width.

However, if I try to use a generic class instantiation, I will get the default specialization with parameters 16 and 16. I cannot perform the $cast() needed to put the right pkt_c type onto the bus.


class pkt_driver_c extends vmm_xactor;
   virtual protected task main();
      forever begin : GET_OBJ_TO_SEND
         pkt_c pkt_to_send;               // default class instance: pkt_c#(16,16)
         pkt_c #(64, 32) pkt_created = new();
         void'(pkt_created.randomize());  // generator code
         $cast(pkt_to_send, pkt_created); // FAILS !!!!!
         ...
      end
   endtask
endclass

If you are using VMM channels, they must similarly be specialized and cannot carry generic parameterized classes:

`vmm_channel(pkt_c)

class pkt_driver_c extends vmm_xactor;
   pkt_c_channel in_chan; // Can only carry pkt_c#(16,16)!!!

Or you must select upfront which specialization you want to use with a parameterized channel:

class pkt_driver_c extends vmm_xactor;
   vmm_channel_typed #(pkt_c #(64, 32)) in_chan;


Hence, for the driver to operate on the correct object type, I need to instantiate the exact specialization throughout my entire environment and make the driver itself parameterized. Now you can clearly see that instantiating a specific specialization in the driver (or monitor, scoreboard, etc.) stops the code from being truly reusable for other bus widths.


pkt_c #(64, 32) pkt_to_send;
pkt_driver_c #(64, 32) driver;


A better approach is the one Janick described in the "Size Does Matter" blog post: using `define. Let’s expand on this and see how it works for reusable VIPs. The first thing that comes to mind is that a `define is a global-namespace macro with a single value, whereas I am using my VIP with two different bus architectures. Therefore, the `define alone is not enough: you also need a local constant to be able to exclude unwanted bits when the VIP is instantiated for various bus widths.

// default define values (can be overridden at compile time)
`ifndef MAX_DATA_SIZE
  `define MAX_DATA_SIZE 16
`endif
`ifndef MAX_ADDR_SIZE
  `define MAX_ADDR_SIZE 16
`endif

class pkt_c extends vmm_data;
   static vmm_log log = new("Pkt", "class");

   // instance constant to control actual bus sizes
   const int addr_size;

   logic [`MAX_ADDR_SIZE-1:0] addr;
   logic [`MAX_DATA_SIZE-1:0] data;

   // pass a_size as arg to coverage
   // ensuring valid coverage ranges.
   covergroup cg (int a_size);
      coverpoint addr
         {bins ad_bin[] = {[0:a_size]};}
   endgroup

   // sizes specialized at construction for pkts
   // on buses less than MAX bus widths
   function new(int a_s = `MAX_ADDR_SIZE);
      addr_size = a_s;
      cg = new(addr_size);
      `vmm_note(log, $psprintf("\nADDR_TYPE: %s\nDATA_TYPE: %s\nMAX_BUS_SIZE: %0d",
                               $typename(addr), $typename(data), addr_size));
   endfunction
endclass

The code above allows for a default implementation; all the user needs to do is set the `MAX_ADDR_SIZE and `MAX_DATA_SIZE symbols, and all the code will be fully reusable across drivers, monitors, subenvs, SoCs, etc.

For situations where two VIPs with different bus architectures are used, the compile-time symbols need to be set to the biggest bus architecture in the system; the smaller bus widths are then set using addr_size. It is not strictly necessary for the addr_size variable to be an instance constant or to be set during construction. However, using an instance constant ensures the bus width cannot be changed at runtime by users, while setting it during construction gives users the flexibility to set up the object as they want. For pseudo-static objects such as drivers, monitors, subenvs, masters, slaves, scoreboards, etc., you should check during the vmm_env::start phase that the verification components were constructed with the sizes appropriate for your particular design architecture.

N.B.: not shown above, but assumed, is that the addr_size variable would be used to ensure correct masking occurs when performing do_pack(), do_unpack(), compare(), etc.
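
As a rough illustration of that masking, here is a minimal sketch; the addr_match() helper is hypothetical (not part of the VIP) and simply restricts an address comparison to the low addr_size bits:

// Hypothetical helper: compare addresses using only the addr_size low bits,
// so unused MSBs of a narrower bus never cause a mismatch
function automatic bit addr_match(pkt_c a, pkt_c b);
   logic [`MAX_ADDR_SIZE-1:0] mask = '1;
   if (a.addr_size < `MAX_ADDR_SIZE)
      mask = mask >> (`MAX_ADDR_SIZE - a.addr_size);  // keep addr_size ones
   return ((a.addr & mask) == (b.addr & mask));
endfunction

The same kind of mask would be applied to addr (and a corresponding one to data) when packing, unpacking and comparing packets.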

Just to wrap up some loose ends…

I’m not totally discounting the merits of parameterized classes, just making sure people look at all the options. For instance, you could parameterize everything, set the SIZE at the vmm_subenv level, and map the SIZE parameters to all the other objects. At some point, though, you will want to monitor or scoreboard across different bus widths, and then parameterized-class casting will bite you, reducing you to manually mapping the members between the comparison objects. There is a time and a place for everything, so we probably need another blog post showing the merits of parameterized classes and where to use them.

The vmm_data class is not the only place where you might need to know the size of the bus; the same `define and instance-constant technique can be used throughout your VIP classes.

This blog does not discuss the pros and cons of putting coverage groups in your data object class. I merely used the covergroup in the data-object as a vehicle to demonstrate how you can make your classes more reusable. I think a separate blog about where best to put coverage will clarify the usage models.

All the code snippets can be run with VCS 2009.06 and VMM 1.1. Contact me for more complete code examples, or with any bugs or issues you find.

Posted in Coding Style, Configuration, Register Abstraction Model with RAL, Reuse, Structural Components, VMM | 8 Comments »

How to use VMM callbacks

Posted by Janick Bergeron on 19th November 2008

One of the major benefits of using VMM is that it helps create reusable and extensible transactors and verification environments. One of the key extensibility features of VMM is the “callback”.

The following code sample shows a transactor with its core functionality implemented in the main() task.

   class my_xactor extends vmm_xactor;
      ...
      task main();
         ...
         forever begin
            ...
            in_chan.get(in_tr);    // Get input transaction
            ...                    // Execute transaction
            out_chan.put(out_tr);  // Generate output transaction
            ...
         end
      endtask
   endclass

How would you modify this transactor to accommodate DUT or test-specific features such as saving the output transactions in a scoreboard, injecting errors or adding functional coverage? One option would be to modify the transactor itself and change the functionality in the main() task as shown in the code below:

   class my_xactor extends vmm_xactor;
      ...
      task main();
         ...
         forever begin
            ...
            in_chan.get(in_tr);    // Get input transaction
            if ($urandom % 10 < 2) in_tr.checksum = $random;
            ...                    // Execute transaction
            sb.expect(out_tr);
            out_chan.put(out_tr);  // Generate output transaction
            ...
         end
      endtask
   endclass

However, this makes the transactor not reusable for other DUTs or tests. You must avoid making DUT- or test-specific modifications to reusable code. This is a problem that is generic to reusable software, not just verification, and the object-oriented world provides a solution to it: virtual methods. Virtual methods can be used to provide extension points at relevant points in the functionality of the transactor:

   class my_xactor extends vmm_xactor;
      virtual task pre_exec(in_trans tr);
      endtask
      virtual task post_exec(out_trans tr);
      endtask
      ...
      task main();
         ...
         forever begin
            ...
            in_chan.get(in_tr);    // Get input transaction
            pre_exec(in_tr);
            ...                    // Execute transaction
            post_exec(out_tr);
            out_chan.put(out_tr);  // Generate output transaction
            ...
         end
      endtask
   endclass

VMM guidelines 4-155 through 4-158 provide recommendations for useful extension points through virtual methods. Virtual methods should also have suitable arguments allowing the behavior of the transactor to be observed or modified as necessary. It is now possible to provide DUT- or test-specific extensions of the transactor without modifying the reusable code:

   class my_dut_xactor extends my_xactor;
      virtual task post_exec(out_trans tr);
         sb.expect(tr);
      endtask
   endclass

   class my_test_xactor extends my_dut_xactor;
      virtual task pre_exec(in_trans tr);
         if ($urandom % 10 < 2) tr.checksum = $random;
      endtask
   endclass

There are two problems though. First, each time you extend the transactor, you create a new class type, but the verification environment needs to be written using a class type that is known a priori; if the class type changes for every test, the environment itself will need to be changed for every test. Second, to combine orthogonal extensions, you must extend the previously extended class type, creating a linear chain of new class types; covering every possible combination of N orthogonal extensions potentially leads to creating and maintaining 2^N classes (for example, three independent extensions, say error injection, coverage and scoreboarding, already yield up to 8 combinations).

Object-oriented programming provides several well-known approaches and solutions to common problems, called "patterns". Patterns are not base classes or libraries. They are techniques and examples of how to structure object-oriented code to solve a specific challenge. Different patterns can be used, alone or in combination.

The first problem can be solved using the "abstract factory" pattern. But it introduces several limitations. For example, it requires an argument-free constructor. This has the consequence of requiring separate methods for transactor configuration and connection. VMM was designed to use the simple and well-known pins-and-wires connectivity model, which has been used to connect Verilog modules for many, many years and is well understood by all design and verification engineers. Just like Verilog modules are instantiated with their configuration parameters and port connections specified at the same time, so are VMM transactors. And this requires that they be passed as arguments via the constructor. Second, the factory pattern is used only at the time the transactor is created. This creates a static lifetime for the transactor and its extensions: it is not possible to dynamically introduce or remove a transactor extension (such as error injection) during the execution of a test.

The better pattern to use for transactor extension is the "facade" pattern. It calls for the separation of the virtual methods into a separate class, called the "facade". The transactor then calls the virtual methods through that facade class.

   class my_xactor_cbs;
      virtual task pre_exec(in_trans tr);
      endtask
      virtual task post_exec(out_trans tr);
      endtask
   endclass

   class my_xactor extends vmm_xactor;
      my_xactor_cbs cbs;
      ...
      task main();
         ...
         forever begin
            ...
            in_chan.get(in_tr);    // Get input transaction
            if (cbs != null) cbs.pre_exec(in_tr);
            ...                    // Execute transaction
            if (cbs != null) cbs.post_exec(out_tr);
            out_chan.put(out_tr);  // Generate output transaction
            ...
         end
      endtask
   endclass

Using the facade pattern, the environment can be written using the original "my_xactor" class, and your extensions can be introduced into the existing transactor instance by assigning to the "cbs" facade instance. Extensions can also be dynamically introduced or removed, as the facade instance can be modified at any time.

   class my_dut_cbs extends my_xactor_cbs;
      ...
      virtual task post_exec(out_trans tr);
         sb.expect(tr);
      endtask
   endclass

   class my_test_cbs extends my_dut_cbs;
      ...
      virtual task pre_exec(in_trans tr);
         if ($urandom % 10 < 2) tr.checksum = $random;
      endtask
   endclass

   class tb_env extends vmm_env;
      my_xactor xact;
      ...
      virtual function void build();
         ...
         xact = new(...);
         begin
            my_dut_cbs cb = new(...);
            xact.cbs = cb;
         end
         ...
      endfunction
      ...
   endclass

   program test;
   initial begin
      tb_env env = new();
      my_test_cbs err_ext = new(...);
      fork
         #1000 env.xact.cbs = err_ext;
      join_none
      env.run();
   end
   endprogram

The facade pattern solves the new-class-type issue, but not the exponential number of class extensions. If an "observer" pattern is used in combination with the facade pattern, it is possible to register multiple facade instances with the same transactor, each providing an orthogonal extension.

   class my_dut_cbs extends my_xactor_cbs;
      ...
      virtual task post_exec(out_trans tr);
         sb.expect(tr);
      endtask
   endclass

   class my_test_cbs extends my_xactor_cbs;
      virtual task pre_exec(in_trans tr);
         if ($urandom % 10 < 2) tr.checksum = $random;
      endtask
   endclass

   class tb_env extends vmm_env;
      my_xactor xact;
      ...
      virtual function void build();
         ...
         xact = new(...);
         begin
            my_dut_cbs cb = new(...);
            xact.cbs.push_back(cb);
         end
         ...
      endfunction
      ...
   endclass

   program test;
   initial begin
      tb_env env = new();
      my_test_cbs err_ext = new(...);
      fork
         #1000 env.xact.cbs.push_back(err_ext);
      join_none
      env.run();
   end
   endprogram

The combination of the facade and observer patterns creates what is known as VMM callbacks. VMM provides some pre-defined functionality to support callbacks: the vmm_xactor_callbacks base class, the vmm_xactor::prepend_callback(), vmm_xactor::append_callback() and vmm_xactor::unregister_callback() methods, and the `vmm_callback macro. Using the provided VMM functionality, a reusable transactor would have the following structure:

   class my_xactor_cbs extends vmm_xactor_callbacks;
      virtual task pre_exec(in_trans tr);
      endtask
      virtual task post_exec(out_trans tr);
      endtask
   endclass

   class my_xactor extends vmm_xactor;
      ...
      task main();
         ...
         forever begin
            ...
            in_chan.get(in_tr);    // Get input transaction
            `vmm_callback(my_xactor_cbs, pre_exec(in_tr));
            ...                    // Execute transaction
            `vmm_callback(my_xactor_cbs, post_exec(out_tr));
            out_chan.put(out_tr);  // Generate output transaction
            ...
         end
      endtask
   endclass

The environment can provide a callback extension to provide DUT-specific functionality that is shared by all tests:

   class my_dut_cbs extends my_xactor_cbs;
      ...
      virtual task post_exec(out_trans tr);
         sb.expect(tr);
      endtask
   endclass

   class tb_env extends vmm_env;
      my_xactor xact;
      ...
      virtual function void build();
         ...
         xact = new(...);
         begin
            my_dut_cbs cb = new(...);
            xact.append_callback(cb);
         end
         ...
      endfunction
      ...
   endclass

And a test can similarly provide additional callback extensions in addition to the DUT-specific extensions already registered in the environment. Each test can extend a transactor differently according to its need:

   class my_test_cbs extends my_xactor_cbs;
      virtual task pre_exec(in_trans tr);
         if ($urandom % 10 < 2) tr.checksum = $random;
      endtask
   endclass

   program test;
   initial begin
      tb_env env = new();
      my_test_cbs err_ext = new(...);
      fork
         #1000 env.xact.append_callback(err_ext);
      join_none
      env.run();
   end
   endprogram

You will find additional guidelines and examples on using and implementing callbacks in the VMM book, pp. 198-201 and pp. 221-225.

Posted in Callbacks, Communication, Structural Components | 6 Comments »
