Verification Martial Arts: A Verification Methodology Blog

Archive for the 'Communication' Category

SNUG-2012 Verification Round Up – Miscellaneous Topics

Posted by paragg on 29th March 2013

In my final installment of the series of blogs summing up the various SNUG verification papers of 2012, I try to cover the user papers on Design IP/Verification IP and on SystemC and SystemVerilog co-simulation. Please find my earlier blogs on the other domains here: SystemVerilog Language, Methodologies & VCS technologies.

DesignWare core USB3.0 Controller (DWC_usb3) can be configured as a USB3.0 Device Controller. When verifying a system that comprises a DWC_usb3 Device Controller, the verification environment is responsible for bringing up the DWC_usb3 Device Controller to its proper operation mode to communicate with the USB3.0 Host. The paper Integrating DesignWare USB3.0 Device Controller In a UVM-based Testbench from Ning Guo of Paradigm Works describes the process of configuring and driving the DWC_usb3 Device Controller in a UVM based verification environment using the Discovery USB 3.0 Verification IP. This paper describes how the verification environment needs to be created so that it’s highly configurable and reusable.

The AMBA 4 ACE specification enables system-level cache coherency across clusters of multicore processors, such as the ARM Cortex-A15 and Cortex-A7 MPCore™ processors. This ensures optimum performance and power efficiency of complex SoC designs. However, the design complexity associated with these capabilities is also higher, and it throws up new verification challenges. In the paper, “Creating AMBA4 ACE Test Environment With Discovery VIP”, Whitney Huang and Sean Chou, MediaTek Inc, demonstrate how to tackle complex verification challenges and increase verification productivity by using the Synopsys Discovery AMBA ACE VIP.

The paper, “Verification Methodology of Dual NIC SOC Using VIPs” by A.V. Anil Kumar, Mrinal Sarmah, Sunita Jain of Xilinx India Technology Services Pvt. Ltd, talks about how various features of the Synopsys PCIe and Ethernet Verification IPs can be exploited for efficient verification of the DUT across various traffic configurations. The paper explores how the VIP Application Programming Interfaces (APIs) can be leveraged in test cases to reach high functional coverage numbers in a very short duration. They also show how a dual NIC verification environment can effectively use Ethernet VIP APIs to test various Media Access Control (MAC) features. Finally, they conclude by showing how the implementation can be reused across future revisions of their design.

The ability to analyze the performance of the SoC at an early stage of the design can make a significant difference to the end product, leading to a more accurate and earlier estimate of the expected performance. Dayananda Yaraganalu Sadashivappa, Igal Mariasin, Jayaprakash Naradasi of SanDisk India Device Design Centre Pvt. Ltd., in the paper “Generic MLM environment for SoC Performance Enhancement”, outline the solution that was found by using the Synopsys VIP models. The VIPs were used in conjunction with the interconnect, which in this case is a Multi-Layer-Matrix (MLM). The environment was built leveraging the VMM base classes. The VMM multiple-stream scenario (vmm_ms_scenario) base class was used to create the traffic across the matrix, and the performance meters were constructed using the base classes. Callbacks were leveraged appropriately to help collate the statistics. Multiple knobs were used to make the environment generic and configurable. The approach helped in finding multiple performance bugs which could not have been easily found using conventional verification.

In the paper, “User Experience Verifying Ethernet IP Core”, Puneet Rattia of Altera Corporation presents his experience verifying the Altera® 40-100Gbps Ethernet IP core using a VMM environment while integrating the Ethernet VIP from Synopsys. He explains how he created a full suite of system- and block-level regression tests and then goes on to show how he utilizes the coverage mapping capabilities of VCS to merge the results across these various testbenches and produce meaningful reports. Besides showing how to reuse the verification infrastructure at the SoC level, the paper also demonstrates how they went in for horizontal reuse by integrating the reference SystemC-based models developed and prototyped in the early phase of the project.

UVM 1.x includes support for the communication interfaces defined by the SystemC TLM-2.0 standard. This enables integration of SystemC TLM-2.0 IP into a SystemVerilog UVM verification environment. Dr David Long, John Aynsley and Doug Smith of Doulos, in the paper “A Beginner’s Guide to Using SystemC TLM-2.0 IP with UVM”, describe how this is best done. They point out that the connection between SystemC and SystemVerilog currently requires a tool-specific interface such as the Synopsys Transaction Level Interface (TLI). The paper begins with a brief overview of TLM-2.0 aimed at novice users. It then discusses the steps required to add a SystemC TLM-2.0 model into a SystemVerilog UVM environment and simulate it with VCS. At each step, issues that users will face are explored and suggestions made for practical fixes, showing the relevant pieces of code. Finally, the paper gives a summary of areas where the UVM implementation of TLM-2.0 differs from the SystemC standard and proposes workarounds to ensure correct communication between the SystemVerilog and SystemC domains.

There is an inherent need to enable the horizontal reuse of components created during the architecture and exploration stage. Subhra S Bandyopadhyay and Pavan N M, Intel Technology India Pvt. Ltd, in “Integrating SystemC OSCI TLM 2.0 Models to OVM based System Verilog Verification Environments”, talk about how their architecture team creates SystemC models for early performance analysis and accelerated software development. In the OVM-based verification environment, the objective was to reuse this model as a reference model and thus reduce the overall environment bring-up time. The challenge was not only to integrate the SystemC model in the OVM-based verification environment but also to be able to efficiently send transactions from SV to SystemC and vice versa. This paper explores the successful integration of SystemC TLM2 components in OVM-based verification environments and also highlights how the VCS TLI (Transaction Level Interface) adapters help TLM2.0 sockets in SystemC communicate with those in SV and vice versa.

Truly, I feel overwhelmed by the number of papers and the interesting use of technology across a variety of domains in which users share their experiences across the various SNUG conferences. As we speak, the SNUG events for 2013 have started, and the stage is all set for a new set of very informative and interesting sessions. I am sure most of you will be attending the SNUG conferences in your area. You can find the detailed schedule of those here.

Posted in Announcements, Automation, Callbacks, Coding Style, Communication, Reuse, Structural Components, SystemC/C/C++, SystemVerilog, Transaction Level Modeling (TLM), Tutorial, UVM, VMM | Comments Off

VCS Built-in TLI connectivity for UVM to SystemC TLM 2.0

Posted by vikasg on 20th September 2012

Vikas Grover | Sr. Manager, Central Verification | AMD-India

One of the challenges faced in SOC verification is to validate designs in mixed languages and at mixed abstraction levels. SystemC is a widely used language to define the system model at a higher level of abstraction. SystemC is an IEEE standard language for system-level modeling and is rich with constructs for describing models at various levels of abstraction, i.e. untimed, timed, transaction level, cycle accurate, and RTL. A transaction-level model simulates much faster than an RTL model; in addition, OSCI defined the TLM 2.0 interface standard for SystemC, which enables SystemC model interoperability and reuse at the transaction level.

On the other side, SystemVerilog is a unified language for design and verification. It is effective for designing advanced testbenches for both RTL and transaction-level models, since it has features like constrained randomization for stimulus generation, functional coverage, assertions, and object-oriented constructs (classes, inheritance, etc.). Early availability of standard methodologies (providing frameworks and testbench coding guidelines for reuse) like VMM, OVM and UVM enabled wide adoption of SystemVerilog in industry. The UVM 1.0 Base Class Library, released in February 2011, includes the OSCI TLM 2.0 socket interface to enable interoperability of UVM with SystemC. Essentially it allows a UVM testbench to include SystemC TLM 2.0 reference models. The UVM testbench can pass transactions to (or receive transactions from) SystemC models. The transaction passed across the SystemVerilog ↔ SystemC boundary could be a TLM 2.0 generic payload or a uvm_sequence_item. The implementation of UVM to SC TLM 2.0 communication is vendor dependent.

Starting with the 2011.03 release, VCS provides a new TLI adaptor which enables UVM TLM 2.0 sockets to communicate with a SC TLM 2.0 based environment to pass transactions across language domains. You can also check out a couple of earlier posts from John Aynsley (VMM-to-SystemC Communication Using the TLI and Blocking and Non-blocking Communication Using the TLI) on SV-SystemC communication using TLI. In this blog, I am going to describe the VCS TLI connectivity mechanism between UVM and SystemC. There are other advanced TLI features in VCS (like direct access of data, invoking tasks/functions across the SV and SC languages, message unification across UVM-SC, transaction debug techniques, and extending the TLI adaptor for user-defined interfaces other than VMM/UVM/TLM2.0) which can be written about later.

With the support for TLM2.0 interfaces in both UVM and VMM, the importance of OSCI TLM2.0 across both SystemC and SystemVerilog is now apparent. UVM provides the following TLM2.0 socket interfaces (for both blocking and non-blocking communication):

  • uvm_tlm_b_initiator_socket
  • uvm_tlm_b_target_socket
  • uvm_tlm_nb_initiator_socket
  • uvm_tlm_nb_target_socket
  • uvm_analysis_port
  • uvm_subscriber

SystemC TLM2.0 provides the following interfaces:

  • tlm_initiator_socket
  • tlm_target_socket
  • tlm_analysis_port

The Built-in TLI adaptor solution for VCS is a general-purpose solution to simplify transaction passing across UVM and SystemC, as shown below. The transactions can be TLM 2.0 generic payload or uvm_sequence_item objects. UVM 1.0 includes the TLM 2.0 generic payload class as well.

The Built-in TLI adaptor is available as a pre-compiled library with VCS. The user would need to follow two simple steps to include the TLI adaptor in his/her verification environment.

  1. Include a header file in System Verilog and SystemC code. The System Verilog header file provides a package which implements the bind function parameterized on uvm_sequence_item object.
  2. Invoke the bind function on System Verilog and SystemC side to connect each socket across language.  The bind function has a string argument which must be unique for each socket connection across System Verilog and SystemC.

The code snippet for the above steps is shown below. The UVM initiator “initiator_udf” on the SystemVerilog side drives the SystemC target “target_udf” using the TLM blocking socket.
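Since the original snippet appeared as an image, here is a rough sketch of the two steps. The header file name, the bind function name, and its argument list are illustrative assumptions only; consult the VCS TLI documentation for the exact API.

```systemverilog
// Step 1: include the TLI header (file name is an assumption); it
// provides a package with a bind function parameterized on the
// uvm_sequence_item object
`include "tli_sv_bindings.sv"

module top;
  import uvm_pkg::*;

  initial begin
    // Step 2: bind the UVM initiator socket to the adaptor. The string
    // "str_udf_pkt" must be unique and must match the string used in
    // the corresponding bind call on the SystemC side.
    // (env_h: handle to the UVM env containing initiator_udf - sketch)
    tli_tlm_bind(env_h.initiator_udf.sock, "str_udf_pkt");
    run_test();
  end
endmodule

// On the SystemC side (sketch, in C++):
//   #include "tli_sc_bindings.h"
//   tli_tlm_bind(target_udf.sock, "str_udf_pkt");
```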

The TLI adaptor bind function uses the unique string “str_udf_pkt” to identify the socket connectivity across SystemVerilog and SystemC domain.  For multiple sockets, the user needs to invoke the TLI bind function once for each socket. The TLI adaptor supports both blocking and non-blocking transport interfaces for sockets to communicate across System Verilog and SystemC.

Thus, the Built-in UVM-SC TLI adaptor capability of VCS ensures that SystemC can be connected seamlessly in UVM based verification environment.

Posted in Communication, Interoperability, SystemC/C/C++, Tools & 3rd Party interfaces, Transaction Level Modeling (TLM) | 1 Comment »

Using the VMM Datastream Scoreboard in a UVM environment

Posted by Amit Sharma on 2nd February 2012

Implementing the response checking mechanism in a self-checking environment remains the most time-consuming task. The VMM Data Stream Scoreboard package facilitates the implementation of verifying the correct transformation, destination and ordering of ordered data streams. This package is intuitively applicable to packet-oriented designs, such as modems, routers and protocol interfaces. It can also be used to verify any design transforming and moving sequences of data items, such as DSP data paths and floating-point units. Out of the box, the VMM data stream scoreboard can be used to verify single-stream designs that do not modify the data flowing through them. For example, it can be used to verify FIFOs, Ethernet media access controllers (MACs) and bridges.

The VMM data scoreboard can also be used to verify multi-stream designs with user-defined data transformation and input-to-output stream routing. The transformation from input data items into expected data items is not limited to one-to-one transformation. An input data item may be transformed into multiple expected data items (e.g. segmenters) or none (e.g. reassemblers). Compared to this, the functionality available through the UVM in-order comparator or the algorithmic comparator is significantly less. Thus, users might want to have access to the functionality provided by the VMM DS Scoreboard in a UVM environment. Using the UBUS example available in $VCS_HOME/doc/examples/uvm/integrated/ubus as a demo vehicle, this article shows how simple adapters are used to integrate the VMM DS scoreboard in a UVM environment and thus get access to more advanced scoreboarding functionality within the UVM environment.

The UBUS example uses an example scoreboard to verify that the slave agent is operating as a simple memory. It extends from the uvm_scoreboard class and implements a memory_verify() function to make the appropriate calls and comparisons needed to verify a memory operation. A uvm_analysis_export is explicitly created and an implementation for ‘write’ defined. In the top-level environment, the analysis export is connected to the analysis port of the slave monitor.

ubus0.slaves[0].monitor.item_collected_port.connect(scoreboard0.item_collected_export);

The simple scoreboard with its explicit implementation of the comparison routines suffices for verifying the basic operations, but would need to be enhanced significantly to provide the more detailed information a user might need. For example, let’s take the ‘test_2m_4s’ test. Here, the environment is configured to have 2 masters and 4 slaves. Depending on how the slave memory map is configured, different slaves respond to different transfers on the bus. Now, if we want some information on how many transfers went into the scoreboard for a specific combination (e.g. Master 1 to Slave 3), how many were verified to be processed correctly, etc., it is fair to conclude that the existing scoreboarding scheme will not suffice.

Hence, it was felt that the Data Stream Scoreboard, with its advanced functionality and support for data transformation, data reordering, data loss, and multi-stream data routing, should be available for verification environments not necessarily based on VMM. From VCS 2011.12-1, this integration has been made very simple. The VMM DS scoreboard implements a generic data stream scoreboard that accepts parameters for the input and output packet types. A single instance of this class is used to check the proper transformation, multiplexing and ordering of multiple data streams. The scoreboard class now leverages a policy-based design and parameterized specializations to accept any ‘Packet’ class, be it VMM, UVM or OVM.

The central element in policy-based design is a class template (called the host class, which in this case is the VMM DS Scoreboard) taking several type parameters as input. These are specialized with types selected by the user (called policy classes), each implementing a particular implicit method (called a policy) and encapsulating some orthogonal (or mostly orthogonal) aspect of the behavior of the instantiated host class. In this case, the ‘policies’ implemented by the policy classes are the ‘compare’ and ‘display’ routines.

By supplying a host class combined with a set of different, canned implementations for each policy, the VMM DS scoreboard can support all different behavior combinations, resolved at compile time, and selected by mixing and matching the different supplied policy classes in the instantiation of the host class template. Additionally, by writing a custom implementation of a given policy, a policy-based library can be used in situations requiring behaviors unforeseen by the library implementor.

So, let’s go through a set of simple steps to see how you can use the VMM DS scoreboard in the UVM environment.

Step 1: Creating the policy class for UVM and define its ‘policies’

image
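The policy class from Step 1 was shown as an image; the sketch below captures its likely shape, assuming the two ‘policies’ are static ‘compare’ and ‘display’ methods invoked by vmm_sb_ds_typed on the parameterized transaction type (the exact method names and signatures come from the VMM DS scoreboard documentation).

```systemverilog
// Sketch of a policy class for UVM transactions: delegate the
// 'compare' and 'display' policies to UVM's built-in methods
class uvm_object_policy;
  static function bit compare(uvm_object actual, uvm_object expected);
    return actual.compare(expected); // field-by-field UVM compare
  endfunction

  static function string display(uvm_object obj, string prefix = "");
    return {prefix, obj.sprint()};   // UVM's built-in printing
  endfunction
endclass
```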

Step 2: Replacing the UVM scoreboard with a VMM one extended from “vmm_sb_ds_typed” and specializing it with the ubus_transfer type and the previously created uvm_object_policy.

class ubus_example_scoreboard extends vmm_sb_ds_typed #(ubus_transfer,ubus_transfer, uvm_object_policy);

`vmm_typename(ubus_example_scoreboard)

endclass: ubus_example_scoreboard

Once this is done, you can either declare a VMM TLM analysis export to connect to the bus monitor in the UBUS environment or use the pre-defined one in the VMM DS scoreboard:

vmm_tlm_analysis_export #(ubus_example_scoreboard,ubus_transfer) analysis_exp;

Given that for any configuration one master and one slave would be active, define the appropriate streams in the constructor (though this is not required if there are only single streams, we define this explicitly so that it can scale up to multiple input and expect streams for different tests):

this.define_stream(0, "Slave 0", EXPECT);
this.define_stream(0, "Master 0", INPUT);

Step 2 .a: Create the ‘write’ implementation for the Analysis export

Since we are verifying the operation of the slave as a simple memory, we just add the appropriate logic to insert a packet into the scoreboard on a ‘WRITE’ and an expect/check when the transfer is a ‘READ’ with an address that has already been written to.

image
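The ‘write’ implementation in Step 2.a was shown as an image; a minimal sketch of the logic it describes might look like this (insert() and expect_in_order() are standard vmm_sb_ds calls, while the field names are taken from the UBUS example):

```systemverilog
// WRITE: queue the transfer as expected data; READ: check the
// observed transfer against what was previously written
function void write(int id, ubus_transfer trans);
  if (trans.read_write == WRITE)
    this.insert(trans);
  else if (trans.read_write == READ)
    void'(this.expect_in_order(trans));
endfunction
```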

Step 2.b: Implement the stream_id() method

You can use this method to determine which stream a specific ‘transfer’ belongs to based on the packet’s content, such as a source or destination address. In this case, the bus monitor updates the ‘slave’ property of the collected transfer according to where the address falls in the slave memory map.

image

image
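The stream_id() implementation itself appeared as an image; an illustrative sketch, with the signature assumed, could be:

```systemverilog
// Route each transfer to a stream based on the 'slave' property
// that the bus monitor filled in from the slave memory map
virtual function int stream_id(vmm_data xact, input bit is_input);
  ubus_transfer tr;
  if ($cast(tr, xact))
    return tr.slave;  // one stream per slave
  return 0;
endfunction
```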

Step 3: Create the UVM Analysis to VMM Analysis Adapter

The uvm_analysis_to_vmm_analysis is used to connect any UVM component with an analysis port to any VMM component via an analysis export. The adapter will convert all incoming UVM transactions to a VMM transaction and drive this converted transaction to the VMM component through the analysis port-export. If you are using the VMM-UVM interoperability library, you do not have to create the adapter as it will be available in the library.

image

image

Create the ‘write’ implementation for the analysis export in the adapter

The write method, called via the analysis export, simply posts the received UBUS transfer from the UVM analysis port to the VMM analysis port.

image
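The adapter and its ‘write’ method were shown as images; a minimal self-contained sketch of such an adapter (names illustrative; the VMM-UVM interoperability library ships a ready-made equivalent) is:

```systemverilog
// UVM-to-VMM analysis adapter: receives UBUS transfers on a UVM
// analysis export and forwards them out of a VMM analysis port
class uvm_analysis_to_vmm_analysis extends uvm_component;
  uvm_analysis_imp #(ubus_transfer, uvm_analysis_to_vmm_analysis) analysis_export;
  vmm_tlm_analysis_port #(uvm_analysis_to_vmm_analysis, ubus_transfer) analysis_port;

  function new(string name, uvm_component parent);
    super.new(name, parent);
    analysis_export = new("analysis_export", this);
    analysis_port   = new(this, "analysis_port");
  endfunction

  // 'write' simply posts the received transfer to the VMM side
  function void write(ubus_transfer t);
    analysis_port.write(t);
  endfunction
endclass
```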

Step 4: Make the TLM connections

In the original example, the item_collected_port of the slave monitor was connected to the analysis export of the example scoreboard. Here, the DataStream scoreboard has an analysis export which expects a VMM transaction. Hence, we need the adapter created above to mediate between the analysis port of the UVM bus monitor and the analysis export of the VMM DS scoreboard.

image
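In code, the connections of Step 4 might look as follows (the adapter and scoreboard instance names are illustrative):

```systemverilog
// UVM monitor -> adapter (UVM analysis connect), then
// adapter -> VMM DS scoreboard (VMM tlm_bind)
ubus0.slaves[0].monitor.item_collected_port.connect(adapter0.analysis_export);
adapter0.analysis_port.tlm_bind(scoreboard0.analysis_exp);
```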

Step 5: Define Additional streams if required for multi-master multi-slave configurations

This step is not required for a single-master/slave configuration. However, you would need to create additional streams so that you can verify correctness across all the different permutations in tests like “test_2m_4s”.

In this case, the following is added in test_2m_4s in the connect_phase():

image
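As a sketch, the extra stream definitions for a 2-master/4-slave configuration could look like this (one INPUT stream per master and one EXPECT stream per slave; the stream numbering is illustrative):

```systemverilog
// Define streams so per-master/per-slave statistics can be tracked
for (int i = 0; i < 2; i++)
  scoreboard0.define_stream(i, $sformatf("Master %0d", i), INPUT);
for (int j = 0; j < 4; j++)
  scoreboard0.define_stream(j, $sformatf("Slave %0d", j), EXPECT);
```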

Step 6: Add appropriate options to your compile command and analyze your results

Change the Makefile by adding -ntb_opts uvm-1.1+rvm on the command line and add +define+UVM_ON_TOP:

vcs -sverilog -timescale=1ns/1ns -ntb_opts uvm-1.1+rvm +incdir+../sv ubus_tb_top.sv -l comp.log +define+UVM_ON_TOP

And that is all; you are ready to go and validate your DUT with a more advanced scoreboard with loads of built-in functionality. This is what you will get when you execute the “test_2m_4s” test:

Thus, not only do you have stream-specific information now, but you also have access to much more functionality as mentioned earlier. For example, you can model transformations, check for out-of-order matches, allow for dropped packets, and iterate over different streams to get access to the specific transfers. Again, depending on your requirements, you can use the simple UVM comparator for your basic checks and switch over to the DS scoreboard for the more complex scenarios with the flip of a switch in the same setup. This is what we did for a UVM PCIe VIP we developed earlier (From the Magician’s Hat: Developing a Multi-methodology PCIe Gen2 VIP) so that users have access to all the information they require. Hopefully, this will keep you going till we have a more powerful UVM scoreboard in some subsequent UVM version.

Posted in Communication, Interoperability, Reuse, Scoreboarding, UVM, VMM infrastructure | 2 Comments »

Extending Hierarchical Options in VMM to work with all data types

Posted by Amit Sharma on 2nd September 2011

Abhisek Verma, CAE, Synopsys

Tyler Bennet, Senior Application Consultant, Synopsys

Traditionally, to pass a custom data type like a struct or a virtual interface using vmm_opts, it is recommended to wrap it in a class and then use set_object/get_obj/get_object_obj on the same. This approach has been explained in another blog here. But wouldn’t you prefer to have the same usage for these data types as the simple use model you have for integers, strings and objects? This blog describes how to create a simple helper package around vmm_opts that uses parameterization to pass user-defined types. It will work with any user-defined type that can be assigned with a simple “=”, including virtual interfaces.

Such a package can be created as follows:-

STEP1:: Create the parameterized wrapper class inside the package

image

The above vmm_opts_p class is used to encapsulate any custom data type which it takes as a parameter “t”.

STEP2:: Define the ‘get’ methods inside the package.

Analogous to vmm_opts::get_obj()/get_object_obj(), we define get_type and get_object_type. These static functions allow the user to get an option of a non-standard type. The only restriction is that the datatype must work with the assignment operator. Also note that since this uses vmm_opts::get_obj, these options cannot be set via the command-line or options file.

image

STEP3:: Define the ‘set’ methods inside the package.

Similarly, analogous to vmm_opts::set_object(), the custom package needs to declare set_type. This static function allows the user to set an option of a non-standard type.

image
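Pulling the three steps together, the helper package might look like the sketch below. The get/set signatures are modeled on the vmm_opts::get_object_obj()/set_object() calls used in the text; treat the exact argument lists as assumptions.

```systemverilog
// Step 1: parameterized wrapper encapsulating any custom type "t"
class vmm_opts_p #(type t = int) extends vmm_object;
  t val;

  function new(string name = "vmm_opts_p", vmm_object parent = null);
    super.new(parent, name);
  endfunction

  // Step 2: analogous to vmm_opts::get_object_obj(); works for any
  // type that supports the assignment operator
  static function t get_object_type(output bit is_set, input vmm_object caller,
                                    input string name, input t dflt,
                                    input string doc = "", input bit verbose = 0);
    vmm_opts_p #(t) w;
    if ($cast(w, vmm_opts::get_object_obj(is_set, caller, name)) && w != null)
      return w.val;
    return dflt;
  endfunction

  // Step 3: analogous to vmm_opts::set_object(); wraps the value and
  // stores the wrapper object
  static function void set_type(string name, t value, vmm_object root = null);
    vmm_opts_p #(t) w = new(name);
    w.val = value;
    vmm_opts::set_object(name, w, root);
  endfunction
endclass
```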

USE-MODEL

The above package can be imported and used to set/get virtual interfaces as follows :-

vmm_opts_p#(virtual dut_if)::set_type("@BAR", top.intf, null); //to set the virtual interface of type dut_if

tb_intf = vmm_opts_p#(virtual dut_if)::get_object_type(is_set, this, "BAR", null, "SET testbench interface", 0); //to get the virtual interface of type dut_if, set by the above operation

The following template example shows the usage of the package in complete detail in the context of passing virtual interfaces

1. Define the interface, Your DUT

image

2. Instantiate the DUT, Interface and make the connections

image

3.  Leverage the Hierarchical options and the package in your Testbench

image

So, there you go. Now, whether you are using your own user-defined types, structs, or queues, you can go ahead and use this package and thus have your TB components communicate and pass data structures elegantly and efficiently.

Posted in Communication, Configuration, Customization, Organization | Comments Off

Building & Configuring Coverage Model – VMM Style – Part-II

Posted by paragg on 25th June 2011

Parag Goel, Senior Corporate Application Engineer, Synopsys

In the previous post, we looked at how you can enable/disable different types of coverage encapsulated in the Coverage Model wrapper class. In this post, let’s look at how we can easily create an infrastructure to pass different inputs to the wrapper class so as to be able to configure the coverage collection based on user requirements. The infrastructure ensures that these values percolate down to the sub-coverage model groups.

The following are some of the key inputs that need to be passed to the different coverage component classes:

1. SV Virtual Interfaces so that different signal activity can be accessed

2. The Transactions observed and collected by the physical level monitors

3. The ‘Configuration’ information

Picture7

Let’s look at how we can easily pass the signal-level information to the Coverage Model.

Step I: Encapsulation of the interface in the class wrapper.

class intf_wrapper extends vmm_object;

virtual axi_if v_if ;

function new (string name, virtual axi_if mst_if);
super.new(null, name);
this.v_if = mst_if;
endfunction

endclass: intf_wrapper

Step II: In the top class/environment- Set this object using vmm_opts API.

class axi_env extends vmm_env;
`vmm_typename(axi_env)
intf_wrapper mc_intf;

function void build_ph();
mc_intf = new("Master_Port", tb_top.master_if_p0);
// Set the master port interface
vmm_opts::set_object("VIP_MSTR:vip_mstr_port", mc_intf, env);
endfunction:build_ph
endclass: axi_env

Step III: Connecting in the coverage class.

A. Get the object containing interface in the coverage model class using vmm_opts.

assert($cast(this.mst_port_obj, vmm_opts::get_object_obj(is_set, this, "vip_mstr_port")));

B. Connecting local virtual interface to one contained in the object.

this.cov_vif = mst_port_obj.v_if;

Now, the collected transaction object from the monitor needs to be passed to the coverage collector. This can be conveniently done in VMM using TLM communication, through the vmm_tlm_analysis_port, which establishes the communication between a subscriber and an observer.

class axi_transfer extends vmm_data;

. . .

class axi_bus_monitor  extends  vmm_xactor;

vmm_tlm_analysis_port#(axi_bus_monitor, axi_transfer)  m_ap;
task collect_trans();

//Writing to the analysis port.

m_ap.write(trans);
endtask
endclass

class axi_coverage_model extends vmm_object;
vmm_tlm_analysis_export #( axi_coverage_model, axi_transfer) m_export;

function new (string inst, vmm_object parent = null);
m_export = new(this, "m_export");

endfunction

function void write(int id, axi_transfer trans);

//Sample the appropriate covergroups once the transaction
//is received in the write function.

endfunction

endclass

To set up the TLM Connections in the agent/environment, we need to do the following:

class axi_subenv extends vmm_group;

//Instantiate the model classes and creates them.

axi_bus_monitor mon;

axi_coverage_model cov;

. . .

virtual function void build_ph;
mon = new("mon", this);
cov = new("cov", this);
endfunction
virtual function void connect_ph;

//Bind the TLM ports via VMM – tlm_bind

mon.m_ap.tlm_bind( cov.m_export );

endfunction
endclass

To make the Coverage Model truly configurable, we need to look at some of the other key requirements as well, at different levels of granularity. These can be summarized as the ability to do the following.

1. Enable/disable coverage collection for each covergroup defined. Every covergroup should be created only if the user wishes to do so, so there should be a configuration parameter which restricts the creation of the covergroup altogether. This parameter should also be used to control the sampling of the covergroup.

2. The user must be able to configure the limits on the individual values being covered in the coverage model within a legal set of values. Take, for example, the transaction field BurstLength: the user should be able to tell the model the limits on this field for which coverage is desired, within the legal set of values ranging from ‘1’ to ‘16’ as per the AXI spec. Providing lower and upper limits for transaction parameters like burst size, burst length, address, etc. makes the model re-usable. These limits should be modeled as variables which can be overwritten dynamically.

3. The user should be able to control the number of bins to be created, for example in fields like address. The auto_bin_max option can be exploited to achieve this in case the user doesn’t have explicitly defined bins.

4. The user must be able to control the number of hits for which a bin can be considered covered. option.at_least can be used for this purpose, and its input can be a user-defined parameter.

5. The user should also have control to specify the coverage goal, i.e. when the coverage collector should report the covergroup as “covered” even though the coverage is not 100%. This can be achieved by using option.goal, where the goal is again a user-defined parameter.
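The five requirements above can be sketched in a covergroup as follows; every cfg.* member is a hypothetical field of a coverage configuration class, and the transaction fields are illustrative.

```systemverilog
covergroup axi_trans_cg;
  option.per_instance = 1;
  option.at_least = cfg.at_least_cnt;  // req. 4: hits before a bin counts
  option.goal     = cfg.cov_goal;      // req. 5: user-defined coverage goal

  burst_len_cp: coverpoint tr.burst_length {
    // req. 2: user-configurable limits within the legal 1..16 range
    bins legal[] = {[cfg.min_burst_len : cfg.max_burst_len]};
  }

  addr_cp: coverpoint tr.addr {
    option.auto_bin_max = cfg.max_addr_bins;  // req. 3: bin count control
  }
endgroup

// req. 1: in the constructor, create (and hence sample) the
// covergroup only when the user has enabled it:
//   if (cfg.enable_trans_cg) axi_trans_cg = new();
```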

All the parameters required to meet the above requirements can be encapsulated in a class (i.e. a coverage configuration class), which can be set and retrieved in a fashion similar to that described for setting and getting the interface wrapper class, using the vmm_opts APIs:

class coverage_cfg extends vmm_object;
  int disable_wr_burst_len;
   . . .
  function new(vmm_object parent=null, string name);
     super.new(parent, name);
  endfunction
endclass

// In the coverage model class, retrieve the configuration object:
coverage_cfg cfg;
function new(vmm_object parent=null, string name);
   bit is_set;
   super.new(parent, name);
   $cast(cfg, vmm_opts::get_object_obj(is_set, this, "COV_CFG_OBJ"));
endfunction

Wei Hua presents another cool mechanism of collecting these parameters using the vmm_notify mechanism in this earlier blog:

A Generic Functional Coverage Solution Based On vmm_notify

Hope you found this useful. I will be talking about how to track Error Coverage in my next blog, so stay tuned!

Posted in Communication, Configuration, Coverage, Metrics, Reuse, Structural Components, VMM, VMM infrastructure | Comments Off

Communication Options in VMM 1.2

Posted by John Aynsley on 30th November 2010

 

John Aynsley, CTO, Doulos

Having given an example of transaction-level communication in VMM in my previous post, I am now going to say a little about each of the communication options available in VMM 1.2. The VMM user is presented with an array of communication options from which to choose, but which is which, and which is best?

Blocking Transport

Blocking transport is used to send a single payload representing a transaction from an initiator to a target where very little timing information is required. Each transaction can have a begin time and an end time, but there is no more structure or detail to the lifetime of the transaction than that. It is possible to annotate timing information onto the b_transport call to offset the begin and end times of the transaction from the actual times at which the function is called and returns, but not to add further timing points or events during the lifetime of the transaction. The b_transport function is called with a transaction object as an argument, and returns only when the processing of that transaction at the target is complete. Blocking transport is the cleanest and simplest way to send a payload from A to B.
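As a sketch, a VMM 1.2 initiator using blocking transport might look like this (class and port names follow the vmm_tlm conventions; the exact b_transport argument list is an assumption):

```systemverilog
class my_initiator extends vmm_xactor;
  // blocking transport port, parameterized on the parent and payload
  vmm_tlm_b_transport_port #(my_initiator, my_tx) sock;

  virtual task main();
    my_tx tx = new;
    int delay = 0;
    // returns only when the target has finished processing the
    // transaction; 'delay' can annotate timing offsets
    sock.b_transport(tx, delay);
  endtask
endclass
```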

Non-blocking Transport

Like blocking transport, non-blocking transport is used to send a single payload representing a transaction from an initiator to a target. Unlike blocking transport, non-blocking transport allows significant events within the lifetime of the transaction to be modeled to whatever degree of timing granularity you like, and also allows multiple pipelined transactions to be in-flight at the same time. This is achieved by allowing multiple calls to nb_transport_fw and nb_transport_bw, in the forward and backward directions respectively, associated with a single transaction. The ability to model multiple timing points and pipelining comes at a cost in terms of the complexity of the interface, however. In the context of VMM, non-blocking transport is only of interest if you need to communicate with an existing SystemC TLM-2.0 model that uses this same interface.

Analysis Interface

The analysis interface is the third import to VMM from the TLM-2.0 standard. The analysis interface is used to broadcast a single payload to passive subscribers (aka observers or listeners). The defining characteristics of the analysis interface are firstly that there can be any number of subscribers to a single call and secondly that the subscribers are not permitted to modify the transaction object. Hence the analysis interface is ideal for distributing transactions to passive components such as checkers, scoreboards, and coverage collectors within a verification environment. The analysis interface is non-blocking and does not have any associated timing or status information.
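A subscriber can be sketched as follows (names such as my_cov and my_tx are illustrative; the write method is the single function the analysis interface broadcasts to each subscriber):

```systemverilog
// Hypothetical subscriber sketch: a coverage collector receiving
// broadcast transactions through an analysis export.
class my_cov extends vmm_xactor;

  vmm_tlm_analysis_export #(my_cov, my_tx) m_export;

  function new(string inst, vmm_object parent = null);
    super.new(get_typename(), inst, -1, parent);
    m_export = new(this, "m_export");
  endfunction

  // Called once for every transaction broadcast by the publisher.
  // The subscriber must not modify the transaction object.
  function void write(int id = -1, my_tx trans);
    // ... sample functional coverage on trans ...
  endfunction

endclass
```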

Channels

Channels are the classic way to send payloads around a VMM verification environment. Unlike the three transaction-level interfaces described above, a channel is more than a function call. A channel is a FIFO buffer that can store multiple outstanding transactions. A transaction can be accessed by both producer and consumer transactors while it remains in the so-called active slot of the channel, and it is possible to model multiple timing points at that stage by having VMM notifications associated with the channel or with the transaction object. Channels are still the preferred mechanism used to model the timing and synchronization details of specific protocols within VMM, while blocking transport is preferred where a more abstract model is adequate.
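A minimal sketch of channel-based communication, assuming a channel class my_tx_channel generated with the `vmm_channel(my_tx) macro (names are illustrative):

```systemverilog
// Producer and consumer sharing a vmm_channel FIFO.
my_tx_channel chan = new("my_tx_channel", "chan");

// Producer side: put() blocks while the channel is full.
my_tx tx = new;
chan.put(tx);

// Consumer side: get() blocks until a transaction is available and
// removes it from the channel; peek() would leave it in place.
my_tx rx;
chan.get(rx);
```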

Callbacks

In some respects callbacks play the same role as the analysis interface described above, while in other respects they are fundamentally different. Callbacks are like the analysis interface in the sense that they distribute function calls to any subscriber that has registered itself with the transactor making the calls. They are different in that a callback is allowed to modify transactions passed as arguments to the function call, whereas the analysis interface does not allow transactions to be modified. Also, callbacks are more flexible in the sense that both the callback function names and their argument lists are user-defined, which is not true of the analysis interface. In VMM, callbacks are the preferred mechanism where the behavior of a transactor needs to be modified by tweaking the contents of a transaction.
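A sketch of the callback pattern (the facade and method names my_gen_callbacks and pre_trans follow the example used later in this series, but the details here are illustrative):

```systemverilog
// Hypothetical callback facade. The transactor invokes it via the
// `vmm_callback macro; a test extends the facade and registers it.
class my_gen_callbacks extends vmm_xactor_callbacks;
  // Empty default implementation; tests override as needed.
  virtual task pre_trans(my_gen gen, my_tx tx);
  endtask
endclass

class my_test_cb extends my_gen_callbacks;
  // Modify the transaction before it is sent downstream.
  virtual task pre_trans(my_gen gen, my_tx tx);
    tx.addr = 0; // e.g. force a corner-case address
  endtask
endclass

// Registration, typically done in the test or environment:
//   my_test_cb cb = new;
//   env.m_gen.append_callback(cb);
```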

Notifications

VMM notifications are preferred for data-less synchronization where transactors need to synchronize without passing a payload. The analysis interface is preferred when passing around payloads. VMM notifications are also built into the transaction and channel base classes, so can be used to introduce additional timing points during the lifetime of a transaction when using VMM channels for communication as described above.
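A sketch of data-less synchronization using the notify member built into every vmm_xactor (the transactor name and the DRAINED notification are illustrative):

```systemverilog
// Hypothetical sketch: one transactor indicates a notification,
// another waits for it, with no payload exchanged.
class my_bfm extends vmm_xactor;
  int DRAINED; // user-defined notification id

  function new(string inst, vmm_object parent = null);
    super.new(get_typename(), inst, -1, parent);
    DRAINED = this.notify.configure(); // allocate a notification
  endfunction

  virtual task main();
    super.main();
    // ... when the last transaction has been flushed ...
    this.notify.indicate(DRAINED);
  endtask
endclass

// Elsewhere, another transactor synchronizes without any payload:
//   bfm.notify.wait_for(bfm.DRAINED);
```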

So, those are the six main choices for communication available to you in VMM. You can read more details in the VMM 1.2 Standard Library User Guide.

Posted in Communication, Transaction Level Modeling (TLM) | 1 Comment »

Example of Transaction-Level Communication in VMM 1.2

Posted by John Aynsley on 9th September 2010

John Aynsley, CTO, Doulos

Having introduced many of the technical details of transaction-level communication in VMM 1.2 in previous posts on this blog, we will now look at an example highlighting the various transaction-level communication mechanisms available in VMM 1.2 so that we can get a feel for the roles they perform and how they work together.
Let’s start with a producer that generates a stream of transactions:

 

class  my_gen  extends  vmm_xactor;

  vmm_tlm_b_transport_port #(my_gen, my_tx)  m_port;
  vmm_tlm_analysis_port    #(my_gen, my_tx)  m_ap;
  …

  begin: loop
    assert( randomized_tx.randomize() ) …
    $cast(tx, randomized_tx.copy());

    `vmm_callback(my_gen_callbacks,  pre_trans(this, tx));

    m_port.b_transport(tx, delay);

    `vmm_callback(my_gen_callbacks,  post_trans(this, tx));
    m_ap.write(tx);
  end
  …

The example above makes use of a blocking transport port, an analysis port, and callbacks. After randomizing and copying the next transaction in the stream, the producer offers the transaction to the pre_trans callback, giving a test the chance to modify the transaction before it is sent downstream. The transaction is then sent through the blocking transport port, with the advantage that the simple completion semantics of b_transport makes the producer code very clean. b_transport only returns when the transaction is finished, at which point the transaction is offered to another callback post_trans, which gets the chance to modify the transaction once more. Finally the transaction is sent out through the analysis port for broadcast to passive verification components for checking and coverage collection. The analysis port may be connected to zero, one, or many such passive components.
Now for the consumer that receives the transaction:

 


class  my_bfm  extends  vmm_xactor;
  my_channel  m_chan;
  vmm_tlm_analysis_port  #(my_bfm, my_tx)  m_ap;
  …
  begin: loop
    `vmm_callback(my_bfm_callbacks, pre_trans(this, tx));

    m_chan.activate(tx);
    m_chan.start();
    @(i_f.bus_cb); …
    m_chan.complete();
    m_chan.remove();

    `vmm_callback(my_bfm_callbacks, post_trans(this, tx));
    m_ap.write(tx);
  end


The consumer transactor makes use of callbacks and an analysis port in exactly the same way as the producer, but does not use b_transport. Rather, it receives the incoming transaction using a vmm_channel, which gives more flexibility in interacting with the transaction during its lifetime. VMM recommends the use of vmm_channel when modeling slave-like transactors. The start and complete methods each cause notifications within the transaction, which may be significant for transaction recording or debug. The remove method removes the transaction from the channel, which in this example will have the effect of unblocking the call to b_transport from the producer.
Finally, the producer and consumer are connected together in the environment:

 


class  tb_env  extends  vmm_group;

  virtual function void  build_ph;
    m_tx_chan  = new( "my_channel", "m_tx_chan" );
    m_tx_chan.reconfigure(1);
    m_gen      = new( “m_gen”, this );
    m_bfm      = new( “m_bfm”, this );
  endfunction

  function void  connect_ph();
    vmm_connect #(.D(my_tx))::tlm_bind(m_tx_chan, m_gen.m_port,
                              vmm_tlm::TLM_BLOCKING_EXPORT );
    m_bfm.m_chan = m_tx_chan;
  endfunction
  …


It is important to note that the channel is reconfigured with a full level of exactly 1 so that the blocking transport call will indeed block until the one-and-only transaction is removed from the channel by the consumer. If the full level were greater than 1, b_transport would return immediately and the communication scheme would be broken.
In order to bind the blocking transport port of the producer to the input channel of the consumer, it is necessary to use the tlm_bind method of the vmm_connect utility, as discussed in previous posts.

So, we have seen how the transaction-level ports, analysis ports, and callbacks each have their own role to play in constructing a VMM transactor.

 

 

Posted in Communication, Transaction Level Modeling (TLM) | Comments Off

Controlling transaction generation timing from the driver using ‘PULL’ mode

Posted by Amit Sharma on 30th July 2010

Sadiya Tarannum Ahmed, Senior CAE, Synopsys

In the default flow, the transaction level communication in VMM channels operates in the ‘PUSH’ mode, i.e., the process is initiated by the producer, which randomizes and pushes a transaction into the channel when it is empty. This process is repeated whenever the consumer retrieves the transaction and the channel becomes empty again. However, in specific cases, you might not want the generator to create stimulus before the bus protocol is ready or until it is requested by the bus protocol. In this case, you may want to use the ‘PULL’ mode in VMM channels.

In ‘Pull’ mode, the consumer initiates the process by requesting transactions and then the generator or the producer responds to it by putting the transaction into the channel.

 

[diagram: generator and driver communicating through a vmm_channel in PULL mode]

The following steps show how the communication can operate in “PULL_MODE”.

Step 1: In the generator code, call the vmm_channel::wait_for_request() method before randomizing and putting the transaction into the channel.

By default, vmm_channel::wait_for_request() does not block ("PUSH_MODE"). In "PULL_MODE", it blocks until get(), peek() or activate() is invoked by the consumer.

class cpu_rand_scenario extends vmm_ms_scenario;
   …
   cpu_trans blueprint;
  `vmm_scenario_new(cpu_rand_scenario)
   …
   …
   virtual task execute(ref int n);
        blueprint = cpu_trans::create_instance(this, "blueprint");
       chan.wait_for_request();
        assert(blueprint.randomize())
           else `vmm_fatal(log, "cpu_trans randomization failed");
       chan.put(blueprint.copy());
       n++;
   endtask
  …
`vmm_class_factory(cpu_rand_scenario)
endclass

Step 2: Set the mode of the channels.

By default, all channels are configured to work in "PUSH_MODE"; they can be set to work in "PULL_MODE" either statically or dynamically.

  • Static setting: set the mode of the channel in the testbench environment or in your testcases:

      gen_to_drv_chan.set_mode(vmm_channel::PULL_MODE);

  • Dynamic setting: use the runtime switch +vmm_opts+pull_mode_on

Since the mode can be changed through vmm_opts, hierarchical or instance-based settings for any channel can also be made at runtime. This brings in a lot of flexibility: the same channel can be made to work in different modes for different tests, or even within the same simulation.

The pre-defined atomic and scenario generators now support this feature, which can be enabled either by runtime control or by setting the associated channel mode to PULL_MODE in the environment.

Thus you now have the flexibility to configure your transaction level communication easily based on your requirements.

Posted in Communication, Stimulus Generation | 2 Comments »

Non-blocking Transport Communication in VMM 1.2

Posted by John Aynsley on 24th June 2010

John Aynsley, CTO, Doulos

When discussing the TLM-2.0 transport interfaces my posts on this blog have referred to the blocking transport interface alone. Now it is time to take a brief look at the non-blocking transport interface of the TLM-2.0 standard, which offers the possibility of much greater timing accuracy.

The blocking transport interface is restricted to exactly two so-called timing points per transaction, marked by the call to and the return from the b_transport method, and by convention corresponding to the start of the transaction and the arrival of the response. The non-blocking transport interface, on the other hand, allows a transaction to be modeled with any number of timing points so it becomes possible to distinguish between the start and end of an arbitration phase, address phase, write data phase, read data phase, response phase, and so forth.

[diagram: b_transport completing in a single call vs. nb_transport_fw/nb_transport_bw phase calls in both directions]

As shown in the diagram above, b_transport is only called in one direction from initiator to target, and the entire transaction is completed in a single method call. nb_transport, on the other hand, comes in two flavors: nb_transport_fw, called by the initiator on the so-called forward path, and nb_transport_bw, called by the target on the backward path. Whereas b_transport is blocking, meaning that it may execute a SystemVerilog event control, nb_transport is non-blocking, meaning that it must return control immediately to the caller. A single transaction may be associated with multiple calls to nb_transport in both directions, the actual number of calls (or phases) being determined by the protocol being modeled.

With just one call per transaction, b_transport is the simplest to use. nb_transport allows more timing accuracy, with multiple method calls in both directions per transaction, but is considerably more complicated to use. b_transport is fast, simple, but inaccurate. nb_transport is more accurate, supporting multiple pipelined transactions, but slower and more complicated to use.

In VMM the role of the TLM-2.0 non-blocking transport interface is usually played by vmm_channel, which allows multiple timing points per transaction to be implemented using the notifications embedded within the channel and the vmm_data transaction object. The VMM user guide still recommends vmm_channel for this purpose. nb_transport is provided in VMM for interoperability with SystemC models that use the TLM-2.0 standard.

Let’s take a quick look at a call to nb_transport, just so we can get a feel for some of the complexities of using the non-blocking transport interface:


class initiator extends vmm_xactor;
  `vmm_typename(initiator)
  vmm_tlm_nb_transport_port #(initiator, vmm_tlm_generic_payload) m_nb_port;

  begin: loop
    vmm_tlm_generic_payload tx;
    int                     delay;
    vmm_tlm::phase_e        phase;
    vmm_tlm::sync_e         status;

    phase  = vmm_tlm::BEGIN_REQ;

    status = m_nb_port.nb_transport_fw(tx, phase, delay);
    if (status == vmm_tlm::TLM_UPDATED) begin
      // target updated phase/delay: an extra timing point on return
    end
    else if (status == vmm_tlm::TLM_COMPLETED) begin
      // transaction jumped to its final phase; no further calls follow
    end
  end
From the above, you will notice some immediate differences with b_transport. The call to nb_transport_fw takes a phase argument to distinguish between the various phases of an individual transaction and returns a status flag which signals how the values of the arguments are to be interpreted following the return from the method call. A status value of TLM_ACCEPTED indicates that the transaction, phase, and delay were unchanged by the call, TLM_UPDATED indicates that the return from the method call corresponds to an additional timing point and so the values of the arguments will have changed, and TLM_COMPLETED indicates that the transaction has jumped to its final phase.

Using nb_transport is not recommended except when interfacing to a SystemC model, because the rules are considerably more complicated than those for either b_transport or vmm_channel.

Posted in Communication, Interoperability, Reuse, Transaction Level Modeling (TLM) | Comments Off

Generic Payload Extensions in VMM 1.2

Posted by John Aynsley on 22nd June 2010

John Aynsley, CTO, Doulos

In a previous post I described the TLM-2 generic payload as implemented in VMM 1.2. In this post I focus on the generic payload extension mechanism, which allows any number of user-defined attributes to be added to a generic payload transaction without any need to change its data type.

Like all things TLM-2, the motivation for the TLM-2.0 extension mechanism arose in the world of virtual platform modeling in SystemC. There were two requirements for generic payload extensions: firstly, to enable a transaction to carry secondary attributes (or meta-data) without having to introduce new transaction types, and secondly, to allow specific protocols to be modeled using the generic payload. In the first case, introducing new transaction types would have required the insertion of adapter components between sockets of different types, whereas extensions permit meta-data to be transported through components written to deal with the generic payload alone. In the second case, extensions enable specific protocols to be modeled on top of the generic payload, which makes it possible to create very fast, efficient bridges between different protocols.

Let us have a look at an example that adds a timestamp to a VMM generic payload transaction using the extension mechanism. The first task is to define a new class to represent the user-defined extension:

class my_extension extends vmm_tlm_extension #(my_extension);

int timestamp;

`vmm_data_new(my_extension)
`vmm_data_member_begin(my_extension)
`vmm_data_member_scalar(timestamp, DO_ALL)
`vmm_data_member_end(my_extension)

endclass


The user-defined extension class extends vmm_tlm_extension, which should be parameterized with the name of the user-defined extension class itself, as shown. The extension can contain any number of user-defined class properties; this example contains just one, the timestamp.

The initiator of the transaction will create a new extension object, set the value of the extension, and add the extension to the transaction before sending the transaction out through a port:


class initiator extends vmm_xactor;
`vmm_typename(initiator)

vmm_tlm_b_transport_port #(initiator, vmm_tlm_generic_payload) m_port;

vmm_tlm_generic_payload randomized_tx;

begin: loop
my_extension ext = new;

$cast(tx, randomized_tx.copy());
ext.timestamp = $time;
tx.set_extension(my_extension::ID, ext);
m_port.b_transport(tx, delay);


Note the use of the extension ID in the call to the method set_extension: each extension class has its own unique ID, which is used as an index into an array-of-extensions within the generic payload transaction.

Any component that receives the transaction can test for the presence of a given extension type and then retrieve the extension object, as shown here:


class target extends vmm_xactor;
`vmm_typename(target)

vmm_tlm_b_transport_export #(target, vmm_tlm_generic_payload) m_export;

task b_transport(int id = -1, vmm_tlm_generic_payload trans, ref int delay);

my_extension ext;

$cast(ext, trans.get_extension(my_extension::ID));

if (ext != null)
  $display("Target received transaction with timestamp = %0d", ext.timestamp);


Note that once again the extension type is identified by using its ID in the call to method get_extension. If the given extension object does not exist, get_extension will return a null object handle. If the extension is present, the target can retrieve the value of timestamp and, in this example, print it out.

The neat thing about the extension mechanism is that a transaction can carry extensions of many types simultaneously, and those transactions can be passed to or through transactors that may not know of the existence of particular extensions.

[diagram: an extension set by the initiator carried unmodified through an interconnect, which adds a second extension of its own]

In the diagram above, the initiator sets an extension that is passed through an interconnect, where the interconnect knows nothing of that extension. The interconnect adds a second extension to the transaction that is only known to the interconnect itself and is ignored by the other transactors.

And the point of all this? The generic payload extension mechanism in VMM will permit transactions to be passed to SystemC virtual platform models, where the TLM-2.0 extension mechanism is heavily used.

Posted in Communication, Interoperability, Reuse, Transaction Level Modeling (TLM) | Comments Off

Diagnosing Transaction-Level Connections in VMM 1.2

Posted by John Aynsley on 7th June 2010

John Aynsley, CTO, Doulos

In my previous post on this blog I discussed hierarchical transaction-level connections in VMM 1.2. In this post I show how to remove connections, and also discuss the various diagnostic methods that can help when debugging connection issues.

Actually, removing transaction-level connections is very straightforward. Let’s start with a consumer having an export that permits multiple bindings:

class consumer extends vmm_xactor;

vmm_tlm_b_transport_export #(consumer, vmm_tlm_generic_payload) m_export;

function new (string inst, vmm_object parent = null);
super.new(get_typename(), inst, -1, parent);
m_export = new(this, "m_export", 4);
endfunction: new

Passing the value 4 as the third argument to the constructor permits up to four different ports to be bound to this one export. These ports can be distinguished using their peer id, as described in a previous blog post.

function void start_of_sim_ph;
vmm_tlm_port_base #(vmm_tlm_generic_payload) q[$];
m_export.get_peers(q);
m_export.tlm_unbind(q[0]);

TLM ports have a method get_peer (singular) that returns the one-and-only export bound to that particular port. TLM exports have a similar method get_peers (plural) that returns a SystemVerilog queue containing all the ports bound to that particular export. The method tlm_unbind can then be called to remove a particular binding, as shown above.

There are several other methods that can be helpful when diagnosing connection problems. For example, the method get_n_peers returns the number of ports bound to a given export:

$display("get_n_peers() = %0d", m_export.get_n_peers());

There are also methods for getting a peer id from a peer, and vice-versa, as shown in the following code which loops through the entire queue of peers returned from get_peers:

m_export.get_peers(q);
foreach(q[i])
begin: blk
int id;
id = m_export.get_peer_id(q[i]);
$write("id = %0d", id);
$display(", peer = %s", m_export.get_peer(id).get_object_hiername());
end

In addition to these low-level methods that allow you to interrogate the bindings of individual ports and exports, there are also methods to print and check the bindings for an entire transactor. The methods are print_bindings, check_bindings and report_unbound, which can be called as follows:

class tb_env extends vmm_group;

virtual function void start_of_sim_ph;
$display("\n-------- print_bindings --------");
vmm_tlm::print_bindings(this);
$display("-------- check_bindings --------");
vmm_tlm::check_bindings(this);
$display("-------- report_unbound --------");
vmm_tlm::report_unbound(this);
$display("--------------------------------");
endfunction: start_of_sim_ph

print_bindings prints information concerning the binding of every port and export below the given transactor. check_bindings checks that every port and export has been bound at least the specified minimum number of times. report_unbound generates warnings for any unbound ports or exports, regardless of the specified minimum.

In summary, VMM 1.2 allows you easily to check whether all ports and exports have been bound, whether a given port or export has been bound, to find the objects to which a given port or export has been bound, and even to remove the bindings when necessary.

Posted in Communication, Interoperability, Reuse, Transaction Level Modeling (TLM) | Comments Off

Verification in the trenches: TLM2.0 or vmm_channel? Communicating with other components using VMM1.2

Posted by Ambar Sarkar on 1st June 2010

Dr. Ambar Sarkar, Chief Verification Technologist, Paradigm Works Inc.

Did life get easier with the availability of TLM2.0 style communication in VMM1.2?

Or the other way around? Are you asking: Should I use a vmm_tlm port or just stick to the tried and trusted vmm_channel as my communication method within the verification environment? You are not alone.

As you are aware, VMM1.2 provides the following interfaces for exchanging transactions between components:

  1. vmm_channel (Pre VMM1.2)
  2. vmm_tlm based blocking and non-blocking interfaces (TLM based)
  3. vmm_analysis_port (TLM based)
  4. vmm_callback (Pre VMM1.2)

Which option is the best for communicating between components? Under what circumstances?

There are two real requirements here,

  1. Maintain the ability to work with existing VMM (pre VMM 1.2) components.
  2. Be forward compatible with components created with TLM based ports, as is expected as the industry moves toward UVM (Yes, the early adopter version was released recently!).

The TLM-2.0 based communication mechanism offers flexible, efficient, and well-defined semantics for communication between two components, and the industry as a whole is moving towards a TLM-based approach. For these reasons, going forward I recommend that any new VIP use only TLM-2.0 based communication.

Since VMM1.2 provides complete interoperability between vmm_channel and TLM-2.0 style ports, the user is guaranteed that the created component will keep working with vmm_channel based counterparts:

class subenv extends vmm_group;
initiator i0;
target t0;

virtual function void connect_ph();
vmm_connect #(.D(my_trans))::tlm_bind(
t0.in_chan, // Channel
i0.b_port, // TLM port
vmm_tlm::TLM_BLOCKING_EXPORT);
endfunction: connect_ph

endclass: subenv

Similarly, any vmm_notify based callback events can be communicated using tlm_analysis ports. For details, see example 5-48 in the Advanced Usage section in the user guide.

Creating components that depend solely on TLM-style communication schemes will greatly facilitate interoperability and the migration of VIP implementations to the currently evolving UVM standard.

The following summarizes my recommendations:

Type of interaction                          | Recommended mechanism
Issuing and receiving transactions           | vmm_tlm based blocking interface
Issuing notifications to passive observers   | vmm_analysis_port

For issuing transactions to other components in a point-to-point manner, typically seen in master-slave based communications, use the vmm_tlm based blocking port interfaces.

For slave-like transactors which expect to receive transactions from other transactors, use vmm_tlm based blocking export interface.

For reactive transactors, define an additional vmm_tlm based blocking export interface.

For broadcast transactions to be communicated to multiple consumers such as scoreboards, functional coverage models, etc., use vmm_analysis_port.

So, did life get easier with TLM? I believe so. Especially if you want to be forward compatible with the new and upcoming methodologies.

This article is the 7th in the Verification in the trenches series. Hope you found this article useful. If you would like to hear about any other related topic, please comment or drop me a line at ambar.sarkar@paradigm-works.com. Also, if you are starting out fresh, please check out the free VMM1.2 environment generator.

Posted in Communication, Transaction Level Modeling (TLM), Tutorial | Comments Off

Hierarchical Transaction-Level Connections in VMM 1.2

Posted by John Aynsley on 19th May 2010

John Aynsley, CTO, Doulos

In a previous blog post I described ports and exports in VMM 1.2, and explored the issue of binding a port of one transactor to the export of another transactor, where the two transactors are peers. Now let us look at how we can make hierarchical connections between ports and exports in the case where one transactor is nested within another.

Let’s start with ports. Suppose we have a producer that sends transactions out through a port and is nested inside another transactor:

class producer extends vmm_xactor;
vmm_tlm_b_transport_port #(producer, vmm_tlm_generic_payload) m_port;

virtual task run_ph;

vmm_tlm_generic_payload tx;

m_port.b_transport(tx, delay);

This transactor calls the b_transport method to send a transaction out through a port. So far, so good. Now let’s look at the parent transactor:

class producer_parent extends vmm_xactor;

vmm_tlm_b_transport_port #(producer_parent, vmm_tlm_generic_payload) m_port;

producer  m_producer;

virtual function void build_ph;
m_producer = new( "m_producer", this );
endfunction: build_ph

The producer’s parent also has a port, through which it wishes to send the transaction out into its environment. The port on the producer must be bound to the port on the parent. This is done in the connect phase:

virtual function void connect_ph;
m_producer.m_port.tlm_import( this.m_port );
endfunction: connect_ph

Note the use of the method tlm_import in place of tlm_bind to perform the child-to-parent port binding. Why is this particular method named tlm_import? I am tempted to say “Don’t ask!” Whatever name had been selected, somebody would have found it confusing. Of course, the idea is that something is being imported. In this case it is actually the address of the b_transport method that is effectively being imported from the parent (this.m_port) to the child (m_producer.m_port). tlm_import is being called in the sense CHILD.tlm_import( PARENT ), which makes sense to me, anyway.

So much for the producer. On the consumer side the situation is very similar so I will cut to the chase:

class consumer_parent extends vmm_xactor;

vmm_tlm_b_transport_export #(consumer_parent, vmm_tlm_generic_payload) m_export;

consumer  m_consumer;

virtual function void connect_ph;
m_consumer.m_export.tlm_import( this.m_export );
endfunction: connect_ph

In this case, the address of the b_transport method is effectively being passed up from the child (m_consumer.m_export) to the parent (this.m_export). In other words, b_transport is being exported rather than imported, but note that it is still the tlm_import method that is being used to perform the binding in the direction CHILD.tlm_import( PARENT ).

Now for the top-level environment, where we instantiate the parent transactors:

class tb_env extends vmm_group;

producer_parent  m_producer_1;
producer_parent  m_producer_2;
consumer_parent  m_consumer;

virtual function void build_ph;
m_producer_1 = new( "m_producer_1", this );
m_producer_2 = new( "m_producer_2", this );
m_consumer   = new( "m_consumer",   this );
endfunction: build_ph

virtual function void connect_ph;
m_producer_1.m_port.tlm_bind( m_consumer.m_export, 0 );
m_producer_2.m_port.tlm_bind( m_consumer.m_export, 1 );
endfunction: connect_ph

endclass: tb_env

This is straightforward. We just use the tlm_bind method to perform peer-to-peer binding between a port and an export at the top level. Note that we are binding two distinct ports to a single export; as explained in a previous blog post, a VMM transactor can accept incoming transactions from multiple sources, distinguished using the value of the second argument to the tlm_bind method.

So, in summary, it is possible to bind TLM ports and exports up, down, and across the transactor hierarchy. Use tlm_bind for peer-to-peer bindings, and tlm_import for child-to-parent bindings.

Posted in Communication, Reuse, Transaction Level Modeling (TLM) | 1 Comment »

Required and Provided Interfaces in VMM 1.2

Posted by John Aynsley on 14th May 2010

John Aynsley, CTO, Doulos

Before diving into more technical detail concerning VMM 1.2, let’s take some time to review a basic concept of transaction-level communication that often causes confusion, particularly for people more familiar with HDLs like Verilog and VHDL than with object-oriented software programming. This is the idea of the transaction-level interface.

A transaction-level interface is a software interface that permits software components to communicate using a specific set of function calls (also known as method calls). In the case of VMM, the software components in question are VMM transactors, and the function calls are the VMM TLM methods such as b_transport, introduced in previous posts on this blog. Such transaction-level interfaces are often depicted diagrammatically as shown here:

[diagram: a Producer's port bound to a Consumer's export]

Ports and exports are depicted as if they were pins on the periphery of a component, which is accurate in a metaphorical sense, but misleading if taken too literally. A port is a structured way of representing the fact that the Producer transactor above makes a call to a specific function, and thus requires an implementation of that function in order to compile and run. On the other side, an export is a structured way of representing the fact that the Consumer transactor provides an implementation of a specific function. So although the diagram may appear to show two components with a structural connection between them, it actually shows the Producer making a call to a function implemented by the Consumer. What may appear to be a hardware connection turns out to be an object-oriented software dependency between Producer and Consumer.

When it comes to combining multiple transactors, the types of the transaction-level interfaces have to be respected. The declarations of ports and exports are each parameterized with the type of the transaction object to be passed as a function argument:

vmm_tlm_b_transport_port #(Producer, transaction) port;

vmm_tlm_b_transport_export #(Consumer, transaction) export;

The port, which requires a transaction-level interface of a given type, must be bound to an export that provides an interface of the same type. The type in question is provided by the second parameter transaction. The tlm_bind method effectively seals a contract between the transactor that requires the interface and the provider of the interface:

producer.port.tlm_bind( consumer.export );

One benefit of transaction-level interfaces is that this connection is strongly typed, so the SystemVerilog compiler will catch any mismatch between the types of the port and the export.

As well as binding a port to an export peer-to-peer, it is also possible to bind chains of ports or exports going up or down the component hierarchy, as shown diagrammatically below:

[Figure: ports and exports chained up and down the component hierarchy]

Child-to-parent port bindings carry the function call up through the component hierarchy to the left, while parent-to-child export bindings carry the function call down through the component hierarchy to the right. A port-to-export binding is only permitted at the top level.

At run-time, a method call to the appropriate function is made through the child port:

port.b_transport(tx, delay);

This will result in the corresponding function implementation being called directly, with no intervening channel to store the transaction en route. Transaction-level interfaces are fast, robust, and simple to use, which is why they have been incorporated into VMM.
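To make the mechanics concrete, here is a minimal sketch of a consumer that provides the b_transport implementation behind its export (the constructor details and member names are illustrative, not from the original post):

```systemverilog
class Consumer extends vmm_xactor;
  // The export advertises that this transactor provides b_transport
  vmm_tlm_b_transport_export #(Consumer, transaction) m_export;

  function new(string inst, vmm_object parent = null);
    super.new(get_typename(), inst, -1, parent);
    m_export = new(this, "m_export");
  endfunction

  // Called directly when the producer invokes port.b_transport(tx, delay);
  // by the time this task returns, the transaction is complete
  virtual task b_transport(int id = -1, transaction trans, ref int delay);
    // ...process the transaction here...
  endtask
endclass
```

The producer's port is then bound to this export with producer.port.tlm_bind( consumer.m_export ), exactly as shown above.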

Posted in Communication, Transaction Level Modeling (TLM), VMM infrastructure | Comments Off

Combining TLM Ports with VMM Channels in VMM 1.2

Posted by John Aynsley on 23rd April 2010


John Aynsley, CTO, Doulos

In previous posts I described the TLM ports and exports introduced with VMM 1.2. Of course, the vmm_channel still remains one of the primary communication mechanisms in VMM. In this article I explore how TLM ports can be used with VMM channels.

TLM ports and exports support a style of communication between transactors in which a b_transport method made from a producer is implemented within a consumer, thereby allowing a transaction to be passed directly from producer to consumer without any intervening channel. On the other hand, a vmm_channel sits as a buffer between a producer and a consumer, with the producer putting transactions into the tail of the channel and the consumer getting transactions from the head of the channel. The channel deliberately isolates the producer from the consumer.

Each style of communication has its advantages. The b_transport style of communication is fast, direct, and the end-of-life of the transaction is handled independently from the contents and behavior of the transaction object. On the other hand, vmm_channel provides a lot more functionality and flexibility, including the ability to process transactions while they remain in the channel and to support a range of synchronization models between producer and consumer.
In VMM 1.2, support for TLM ports and exports is now built into the vmm_channel class. It is possible to bind a TLM port to a VMM channel such that the channel provides the implementation of b_transport. The goal is to get the best of both worlds: the clean semantics of a b_transport call in the producer, and the convenience of using the active slot in the vmm_channel in the consumer.
An example is shown below. First we need a transaction class and a corresponding channel type:

class my_tx extends vmm_data;   // User-defined transaction

typedef vmm_channel_typed #(my_tx) my_channel;

The producer sends transactions using a b_transport call, knowing that by the time the call returns, the transaction will have been completed:

class producer extends vmm_xactor;

vmm_tlm_b_transport_port #(producer, my_tx) m_port;

my_tx tx;

m_port.b_transport(tx, delay);

The consumer manipulates the transaction while leaving it in the active slot of a vmm_channel and executing the standard notifications:

class consumer extends vmm_xactor;

my_channel m_chan;

my_tx tx;

m_chan.activate(tx);   // Peek at the transaction in the channel
m_chan.start();        // Notification vmm_data::STARTED

m_chan.complete();     // Notification vmm_data::ENDED
m_chan.remove();       // Unblock the producer

The channel is instantiated and connected in the surrounding environment:

class tb_env extends vmm_group;

my_channel  m_tx_chan;
producer    m_producer;
consumer    m_consumer;

virtual function void build_ph;

m_producer = new( "m_producer", this );
m_consumer = new( "m_consumer", this );

m_tx_chan  = new( "my_channel", "m_tx_chan" );
m_tx_chan.reconfigure(1);

endfunction

function void connect_ph();

vmm_connect #(.D(my_tx))::tlm_bind( m_tx_chan, m_producer.m_port,
vmm_tlm::TLM_BLOCKING_EXPORT );
m_consumer.m_chan = m_tx_chan;

There are two key points to note in the above. Firstly, the channel is reconfigured to have a full level of 1. This ensures that the blocking transport call does indeed block. If the full level is greater than 1, the first call to b_transport will return immediately before the transaction has completed, which would defeat the purpose.

Secondly, the transport port and the vmm_channel are bound together using the vmm_connect utility. This utility must be used when binding VMM TLM objects to channels, and can also be used to bind TLM ports and exports of differing interface types (e.g. a blocking port to a non-blocking export). The third argument to tlm_bind indicates that the connection is being made from the port in the producer to a blocking export within the channel. I will discuss other uses for this method in later posts.

Posted in Communication, Interoperability, Reuse, Transaction Level Modeling (TLM) | Comments Off

Analysis Ports in VMM 1.2

Posted by John Aynsley on 3rd March 2010


John Aynsley, CTO, Doulos

Analysis ports are another feature from the SystemC TLM-2.0 standard that has been incorporated into VMM 1.2. Analysis ports provide a mechanism for distributing transactions to passive components in a verification environment, such as checkers and scoreboards.

Analysis ports and exports are a variant on the TLM ports and exports that I have discussed in previous blog posts. The main difference between analysis ports and regular ports is that a single analysis port can be bound to multiple exports, in which case the same transaction is sent to every export, or “subscriber” or “observer”, connected to the analysis port. The terms subscriber and observer are used interchangeably in the VMM documentation.

Let us take a look at an example:

class my_tx extends vmm_data;  // User-defined transaction class

class transactor extends vmm_xactor;
vmm_tlm_analysis_port #(transactor, my_tx) m_ap;   // The analysis port

virtual task main;
my_tx tx;

m_ap.write(tx);

The transactor above sends a transaction tx out through an analysis port m_ap.

The type of the analysis port is parameterized with the type of the transactor and of the transaction my_tx. The call to write sends the transaction to any object that has registered itself with the analysis port. There could be zero, one, or many such observers registered with the analysis port.

To continue the example, let us look at one observer:

class observer extends vmm_object;
vmm_tlm_analysis_export #(observer, my_tx) m_export;
function new (string inst, vmm_object parent = null);

m_export = new(this, "m_export");

function void write(int id, my_tx tx);

The observer has an instance of an analysis export and must implement the write method that the export will provide to the transactors. Note that the observer extends vmm_object. Since an observer is passive, it need not extend vmm_xactor.

The analysis port may be bound to any number of observers in the surrounding environment:

class tb_env extends vmm_group;
transactor  m_transactor;
observer    m_observer_1;
another     m_observer_2;
yet_another m_observer_3;
virtual function void build_ph;
m_transactor = new( "m_transactor", this );
m_observer_1 = new( "m_observer_1", this );
m_observer_2 = new( "m_observer_2", this );
m_observer_3 = new( "m_observer_3", this );

endfunction
virtual function void connect_ph;
m_transactor.m_ap.tlm_bind( m_observer_1.m_export );
m_transactor.m_ap.tlm_bind( m_observer_2.m_export );
m_transactor.m_ap.tlm_bind( m_observer_3.m_export );

Note the use of the predefined phase methods from VMM 1.2. Transactors are created during the build phase, and ports are connected during the connect phase.

Finally, let us compare analysis ports with VMM callbacks:

m_ap.write(tx);
versus
`vmm_callback(callback_facade, write(tx));

The effect is very similar, but there are differences. Unlike VMM callbacks, the name of the method called through an analysis port is fixed at write. A VMM callback method is permitted to modify the transaction object, whereas a transaction sent through an analysis port cannot be modified. When multiple callbacks are registered, the prepend_callback and append_callback methods allow you to determine the order in which the callbacks are made, whereas you have no control over the order in which write is called for multiple observers bound to an analysis port. Because of these differences, only VMM callbacks are appropriate for modifying the behavior of transactors. Analysis ports are only appropriate for sending transactions to passive components that will not attempt to modify the transaction object. On the other hand, that in itself is the feature and strength of analysis ports; they are only for analysis.

It can make sense to combine a VMM callback with an analysis port in the same transactor, using the callback to inject an error and the analysis port to send the modified transaction to a scoreboard, for example:

`vmm_callback(callback_facade, inject_error(tx));
m_ap.write(tx);

In this situation, the VMM recommendation is to make the analysis call after the callback, as shown here.

Posted in Communication, Reuse, SystemC/C/C++, Transaction Level Modeling (TLM), VMM infrastructure | Comments Off

Handling Incoming Transactions from Multiple Sources in VMM 1.2

Posted by John Aynsley on 16th February 2010

John Aynsley, CTO, Doulos

In the previous post I described TLM ports and exports from VMM 1.2. In this post, we will look at how to handle incoming transactions from multiple sources, that is, multiple producers connected to a single consumer. VMM 1.2 provides two separate mechanisms to handle this situation: peer ids, and shorthand macros. We will explore what these mechanisms have in common, and also the differences between them.

We are discussing the following situation, where two separate producer instances send transactions to a single consumer:

class producer extends vmm_xactor;

vmm_tlm_b_transport_port #(producer, my_tx) m_port;

m_port.b_transport(tx, delay);

class consumer extends vmm_xactor;

vmm_tlm_b_transport_export #(consumer, my_tx) m_export;

function new (string inst, vmm_object parent = null);
super.new(get_typename(), inst, -1, parent);
m_export = new(this, "m_export", 2); // 3rd argument = max # bindings
endfunction

function void start_of_sim_ph;
`vmm_note(log, $psprintf("Number of peers = %0d", m_export.get_n_peers()));
endfunction

task b_transport(int id = -1,
my_tx trans, ref int delay);

class my_env extends vmm_group;

producer m_producer_1;
producer m_producer_2;
consumer m_consumer;

virtual function void connect_ph;
m_producer_1.m_port.tlm_bind( m_consumer.m_export, 0 ); // 2nd argument = id
m_producer_2.m_port.tlm_bind( m_consumer.m_export, 1 );
endfunction

The first thing to notice is the connect_ph method of the environment, which binds two separate ports to the same export. The tlm_bind method takes a second argument, the peer id, which allows transactions from the two ports to be distinguished.

The second thing to notice is that when the export is instantiated, the constructor new takes a third argument that specifies the maximum number of bindings to this export. The default value of 1 would be inadequate in this case, since the export is bound twice.

Thirdly, the method get_n_peers called from start_of_sim_ph returns the number of peers, which would be 2 in this case.

Finally, the first argument to the b_transport method implemented in the consumer is the peer id passed to the tlm_bind method. The implementation of b_transport can now use the peer id to distinguish between transactions from the two producers.
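For illustration, the body of b_transport in the consumer might dispatch on the peer id like this (a sketch; the message strings are mine):

```systemverilog
task consumer::b_transport(int id = -1, my_tx trans, ref int delay);
  case (id)
    0: `vmm_note(log, "Transaction from m_producer_1");  // id 0 passed at tlm_bind
    1: `vmm_note(log, "Transaction from m_producer_2");  // id 1 passed at tlm_bind
    default: `vmm_error(log, $psprintf("Unexpected peer id %0d", id));
  endcase
  // ...common processing of trans...
endtask
```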

So much for peer ids. Now let us take a look at the alternative, that is, shorthand macros. Instead of binding two ports to a single export, we could have used the shorthand macros to create two separate exports:

class consumer extends vmm_xactor;

`vmm_tlm_b_transport_export(_1) // Argument is suffix to name
`vmm_tlm_b_transport_export(_2)
vmm_tlm_b_transport_export_1 #(consumer, my_tx) m_export_1;
vmm_tlm_b_transport_export_2 #(consumer, my_tx) m_export_2;

task b_transport_1(int id = -1,
my_tx trans, ref int delay);

task b_transport_2(int id = -1,
my_tx trans, ref int delay);

The argument passed to the macro is used as the suffix for a new type name and a new method name. Those new types are then used to create two separate exports, and the consumer contains two separate and differently named implementations of the b_transport method, one for each export. It is good practice to use the same suffix when naming the export members themselves (e.g. m_export_1), though this is not strictly necessary. Since peer ids are not being used, the id argument to b_transport will have the value 0 for both methods.

As usual, ports are bound to exports in the surrounding environment, but this time using separate exports rather than peer ids:

class my_env extends vmm_group;

producer m_producer_1;
producer m_producer_2;
consumer m_consumer;

virtual function void connect_ph;
m_producer_1.m_port.tlm_bind( m_consumer.m_export_1 );
m_producer_2.m_port.tlm_bind( m_consumer.m_export_2 );
endfunction

In conclusion, we have seen peer ids and shorthand macros used to accomplish the same thing, that is, multiple producers sending transactions to a single consumer. With peer ids we instantiate a single export and provide a single b_transport method, distinguishing between the incoming transactions using the peer id argument. With shorthand macros we instantiate two exports and provide two implementations of b_transport, distinguished by the suffix to their names.

Posted in Communication, Reuse, Transaction Level Modeling (TLM), VMM infrastructure | Comments Off

Connecting Multiple Analysis Ports to a Single Analysis Export

Posted by JL Gray on 9th February 2010

Today’s post was written by my colleague Asif Jafri. Enjoy! JL

by Asif Jafri

Asif Jafri is a verification engineer at Verilab.

This post introduces the VMM implementation of the Transaction Level Modeling (TLM) 2.0 approach to connecting multiple broadcasting ports to the same receiving export using peer IDs. Figure 1 shows multiple initiators communicating with the same target. The initiators could be monitors on either side of your DUT, each passing transactions to a single scoreboard that keeps track of the transactions and performs various checks. In TLM 2.0, message broadcast is accomplished through write function calls from the initiator, which are then implemented in the target.

Figure 1: Connecting using ID

Read the rest of this entry »

Posted in Communication, Reuse, Transaction Level Modeling (TLM), VMM, VMM infrastructure | Comments Off

Verification in the trenches: Implementing Complex Synchronization Between Components Using VMM1.2

Posted by Ambar Sarkar on 5th February 2010

Dr. Ambar Sarkar, Chief Verification Technologist, Paradigm Works Inc.

Why is it tricky to get transactors and other verification components to work in sync with each other, especially if they come from different projects? It is likely that they worked well within their source projects, but their phases (build, configure, reset, start, shutdown, etc.) were implemented quite differently compared to other components. These differences are usually driven by the inherent protocol requirements or team preferences. For example, consider the verification of an SoC with an AXI host interface and a PCIe Root Complex. You will likely get your host interface transactor out of reset and execute a configuration sequence before you let your PCIe endpoint transactor send in requests. So you would not want to run the phases of these two transactors in lock step.

While there are countless ways to implement the phases and their sequencing, one can broadly classify a component  as being either explicitly or implicitly driven, depending on how its phases are invoked.

Implicit phasing: In my earlier post, we discussed how one can often easily coordinate the execution of various verification components. Simply put, as long as one is able to distribute the execution of the component among predetermined methods (called phases), the components can execute in lock-step with one another without requiring any additional coding by the verification engineer. This is called implicit phasing. Implicit phasing may suffice in many cases, but the challenge is to agree on the same set of phases and their sequencing. You basically will need a way to define additional phases and potentially even rearrange their implicit calling sequence.

Explicit phasing: In contrast, explicit phasing requires the environment writer to explicitly call and synchronize the phases of the components. Typically, it takes some work to get such components to play well with one another. This happens more often for legacy or externally developed components. In such cases, the developers may not have known about the predetermined phases, so they could not have broken down the implementation quite the way the target environment expects. Explicit phasing is often unavoidable in environments with components from multiple sources, since you may need to carefully control and coordinate the phases by hand to accommodate their differing implementation assumptions.

So the challenge we are discussing today is really about making explicitly and implicitly phased components match their phases and cooperate during phase transitions.

This is where vmm_timeline helps. Simply put, a vmm_timeline object encapsulates your implicitly phased object and allows it to be called as an explicitly phased object. It lets you define your own phases and the sequence in which you want to execute them. The ability to customize phases is critical, as you may need to define additional phases to fit in with the way the explicitly phased target environment expects its phases to execute.

Here is an example that shows how an implicitly phased component (my_implicit_comp) is executed within an explicitly phased my_env. Notice how my_tl (derived from vmm_timeline) is used.

Step a. Create a vmm_timeline object and instantiate the components

// Implicitly phased comp
class my_implicit_comp extends vmm_group;
`vmm_typename(my_implicit_comp)

function new(string name = "",
vmm_object parent = null);
super.new("my_implicit_comp", name, null);
super.set_parent_object(parent);
endfunction

virtual function void build_ph();
super.build_ph();

endfunction

endclass

// Create a vmm_timeline class to wrap this implicitly phased component

class my_tl extends vmm_timeline;
`vmm_typename(my_tl)
my_implicit_comp comp1;

function new(string name = "",
vmm_object parent = null);
super.new("my_tl", name, parent);
endfunction

virtual function void build_ph();
super.build_ph();

// Create an instance
this.comp1 = my_implicit_comp::create_instance(this, "comp1");
endfunction

endclass

Step b. Instantiate in top-level vmm_env and call out the implicit methods

// Instantiate the vmm_timeline object in the top environment
// and call its phases explicitly.
class my_env extends vmm_env;
`vmm_typename(my_env)
my_tl tl;

function new();
super.new("env");
endfunction

virtual function void build();
super.build();
this.tl = new("tl", this);
endfunction

virtual task start();
super.start();
tl.run_phase("start");
`vmm_note(log, "Started...");
endtask

virtual task wait_for_end();
super.wait_for_end();
fork
// run_test phase corresponds best here
tl.run_phase("run_test");
begin
`vmm_note(log, "Running...");
#100;
end
join
endtask

virtual task stop();
super.stop();

// shutdown phase corresponds best here
tl.run_phase("shutdown");
`vmm_note(log, "Stopped...");
endtask

Note that the converse is also true. Explicitly phased components can be incorporated into implicitly driven environments. You need to encapsulate them in a parent class derived from vmm_subenv and define how each implicit phase of the parent class maps to the proper explicit phase(s) of the original component. Then you can simply instantiate this parent class in the target environment. For further details, search for the string "Mixed Phasing" in the VMM 1.2 User Guide.
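As a rough sketch of that converse wrapping (the legacy component and its explicit methods are hypothetical; see "Mixed Phasing" in the VMM 1.2 User Guide for the authoritative pattern):

```systemverilog
// Hypothetical: wrap an explicitly phased legacy transactor so it can
// live inside an implicitly phased environment via vmm_subenv.
class legacy_wrapper extends vmm_subenv;
  legacy_xactor comp;  // explicitly phased component (hypothetical)

  virtual task start();
    super.start();
    comp.reset_dut();  // map the implicit start phase onto the
    comp.start();      // component's own explicit reset/start methods
  endtask

  virtual task stop();
    comp.stop();       // map the implicit stop phase onto explicit stop
    super.stop();
  endtask
endclass
```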

In summary, vmm_timeline helps you manage the different phasing and sequencing needs of verification components by making it easier for explicitly and implicitly phased components to interact. No wonder that, under the hood of VMM 1.2, vmm_timeline is used to implement advanced features such as multi-test concatenation.

This article is the 4th in the Verification in the trenches series. Hope you found this article useful. If you would like to hear about any other related topic, please comment or drop me a line at ambar.sarkar@paradigm-works.com. Also, if you are starting out fresh, please check out the free VMM1.2 environment generator.

Posted in Communication, Modeling, Phasing, Reuse | Comments Off

Should you use VMM Callbacks or TLM Analysis ports?

Posted by S. Prashanth on 27th January 2010

With the addition of OSCI TLM-2.0 features in VMM 1.2, it is now possible to use analysis ports for broadcasting transaction information from a transactor to components such as scoreboards, functional coverage models, etc.

But does it mean that VMM callbacks, which have traditionally been used for this purpose, are not required anymore?

In short, the answer is no: both analysis ports and callbacks have their own advantages. Before getting into more detail, let me show you two simple examples I've created to illustrate analysis port and callback usage.

Communication through analysis port

Step1: In my transactor, I've simply instantiated a vmm_tlm_analysis_port instance [line 2] and invoked the analysis_port.write() method [line 8] to post the tr object to multiple subscribers.

1. class cpu_driver extends vmm_xactor;

2. vmm_tlm_analysis_port#(cpu_driver, cpu_trans) analysis_port;

3.    virtual function void build_ph();

analysis_port = new(this, "cpu_analysys_port");

4.    endfunction

5.    virtual task main();

6.       cpu_trans tr;

7.       …

8. analysis_port.write(tr);

9.     endtask

10. endclass

Step2: Let's now see how to model a subscriber such as a coverage model. In this model, I've simply instantiated a vmm_tlm_analysis_export instance [Line 3] and provided the implementation of the write() method [Line 4]. Once the transactor posts a transaction through its analysis port, the coverage model's write() method gets called and receives the transaction, which can then be sampled and covered.

1. class cntrlr_cov extends vmm_object;

2.    vmm_tlm_analysis_export #(cntrlr_cov, cpu_trans)

3.    cpu_export = new(this, "CpuAnExPort");

4.    virtual function void write(int id=-1, cpu_trans tr);

5.        this.cpu_tr = tr;

6.        CG_CPU.sample();

7.   endfunction

8. endclass

Step3: The last important part is to bind my transactor and my subscribers. This is simply done using the transactor's tlm_bind() method. Of course, tlm_bind() should be invoked for each subscriber.

1. class tb_env extends vmm_group;

2.    cpu_driver drv;

3.    cntrlr_cov cov;

4.    function void connect_ph();

5.       drv.analysis_port.tlm_bind(cov.cpu_export);

6.    endfunction

7. endclass

As you can see in these examples, analysis ports are easy to use. But they are not meant to allow subscribers to modify the transaction, and they are restricted to a single predefined method, write().

Communication through callback

Step1: I've created a generic callback class that is nothing but a container: it extends the vmm_xactor_callbacks base class with empty virtual methods [Lines 1-3]. I have deliberately left these methods empty so that they can be overridden for any particular usage, such as a coverage model, scoreboard, etc. The next step is to have my transactor call this callback once the transaction is available [Line 10].

1. class cpu_driver_callbacks extends vmm_xactor_callbacks;

2.    virtual function void write(cpu_trans tr); endfunction

3. endclass

4.

5. class cpu_driver extends vmm_xactor;

6.

7.    virtual task main();

8.       cpu_trans tr;

9.       …

10.      `vmm_callback(cpu_driver_callbacks, write(tr));

11.   endtask

12. endclass

Step2: Now, I can extend the previous class and provide the implementation of the coverage model directly in the callback:

1. class cpu2cov_callback extends cpu_driver_callbacks;

2.    cntrlr_cov cov;

3.    virtual function void write(cpu_trans tr);

4.       cov.cpu_tr = tr;

5.       cov.CG_CPU.sample();

6.    endfunction

7. endclass

Step3: Once both the transactor and the subscriber are implemented, I can simply instantiate them in my implicitly phased environment [Lines 2-3]. Then, I can register the extended callback instance with the transactor using the append_callback() method [Line 6]. Note that this registration happens in the connect phase.

1. class tb_env extends vmm_group;

2.    cpu_driver drv;

3.    cntrlr_cov cov;

4.    function void connect_ph();

5.       cpu2cov_callback cbk = new(cov);

6.       drv.append_callback(cbk);

7.    endfunction

8. endclass

As you can see in the above examples, callbacks are also easy to use. As opposed to analysis ports, their subscribers can modify the transaction, and a callback class can contain multiple methods with any kind of arguments.

Comparison

1. Analysis ports follow OSCI TLM2.0 standard.

2. Unlike callbacks, users do not need to create a class with empty virtual methods; instead, the predefined method write() is used.

3. Analysis ports can only broadcast one transaction object per call, as opposed to callbacks, where you can define the method arguments.

4. The analysis port method write() is a function, whereas a callback class can model its empty virtual methods as tasks or void functions, which gives you the flexibility to control the transactor as well (e.g. inserting delays, injecting errors, etc.).

5. The `vmm_tlm_analysis_export() macro must be used to create a new analysis façade when a class needs to have more than one analysis export.

6. Analysis ports can only be used by classes based on vmm_object. Callbacks can be used by any class.
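Point 5 above can be sketched as follows (the class, suffixes, and member names are illustrative, by analogy with the `vmm_tlm_b_transport_export macro shown in an earlier post):

```systemverilog
class my_scoreboard extends vmm_object;
  // Each macro generates a facade type whose write method carries the suffix
  `vmm_tlm_analysis_export(_exp)   // provides write_exp()
  `vmm_tlm_analysis_export(_act)   // provides write_act()
  vmm_tlm_analysis_export_exp #(my_scoreboard, cpu_trans) m_exp_export;
  vmm_tlm_analysis_export_act #(my_scoreboard, cpu_trans) m_act_export;

  virtual function void write_exp(int id = -1, cpu_trans tr);
    // receive the expected stream
  endfunction
  virtual function void write_act(int id = -1, cpu_trans tr);
    // receive the actual stream
  endfunction
endclass
```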

In summary

- If a transactor needs to broadcast only one transaction object, then an analysis port can be used. If a transactor needs to send different kinds of information at different points, and provide hooks for modeling variant functionality (such as inserting delays or injecting errors), then callbacks are the way to go. Both an analysis port and callbacks can also be provided together, with the analysis port write() called after the callbacks.

Posted in Callbacks, Communication, Reuse, Transaction Level Modeling (TLM) | 2 Comments »

Transactor Interfaces [VMM 1.2 style]

Posted by Vidyashankar Ramaswamy on 18th January 2010

In my earlier blog post I showed how to write a configurable physical interface. This time I shall look into the slave transactor and its interfaces. A slave transactor can have all or some of the interfaces shown in the following figure; it depends solely on the nature of the protocol and its surrounding environment. If you are developing verification IP that will be used across multiple projects or groups, then you must consider the entire interface requirement and develop the VIP accordingly. Please note that I will be discussing only the analysis port [No. 5] and the transport port [No. 6] interfaces, which are developed using VMM 1.2 features.

[Figure: slave transactor interfaces]

Analysis Port

Analysis ports are used to share a received transaction among other testbench components. These components can also be referred to as listeners, subscribers, observers, targets, or sometimes passive components. As the name says, this port is used to distribute a transaction to one or more listeners for analysis. The important feature of the analysis port is that a single port can be connected to multiple subscribers. The component that broadcasts the transaction uses the analysis port, and the observers implement the analysis export. Each subscriber must implement the “write” method of the vmm_tlm_analysis_export class. The listeners can't modify the transaction and can only copy its contents for analysis. The most common testbench components that use this port are scoreboards, functional coverage models, debug components, and reference models.

Analysis port in the producer (Initiator) model

The analysis port is implemented as follows. The port object is constructed in the transactor's build phase. As this is a slave model, the write method is called after responding to the observed request on the bus.

File: vip_slave.sv

. . .
//////////// SLAVE MODEL ///////////////

class vip_slave extends vmm_xactor;
`vmm_typename(vip_slave)
// Variables declaration
. . .
// Analysis port
vmm_tlm_analysis_port#(vip_slave, vip_trans) analysis_port;
// TLM Blocking port
vmm_tlm_b_transport_port#(vip_slave, vip_trans) b_trans_resp_port;

// Component Phases
. . .
extern virtual function void build_ph();
extern virtual protected task main();
. . .
endclass: vip_slave
. . .
/////////////// Build Phase /////////////////
function void vip_slave::build_ph();
. . .
// Construct the analysis port for Observers
analysis_port = new(this, "analysis_port");
// Construct the transport port for response modification
b_trans_resp_port = new(this, "b_trans_resp_port");
. . .
endfunction: build_ph
. . .

////////////// Main Method////////////////
task vip_slave::main();
super.main();
forever begin
vip_trans tr;
int dly = 0;
. . .
if (tr.kind == vip_trans::READ) begin
// Retrieve the data
. . .
// Provide the handle to the modifying transactor
b_trans_resp_port.b_transport(tr, dly);
. . .
// Finally drive the data onto the bus
. . .
end
else begin
// WRITE
// Assemble the trans properties
. . .
// Provide the handle to the modifying transactor
b_trans_resp_port.b_transport(tr, dly);
. . .
// Store the data
. . .
end
. . .
// notify the observers about this transaction
this.analysis_port.write(tr);
. . .
end
endtask: main

Analysis export port in the target (consumer) model

Following is a very simple implementation of an observer. The observer class has an instance of the TLM analysis export class and overrides the virtual function “write” to operate on the received transaction. Here the write method displays the transaction received from the initiator.

File: observer.sv

class observer extends vmm_object;
`vmm_typename(observer)
vmm_tlm_analysis_export#(observer, vip_trans) obsrv ;
vmm_log log = new("log", this.get_object_hiername());

virtual function void write (int id = -1, vip_trans tr);
`vmm_note(log, " ... From Observer: Rcvd Transaction ... ");
tr.display("");
endfunction: write

function new(vmm_object parent = null, string inst=””);
super.new(parent, get_typename());
endfunction

/////////////// Build Phase /////////////////
function void build_ph();
. . .
// Construct the analysis export port

obsrv = new(this, "obsrv");
. . .
endfunction

endclass: observer

Transport Interface

The transactions received by a slave can be processed by a higher-layer transactor before the slave responds to the request. In these situations, TLM transport ports are used to pass transactions in a blocking or non-blocking way. This slave model uses a blocking transport port to pass the transaction to another transactor for further processing. Blocking transport completes the transaction within a single method call and uses the forward path from initiator to target. To use this interface, the slave model instantiates a vmm_tlm_b_transport_port for issuing transactions, and the higher-layer transactor implements a vmm_tlm_b_transport_export for receiving them. Please refer to the code shown above (vip_slave.sv). Also note that the transaction data modification (the call to the b_transport method) happens before the WRITE data is stored or a READ request is answered.

In a transactor implementation, it is acceptable to construct and call the analysis port's "write" method without binding it in the environment. This is not true of the transport port: its "b_transport" method must be bound in the environment before it is called. It is therefore good practice to make this port configurable (enable/disable) using VMM options.
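One possible way to make the transport path configurable is to gate it with a user-defined option. The sketch below assumes the vmm_opts::get_bit API; the option name "use_resp_port" and the use_resp_port field are illustrative, not part of the original slave model:

```systemverilog
// Sketch only: read a user-defined option once during the build phase,
// then guard the transport call so the slave still works when no
// response modifier is bound in the environment.
function void vip_slave::build_ph();
   use_resp_port = vmm_opts::get_bit("use_resp_port",
                     "Route transactions through the response modifier");
endfunction

// In vip_slave::main(), call b_transport only when the port is enabled:
if (use_resp_port)
   b_trans_resp_port.b_transport(tr, dly);
```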

Response modifier Transactor

This transactor shows how to instantiate the TLM export class object and implement the b_transport method. Following is a very simple implementation of the transport method, where the received transaction is displayed.

File: resp_modifier.sv
class resp_modifier extends vmm_xactor;
`vmm_typename(resp_modifier)
vmm_tlm_b_transport_export#(resp_modifier, vip_trans) b_trans_resp_export;

virtual task b_transport (int id = -1, vip_trans tr, ref int dly);
`vmm_note(log, " .... Resp Trans Modifier .... ");
tr.display("");
endtask: b_transport

function new(vmm_unit parent = null, string inst="");
super.new(get_typename(), inst, 0, parent);
endfunction

/////////////// Build Phase /////////////////
function void build_ph();
. . .
// Construct the transport export port

b_trans_resp_export = new(this, "b_trans_resp_export");
. . .
endfunction

endclass: resp_modifier

Connecting it all together

The last step is to create the environment. The required components are constructed in the build phase, and the connect phase is used to bind the ports appropriately, as shown in the code below. Please refer to the VMM user guide for more information on the TLM 2.0 interfaces.

File: tb_env.sv
. . .
`include "vip_slave.sv"
`include "observer.sv"
`include "resp_modifier.sv"
. . .
////////// TB ENVIRONMENT ////////////////
class tb_env extends vmm_group;
`vmm_typename(tb_env)

// VIP Instantiation
vip_slave slv;
. . .

// TLM Port Declaration
observer        obsrv_vip_trans;
resp_modifier trans_mod_xactor;
. . .

// Component Phases
extern virtual function void build_ph();
extern virtual function void connect_ph();
. . .
endclass: tb_env
. . .

////////// Build Phase ////////////////
function void tb_env::build_ph();
. . .
this.slv = vip_slave::create_instance(this, "slv", `__FILE__, `__LINE__);

//Create observer component
obsrv_vip_trans = new(this, "TRANS_OBSVR");
// Create the response modifier transactor
trans_mod_xactor = new(this, "RESP_MODIFIER");
. . .
endfunction: build_ph

/////////// Connect Phase ////////////
function void tb_env::connect_ph();
. . .
// Bind the analysis Port
this.slv.analysis_port.tlm_bind(obsrv_vip_trans.obsrv);
// Bind the transport port
this.slv.b_trans_resp_port.tlm_bind(trans_mod_xactor.b_trans_resp_export);
. . .
endfunction: connect_ph
. . .

I hope you find this article useful. Please feel free to send me your opinion on this.

Posted in Communication, Structural Components, Transaction Level Modeling (TLM), VMM infrastructure | Comments Off

Using Explicitly-Phased Components in an Implicitly-Phased Testbench

Posted by JL Gray on 11th December 2009

In my last post, I described the new VMM 1.2 implicit phasing capabilities.  I also recommended developing any new code based off of implicit phasing.  Obviously, though, companies that have been using the VMM for quite some time will have developed all of their existing testbench components using explicit phasing.  It is relatively straightforward (and in some sense almost trivial) to use an explicitly phased component in an implicitly phased testbench.

Remember that the whole point of explicit phasing is that users cycle components through the desired phases by manually calling functions and tasks within the component itself. vmm_env contains the following methods:

  • gen_cfg
  • build
  • reset
  • config_dut
  • start
  • wait_for_end
  • stop
  • cleanup
  • report

vmm_subenv contains the following relevant methods:

  • new
  • configure
  • start
  • stop
  • cleanup
  • report

In an explicitly-phased environment, subenv methods are called manually by integrators, usually from the equivalent method in vmm_env. There are two approaches for instantiating a vmm_subenv-based component in an implicitly-phased testbench. The default approach is to simply allow the implicit phasing mechanism to call these explicit phases for you. Explicitly phased components are identified by the implicit phasing mechanism, and methods are called using a standard (and not entirely unexpected) mapping:

Implicit Phase    Explicit Phase Called
--------------    ---------------------
build_ph          vmm_subenv::new [1]
configure_ph      vmm_subenv::configure
start_ph          vmm_subenv::start
stop_ph           vmm_subenv::stop
cleanup_ph        vmm_subenv::cleanup
report_ph         vmm_subenv::report

[1] Users must call vmm_subenv::new manually.

Now, you might want to phase your vmm_subenv in a non-standard way. If that’s the case, the first thing you’ll need to do is disable the automatic phasing. Here’s how. First, instantiate a null phase:

vmm_null_phase_def null_ph = new();

Next, override the phases you don’t want to start automatically. For example:

my_group.override_phase("start", null_ph);
my_group.override_phase("stop", null_ph);

Finally, call the explicit phases from the parent object’s implicit phases.  A complete example is shown below.

class testbench_top extends vmm_group;
bus_master_subenv bus_master;
vmm_null_phase_def null_ph = new();

function void build_ph();
bus_master = bus_master_subenv::create_instance(this, "bus_master");
bus_master.override_phase("start", null_ph);
bus_master.override_phase("stop", null_ph);
endfunction: build_ph

task reset_ph();
bus_master.start();
// wait 1000 clocks…
bus_master.stop();
endtask: reset_ph

endclass: testbench_top

Posted in Communication, Modeling, Phasing, Reuse, VMM | Comments Off

Blocking Transport Communication in VMM 1.2

Posted by John Aynsley on 10th December 2009

John Aynsley, CTO, Doulos

One of the new features in VMM 1.2 is the blocking transport interface, borrowed from the SystemC TLM-2.0 standard. This interface provides an alternative to using the put and get methods of the vmm_channel class when communicating between a producer and a consumer. There are a couple of reasons why you might consider using the blocking transport interface in VMM: perhaps you are trying to interface to a SystemC reference model, or perhaps you want to write a transactor that has very clean semantics when it comes to determining the start and end of a transaction. In either case, the new blocking transport interface can help.

A blocking transport call is a method call that carries with it a transaction object and is expected not to return until the transaction is complete. The status of the transaction, that is, whether the transaction succeeded or failed, is expected to be carried within the transaction object itself. You call blocking transport as follows:

class my_tx extends vmm_data; // A user-defined transaction class

enum {OKAY, FAIL} m_status; // Status flag embedded in the transaction itself
endclass

my_tx tx;
int delay;

$cast(tx, randomized_tx.copy()); // Create and randomize a transaction object

m_port.b_transport(tx, delay);

if( tx.m_status != OKAY ) // Check the response status
`vmm_warning(log, “Transaction failed”);

Since this is SystemVerilog, the transaction object itself is passed by reference. The delay argument instructs the recipient of the transaction to process the transaction after the given delay has elapsed, and can be modified by both the caller and the implementation of the b_transport method.

The blocking transport method needs to be implemented by the transactor that will receive or consume the transaction:

task b_transport(int id = -1, my_tx tx, ref int delay);

// Process transaction

// Pause for given delay and then reset the delay argument
#(delay);
delay = 0;

// Set the response status in the transaction object
tx.m_status = OKAY;
endtask : b_transport

By definition, blocking transport "blocks" until the transaction has been fully processed. It is coded as a SystemVerilog task so that it is allowed to execute timing controls such as the #(delay) in this example. The task is not obliged to suspend for the given delay in this manner; it could simply have returned leaving the delay unmodified, or even have increased the value of the delay, leaving the caller to realize the delay later. The task is obliged to set the response status flag within the transaction object before returning, and the caller must check the response status on return.
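As an illustration of the second option, a target that returns immediately and merely annotates the delay might look like the sketch below; the 10-unit latency figure is illustrative, not taken from the original example:

```systemverilog
// Target: annotate modeled latency instead of consuming simulation time
task b_transport(int id = -1, my_tx tx, ref int delay);
   // Process the transaction immediately (zero simulation time)
   tx.m_status = OKAY;
   delay += 10;   // add 10 time units of modeled latency for the caller
endtask : b_transport

// Caller: realize the annotated delay itself after the call returns
m_port.b_transport(tx, delay);
#(delay);
```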

One significant thing that is happening here is that communication between a producer and a consumer (or in TLM-2.0 jargon, an initiator and a target) is being accomplished without any intervening channel to buffer the transactions. The b_transport method is called by the producer, provided by the consumer, and only returns when the transaction is complete. The advantage of not having an intervening channel is that it can help increase execution speed in the context of a very fast simulation model. The disadvantage is that it is not possible to have multiple transactions in progress given only a single execution thread in the initiator; therefore, it is a good idea to add a semaphore in the implementation of the b_transport method in the target transactor:

class target_xactor extends vmm_xactor;

local semaphore m_sem = new(1);

task b_transport(int id = -1, my_tx tx, ref int delay);
m_sem.get(1);
// Process transaction

m_sem.put(1);
endtask : b_transport

endclass : target_xactor

This is not the whole story, because we also need to explain how to connect the producer to the consumer. Look out for future blog posts.
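As a brief preview, the binding typically happens in the enclosing environment using tlm_bind; the class and member names in this sketch are illustrative:

```systemverilog
// Sketch only: producer owns the port, consumer owns the export.
class producer extends vmm_xactor;
   vmm_tlm_b_transport_port#(producer, my_tx) m_port;
   // ...
endclass

class consumer extends vmm_xactor;
   vmm_tlm_b_transport_export#(consumer, my_tx) m_export;
   // ...
endclass

// In the environment's connect phase, bind port to export:
producer_inst.m_port.tlm_bind(consumer_inst.m_export);
```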

Posted in Communication, Reuse, SystemC/C/C++, Transaction Level Modeling (TLM) | 2 Comments »

Developing transactors using VMM 1.2

Posted by Vidyashankar Ramaswamy on 8th December 2009

There are many ways to design and develop a transactor. The following is the way I visualize it. Typical transactor components are shown in the following figure. Based on their functionality, transactors can be grouped into up-stream, down-stream, pass-through, or passive monitor types. I shall explain briefly what I mean by these. An up-stream transactor can be a stimulus generator, and a down-stream transactor can be a bus functional master/slave model. A pass-through can exist in the testbench to connect a master and the bus functional model. A passive monitor simply monitors the bus interface to which it is connected and broadcasts the packet information whenever it is available. The dotted line in the figure partitions the transactor based on functionality and shows the different port connections.

The right side of the dotted line in the above figure represents the upstream side. This can be a producer (master or stimulus generator), in which case it has only an output port. This output port is designed as a TLM transport port.

The left side of the dotted line represents the downstream side: this can be a slave transactor, in which case the receiving port will be a VMM channel. If the downstream transactor is connected to the DUT, then you need to declare a virtual interface and bind it through a port object to the physical interface. This is done from the enclosing environment, which makes the BFM component reusable across testbenches. In my next article I shall show you an example of the port object.

If you are designing a pass-through transactor, then you need both a VMM channel for receiving transactions from the producer and a TLM transport port for sending transactions to the consumer. An analysis port can be used if any observers are hooked up to this transactor. Also note that a pass-through transactor does not need any virtual interface connection.
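A pass-through transactor along these lines might be sketched as follows; the class name, member names, and transaction type are illustrative, and error handling is omitted:

```systemverilog
class pass_through extends vmm_xactor;
   vmm_channel_typed#(my_tx) in_chan;                           // from the producer
   vmm_tlm_b_transport_port#(pass_through, my_tx) out_port;     // to the consumer
   vmm_tlm_analysis_port#(pass_through, my_tx) analysis_port;   // to observers

   virtual protected task main();
      my_tx tr;
      int dly;
      super.main();
      forever begin
         in_chan.get(tr);                // receive from the upstream producer
         dly = 0;
         out_port.b_transport(tr, dly);  // forward to the downstream consumer
         analysis_port.write(tr);        // broadcast to any observers
      end
   endtask: main
endclass: pass_through
```

Note that, as described above, this transactor needs no virtual interface: it only moves transactions between its channel, transport, and analysis ports.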

A monitor component will have only an analysis port along with the physical interface connection.

You might be wondering why the analysis port and the callback interface are centered between the up-stream and down-stream sides. If you have guessed it, yes, you're right: both master and slave need to broadcast the information/packet passing through them to the observers. The observer can be a scoreboard, a coverage collector, or a simple file writer for debug purposes.

To make the master/slave transactor reusable, callback methods are used. A callback method allows the user to extend the behavior of a transactor without having to modify the transactor itself. VMM 1.2 also supports a factory service to replace a transactor, but I favor callbacks for transactor extensions. So which one should you use? I shall leave that up to you to decide; it could be a topic of its own.

Please feel free to comment and share your opinion. For more information please refer to the VMM 1.2 user guide.

Posted in Communication, Structural Components, Transaction Level Modeling (TLM), VMM infrastructure | Comments Off

Phases vs. Threads

Posted by JL Gray on 4th November 2009

JL Gray, Consultant, Verilab, Austin, Texas, and Author of Cool Verification

Building a testbench for personal use is easy. Building a testbench that can be used by others is much more difficult. To make it easier for verification IP written by different people to interoperate, modern verification methodologies support the concept of standardized “phases” during a simulation run. Phases are a way to help verification engineers communicate in a standard language about what is meant to be taking place at any given time during a simulation.  For example, an explicitly phased VMM testbench built using vmm_env contains the following phases of execution:

  • gen_cfg
  • build
  • reset
  • config_dut
  • start
  • wait_for_end
  • stop
  • cleanup
  • report

Ideally, each of these phases serves a clear purpose. If I want to reset the DUT, a good way to do it is to instrument the reset phase with the appropriate reset logic.  Similarly, the bulk of the simulation activity will likely occur during the wait_for_end phase.  The VMM now has support for implicit phasing. In an implicitly-phased system, components in the verification environment are stepped through each phase automatically by a global controller called vmm_simulation (and its associated “timelines”).  I will discuss timelines in a separate post.  In both the explicit and implicitly phased cases, the phases serve as guides through the simulation. However, most of the real work of the testbench will be accomplished by threads spawned off from these phases.

It is easy to spawn threads using a simple fork/join, but the VMM provides tools to make managing threads easier. In the VMM, the vmm_xactor base class provides support for managing threads and is at the same time phase-aware.  How does it do this?  For starters, vmm_xactor is now implicitly phased by the top level vmm_simulation controller.  However, users maintain full control over the ability to start and stop the transactor, just as they did in earlier versions of the VMM.  That means that a user could start a transactor during any VMM phase, and stop the transactor during the same or a later phase. The transactor could then query the current phase and change its behavior depending on the state of the simulation. Users can also modify their behavior via the use of callbacks.
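The callback mechanism mentioned above usually follows the standard VMM pattern: the transactor defines a callback facade class and invokes it through the `vmm_callback macro. The sketch below is illustrative; the class names host_gen/host_gen_callbacks and the pre_send hook are hypothetical, not part of any shipped library:

```systemverilog
typedef class host_gen;

// User-extensible callback facade (names are illustrative)
virtual class host_gen_callbacks extends vmm_xactor_callbacks;
   // Invoked by the transactor before each transaction is driven
   virtual task pre_send(host_gen gen, vmm_data tr);
   endtask
endclass

class host_gen extends vmm_xactor;
   `vmm_typename(host_gen)

   function new(vmm_unit parent = null, string inst = "");
      super.new(get_typename(), inst, 0, parent);
   endfunction

   virtual protected task main();
      vmm_data tr;
      super.main();
      forever begin
         tr = new();
         // Let registered callbacks inspect or modify the transaction
         `vmm_callback(host_gen_callbacks, pre_send(this, tr));
         // ... drive tr onto the bus here ...
      end
   endtask: main
endclass: host_gen
```

A user then extends host_gen_callbacks, overrides pre_send, and registers the extension with append_callback, without touching the transactor source.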

Here is a diagram demonstrating the interaction between threads and an example subset of the new VMM implicit phases.

[Figure: threads starting and stopping across a subset of the VMM implicit phases]

The diagram demonstrates activities that take place during specific phases of the testbench. It also shows that threads may start in one phase (such as the host generator starting in the reset phase) and stop in another (in this case, the shutdown phase).  The astute reader will note that I didn’t really need standardized phases at all to handle this. I could have done all of the activities described above in the “run” phase. In fact, that’s what many people do, even today where other phases are available.  The issue, as I stated at the beginning of the article, is that by standardizing when we do specific types of activities, our verification IP will be easier to reuse in other compatible environments.

Posted in Communication, Modeling, Phasing | 1 Comment »
