Verification Martial Arts: A Verification Methodology Blog

Archive for the 'Interoperability' Category

VCS Built-in TLI connectivity for UVM to SystemC TLM 2.0

Posted by vikasg on 20th September 2012

Vikas Grover | Sr. Manager, Central Verification | AMD-India

One of the challenges faced in SoC verification is validating designs with mixed languages and mixed abstraction levels. SystemC is a widely used language for defining system models at higher levels of abstraction. SystemC is an IEEE standard language for system-level modeling, and it is rich with constructs for describing models at various levels of abstraction, i.e. untimed, timed, transaction level, cycle accurate, and RTL. A transaction-level model simulates much faster than an RTL model; in addition, OSCI defined the TLM 2.0 interface standard for SystemC, which enables SystemC model interoperability and reuse at the transaction level.

On the other side, SystemVerilog is a unified language for design and verification. It is effective for designing advanced testbenches for both RTL and transaction-level models, since it has features like constrained randomization for stimulus generation, functional coverage, assertions, and object-oriented constructs (classes, inheritance, etc.). The early availability of standard methodologies (providing frameworks and testbench coding guidelines for reuse) like VMM, OVM and UVM enabled wide adoption of SystemVerilog in the industry. The UVM 1.0 base class library, released in February 2011, includes the OSCI TLM 2.0 socket interface to enable interoperability of UVM with SystemC. Essentially, it allows a UVM testbench to include SystemC TLM 2.0 reference models. The UVM testbench can pass transactions to (or receive them from) SystemC models. The transaction passed across the SystemVerilog <-> SystemC boundary can be a TLM 2.0 generic payload or a uvm_sequence_item. The implementation of UVM to SC TLM 2.0 communication is vendor dependent.

Starting with the 2011.03 release, VCS provides a new TLI adaptor which enables UVM TLM 2.0 sockets to communicate with a SystemC TLM 2.0 based environment to pass transactions across language domains. You can also check out a couple of earlier posts from John Aynsley (VMM-to-SystemC Communication Using the TLI, and Blocking and Non-blocking Communication Using the TLI) on SV-SystemC communication using the TLI. In this blog, I am going to describe the VCS TLI connectivity mechanism between UVM and SystemC. There are other advanced TLI features in VCS (like direct access of data, invoking tasks/functions across the SV and SC languages, message unification across UVM-SC, transaction debug techniques, and extending the TLI adaptor for user-defined interfaces other than VMM/UVM/TLM 2.0) which can be covered later.

With the support for TLM 2.0 interfaces in both UVM and VMM, the importance of OSCI TLM 2.0 across both SystemC and SystemVerilog is now apparent. UVM provides the following TLM 2.0 socket interfaces (for both blocking and non-blocking communication):

  • uvm_tlm_b_initiator_socket
  • uvm_tlm_b_target_socket
  • uvm_tlm_nb_initiator_socket
  • uvm_tlm_nb_target_socket
  • uvm_analysis_port
  • uvm_subscriber

SystemC TLM 2.0 provides the following interfaces:

  • tlm_initiator_socket
  • tlm_target_socket
  • tlm_analysis_port

The built-in TLI adaptor solution for VCS is a general purpose solution to simplify transaction passing between UVM and SystemC, as shown below. The transactions can be TLM 2.0 generic payload or uvm_sequence_item objects. UVM 1.0 includes the TLM 2.0 generic payload class as well.

The built-in TLI adaptor is available as a pre-compiled library with VCS. The user needs to follow two simple steps to include the TLI adaptor in their verification environment:

  1. Include a header file in the SystemVerilog and SystemC code. The SystemVerilog header file provides a package which implements the bind function parameterized on the uvm_sequence_item object.
  2. Invoke the bind function on the SystemVerilog and SystemC sides to connect each socket across the language boundary. The bind function has a string argument which must be unique for each socket connection across SystemVerilog and SystemC.

A sketch of the code for the above steps is shown below. The UVM initiator "initiator_udf" on the SystemVerilog side drives the SystemC target "target_udf" using a TLM blocking socket.
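Since the original snippet appears only as a screenshot in the archived post, here is a minimal sketch of the two steps. The header file name (tli_sv_bindings.svh) and bind function name (tli_tlm_bind) are illustrative placeholders; the actual names are defined by the VCS TLI adaptor distribution.

// SystemVerilog side
// `include "tli_sv_bindings.svh"   // step 1: hypothetical TLI adaptor header
import uvm_pkg::*;
`include "uvm_macros.svh"

class initiator_udf extends uvm_component;
  `uvm_component_utils(initiator_udf)
  // Blocking initiator socket carrying the TLM 2.0 generic payload
  uvm_tlm_b_initiator_socket #(uvm_tlm_generic_payload) sock;
  function new(string name, uvm_component parent);
    super.new(name, parent);
    sock = new("sock", this);
  endfunction
endclass

// Step 2: in the environment, bind the SV side of the socket to the adaptor
// using the unique string; the SystemC side of the adaptor makes a matching
// bind call on the target's socket with the same string.
//   SV side: tli_tlm_bind(env.initiator_udf_i.sock, "str_udf_pkt");  // hypothetical
//   SC side: tli_tlm_bind(target_udf_i->sock, "str_udf_pkt");        // hypothetical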

The TLI adaptor bind function uses the unique string "str_udf_pkt" to identify the socket connection across the SystemVerilog and SystemC domains. For multiple sockets, the user needs to invoke the TLI bind function once per socket. The TLI adaptor supports both blocking and non-blocking transport interfaces for sockets communicating across SystemVerilog and SystemC.

Thus, the built-in UVM-SC TLI adaptor capability of VCS ensures that SystemC can be connected seamlessly into a UVM-based verification environment.

Posted in Communication, Interoperability, SystemC/C/C++, Tools & 3rd Party interfaces, Transaction Level Modeling (TLM) | 1 Comment »

Build your own code generator!

Posted by Amit Sharma on 25th June 2012

One of the critical problems developers cope with during the SoC development process (architecture planning, RTL design, verification, software development) is the constant need to synchronize between different forms of the same data structure: the SoC registers database. The SoC registers database is used by the SoC architecture team (who write the SoC registers description document), design engineers (who implement the register structure in RTL code), verification engineers (who write the verification infrastructure, such as RAL code, and verification tests, such as exhaustive read/write tests of all registers), and software engineers (who use the register information to write embedded software).

Since the same information is used in different forms, it is only natural to have a single, central database which holds all SoC register data. You would ideally like to generate all the required output files (documentation, UVM or VMM Register Abstraction Model, RTL, C headers, …) from this central database. Different vendors and CAD teams provide different automation solutions for doing this.

The RALF specification contains all of the necessary information to generate RTL and testbench code that implements the specified registers. There are many ways to code and implement RTL, so it is not possible to provide a general purpose solution. As far as the testbench abstraction model is concerned, there are multiple ways of customizing your model post-generation in both UVM and VMM. Callbacks, factories, and configurable RAL model attributes are some of the ways through which the desired customization can be brought in. "The 'user' in RALF: get ralgen to generate 'your' code" highlights a very convenient way of bringing in SystemVerilog-compatible code which will be copied as-is into the RAL model in the desired scope. When it comes to generating the RTL and the 'C' headers, we cannot leave the customization to such a late stage. Also, different organizations and project groups have their own RTL and C-code coding styles, which means that a generated output of a very generic nature might not be very helpful. For RTL generation, engineers want the generated code to be power and gate-count efficient. Similarly, C register header generation often needs to follow coding styles and match the CPU firmware API. How do we bring all these customizations to the end user?

Using the RALF C++ API, you have full access to the parsed RALF data (through C++ routines), which you can use to implement a customized RTL code generator, or any other feature that needs RALF data. So you can use it to generate your C header files or HTML documentation, translate the input RALF files to another register description format, or produce custom covergroups and coverage convergence sequences (DAC 2012 User Track poster 6U.8: Register Verification on a Fast Lane: Using Automation to Converge on UVM REG Coverage Models).

I have seen two instances of the need to generate a different register specification in the recent past, and that is one of the reasons I decided to put this down in a blog. Let me talk about the first instance.

One of the project groups was in the process of migrating from their own SV base classes to UVM. They had their own register description format from which they used to generate their register abstraction model. This was a proven flow.

So, when they migrated to UVM, they wanted a flow which would validate the changes that they were making.

Given that they were moving to RALF and 'ralgen', they didn't want to create register specifications in the legacy format anymore. So they wanted some automation for generating the legacy format. How did they go about doing this? They took the RALF C++ APIs and used them to create the necessary automation to generate the legacy format from RALF in no time (from what I remember, it was half a day's work). Everyone was involved in doing what they were best at, and that helped in the overall scheme of things.

The other customer had their own format from which they were generating RTL, firmware code and HTML. They had the necessary automation to create RALF for generating the UVM register model. They also had a mechanism in place to generate IPXACT from this format, as well as vice versa. So, to complete the traceability matrix, they wanted a RALF to IPXACT conversion. Again, the most logical approach was to take the RALF C++ APIs and use them to iterate through the parsed RALF data and generate IPXACT. Though this effort is not complete, it took just a day or so to be able to generate a valid IPXACT 1.5 schema, and all that is required now is some additional work to smooth out the corners.

How do you start using these APIs and build your own code/HTML generators? You need to include 'ralf.hpp' (which is in $VCS_HOME/include) in your 'generator' code. Then, to compile the code, you need to pick up the shared library libralf.so from the VCS installation:

$CPP_COMPILER $CFLAGS -I${VCS_HOME}/include -L${VCS_HOME}/lib -lralf your-file.cpp $LDFLAGS

#include <cstdio>
#include <cstdlib>
#include "ralf.hpp"

int main(int argc, char *argv[])
{
  // Check basic command line usage...
  if (argc < 3) {
    fprintf(stderr, "Error: Wrong Usage.\n");
    // Show correct usage ...
    exit(1);
  }

  /**
   * Parse command line arguments to get the essential
   * constructor arguments. See the documentation of
   * class ralf::vmm_ralf's constructor parameters.
   * (The argument order below is illustrative.)
   */
  const char *ralf_file = argv[1];  // input RALF file
  const char *top       = argv[2];  // top-level block/system name
  const char *rtl_dir   = ".";      // output directory for generated RTL
  const char *inc_dir   = ".";      // output directory for generated headers

  {
    /**
     * Create a ralf::vmm_ralf object by passing in proper
     * constructor arguments.
     */
    ralf::vmm_ralf ralf_data(ralf_file, top, rtl_dir, inc_dir);

    /**
     * Get the top level object storing the parsed RALF
     * block/system data and traverse that, top-down, to get
     * access to complete RALF data.
     */
    const ralf::vmm_ralf_blk_or_sys_typ *top_lvl_blk_or_sys
        = ralf_data.getTopLevelBlockOrSys();

#ifndef GEN_RTL_IN_SNPS_STYLE
    /*
     * Traverse the parsed RALF data structure top-down
     * using/starting-from 'top_lvl_blk_or_sys' for getting
     * complete access to the RALF data and then do whatever
     * you want with it. One typical usage of parsed RALF
     * data is to generate RTL code in your own style.
     */
    //
    // TODO -- Add your RTL generator code here.
    //
#else
    /*
     * As part of this library, Synopsys also provides a
     * default RTL generator, which can be invoked through
     * the 'generateRTL()' method of the 'ralf::vmm_ralf'
     * class, as demonstrated below.
     */
    ralf_data.generateRTL();
#endif
  }

  return 0;
}

Essentially, you have a handle to the parsed database, and with the available APIs you can do whatever you want with it :) The documentation of the APIs is in the documentation shipped with the VCS installation. Also, if you are like me and would rather hack away at existing code than start from scratch, you can ask Synopsys support for existing templates that dump out code in specific formats, and start modifying those for your requirements.

Posted in Automation, Coverage, Metrics, Customization, Interoperability, Register Abstraction Model with RAL, Tools & 3rd Party interfaces, UVM, VMM infrastructure | Comments Off

Using the VMM Datastream Scoreboard in a UVM environment

Posted by Amit Sharma on 2nd February 2012

Implementing the response checking mechanism in a self-checking environment remains the most time-consuming task. The VMM Data Stream Scoreboard package facilitates the implementation of verifying the correct transformation, destination and ordering of ordered data streams. This package is intuitively applicable to packet-oriented designs, such as modems, routers and protocol interfaces. It can also be used to verify any design transforming and moving sequences of data items, such as DSP data paths and floating-point units. Out of the box, the VMM data stream scoreboard can be used to verify single-stream designs that do not modify the data flowing through them. For example, it can be used to verify FIFOs, Ethernet media access controllers (MACs) and bridges.

The VMM data scoreboard can also be used to verify multi-stream designs with user-defined data transformation and input-to-output stream routing. The transformation from input data items into expected data items is not limited to one-to-one transformation. An input data item may be transformed into multiple expected data items (e.g. segmenters) or none (e.g. reassemblers). Compared to this, the functionality available through the UVM in-order comparator or the algorithmic comparator is significantly less. Thus, users might want access to the functionality provided by the VMM DS Scoreboard in a UVM environment. Using the UBUS example available in $VCS_HOME/doc/examples/uvm/integrated/ubus as a demo vehicle, this article shows how simple adapters are used to integrate the VMM DS scoreboard in a UVM environment and thus get access to more advanced scoreboarding functionality within the UVM environment.

The UBUS example uses an example scoreboard to verify that the slave agent is operating as a simple memory. It extends from the uvm_scoreboard class and implements a memory_verify() function to make the appropriate calls and comparisons needed to verify a memory operation. A uvm_analysis_export is explicitly created and an implementation for 'write' defined. In the top level environment, the analysis export is connected to the analysis port of the slave monitor.

ubus0.slaves[0].monitor.item_collected_port.connect(scoreboard0.item_collected_export);

The simple scoreboard, with its explicit implementation of the comparison routines, suffices for verifying the basic operations, but would need to be enhanced significantly to provide the more detailed information a user might need. For example, let's take the 'test_2m_4s' test. Here, the environment is configured to have 2 masters and 4 slaves. Depending on how the slave memory map is configured, different slaves respond to different transfers on the bus. Now, if we want to get some information on how many transfers went into the scoreboard for a specific combination (e.g. Master 1 to Slave 3), how many were verified to be processed correctly, and so on, it is fair to conclude that the existing scoreboarding scheme will not suffice.

Hence, it was felt that the Data Stream Scoreboard, with its advanced functionality and support for data transformation, data reordering, data loss, and multi-stream data routing, should be available for verification environments not necessarily based on VMM. From VCS 2011.12-1, this integration has been made very simple. The VMM DS scoreboard implements a generic data stream scoreboard that accepts parameters for the input and output packet types. A single instance of this class is used to check the proper transformation, multiplexing and ordering of multiple data streams. The scoreboard class now leverages a policy-based design and parameterized specializations to accept any 'Packet' class, be it VMM, UVM or OVM.

The central element in policy-based design is a class template (called the host class, which in this case is the VMM DS Scoreboard) taking several type parameters as input, which are specialized with types selected by the user (called policy classes), each implementing a particular implicit method (called a policy) and encapsulating some orthogonal (or mostly orthogonal) aspect of the behavior of the instantiated host class. In this case, the 'policies' implemented by the policy classes are the 'compare' and 'display' routines.

By supplying a host class combined with a set of different, canned implementations for each policy, the VMM DS scoreboard can support all different behavior combinations, resolved at compile time and selected by mixing and matching the different supplied policy classes in the instantiation of the host class template. Additionally, by writing a custom implementation of a given policy, a policy-based library can be used in situations requiring behaviors unforeseen by the library implementor.

So, let's go through a set of simple steps to see how you can use the VMM DS scoreboard in the UVM environment.

Step 1: Create the policy class for UVM and define its 'policies'

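The screenshot from the original post is not reproduced here, so below is a minimal sketch of what such a policy class could look like. It assumes the scoreboard expects static 'compare' and 'display' policies operating on uvm_object handles; the exact method names and signatures expected by the VMM DS scoreboard package may differ.

class uvm_object_policy;
  // 'compare' policy: delegate to UVM's built-in compare
  static function bit compare(uvm_object actual, uvm_object expected);
    return actual.compare(expected);
  endfunction

  // 'display' policy: render the object using UVM's built-in printing
  static function string display(uvm_object obj, string prefix = "");
    return {prefix, obj.sprint()};
  endfunction
endclass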

Step 2: Replace the UVM scoreboard with a VMM one extended from "vmm_sb_ds_typed" and specialize it with the ubus_transfer type and the previously created uvm_object_policy.

class ubus_example_scoreboard extends vmm_sb_ds_typed #(ubus_transfer,ubus_transfer, uvm_object_policy);

`vmm_typename(ubus_example_scoreboard)

endclass: ubus_example_scoreboard

Once this is done, you can either declare a VMM TLM analysis export to connect to the bus monitor in the UBUS environment, or use the pre-defined one in the VMM DS scoreboard:

vmm_tlm_analysis_export #(ubus_example_scoreboard,ubus_transfer) analysis_exp;

Given that for any configuration one master and one slave would be active, define the appropriate streams in the constructor (though this is not required if there are only single streams, we are defining them explicitly so that this can scale up to multiple input and expect streams for different tests):

this.define_stream(0, "Slave 0", EXPECT);
this.define_stream(0, "Master 0", INPUT);

Step 2 .a: Create the ‘write’ implementation for the Analysis export

Since we are verifying the operation of the slave as a simple memory, we just add the appropriate logic to insert a packet into the scoreboard when we do a 'WRITE' and an expect/check when the transfer is a 'READ' from an address that has already been written to.

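The original screenshot is not reproduced here; the following is a simplified sketch of the idea, assuming the DS scoreboard's insert()/expect_in_order() calls and ignoring details such as address bookkeeping (the 'read_write' field and WRITE/READ values follow the UBUS transfer class).

virtual function void write(int id = -1, ubus_transfer trans);
  if (trans.read_write == WRITE)
    // A write defines the data a later read from this address should return
    this.insert(trans);
  else if (trans.read_write == READ)
    // A read is checked against the previously inserted write data
    this.expect_in_order(trans);
endfunction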

Step 2.b: Implement the stream_id() method

You can use this method to determine which stream a specific 'transfer' belongs to, based on the packet's content, such as a source or destination address. In this case, the bus monitor updates the 'slave' property of the collected transfer according to where the address falls in the slave memory map.

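Again, a hedged sketch in place of the screenshots: the exact signature of stream_id() is defined by the VMM DS scoreboard (here we assume the typed scoreboard passes the specialized packet type), but the routing idea is to return an id derived from the 'slave' property filled in by the bus monitor.

virtual function int stream_id(ubus_transfer tr);
  return tr.slave;  // 'slave' is set by the bus monitor from the memory map
endfunction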

Step 3: Create the UVM Analysis to VMM Analysis Adapter

The uvm_analysis_to_vmm_analysis adapter is used to connect any UVM component with an analysis port to any VMM component via an analysis export. The adapter converts all incoming UVM transactions to VMM transactions and drives the converted transaction to the VMM component through the analysis port-export pair. If you are using the VMM UVM interoperability library, you do not have to create the adapter, as it is available in the library.


Create the ‘write’ implementation for the analysis export in the adapter

The write method, called via the analysis export, simply posts the received UBUS transfer from the UVM analysis port to the VMM analysis port.

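Since the adapter code appears only as screenshots in the archived post, here is a minimal sketch, assuming the usual VMM TLM port constructor conventions; the interoperability library ships an equivalent, more general component.

class uvm_analysis_to_vmm_analysis extends uvm_component;
  // UVM side: receives transfers from the bus monitor's analysis port
  uvm_analysis_imp #(ubus_transfer, uvm_analysis_to_vmm_analysis) analysis_export;
  // VMM side: forwards them to the DS scoreboard's analysis export
  vmm_tlm_analysis_port #(uvm_analysis_to_vmm_analysis, ubus_transfer) analysis_port;

  function new(string name, uvm_component parent);
    super.new(name, parent);
    analysis_export = new("analysis_export", this);
    analysis_port   = new(this, "analysis_port");
  endfunction

  // 'write' implementation: post the received UBUS transfer from the UVM
  // analysis port on to the VMM analysis port
  function void write(ubus_transfer t);
    analysis_port.write(t);
  endfunction
endclass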

Step 4: Make the TLM connections

In the original example, the item_collected_port of the slave monitor was connected to the analysis export of the example scoreboard. Here, the DataStream scoreboard has an analysis export which expects a VMM transaction. Hence, we need the adapter created above to mediate between the analysis port of the UVM bus monitor and the analysis export of the VMM DS scoreboard.

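A sketch of the resulting connect-phase hookup, with 'adapter' and 'scoreboard0' being the instances created in the environment and assuming the VMM TLM tlm_bind() call for the port-export binding:

// UVM monitor -> adapter (UVM analysis connection)
ubus0.slaves[0].monitor.item_collected_port.connect(adapter.analysis_export);
// adapter -> VMM DS scoreboard (VMM TLM analysis binding)
adapter.analysis_port.tlm_bind(scoreboard0.analysis_exp);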

Step 5: Define Additional streams if required for multi-master multi-slave configurations

This step is not required for a single master/slave configuration. However, you would need to create additional streams so that you can verify correctness across all the different permutations in tests like "test_2m_4s".

In this case, the following is added in test_2m_4s in the connect_phase():

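The added code is not reproduced in this archive; here is a sketch of the idea, one INPUT stream per master and one EXPECT stream per slave, each with its own stream id:

for (int i = 0; i < 2; i++)
  scoreboard0.define_stream(i, $sformatf("Master %0d", i), INPUT);
for (int j = 0; j < 4; j++)
  scoreboard0.define_stream(j, $sformatf("Slave %0d", j), EXPECT);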

Step 6: Add appropriate options to your compile command and analyze your results

Change the Makefile by adding -ntb_opts uvm-1.1+rvm to the vcs command line, along with +define+UVM_ON_TOP:

vcs -sverilog -timescale=1ns/1ns -ntb_opts uvm-1.1+rvm +incdir+../sv ubus_tb_top.sv -l comp.log +define+UVM_ON_TOP

And that is all; you are ready to validate your DUT with a more advanced scoreboard with loads of built-in functionality, as you will see when you execute the "test_2m_4s" test.

Thus, not only do you have stream-specific information now, but you also have access to much more functionality, as mentioned earlier. For example, you can model transformations, check for out-of-order matches, allow for dropped packets, and iterate over different streams to get access to the specific transfers. Again, depending on your requirements, you can use the simple UVM comparator for your basic checks and switch over to the DS scoreboard for the more complex scenarios with the flip of a switch in the same setup. This is what we did for a UVM PCIe VIP we developed earlier (From the Magician's Hat: Developing a Multi-methodology PCIe Gen2 VIP) so that users have access to all the information they require. Hopefully, this will keep you going until a more powerful UVM scoreboard arrives in some subsequent UVM version.

Posted in Communication, Interoperability, Reuse, Scoreboarding, UVM, VMM infrastructure | 2 Comments »

Using the VMM Performance Analyzer in a UVM Environment

Posted by Amit Sharma on 23rd August 2011

As a generic VMM package, the Performance Analyzer (PAN) is not based on, nor requires, specific shared resources, transactions or hardware structures. It can be used to collect statistical coverage metrics relating to the utilization of a specific shared resource. This package helps to measure and analyze many different performance aspects of a design. UVM doesn't have a performance analyzer as part of the base class library as of now. Given that collecting, tracking and analyzing performance metrics of a design has become a key checkpoint in today's verification, there is a lot of value in integrating the VMM Performance Analyzer into a UVM testbench. To demonstrate this, we will use both VMM and UVM base classes in the same simulation.

Performance is analyzed based on user-defined atomic resource utilization called ‘tenures’. A tenure refers to any activity on a shared resource with a well-defined starting and ending point. A tenure is uniquely identified by an automatically-assigned identifier. We take the XBUS example in  $VCS_HOME/doc/examples/uvm_1.0/simple/xbus as a demo vehicle for the UVM environment.

Step 1: Defining data collection

Data is collected for each resource in a separate instance of the “vmm_perf_analyzer” class. These instances should be allocated in the build phase of the top level environment.

For example, in xbus_demo_tb.sv:

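The screenshot is not reproduced here; a minimal sketch of the allocation follows. The exact vmm_perf_analyzer constructor arguments are defined by the PAN package, so treat the call below as illustrative.

class xbus_demo_tb extends uvm_env;
  vmm_perf_analyzer xbus_perf;  // one analyzer per shared resource
  ...
  virtual function void build_phase(uvm_phase phase);
    super.build_phase(phase);
    xbus_perf = new("xbus_perf");  // illustrative constructor arguments
  endfunction
endclass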

Step 2: Defining the tenure and enabling data collection

There must be one instance of the "vmm_perf_tenure" class for each operation that is performed on the shared resource. Tenures are associated with the instance of the "vmm_perf_analyzer" class that corresponds to the resource being operated on. In the case of the XBUS example, let's say we want to measure transaction throughput performance (i.e. for the XBUS transfers). This is how we will associate a tenure with the XBUS transaction. To denote the start and end of the tenure, we define two additional events in the XBUS master driver (started, ended). 'started' is triggered when the driver obtains a transaction from the sequencer, and 'ended' once the transaction is driven on the bus and the driver is about to call seq_item_port.item_done(rsp). When 'started' is triggered, a callback is invoked to get the PAN to start collecting statistics. Here is the relevant code.

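The relevant driver code is shown only as an image in the archive; below is a hedged sketch. The task name get_and_drive, the drive_transfer helper, the callback class perf_cb and its method names are placeholders around the documented idea: trigger 'started' when an item is obtained, invoke the callback, and trigger 'ended' just before item_done().

class xbus_master_driver extends uvm_driver #(xbus_transfer);
  event started, ended;
  ...
  virtual task get_and_drive();
    forever begin
      seq_item_port.get_next_item(req);
      -> started;                         // tenure begins
      `uvm_do_callbacks(xbus_master_driver, perf_cb,
                        driver_started(this, req))
      drive_transfer(req);                // drive the transfer on the bus
      -> ended;                           // tenure ends
      seq_item_port.item_done();
    end
  endtask
endclass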

Now, the Performance Analyzer works on classes extended from vmm_data and uses the base class functionality for starting/stopping these tenures. Hence, the callback task which gets triggered at the appropriate points has to convert the UVM transactions to corresponding VMM ones. This is how it is done.

Step 2.a: Creating the VMM counterpart of the XBUS Transfer Class

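A sketch of the VMM counterpart class, following the vmm_data shorthand macros shown elsewhere on this blog; the fields mirror a subset of the UVM xbus_transfer and are illustrative.

class xbus_transfer_vmm extends vmm_data;
  bit [15:0]   addr;
  int unsigned size;

  `vmm_data_new(xbus_transfer_vmm)
  `vmm_data_member_begin(xbus_transfer_vmm)
  `vmm_data_member_scalar(addr, DO_ALL)
  `vmm_data_member_scalar(size, DO_ALL)
  `vmm_data_member_end(xbus_transfer_vmm)
endclass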

Step 2.b: Using the UVM Callback for starting/stopping data collection and calling the UVM -> VMM conversion routines appropriately.

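A hedged sketch of the callback class: it converts the UVM transfer and brackets the tenure. The PAN method names used below (start_tenure, end_tenure) and the tenure constructor are assumptions; the actual calls are defined by the vmm_perf_analyzer and vmm_perf_tenure APIs.

class perf_cb extends uvm_callback;
  vmm_perf_analyzer pan;
  vmm_perf_tenure   tenure;

  function new(vmm_perf_analyzer pan);
    super.new("perf_cb");
    this.pan = pan;
  endfunction

  // Invoked when the driver triggers 'started'
  virtual function void driver_started(xbus_master_driver drv, xbus_transfer tr);
    xbus_transfer_vmm vt = new;
    vt.addr = tr.addr;            // UVM -> VMM conversion
    vt.size = tr.size;
    tenure = new;                 // illustrative; associate vt with the tenure
    pan.start_tenure(tenure);     // assumed PAN call
  endfunction

  // Invoked when the driver triggers 'ended'
  virtual function void driver_ended(xbus_master_driver drv, xbus_transfer tr);
    pan.end_tenure(tenure);       // assumed PAN call
  endfunction
endclass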

The callback class needs to be associated with the driver as follows in the top testbench (xbus_demo_tb):

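A sketch of the association, using the standard uvm_callbacks registration ('xbus0.masters[0].driver' is the driver instance in this environment):

perf_cb cb = new(xbus_perf);
uvm_callbacks #(xbus_master_driver, perf_cb)::add(xbus0.masters[0].driver, cb);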

Step 3: Generating the reports

In the report_ph of xbus_demo_tb, save and write out the appropriate databases:

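The screenshot is not reproduced; a sketch of the report hook follows, with hypothetical PAN method names (save_db, report) standing in for the actual database/report calls.

virtual function void report_phase(uvm_phase phase);
  xbus_perf.save_db();   // hypothetical: write out the performance database
  xbus_perf.report();    // hypothetical: print the text report
endfunction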

Step 4: Run the simulation and analyze the reports for possible inefficiencies, etc.

Use -ntb_opts uvm-1.0+rvm +define+UVM_ON_TOP with VCS

Include vmm_perf.sv along with the new files in the included file list.  The following table shows the text report at the end of the simulation.

[Table: the text report produced by the Performance Analyzer at the end of the simulation.]

You can generate the SQL databases as well, and typically you would be doing this across multiple simulations. Once you have done that, you can create custom queries to get the desired information out of the SQL database across your regression runs. You can also analyze the results and generate the required graphs in Excel. Please see the following post: Analyzing results of the Performance Analyzer with Excel.

So there you go: the VMM Performance Analyzer can fit into any verification environment you have. So make sure that you leverage this package to make the RTL-level performance measurements that are needed to validate micro-architectural and architectural assumptions, as well as to tune the RTL for optimal performance.

Posted in Coverage, Metrics, Interoperability, Optimization/Performance, Performance Analyzer, VMM infrastructure, Verification Planning & Management | 6 Comments »

Verification in the trenches: Transform your sc_module into a vmm_xactor

Posted by Ambar Sarkar on 19th January 2011

Dr. Ambar Sarkar, Chief Verification Technologist, Paradigm Works Inc.

Say you have SystemC VIP lying around, tried and true. More likely than not, they are BFMs that connect at the signal level to the DUT and have a procedural API supporting transaction-level abstraction.

What would be the best way to hook these components up with a VMM environment? With VMM now available in SystemC as well, you really want to make these models look and behave like vmm_xactor-derived objects that interact seamlessly across the SystemC/SystemVerilog language boundary. Your VMM environment can thus take full advantage of your existing SystemC components. And your sc_module can still be used, just as before, in other non-VMM environments!

Enough motivation. Can this be done? Since SystemC is really C++, and it supports multiple inheritance, is there a way to just create a class that inherits from both your SystemC component and vmm_xactor?

Here is an example.

Originally, suppose you had a consumer BFM defined (keeping the example simple for illustration purposes):

SC_MODULE(consumer) {
  sc_out<sc_logic>   reset;
  sc_out<sc_lv<32> > sample;

  sc_in_clk    clk;

  SC_CTOR(consumer): clk("clk"), reset("reset"), sample("sample") {
  }

  . . .
};

Solution Attempt 1) The first thing to try would be to simply create a new class called consumer_vmm as follows and define the required vmm_xactor methods.

class consumer_vmm : public consumer, public vmm_xactor
{
  consumer_vmm(vmm_object* parent, sc_module_name _nm)
         : vmm_xactor(_nm, "consumer", 0, parent)
           , reset("reset")
           , sample("sample")
           , clk("clk")
  {
      SC_METHOD(entry);
      sensitive << clk.pos();
      . . .
  }
  . . . define the remaining vmm_xactor methods as needed . . .
};
Unfortunately, this does not work. Why? As it turns out, vmm_xactor also inherits from sc_module, so consumer_vmm would end up inheriting the same sc_module base through two separate classes, consumer and vmm_xactor. This is known as the Diamond Problem. For some fun reading, check out http://en.wikipedia.org/wiki/Diamond_problem.
 
Okay, so what can be done? Well, luckily, we can get all of this to work reasonably well with some additional tweaks/steps. Yes, you will need to modify the original source code very slightly, but in a backward compatible way.
 

Solution Attempt 2) Make the original consumer class derive from vmm_xactor instead of sc_module. This is the only change to existing code, and it is backward compatible since vmm_xactor inherits from sc_module as well. Of course, add any further vmm_xactor-derived methods using the old API as needed.

class consumer: public vmm_xactor
{
 public:
  sc_out<sc_logic>   reset;
  sc_out<sc_lv<32> > sample;
  sc_in_clk    clk;
  . . .
};
 
 
Solution) Here are all the steps. It looks like quite a few, but other than creating the wrappers and hooking them up, the steps remain the same regardless of whether you use the sc_module or the vmm_xactor.

Step 1. As in Attempt 2 above, make the original consumer class derive from vmm_xactor instead of sc_module, adding any further vmm_xactor-derived methods as needed.

Step 2. Define SC_MODULE(consumer_wrapper), a class that has the same set of pins as the consumer.

SC_MODULE(consumer_wrapper) {
  sc_out<sc_logic>   reset;
  sc_out<sc_lv<32> > sample;
  sc_in_clk    clk;

  SC_CTOR(consumer_wrapper): clk("clk"), reset("reset"), sample("sample") {
  }

};
 
Step 3. Declare pointers (not instances) to these wrappers in the env class.

class env: public vmm_group
{
public:
   consumer *consumer_inst0;
   consumer *consumer_inst1;
   consumer_wrapper *wrapper0, *wrapper1;
 . . .
};

Step 4. In the connect_ph phase, connect the pins of the consumer instances to the corresponding wrapper instances.

void env::connect_ph() {
    consumer_inst0->reset(wrapper0->reset);
    consumer_inst0->clk(wrapper0->clk);
    consumer_inst0->sample(wrapper0->sample);

    consumer_inst1->reset(wrapper1->reset);
    consumer_inst1->clk(wrapper1->clk);
    consumer_inst1->sample(wrapper1->sample);
}

Step 5. In the constructor for sc_top, after the vmm env instance is created, make sure the pointers in the env point to these wrappers.

class sc_top : public sc_module
{
public:

  vmm_timeline*  t1;
  env*           e1;

  sc_out<sc_logic>   reset0;
  sc_out<sc_lv<32> > sample0;
  sc_in_clk    clk;

  sc_out<sc_logic>   reset1;
  sc_out<sc_lv<32> > sample1;

  consumer_wrapper wrapper0;
  consumer_wrapper wrapper1;

  SC_CTOR(sc_top):
    wrapper0("wrapper0")
    ,wrapper1("wrapper1")
    ,reset0("reset0")
    ,sample0("sample0")
    ,reset1("reset1")
    ,sample1("sample1")
    ,clk("clk")
  {
     t1 = new vmm_timeline("timeline","t1");
     e1 = new env("env","e1",t1);

     e1->wrapper0 = &wrapper0;
     e1->wrapper1 = &wrapper1;

     vmm_simulation::run_tests();

     wrapper0.clk(clk);
     wrapper0.reset(reset0);
     wrapper0.sample(sample0);

     wrapper1.clk(clk);
     wrapper1.reset(reset1);
     wrapper1.sample(sample1);
  }

};
 
 
So while it takes a few more steps than we had hoped, you do it only once, and mechanically. A small price to pay for reuse. Maybe someone can create a simple script.

Also, contact me if you want the complete example. The example also shows how you can add TLM ports as well.
 

This article is the 10th in the Verification in the trenches series. Hope you found this article useful. If you would like to hear about any other related topic, please comment or drop me a line at ambar.sarkar@paradigm-works.com. Also, if you are starting out fresh, please check out the free VMM1.2 environment generator.

 

Posted in Interoperability, SystemC/C/C++, VMM | 1 Comment »

Blocking/Non-blocking Transport Adaption in VMM 1.2

Posted by John Aynsley on 1st September 2010

John Aynsley, CTO, Doulos

One neat feature of the SystemC TLM-2.0 standard is the ability, provided by the so-called simple target socket, to perform automatic adaption between blocking and non-blocking transport calls; an initiator that calls b_transport can be connected to a target that implements nb_transport, and vice-versa. VMM 1.2 provides similar functionality using the vmm_connect utility.

vmm_connect serves four distinct purposes in VMM 1.2.

•    Connecting one channel port to another channel port, taking account of whether each channel port is null or actually refers to an existing channel object
•    Connecting a notify observer to a notification
•    Binding channels to transaction-level ports and exports
•    Binding a transaction-level port to a transaction-level export where one uses the blocking transport interface and the other the non-blocking transport interface. This is the case we are considering here.

When making transaction-level connections I think of vmm_connect as covering the “funny cases”.

To illustrate, suppose we have a transactor that acts as an initiator and calls b_transport. The following code fragment also serves as a reminder of how to use the blocking transport interface in VMM:

class initiator extends vmm_xactor;
`vmm_typename(initiator)

vmm_tlm_b_transport_port  #(initiator, vmm_tlm_generic_payload) m_b_port;

vmm_tlm_generic_payload randomized_tx;
int delay;

virtual task run_ph;
forever begin: loop
vmm_tlm_generic_payload tx;

// Create valid generic payload
assert( randomized_tx.randomize() with {…} )
else `vmm_error(log, "tx.randomize() failed");

$cast(tx, randomized_tx.copy());

// Send copy through port
m_b_port.b_transport(tx, delay);

// Check response status
assert( tx.m_response_status == vmm_tlm_generic_payload::TLM_OK_RESPONSE );
end
endtask: run_ph


Further, suppose we have another transactor that acts as a target for nb_transport calls, and which therefore implements nb_transport_fw to receive the request and subsequently calls nb_transport_bw to send the response. The code fragment below is just an outline, but it does show the main idea that non-blocking transport allows timing points to be modeled by having multiple method calls in both directions (as opposed to a single method call in one direction for b_transport):

class target extends vmm_xactor;
`vmm_typename(target)

vmm_tlm_nb_transport_export #(target, vmm_tlm_generic_payload) m_nb_export;

event ev;
vmm_tlm_generic_payload tx;
vmm_tlm::sync_e status;
int delay;

// Implementation of nb_transport method
virtual function vmm_tlm::sync_e nb_transport_fw(
int id=-1, vmm_tlm_generic_payload trans,
ref vmm_tlm::phase_e ph, ref int delay);
tx = trans;   // remember the request for the response process
-> ev;
return vmm_tlm::TLM_ACCEPTED;
endfunction : nb_transport_fw

// Process to send response by calling nb_transport on backward path
virtual task run_ph;
forever  begin: loop
vmm_tlm::phase_e phase = vmm_tlm::BEGIN_RESP;
@(ev);
tx.m_response_status = vmm_tlm_generic_payload::TLM_OK_RESPONSE;
status = m_nb_export.nb_transport_bw(tx, phase, delay);
end
endtask: run_ph

Now for the main point. We can use vmm_connect to bind the two components together, despite the fact that one uses blocking calls and the other non-blocking calls:

virtual function void build_ph;
m_initiator = new( "m_initiator", this );
m_target    = new( "m_target",    this );
endfunction: build_ph

virtual function void connect_ph;

vmm_connect #(.D(vmm_tlm_generic_payload))::tlm_transport_interconnect(
m_initiator.m_b_port, m_target.m_nb_export, vmm_tlm::TLM_NONBLOCKING_EXPORT);

endfunction: connect_ph

That’s all there is to it! Notice that tlm_transport_interconnect is a static method of vmm_connect so its name is prefixed by the scope resolution operator and an instantiation of the vmm_connect parameterized class with the appropriate transaction type, namely the generic payload. The first argument is the port, the second argument the export, and the third argument indicates that it is the export that is non-blocking.
It is also possible to connect a non-blocking initiator to a blocking target: you would simply replace TLM_NONBLOCKING_EXPORT with TLM_NONBLOCKING_PORT. The objective of this post has been to show that VMM has all the bases covered when it comes to connecting together transaction-level ports.

Posted in Interoperability, Reuse, SystemC/C/C++, Transaction Level Modeling (TLM) | Comments Off

Non-blocking Transport Communication in VMM 1.2

Posted by John Aynsley on 24th June 2010

John Aynsley, CTO, Doulos

When discussing the TLM-2.0 transport interfaces my posts on this blog have referred to the blocking transport interface alone. Now it is time to take a brief look at the non-blocking transport interface of the TLM-2.0 standard, which offers the possibility of much greater timing accuracy.

The blocking transport interface is restricted to exactly two so-called timing points per transaction, marked by the call to and the return from the b_transport method, and by convention corresponding to the start of the transaction and the arrival of the response. The non-blocking transport interface, on the other hand, allows a transaction to be modeled with any number of timing points so it becomes possible to distinguish between the start and end of an arbitration phase, address phase, write data phase, read data phase, response phase, and so forth.

[Diagram: b_transport completes a transaction in a single call from initiator to target; nb_transport_fw (forward path) and nb_transport_bw (backward path) carry multiple phase calls in both directions.]

As shown in the diagram above, b_transport is only called in one direction from initiator to target, and the entire transaction is completed in a single method call. nb_transport, on the other hand, comes in two flavors: nb_transport_fw, called by the initiator on the so-called forward path, and nb_transport_bw, called by the target on the backward path. Whereas b_transport is blocking, meaning that it may execute a SystemVerilog event control, nb_transport is non-blocking, meaning that it must return control immediately to the caller. A single transaction may be associated with multiple calls to nb_transport in both directions, the actual number of calls (or phases) being determined by the protocol being modeled.

With just one call per transaction, b_transport is the simplest to use. nb_transport allows more timing accuracy, with multiple method calls in both directions per transaction, but is considerably more complicated to use. b_transport is fast, simple, but inaccurate. nb_transport is more accurate, supporting multiple pipelined transactions, but slower and more complicated to use.

In VMM the role of the TLM-2.0 non-blocking transport interface is usually played by vmm_channel, which allows multiple timing points per transaction to be implemented using the notifications embedded within the channel and the vmm_data transaction object. The VMM user guide still recommends vmm_channel for this purpose. nb_transport is provided in VMM for interoperability with SystemC models that use the TLM-2.0 standard.

Let’s take a quick look at a call to nb_transport, just so we can get a feel for some of the complexities of using the non-blocking transport interface:


class initiator extends vmm_xactor;
`vmm_typename(initiator)
vmm_tlm_nb_transport_port #(initiator, vmm_tlm_generic_payload) m_nb_port;

begin: loop
vmm_tlm_generic_payload tx;
int                     delay;
vmm_tlm::phase_e        phase;
vmm_tlm::sync_e         status;

phase  = vmm_tlm::BEGIN_REQ;

status = m_nb_port.nb_transport_fw(tx, phase, delay);
if (status == vmm_tlm::TLM_UPDATED)
  ; // ... the arguments were updated on return from the call
else if (status == vmm_tlm::TLM_COMPLETED)
  ; // ... the transaction jumped to its final phase
end
From the above, you will notice some immediate differences with b_transport. The call to nb_transport_fw takes a phase argument to distinguish between the various phases of an individual transaction and returns a status flag which signals how the values of the arguments are to be interpreted following the return from the method call. A status value of TLM_ACCEPTED indicates that the transaction, phase, and delay were unchanged by the call, TLM_UPDATED indicates that the return from the method call corresponds to an additional timing point and so the values of the arguments will have changed, and TLM_COMPLETED indicates that the transaction has jumped to its final phase.

Using nb_transport is not recommended except when interfacing to a SystemC model, because the rules are considerably more complicated than those for either b_transport or vmm_channel.

Posted in Communication, Interoperability, Reuse, Transaction Level Modeling (TLM) | Comments Off

Generic Payload Extensions in VMM 1.2

Posted by John Aynsley on 22nd June 2010

John Aynsley, CTO, Doulos

In a previous post I described the TLM-2 generic payload as implemented in VMM 1.2. In this post I focus on the generic payload extension mechanism, which allows any number of user-defined attributes to be added to a generic payload transaction without any need to change its data type.

Like all things TLM-2, the motivation for the TLM-2.0 extension mechanism arose in the world of virtual platform modeling in SystemC. There were two requirements for generic payload extensions: firstly, to enable a transaction to carry secondary attributes (or meta-data) without having to introduce new transaction types, and secondly, to allow specific protocols to be modeled using the generic payload. In the first case, introducing new transaction types would have required the insertion of adapter components between sockets of different types, whereas extensions permit meta-data to be transported through components written to deal with the generic payload alone. In the second case, extensions enable specific protocols to be modeled on top of the generic payload, which makes it possible to create very fast, efficient bridges between different protocols.

Let us have a look at an example that adds a timestamp to a VMM generic payload transaction using the extension mechanism. The first task is to define a new class to represent the user-defined extension:

class my_extension extends vmm_tlm_extension #(my_extension);

int timestamp;

`vmm_data_new(my_extension)
`vmm_data_member_begin(my_extension)
`vmm_data_member_scalar(timestamp, DO_ALL)
`vmm_data_member_end(my_extension)

endclass


The user-defined extension class extends vmm_tlm_extension, which should be parameterized with the name of the user-defined extension class itself, as shown. The extension can contain any number of user-defined class properties; this example contains just one, the timestamp.

The initiator of the transaction will create a new extension object, set the value of the extension, and add the extension to the transaction before sending the transaction out through a port:


class initiator extends vmm_xactor;
`vmm_typename(initiator)

vmm_tlm_b_transport_port #(initiator, vmm_tlm_generic_payload) m_port;

vmm_tlm_generic_payload randomized_tx;
vmm_tlm_generic_payload tx;
int delay;

begin: loop
my_extension ext = new;

$cast(tx, randomized_tx.copy());
ext.timestamp = $time;
tx.set_extension(my_extension::ID, ext);
m_port.b_transport(tx, delay);


Note the use of the extension ID in the call to the method set_extension: each extension class has its own unique ID, which is used as an index into an array-of-extensions within the generic payload transaction.

Any component that receives the transaction can test for the presence of a given extension type and then retrieve the extension object, as shown here:


class target extends vmm_xactor;
`vmm_typename(target)

vmm_tlm_b_transport_export #(target, vmm_tlm_generic_payload) m_export;

task b_transport(int id = -1, vmm_tlm_generic_payload trans, ref int delay);

my_extension ext;

$cast(ext, trans.get_extension(my_extension::ID));

if (ext != null)
$display("Target received transaction with timestamp = %0d", ext.timestamp);


Note that once again the extension type is identified by using its ID in the call to method get_extension. If the given extension object does not exist, get_extension will return a null object handle. If the extension is present, the target can retrieve the value of timestamp and, in this example, print it out.

The neat thing about the extension mechanism is that a transaction can carry extensions of many types simultaneously, and those transactions can be passed to or through transactors that may not know of the existence of particular extensions.

[Diagram: the initiator attaches its extension to the transaction; the interconnect passes it through untouched and adds a second extension of its own, which the other transactors ignore.]

In the diagram above, the initiator sets an extension that is passed through an interconnect, where the interconnect knows nothing of that extension. The interconnect adds a second extension to the transaction that is only known to the interconnect itself and is ignored by the other transactors.

And the point of all this? The generic payload extension mechanism in VMM will permit transactions to be passed to SystemC virtual platform models, where the TLM-2.0 extension mechanism is heavily used.

Posted in Communication, Interoperability, Reuse, Transaction Level Modeling (TLM) | Comments Off

VMM 1.2.1 is now available

Posted by Janick Bergeron on 15th June 2010

We, in the VMM team, have been so busy working on improving VMM that we only recently noticed that it has been almost a full year since we released an Open Source distribution of VMM. With the release of VCS 2010.06, we took the opportunity to release an updated Open Source distribution that contains all of the new features and capability in VMM now available in the VCS distribution.

I am not going to repeat in detail what changed (you can refer to the RELEASE.txt file for that), but I will point out two of the most important highlights…

First, this version supports the VMM/UVM interoperability package (also available for download). This interoperability package will allow you to use UVM verification assets in your VMM verification environment (and vice-versa). Note that the VMM/UVM interoperability package is also included in the VCS distribution (along with the UVM-1.0EA release) in VCS 2010.06.

Second, many new features were added to the VMM Register Abstraction Layer (RAL). For example, RAL now supports automatic mirroring of registers, either by passively observing read/write transactions or by actively monitoring changes in the RTL code itself. Another important addition is the ability to perform sub-register accesses when fields are located in individual byte lanes.

The Open Source distribution is the exact same source code as the VMM distribution included in VCS. Therefore, you can trust its robustness acquired in the field through many hundreds of successful verification projects.

Posted in Announcements, Interoperability, Register Abstraction Model with RAL | 1 Comment »

Diagnosing Transaction-Level Connections in VMM 1.2

Posted by John Aynsley on 7th June 2010

John Aynsley, CTO, Doulos

In my previous post on this blog I discussed hierarchical transaction-level connections in VMM 1.2. In this post I show how to remove connections, and also discuss the various diagnostic methods that can help when debugging connection issues.

Actually, removing transaction-level connections is very straightforward. Let’s start with a consumer having an export that permits multiple bindings:

class consumer extends vmm_xactor;

vmm_tlm_b_transport_export #(consumer, vmm_tlm_generic_payload) m_export;

function new (string inst, vmm_object parent = null);
super.new(get_typename(), inst, -1, parent);
m_export = new(this, "m_export", 4);
endfunction: new

Passing the value 4 as the third argument to the constructor permits up to four different ports to be bound to this one export. These ports can be distinguished using their peer id, as described in a previous blog post.

function void start_of_sim_ph;
vmm_tlm_port_base #(vmm_tlm_generic_payload) q[$];
m_export.get_peers(q);
m_export.tlm_unbind(q[0]);
endfunction

TLM ports have a method get_peer (singular) that returns the one-and-only export bound to that particular port. TLM exports have a similar method get_peers (plural) that returns a SystemVerilog queue containing all the ports bound to that particular export. The method tlm_unbind can then be called to remove a particular binding, as shown above.

There are several other methods that can be helpful when diagnosing connection problems. For example, the method get_n_peers returns the number of ports bound to a given export:

$display("get_n_peers() = %0d", m_export.get_n_peers());

There are also methods for getting a peer id from a peer, and vice-versa, as shown in the following code which loops through the entire queue of peers returned from get_peers:

m_export.get_peers(q);
foreach(q[i])
begin: blk
int id;
id = m_export.get_peer_id(q[i]);
$write("id = %0d", id);
$display(", peer = %s", m_export.get_peer(id).get_object_hiername());
end

In addition to these low-level methods that allow you to interrogate the bindings of individual ports and exports, there are also methods to print and check the bindings for an entire transactor. The methods are print_bindings, check_bindings and report_unbound, which can be called as follows:

class tb_env extends vmm_group;

virtual function void start_of_sim_ph;
$display("\n-------- print_bindings --------");
vmm_tlm::print_bindings(this);
$display("-------- check_bindings --------");
vmm_tlm::check_bindings(this);
$display("-------- report_unbound --------");
vmm_tlm::report_unbound(this);
$display("--------------------------------");
endfunction: start_of_sim_ph

print_bindings prints information concerning the binding of every port and export below the given transactor. check_bindings checks that every port and export has been bound at least the specified minimum number of times. report_unbound generates warnings for any unbound ports or exports, regardless of the specified minimum.

In summary, VMM 1.2 makes it easy to check whether all ports and exports have been bound, to check whether a given port or export has been bound, to find the objects to which a given port or export has been bound, and even to remove bindings when necessary.

Posted in Communication, Interoperability, Reuse, Transaction Level Modeling (TLM) | Comments Off

Early Adopter release of UVM now available!

Posted by Janick Bergeron on 17th May 2010

As I am sure most of you already know, Accellera has been working on a new industry-standard verification methodology. That new standard, the Universal Verification Methodology, is still a work in progress: there are many features that need to be defined and implemented before its first official release, such as a register abstraction package and OSCI-style TLM2 interfaces.

However, an Early Adopter release is now available if you wish to start using the UVM today. The UVM distribution kit contains a User Guide, a Reference Manual and an open source reference implementation of the work completed to date. It is supported by VCS (you will need version 2009.12-3 or later or version 2010.06 or later).

Synopsys is fully committed to supporting UVM. As such, a version of the UVM library, augmented to support additional features provided by VCS and DVE, will be included in the VCS distribution, starting with the 2009.12-7 and 2010.06-1 patch releases. It will have the same simple use model as VMM: just specify the “-ntb_opts uvm” command line argument and voila: the UVM library will be automatically compiled, ready to be used.

But if you can’t wait for these patch releases, you can download the Early Adopter kit from the Accellera website and use it immediately.

What about VMM?

Synopsys continues to be fully committed to VMM! All of the powerful applications that continue to make you more productive will be available on top of the UVM base class and methodology.

A UVM/VMM interoperability kit is available from Synopsys (and will be included in the mentioned VCS patch releases) that will make it easy to integrate UVM verification IP into a VMM testbench (or vice-versa). Contact your VMM support AC/CAE to obtain the UVM/VMM interoperability package.

What about support?

If you need help adopting UVM with VCS, our ACs will be more than happy to help you: they have many years of expertise supporting customers using advanced verification methodologies.

UVM is an Accellera effort. If you have a great idea for a new feature, you should submit your request to the Technical Subcommittee. Better yet! You should join the Subcommittee and help us implement it!

Posted in Announcements, Interoperability | 1 Comment »

Why do the SystemC guys use TLM-2.0?

Posted by John Aynsley on 29th April 2010

John Aynsley, CTO, Doulos

Since this is the Verification Martial Arts blog, I have focused so far on features of VMM 1.2 itself. But some of you may be wondering why all the fuss about TLM-2.0 anyway? Why is TLM-2.0 used in the SystemC domain?

I guess I should first give a quick summary of how and why SystemC is used. That’s easy. SystemC is a C++ class library with an open-source implementation, and it is used as “glue” to stick together component models when building system-level simulations or software virtual platforms (explained below). SystemC has Verilog-like features such as modules, ports, processes, events, time, and concurrency, so it is conceivable that SystemC could be used in place of an HDL. Indeed, hardware synthesis from SystemC is a fast-growing area. However, the primary use case for SystemC today is to create wrappers for existing behavioral models, which could be plain C/C++, in order to bring them into a concurrent simulation environment.

A software virtual platform is a software model of a hardware platform used for application software development. Today, such platforms typically include multiple processor cores, on-chip busses, memories, and a range of digital and analog hardware peripherals. The virtual platform would typically include an instruction set simulator for each processor core, and transaction-level models for the various busses, memories and peripherals, many of which will be intellectual property (IP) reused from previous projects or bought in from an external supplier.

The SystemC TLM-2.0 standard is targeted at the integration of transaction-level component models around an on-chip communication mechanism, specifically a memory-mapped bus. When you gather component models from multiple sources you need them to play together, but at the transaction level, using SystemC alone is insufficient to ensure interoperability. There are just too many degrees of freedom when writing a SystemC communication wrapper to ensure that two models will talk to each other off-the-shelf. TLM-2.0 provides a standardized modeling interface between transaction-level models of components that communicate over a memory-mapped bus, such that any two TLM-2.0-compliant models can be made to talk to each other.

In order to fulfil its purpose, the primary focus of the SystemC TLM-2.0 standard is on speed and interoperability. Speed means being able to execute application software at as close to full speed as possible and TLM-2.0 sets very aggressive simulation performance goals in this respect. Interoperability means being able to integrate models from different sources with a minimum of engineering effort, and in the case of integrating models that use different bus protocols, to do so without sacrificing any simulation speed.

So finally back to VMM. It turns out that the specific features of TLM-2.0 used to achieve speed and interoperability do not exactly translate into the SystemVerilog verification environment, where the speed goals are less aggressive and there is not such a singular focus on memory-mapped bus modeling. But, as I described in a previous post on this blog, there are still significant benefits to be gained from using a standard transaction-level interface within VMM, both for its intrinsic benefits and in particular when it comes to interacting with virtual platforms that exploit the TLM-2.0 standard.

Posted in Interoperability, Optimization/Performance, SystemC/C/C++, Transaction Level Modeling (TLM) | 2 Comments »

Combining TLM Ports with VMM Channels in VMM 1.2

Posted by John Aynsley on 23rd April 2010

John Aynsley, CTO, Doulos

In previous posts I described the TLM ports and exports introduced with VMM 1.2. Of course, the vmm_channel still remains one of the primary communication mechanisms in VMM. In this article I explore how TLM ports can be used with VMM channels.

TLM ports and exports support a style of communication between transactors in which a b_transport call made by a producer is implemented within the consumer, allowing a transaction to be passed directly from producer to consumer without any intervening channel. On the other hand, a vmm_channel sits as a buffer between a producer and a consumer, with the producer putting transactions into the tail of the channel and the consumer getting transactions from the head. The channel deliberately isolates the producer from the consumer.

Each style of communication has its advantages. The b_transport style of communication is fast, direct, and the end-of-life of the transaction is handled independently from the contents and behavior of the transaction object. On the other hand, vmm_channel provides a lot more functionality and flexibility, including the ability to process transactions while they remain in the channel and to support a range of synchronization models between producer and consumer.

In VMM 1.2, support for TLM ports and exports is now built into the vmm_channel class. It is possible to bind a TLM port to a VMM channel such that the channel provides the implementation of b_transport. The goal is to get the best of both worlds: the clean semantics of a b_transport call in the producer, and the convenience of using the active slot of the vmm_channel in the consumer.

An example is shown below. First, we need a transaction class and a corresponding channel type:

class my_tx extends vmm_data;   // User-defined transaction
  // ... data members and vmm_data method overrides ...
endclass: my_tx

typedef vmm_channel_typed #(my_tx) my_channel;   // Channel specialized for my_tx

The producer sends transactions using a b_transport call, knowing that by the time the call returns, the transaction will have been completed:

class producer extends vmm_xactor;

  vmm_tlm_b_transport_port #(producer, my_tx) m_port;

  virtual task main();
    my_tx tx;
    int   delay;
    super.main();
    // ... create and randomize tx ...
    m_port.b_transport(tx, delay);   // Returns only when the transaction has completed
  endtask
endclass: producer

The consumer manipulates the transaction while leaving it in the active slot of a vmm_channel and executing the standard notifications:

class consumer extends vmm_xactor;

  my_channel m_chan;

  virtual task main();
    my_tx tx;
    super.main();
    m_chan.activate(tx);   // Peek at the transaction in the channel
    m_chan.start();        // Notification vmm_data::STARTED
    // ... manipulate the transaction in the active slot ...
    m_chan.complete();     // Notification vmm_data::ENDED
    m_chan.remove();       // Unblock the producer
  endtask
endclass: consumer

The channel is instantiated and connected in the surrounding environment:

class tb_env extends vmm_group;

  my_channel  m_tx_chan;
  producer    m_producer;
  consumer    m_consumer;

  virtual function void build_ph();
    m_producer = new( "m_producer", this );
    m_consumer = new( "m_consumer", this );

    m_tx_chan  = new( "my_channel", "m_tx_chan" );
    m_tx_chan.reconfigure(1);   // Full level of 1: see note below
  endfunction

  virtual function void connect_ph();
    vmm_connect #(.D(my_tx))::tlm_bind( m_tx_chan, m_producer.m_port,
                                        vmm_tlm::TLM_BLOCKING_EXPORT );
    m_consumer.m_chan = m_tx_chan;
  endfunction

endclass: tb_env

There are two key points to note in the above. Firstly, the channel is reconfigured to have a full level of 1. This ensures that the blocking transport call does indeed block. If the full level were greater than 1, the first call to b_transport would return immediately, before the transaction had completed, which would defeat the purpose.

Secondly, the transport port and the vmm_channel are bound together using the vmm_connect utility. This utility must be used when binding VMM TLM objects to channels, and can also be used to bind TLM ports and exports of differing interface types (e.g. a blocking port to a non-blocking export). The third argument to tlm_bind indicates that the connection is being made from the port in the producer to a blocking export within the channel. I will discuss other uses for this method in later posts.

Posted in Communication, Interoperability, Reuse, Transaction Level Modeling (TLM) | Comments Off

Bruce Kenner: Programmer

Posted by Andrew Piziali on 8th April 2010

by Andrew Piziali and Gary Stringham

Once upon a time in a cubicle far, far away there was a brilliant software engineer named Bruce Kenner. He had designed an elegant compiler for a processor with an exposed pipeline, requiring compiler-scheduled instructions for a new instruction set architecture. No, this was not the pedestrian VLIW machine you may be familiar with, but a machine having a programmer-specified number of delay slots following each branch instruction. In order to manage resource hazards, Bruce had furnished the compiler with an oracle that had a full pipeline model of the processor. The oracle advised the scheduler about resources and their availability.

[Figure: Orca Compiler Flow]

Despite Bruce’s obvious talent, his business card simply read “Bruce Kenner: Programmer.” Since Bruce was a software engineer who designed and implemented exceedingly complex machinery, we might ask what the role of the software engineer is in the verification process. How did Bruce verify the oracle, scheduler, and compiler he was designing? What role in general does the software engineer play in the verification of a modern system? To address these questions and others, I asked embedded software developer Gary Stringham to join me in this discussion. Gary is the founder and president of Gary Stringham & Associates, LLC, specializing in embedded systems development.

The typical SoC today contains a dozen or more processors of various flavors: general purpose, DSP, graphics, audio, encryption, etc. These processors are distinguished from their digital hardware brethren in that they are generally pre-verified cores that faithfully perform whatever tasks the programmer specifies. Hence, the DUV (design under verification) becomes the code written by the programmer rather than the hardware on which it executes. We must demonstrate that this code preserves the original intent of the architect. To do so requires the contribution of the software engineer to the verification process. We illustrate the cost of bringing the software engineer into the design cycle too late with this story from Gary’s experience. He writes:

I was having trouble getting my device driver to work with a block in an SoC for a printer, so I went to the hardware engineer for help. We studied what my device driver was doing and it seemed fine. Then we compared it to what the corresponding verification test was doing: both were writing the same values to the same registers. However, when we examined the device driver more closely, we discovered that it was programming the registers in a different order than the test case did, and that difference exposed a problem.

I had my reasons for writing those registers in the order that I did, having to do with the overall architecture of the software, which was based upon the order in which the device driver received information during the printing process. The hardware engineer had no rhyme or reason for the order in which he wrote to the registers. He did not (nor was he expected to) know the nuances of the software architecture that required the driver to do it the way it did. But the order in which he happened to write those registers in his verification test obscured a defect. At this point in the design cycle the SoC was already in silicon, so I figured out a work-around in my device driver.

Now, in this particular case, there was no reason to believe the register writes would be order-sensitive; the driver was simply setting up some configuration registers before launching the task. But there was a sensitivity. If I, the software engineer, had been involved months earlier in the design of the verification test suite, I might have suggested writing the registers in the order the device driver needed, and that would have exposed the order-dependent error.

With this in mind, let’s examine the software engineering roles we recommend. The initial role of the software engineer should be collaborating with the system architect to ensure that required hardware resources are available for the algorithms delegated to the software.[1] Parametric specifications and version dependencies should also be examined and verified. Often the software engineer is the only one intimately familiar with the software limitations of legacy blocks reused in the design. Hence, they need to examine these in light of the requirements of the current design. What space/time/performance/power trade-offs become apparent as various hardware components are considered?

Next, the software engineer should review early specifications with an eye toward implementation challenges. What requirements and features lead to code complexities that might jeopardize the implementation or verification schedule? Are obscure use cases addressed? What ambiguities exist in the specification that could lead to different interpretations by the design, verification and software engineers? What hardware resources do software engineers need to debug problems, such as debug hooks and ports?

At the same time, the software engineer should be contributing to the verification planning process[2] during either top-down or bottom-up specification analysis. During the more common top-down analysis, where each designer explains their understanding of the features and their interactions, the software engineer will be asking questions that aid the extraction of design features requiring software implementation. Likewise, the software engineer will be answering questions posed by verification engineers and designers about their understanding of the specification. As the verification plan comes together, the software engineer should be a periodic reviewer.

Finally, during hardware/software integration, the software engineer plays an integral role, having a first-hand understanding (usually!) of their code and its intended behavior. Both the designer and software engineer will reference the specification and verification plan to disambiguate results. Each will learn from the other as they observe their respective components contribute to system features.

Summarizing, the software engineer must be involved early and throughout the design cycle to ensure design intent is preserved, yet properly partitioned between hardware and software, in the final implementation. Make sure software engineering is represented during the specification and verification planning processes, all the way through final system integration to maximize your potential for success. Or, as one of my earliest finger(1) .plan files (you remember those, right?) used to read: “Fully functional first pass silicon.”

——————-
[1] Hardware/Firmware Interface Design: Best Practices for Improving Embedded Systems Development, Stringham, Elsevier, 2010

[2] ESL Design and Verification, Bailey, Martin and Piziali, Elsevier, 2007

Posted in Interoperability, Reuse, Verification Planning & Management | Comments Off

Transaction Level Modeling – Value add in different languages.

Posted by Nasib Naser on 14th December 2009

Nasib Naser, PhD

Sr. Staff Corporate Applications Engineer – Synopsys

One of the driving factors behind the creation of SystemVerilog was the desire to raise the abstraction level of design verification. The reason for such a move is described in the SystemVerilog LRM abstract, which says: “A set of extensions to the IEEE 1364-2001 Verilog Hardware Description Language to aid in the creation and verification of abstract architectural level models.” So why, and how? The “why” is obvious. A number of reasons come to mind:

  1. Faster development and simulation, to reach design and verification goals sooner rather than later.
  2. Early closure on architectural decisions without early commitment to implementation details.
  3. The ability to run “some” software within the complete system context.
  4. An environment in which verification methodologies can be developed that deliver true verification IP reusability and model interoperability.

This blog will discuss the methodology behind the “how”; the technology will be explained in subsequent blogs. Up until VMM 1.2 was released, transaction-based models were created using VMM constructs such as vmm_channel, vmm_xactor, and vmm_data, as sketched below. For all of VMM’s strengths, this use model succeeded only on in-house designs and did not gain wide modeling adoption: the TLM methodology built into VMM lacked common practices to enable interoperability – emphasis on common practice.
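For readers who never used that style, here is a minimal sketch of channel-based transaction passing in the pre-1.2 idiom. All names here are hypothetical, and the usual vmm_data shorthand macros and constructor arguments are omitted for brevity; it illustrates only the blocking put/get handshake, not a reference implementation.

class my_pkt extends vmm_data;   // Hypothetical transaction class
  rand bit [7:0] payload;
endclass

typedef vmm_channel_typed #(my_pkt) my_pkt_channel;

// Producer side: a blocking put into the tail of the channel
task produce(my_pkt_channel chan);
  my_pkt pkt = new();
  void'(pkt.randomize());
  chan.put(pkt);    // Blocks while the channel is at its full level
endtask

// Consumer side: a blocking get from the head of the channel
task consume(my_pkt_channel chan);
  my_pkt pkt;
  chan.get(pkt);    // Blocks until a transaction is available
  // ... process pkt ...
endtask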

Meanwhile, driven by users, the Open SystemC Initiative (OSCI) TLM community managed to create a standard on which models could be developed for reuse and interoperability. Without dwelling on the past, I’d like to give a brief history of the evolution that led to the TLM standard. In the beginning there was C, then C++. For obvious reasons these widely known software languages were used to develop in-house system-level models for high-level performance and architectural analysis. The use model was very limited, as the simulated behavior was very far from the actual hardware: it lacked concurrency and timing. A set of C++ extensions, augmented with a “proof of concept” simulator, was created and called SystemC, enabling these hardware behaviors, and more, to be modeled in C++. That worked very well. Having failed to replace Verilog and VHDL for design and verification, SystemC found its niche use model in the architectural modeling domain. At that point, architects using SystemC started to demand a standard that would enable interoperability for fast platform composition, ease of use, and reusability. Hence, the OSCI SystemC Transaction Level Modeling 1.0 and later 2.0 standards were created.

So why re-invent the wheel when it comes to SystemVerilog TLM? And why not build on a powerful and robust verification methodology such as the VMM standard to enable seamless integration between SystemC and SystemVerilog? That is why the features described in the OSCI TLM standard found their way into the SystemVerilog/VMM world. These features are available in the newly released VMM 1.2. In subsequent blogs I will explain these features and the value they add toward true IP reuse and interoperable verification methodologies.

Posted in Interoperability, Reuse, SystemC/C/C++, Transaction Level Modeling (TLM) | Comments Off

What Has TLM-2.0 Got To Do With It?

Posted by John Aynsley on 17th November 2009

John Aynsley, CTO, Doulos

You may have noticed that the public release of VMM 1.2 is just around the corner, and with this new version of VMM comes the introduction of features inspired by the SystemC TLM-2.0 standard.

Excuse me! TLM-2.0? What? Why do we need features from SystemC in VMM?

I will set out to answer that question fully in a series of blog posts over the coming months. But first off I will remark that the idea is not so strange. After all, VMM has always been transaction-level (with a small ‘t’ and ‘l’). Communication within a VMM verification environment exploits transaction-level modeling for speed and productivity, because “TLM” is about abstracting the model of communication used in a simulation. If we can adopt a common standard for transaction-level modeling across both SystemC and SystemVerilog, that has to be a good thing for everyone. It is evident that the design and verification community demands more than one language standard (witness VHDL, SystemVerilog, C/C++, and SystemC). Each individual language standard has progressed over time by borrowing the best features from the others. Having VMM borrow features from SystemC makes it easier to learn and work with both standards.

The other natural link between VMM and SystemC is that mixed-language simulation environments and C/C++ reference models are not unusual. Virtual platform models, as used for software development and architectural exploration, are growing in importance, and the SystemC TLM-2.0 standard is used to achieve interoperability between the components of a virtual platform model. If a constrained random VMM environment is to be used with a reference model that consists of a virtual platform adhering to the SystemC TLM-2.0 standard, then having TLM-2.0 support within VMM promises to make life easier for the VMM programmer.

Besides interoperability, the other main objective of the SystemC TLM-2.0 standard is simulation speed. The combination of speed and interoperability is achieved by the technical details of the ways in which transactions are passed between components. Fortunately, those technical details are a good fit with the way communication has always worked in VMM. In particular, both VMM and TLM-2.0 support the idea that each transaction has a finite lifetime with a well-defined time-of-birth and time-of-death.

The SystemC TLM-2.0 standard is based on C++. Unfortunately, not all C++ coding idioms translate naturally into SystemVerilog, so the transaction-level communication in VMM 1.2 is “inspired by” the TLM-2.0 standard rather than being a literal rendition of it.

So what are these new features? If you are already a SystemC user you may recognize ports and exports, borrowed directly from the SystemC standard, and analysis ports, transport interfaces, sockets and the generic payload, borrowed from the TLM-2.0 standard. I will explain how VMM is able to exploit each of these features in future blog posts, so watch this space…
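In the meantime, to give a flavor of these features, here is a minimal sketch of a blocking transport export on the consumer side. The names are hypothetical, and the exact b_transport signature and binding call should be checked against the VMM 1.2 documentation; this is an illustration of the idea, not a reference implementation.

class my_consumer extends vmm_xactor;
  // Export through which producers reach this transactor's b_transport
  vmm_tlm_b_transport_export #(my_consumer, my_tx) m_export;

  virtual task b_transport(int id = -1, my_tx trans, ref int delay);
    // ... execute the transaction; it is complete when this task returns ...
  endtask
endclass

// In the environment's connect phase, a producer's port would be bound
// directly to the export, e.g.:
//   m_producer.m_port.tlm_bind(m_consumer.m_export);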

Posted in Interoperability, Reuse, SystemC/C/C++, Transaction Level Modeling (TLM) | Comments Off
