Verification Martial Arts: A Verification Methodology Blog

Archive for the 'Stimulus Generation' Category

Avoiding Redundant Simulation Cycles in your UVM VIP based Simulation with a Simple Save-Restore Strategy

Posted by paragg on 6th March 2014

In many verification environments, you reuse the same configuration cycles across different testcases. These cycles might involve writing to and reading from different configuration and status registers, loading program memories, and other similar tasks that set up a DUT for the targeted stimulus. In many of these environments, the time taken by these configuration cycles is very long. There is also a lot of redundancy, as verification engineers have to run the same set of verified configuration cycles for different testcases, leading to a loss in productivity. This is especially true for complex verification environments with multiple interfaces, which require different components to be configured.

The Verilog language provides an option to save the state of the design and the testbench at a particular point in time, restore the simulation to that state, and continue from there. This can be done by adding the appropriate built-in system tasks to the Verilog code. VCS provides the same options from the Unified Command-line Interpreter (UCLI).

However, it is not enough to restore the simulation from the saved state. For different simulations, you may want to apply different random stimulus to the DUT. In the context of UVM, you would want to run different sequences from a saved state, as shown below.

In the above example, apart from the last step, which varies to a large extent, the rest of the steps need no iteration once established.

Here we explain how to achieve the above strategy with the simple UBUS example available in the standard UVM installation. Simple changes are made in the environment to show what needs to be done to bring in this additional capability. Within the existing set of tests, two of them, “test_read_modify_write” and “test_r8_w8_r4_w4”, differ only with respect to the master sequence being executed – i.e. “read_modify_write_seq” and “r8_w8_r4_w4_seq” respectively.

Let’s say we have a scenario where we want to save a simulation once the reset_phase is done and then start executing different sequences after the reset_phase in the restored simulations. To demonstrate a similar scenario through the UBUS tests, we introduced a delay in the reset_phase of the base test (in a real test, this may correspond to PLL lock, DDR initialization, or basic DUT configuration).

The following snippet shows how the existing tests are modified to bring in the capability of running different tests in different ‘restored’ simulations.

As evident in the code, we made two major modifications:

  • Shifted the setting of the phase default_sequence from the build phase to the start of the main phase.
  • Get the name of the sequence as an argument from the command-line and process the string appropriately in the code to execute the sequence on the relevant sequencer.
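The two changes can be sketched as follows. This is a minimal sketch, assuming the standard UBUS testbench hierarchy (ubus_example_tb0.ubus0.masters[0].sequencer); the test class name and the +SEQ_NAME plusarg are our own conventions, and the factory lookup call may differ slightly between UVM versions.

```systemverilog
class ubus_restore_test extends ubus_example_base_test;
   `uvm_component_utils(ubus_restore_test)

   function new(string name = "ubus_restore_test", uvm_component parent = null);
      super.new(name, parent);
   endfunction

   virtual task main_phase(uvm_phase phase);
      string seq_name;
      // The sequence is chosen AFTER the save point, so each restored
      // simulation can be handed a different sequence from the command line.
      if (!$value$plusargs("SEQ_NAME=%s", seq_name))
         seq_name = "read_modify_write_seq";
      uvm_config_db#(uvm_object_wrapper)::set(this,
         "ubus_example_tb0.ubus0.masters[0].sequencer.main_phase",
         "default_sequence",
         uvm_factory::get().find_wrapper_by_name(seq_name));
      super.main_phase(phase);
   endtask
endclass
```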

As you can see, the changes are kept to a minimum. With this, the above generic framework is ready to be simulated. In VCS, the save/restore flow can be enabled in one of several ways, as follows.
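One possible UCLI-based flow is sketched below; the checkpoint name, run times, and script names are illustrative, and the exact commands should be checked against your VCS/UCLI documentation.

```tcl
# First run: simulate up to the end of the expensive configuration, then save.
ucli% run 1000ns          ;# or run to a breakpoint placed after the reset_phase
ucli% save cfg_done       ;# write a checkpoint of design + testbench state
ucli% quit

# Subsequent runs: restore the checkpoint and continue with a different
# sequence name passed from the command line, e.g.:
#   ./simv -ucli -i restore.do +SEQ_NAME=r8_w8_r4_w4_seq
# where restore.do contains:
#   restore cfg_done
#   run
```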

Thus the above strategy helps in optimal utilization of compute resources with simple changes to your verification flow. We hope this was useful and that you can easily make the changes in your verification environment to adopt this flow and avoid redundant simulation cycles.

Posted in Automation, Coding Style, Configuration, Creating tests, Customization, Optimization/Performance, Organization, Reuse, Stimulus Generation, SystemVerilog, Tutorial, UVM, Uncategorized, Verification Planning & Management | 1 Comment »

The ‘user’ in RALF: get ralgen to generate ‘your’ code

Posted by S. Varun on 11th August 2011

A lot of times, registers in a device may be associated with configuration fields that do not exist physically inside the DUT. For example, there could be a register field meant for enabling the scrambler, a field that would need to be set to “1” only when the protocol is PCIE. As this protocol mode is not a physical field, one cannot write it as a memory-mapped register. For such cases ralgen reserves a “user area” wherein users can write SystemVerilog-compatible code which is copied as-is into the RAL model. This gives users the flexibility to add variables/constraints that are not necessarily physical registers/fields while maintaining the automated flow. It ensures that the additional parameters are part of the ‘spec’ – in this case the RALF file from which the model generation happens – and thus creates a more seamless sharing of variables between the register model and the testbench.

Let’s look at how it works. If I have a requirement to randomize the register values based on additional testbench parameters, this is what can be done:

block mdio {
   bytes 2;
   register mdio_reg_1_0 @'h0000000 {
      field bit_31_0 {
         bits 32;
         access rw;
         reset 'h00000000;
         constraint c_bit_31_0 {
            value inside {[0:15]};
         }
      }
   }
   user_code lang=SV {
      rand enum {PCIE,XAUI} protocol;
      constraint protocol_reg1 {
         if (protocol == PCIE) mdio_reg_1_0.bit_31_0.value == 16'hFF;
      }
   }
}
As shown above, the “user_code” RALF construct enables users to add user code inside the generated RAL model. Note that this construct allows you to weave in custom code without having to modify the generated code. It can also be used to generate custom coverage. In the context of the above example, the protocol mode will not be a coverpoint in the coverage generated by ralgen, as it is not a physical field in the DUT. The user can instead fill a separate covergroup using “user_code”. The new RALF spec and the generated RAL model with the added coverage are shown below:

block mdio {
   bytes 2;
   register mdio_reg_1_0 @'h0000000 {
      field bit_15_0 {
         bits 16;
         access rw;
         reset 'h00000000;
         constraint c_bit_15_0 {
            value inside {[0:15]};
         }
      }
   }
   user_code lang=SV {
      rand enum {PCIE,XAUI} protocol;
      constraint protocol_reg1 {
         if (protocol == XAUI)
            mdio_reg_1_0.bit_15_0.value == 16'hff;
      }
   }
   user_code lang=SV {
      covergroup protocol_mode;
         mode : coverpoint protocol {
            bins pcie = {PCIE};
         }
         mdio_reg : coverpoint mdio_reg_1_0.bit_15_0.value {
            bins set = {'hff};
         }
         cross mode, mdio_reg;
      endgroup
      protocol_mode = new();
   }
}

Figure: Generated model snippet

User code gets embedded in the generated RAL classes, but there is no way to embed user code in the “sample” method that exists inside each block. So for any user-embedded covergroups, the sampling will need to be done manually within the user testbench using <covergroup>.sample() – perhaps inside the post_write callback of the registers/fields. The construct could also be used to embed additional data members and user-defined methods, for instance a sampling method that samples all the newly defined covergroups. Thus “user_code” as a RALF construct is a very handy solution for embedding user code in the automated model-generation flow.
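For example, the covergroup added above could be sampled from a register callback. The sketch below assumes a VMM RAL callback facade; the exact post_write() signature varies across VMM versions, and the ral_block_mdio handle name follows the usual ralgen naming but should be checked against your generated model.

```systemverilog
class mdio_sample_cb extends vmm_ral_reg_callbacks;
   ral_block_mdio model;  // handle to the generated block (assumed name)

   function new(ral_block_mdio model);
      this.model = model;
   endfunction

   // Called after every write to the register this callback is appended to
   virtual task post_write(vmm_ral_reg rg,
                           bit [63:0] wdat,
                           vmm_ral::path_e path,
                           string domain,
                           ref vmm_rw::status_e status);
      // Manually sample the covergroup embedded through "user_code"
      model.mdio_reg_1_0.protocol_mode.sample();
   endtask
endclass

// Registration, typically in the environment's build():
//   mdio_sample_cb cb = new(env.ral_model);
//   env.ral_model.mdio_reg_1_0.append_callback(cb);
```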

Posted in Register Abstraction Model with RAL, Stimulus Generation | 1 Comment »

Controlling transaction generation timing from the driver using ‘PULL’ mode

Posted by Amit Sharma on 30th July 2010

Sadiya Tarannum Ahmed, Senior CAE, Synopsys

In the default flow, transaction-level communication through VMM channels operates in ‘PUSH’ mode, i.e., the process is initiated by the producer, which randomizes and pushes a transaction into the channel when the channel is empty. The process repeats whenever the channel becomes empty or the consumer retrieves a transaction from it. However, in specific cases, you might not want the generator to create stimulus before the bus protocol is ready, or until it is requested by the bus protocol. In such cases, you may want to use the ‘PULL’ mode of VMM channels.

In ‘Pull’ mode, the consumer initiates the process by requesting transactions and then the generator or the producer responds to it by putting the transaction into the channel.



The following steps show how the communication can operate in “PULL_MODE”.

Step 1: In the generator code, call the vmm_channel::wait_for_request() method before randomizing and putting the transaction into the channel.

By default vmm_channel::wait_for_request() does not block (“PUSH” mode). In “PULL” mode, it blocks until a get(), peek() or activate() is invoked by the consumer.

class cpu_rand_scenario extends vmm_ms_scenario;
   cpu_trans blueprint;
   virtual task execute(ref int n);
      cpu_trans_channel out_chan;
      $cast(out_chan, this.get_channel("CPU_CHANNEL"));  // channel name illustrative
      out_chan.wait_for_request();  // blocks in PULL mode until the consumer asks
      blueprint = cpu_trans::create_instance(this, "blueprint");
      if (blueprint.randomize()) out_chan.put(blueprint);
      else `vmm_fatal(log, "cpu_trans randomization failed");
   endtask
endclass
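For completeness, the consumer side that releases the blocked generator can be sketched as below; cpu_trans_channel and the drive_transfer() task are assumed names.

```systemverilog
class cpu_bfm extends vmm_xactor;
   cpu_trans_channel in_chan;

   function new(string inst, cpu_trans_channel in_chan);
      super.new("cpu_bfm", inst);
      this.in_chan = in_chan;
   endfunction

   virtual protected task main();
      cpu_trans tr;
      super.main();
      forever begin
         // In PULL mode, this get() is what unblocks the generator's
         // wait_for_request() and triggers generation of the next transaction.
         this.in_chan.get(tr);
         drive_transfer(tr);  // hypothetical pin-level driving task
      end
   endtask
endclass
```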

Step 2: Set the mode of the channels.

By default, all channels are configured to work in “PUSH_MODE”; they can be set to work in “PULL_MODE” statically or dynamically.

  • Static setting: set the mode of the channel in the testbench environment or in your testcases.

  • Dynamic setting: use the runtime switch +vmm_opts+pull_mode_on.

Since the mode can be changed through vmm_opts, hierarchical or instance-based settings for any channel can also be applied at runtime. This brings in a lot of flexibility: the same channel can be made to work in different modes for different tests, or even within the same simulation.

The pre-defined atomic and scenario generators now support this feature, which can be enabled either by runtime control or by setting the associated channel mode to PULL_MODE in the environment.

Thus you now have the flexibility to configure your transaction level communication easily based on your requirements.

Posted in Communication, Stimulus Generation | 2 Comments »

WRED Verification using VMM

Posted by Amit Sharma on 22nd July 2010

Puja Sethia, ASIC Verification Technical Lead, eInfochips

With the increase in Internet usage, consumer appetite for high-bandwidth content such as streaming video and peer-to-peer file sharing continues to grow. The quality-of-service and throughput requirements of such content bring concerns about network congestion. WRED (Weighted Random Early Detection) is one of the network congestion avoidance mechanisms. WRED verification challenges span the injection of transactions, the creation of various test scenarios, the prediction and checking of WRED results, and finally report generation. To address this complexity, it is important to choose the right technique and methodology when architecting the verification environment.

WRED Stimulus Generation – Targeting real network traffic scenarios

There are two major WRED stimulus generation requirements:

1. Normal Traffic Generation – interleaved for different traffic queues (classes) and targeting regions below the minimum threshold for the corresponding queue.

2. Congested Traffic Generation – interleaved for different traffic queues and targeting regions below the minimum threshold, above the maximum threshold, and in the range between the minimum and maximum thresholds for the corresponding queue.


As shown in the diagram above, WRED stimulus generation requirements can be implemented in three major steps:

1. Identify test requirements such as the traffic patterns for different regions

2. Identify packet requirements for each interface to generate the required test scenario

3. Generate packets as per the provided constraints and pass them on to the transactors

This demarcation helps ensure that the flow is intuitive, concise and flexible while planning the stimulus generation strategy. To achieve this hierarchical and stacked requirement, we used VMM scenarios. VMM single-stream scenarios, which create randomized lists of transactions based on constraints, map directly to the third step. For the first two steps, we need the capability to drive, control and access more than one channel, and we also need to create a hierarchy of scenarios. VMM multi-stream scenarios provide the capability to control and access more than one channel, and higher-level multi-stream scenarios can also instantiate other multi-stream scenarios. Separate multi-stream scenarios were thus defined to generate the different traffic patterns, and these are used in the test multi-stream scenario to interleave all the different types of traffic. As the requirement is to interleave different types of traffic coming from multiple multi-stream scenarios, there are multiple sources of transactions and a single destination. The VMM scheduler helps funnel transactions from multiple sources into one output channel. It uses a user-configurable scheduling algorithm to select an input transaction and place it on the output channel.


The snippet above shows how traffic from CLASS_0 and CLASS_1 is easily interleaved, with each of them targeting a different region.
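A rough sketch of such interleaving with the VMM scheduler is shown below; the channel names, constructor arguments, and the assumption that each class scenario fills its own channel are illustrative.

```systemverilog
// Two per-class input channels and one funneled output channel
packet_channel class0_chan;
packet_channel class1_chan;
packet_channel out_chan;
vmm_scheduler  sched;

initial begin
   class0_chan = new("CLASS_0", "chan");
   class1_chan = new("CLASS_1", "chan");
   out_chan    = new("to_driver", "chan");

   // The scheduler pulls from every registered source and places the
   // selected transaction on out_chan (round-robin by default; the
   // selection routine can be overridden for weighted scheduling).
   sched = new("wred_sched", "sched", out_chan);
   void'(sched.new_source(class0_chan));
   void'(sched.new_source(class1_chan));
   sched.start_xactor();
end
```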

Predicting, Debugging and Checking WRED Results using VMM with-loss scoreboard

It is hard to predict if and which packets might be lost, and this makes it difficult to debug and verify the WRED results. The VMM DataStream scoreboard helps in addressing this. Its “expect_with_losses” option, when leveraged with a mechanism based on the configured drop probability, can help overcome the challenge of predicting and verifying detailed WRED results.


As shown in the above diagram, the DS scoreboard identifies which packets are lost (they do not appear in the output stream) and posts the lost packets into a separate queue. At the same time, it verifies the correctness of all the other packets received at the output. It generates statistics based on the packets matched and lost, and this information can be used to represent the actual WRED results. By linking these results to the stimulus generator, the actual WRED results can be categorized based on the pass/lost packet count for each region. The actual WRED results, represented in terms of lost and passed packet counts for each region, can then be compared with the predicted lost and passed packet counts calculated from the configured drop probability for each region to determine the pass/fail result.

For more information on the WRED verification challenges and the usage of VMM scenarios and VMM DS scoreboarding techniques to overcome them, refer to the paper “Verification of Network Congestion Avoidance Algorithm like WRED using VMM”.

Posted in Scoreboarding, Stimulus Generation | Comments Off

Generating microcode stimuli using a constrained-random verification approach

Posted by Shankar Hemmady on 15th July 2010

As microprocessor designs have grown considerably in complexity, generating microcode stimuli has become increasingly challenging.  An article by AMD and Synopsys engineers in EE Times explores using a hierarchical constrained-random approach to accelerate generation and reduce memory consumption, while providing optimal distribution and biasing to hit corner cases using the Synopsys VCS constraint solver.

You can find the full article in PDF here.

Posted in Stimulus Generation, SystemVerilog | Comments Off


Posted by JL Gray on 5th May 2010

by Asif Jafri, verification engineer, Verilab

Atomic Generators

Atomic generators select and randomize transactions based on user constraints. If you do not care about the sequence in which transactions are generated, then atomic generators are a good choice.

The code below shows a pcie_config class which defines the read and write transactions. It also invokes two macros that create a channel and the atomic generator to drive it.

// filename:

class pcie_config extends vmm_data;
   typedef enum {Read, Write} kind_e;
   rand kind_e instruction;
   rand bit [31:0] address;
   rand bit [31:0] data;
endclass: pcie_config

// The two macros: one creates pcie_config_channel, the other the atomic generator
`vmm_channel(pcie_config)
`vmm_atomic_gen(pcie_config, "PCIE Configuration Atomic Generator")
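Hooking the generated pieces together might look like the sketch below; the pcie_config_atomic_gen / pcie_config_channel class names follow the macro naming convention, but the constructor argument order should be checked against your VMM version.

```systemverilog
pcie_config_channel    cfg_chan;
pcie_config_atomic_gen cfg_gen;

initial begin
   cfg_chan = new("pcie_config channel", "cfg_chan");
   cfg_gen  = new("pcie_config gen", 0, cfg_chan);  // generator feeds cfg_chan
   cfg_gen.stop_after_n_insts = 50;  // stop after 50 random transactions
   cfg_gen.start_xactor();
end
```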

If you are running functional coverage, there are often corner cases, or sequences of events, that require multiple simulation cycles to be covered. The time to get to these specific points in the state space can be drastically reduced by using scenario generators to generate the specific sequence of events.

Scenario Generators

Scenario generators are useful if you need to constrain your generator to generate a specific sequence of transactions.

Let’s say to configure your PCIE device you need to follow a series of steps.

1.    Read the status register
2.    Read the control register and
3.    Write to the control register

In this case you can use your pcie_config class once again but now define a macro for its scenario generator instead of the atomic generator.

// filename:

`vmm_scenario_gen(pcie_config, "PCIE Configuration Scenario Generator")

Here again a single channel will be connected to this scenario generator. We now need to define the scenario as shown below.

// filename:

class pcie_control_scenario extends pcie_config_scenario;
   // Variable to identify the scenario
   int pcie_control_scenario_id;

   function new();
      pcie_control_scenario_id = define_scenario("PCIE control scenario", 3);
   endfunction

   constraint pcie_control_scenario_items {
      if ($void(scenario_kind) == pcie_control_scenario_id) {
         // number of elements in the scenario
         length == 3;
         // do not repeat the scenario (set > 0 to repeat)
         repeated == 0;
         foreach (items[i]) {
            if (i == 0)
               this.items[i].instruction == pcie_config::Read;   // status register
            else if (i == 1)
               this.items[i].instruction == pcie_config::Read;   // control register
            else if (i == 2)
               this.items[i].instruction == pcie_config::Write;  // control register
         }
      }
   }
endclass: pcie_control_scenario

These are also called single-stream scenarios. Scenario generators work well for block-level testbenches, but if we need to control multiple blocks in a system-level testbench to generate specific interactions and improve coverage, multi-stream scenario generators should be used.

Multi-stream Scenario (MSS) Generators

In a typical chip there will be more than one type of peripheral, e.g. PCIE, USB and Ethernet. Each needs to be controlled separately with its own scenario generator. But there are often times when the PCIE and USB interfaces need to be controlled together. This is where MSS generators are useful. MSS generators can feed transactions to multiple channels, unlike single-stream scenario generators and atomic generators, which can feed only one.

Figure 1: Multi-Stream Scenarios


Let’s try to create a MSS with the PCIE scenario and the USB scenario together.

1.    Use the macros to generate the pcie_config channel and scenario generator

// filename:

`vmm_channel(pcie_config)
`vmm_scenario_gen(pcie_config, "PCIE Configuration Scenario Generation")

2.    Use the macros to generate the usb_config channel and scenario generator

// filename:

`vmm_channel(usb_config)
`vmm_scenario_gen(usb_config, "USB Configuration Scenario Generation")

3.    Define scenarios for the PCIE and USB that you want to control. An example for the PCIE control scenario is shown in the previous section.

4.    Next, extend your scenario from vmm_ms_scenario to put the above-defined scenarios together, and in the execute() task define how to use them. Directed stimulus for the ETH module can be reused from the block-level testbench by encapsulating it in the MSS: the directed stimulus can be cut-and-pasted into the MSS, or the MSS can directly call an external function to execute the stimulus.

// filename:
class my_ms_scenario extends vmm_ms_scenario;
   pcie_control_scenario pcie_control_scenario;
   usb_control_scenario  usb_control_scenario;

   function new();
      pcie_control_scenario = new();
      usb_control_scenario  = new();
   endfunction

   virtual task execute(ref int n);
      fork
         begin
            pcie_config_channel out_chan;
            $cast(out_chan, this.get_channel("PCIE_SCENARIO_CHANNEL"));
            pcie_control_scenario.apply(out_chan, n);
         end
         begin
            usb_config_channel out_chan;
            $cast(out_chan, this.get_channel("USB_SCENARIO_CHANNEL"));
            usb_control_scenario.apply(out_chan, n);
         end
      join
      // Directed test code goes here
   endtask: execute
endclass: my_ms_scenario

5.    Of course, in your top-level testbench, don’t forget to instantiate your multi-stream scenario and register the various channels it will use to talk to the PCIE and USB peripherals. You will also need to register the multi-stream scenario with the generator.

// filename:

vmm_ms_scenario_gen mss_gen;
my_ms_scenario my_ms_scenario;

pcie_config_channel pcie_config_chan;
usb_config_channel  usb_config_chan;

mss_gen = new("Multi-stream scenario generator");
my_ms_scenario = new();
pcie_config_chan = new("PCIE CONFIGURATION CHANNEL", "pcie_config_chan");
usb_config_chan  = new("USB CONFIGURATION CHANNEL", "usb_config_chan");

mss_gen.register_channel("PCIE_SCENARIO_CHANNEL", pcie_config_chan);
mss_gen.register_channel("USB_SCENARIO_CHANNEL", usb_config_chan);
mss_gen.register_ms_scenario("MSS_SCENARIO", my_ms_scenario);

mss_gen.stop_after_n_scenarios = 10;

As can be seen in Figure 1, you can combine scenarios to make multi-stream scenarios, or you can also have higher level multi-stream scenario generators calling other multi-stream scenario generators and directed tests.

Posted in Reuse, Stimulus Generation, Tutorial | Comments Off

Great article on managing complex constraints

Posted by Janick Bergeron on 12th March 2010

A two-part article by Cisco and Synopsys engineers in IC Design and Verification Journal explains how complex constraints can be better managed to simplify the solving process, yet obtain high-quality results. Part 1 deals with solution spaces and constraint partitions. Part 2 introduces the concept of soft constraints in e and default constraints in OpenVera.

You can read part1 and part2 here.

Posted in Debug, Modeling Transactions, Optimization/Performance, Stimulus Generation | Comments Off

Leverage the built-in callback inside vmm_atomic_gen and be productive with DVE features for VMM debug

Posted by Srinivasan Venkataramanan on 7th February 2010

Srinivasan Venkataramanan, CVC Pvt. Ltd.

Rashmi Talanki, Sasken

John Paul Hirudayasamy, Synopsys

During a recent verification environment creation for a customer, we had to tap an additional copy/reference of each generated transaction to another component in the environment without affecting the flow. So one producer gets more than one consumer (here, 2 consumers). As a first-time VMM coder, the customer tried using vmm_channel::peek() on the channel connecting the GEN to the BFM. Initially it seemed to work, but as more complex code was added across the 2 consumers of the channel, things started getting funny – one of the consumers received transactions more than once, for instance.

The log file looked like:

@ (N-1) ns the transaction was peeked by Master_BFM  0.0.0

@ (N-1) ns the transaction was peeked by Slave_BFM 0.0.0


(perform the task)


@N ns the Master_BFM  get the transaction 0.0.0

@N ns the transaction was peeked by Slave_BFM 0.0.0

@N ns the transaction was peeked by Master_BFM 0.0.1

@N ns the transaction was peeked by Slave_BFM 0.0.1

With a little reasoning from the CVC team, the customer quickly understood the issue to be a classical race condition: 2 consumers waiting for the same transaction. What are the options? Well, several indeed:

1. Use vmm_channel::tee() (See our VMM Adoption book for an example)

2. Use callbacks – a flexible, robust means to provide extensions for any such future requirements

3. Use vmm_broadcaster

4. Use the new VMM 1.2 Analysis Ports (See a good thread on this: )

The customer liked the callbacks route but was hesitant to move towards the lengthy process of coding callbacks – for a few reasons (valid for first-timers):

1. Coding callbacks takes more time than a simple chan.peek(), especially writing the facade class & inserting it at the right place

2. She was using the built-in `vmm_atomic_gen macro to create the generator and didn’t know exactly how to add the callbacks there, as it is pre-coded!

During the review, we discussed the pros and cons of the approaches, and when I mentioned the built-in post_inst_gen callback inside vmm_atomic_gen she got a pleasant surprise – it takes care of 2 of the 4 steps in the typical callback-addition flow recommended by CVC’s popular DR-VMM course.

Step-1: Declaring a facade class with the needed tasks/methods

Step-2: Inserting the callback at a “strategic” location inside the component (in this case, the generator)

This leaves only Steps 3 & 4 for the end user – not bad for a robust solution (especially given that Step-4 is more a formality of registration). Now that the customer was convinced, it was time to move to the coding desk to get it working. She opened up the macro definitions and got trapped in the multitude of `define vmm_atomic_gen_* macros, with all those nice-looking “\” characters at the end – thanks to SV’s style of creating macros with arguments. Though powerful, it is not the easiest code to read and decipher – again, for a first-time SV/VMM user.
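Once the macro internals are visible, the remaining two steps can be sketched as follows. The facade and generator class names follow the `vmm_atomic_gen naming convention for a transaction class named icu_xfer; the tap channel and the use of sneak() for a non-blocking copy are illustrative choices.

```systemverilog
// Step-3: extend the generated facade class and implement the hook
class icu_xfer_tap_cb extends icu_xfer_atomic_gen_callbacks;
   icu_xfer_channel tap_chan;  // extra channel feeding the second consumer

   function new(icu_xfer_channel tap_chan);
      this.tap_chan = tap_chan;
   endfunction

   // Invoked by the generator after each transaction is generated
   virtual task post_inst_gen(icu_xfer_atomic_gen gen,
                              icu_xfer obj,
                              ref bit drop);
      icu_xfer cpy;
      $cast(cpy, obj.copy());
      tap_chan.sneak(cpy);  // non-blocking put of a copy for the 2nd consumer
   endtask
endclass

// Step-4: register the callback with the generator instance
//   icu_xfer_tap_cb cb = new(tap_chan);
//   gen.append_callback(cb);
```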

Now comes the rescue in the form of the well-proven DVE – VCS’s robust GUI front end. Its macro-expansion feature, which works as cleanly as it can get, is at times hard to locate. But with our toolsmiths ready for assistance at CVC, it took hardly a few clicks to reveal the magic behind `vmm_atomic_gen(icu_xfer). Here is a first look at the atomic gen code inside DVE.


Once the desired text macro is selected, DVE has a CSM (Context Sensitive Menu) to expand the macro with arguments. It is “Show → Macro”, as seen below in the screenshot.


With a quick bang on DVE, the Macro expander popped up, revealing the nicely expanded source code – with every class-name argument substituted – for the actual atomic generator created by the one-liner macro. Clearly visible were the facade class name and the actual callback task with a clear argument list (something that’s not obvious by looking at the raw macro source).


Now, what’s more – in DVE, you can bind such “nice feature” to a convenient hot-key if you like (say if you intend to use this feature often). Here is the trick:

Add the following to your $HOME/.synopsys_dve_usersetup.tcl

gui_set_hotkey -menu "Scope->Show->Macro" -hot_key "F6"

Now when you select a macro and press “F6”, the macro expands – no rocket science, but a cool, convenient feature indeed!

Voila – we learnt 2 things today: the built-in callback inside vmm_atomic_gen can save more than 50% of the coding and can match the effort (or lack thereof) of a simple chan.peek(); and DVE’s macro-expansion feature makes debugging real fun!

Kudos to VMM and the ever improving DVE!

Posted in Callbacks, Debug, Reuse, Stimulus Generation, VMM, VMM infrastructure | Comments Off

VMM scenario generators and dependent scenarios

Posted by Avinash Agrawal on 4th December 2009

Avinash Agrawal, Corporate Applications, Synopsys

Often folks wonder if it is possible to have a VMM scenario generator where one scenario is dependent on another scenario.

The answer is “Yes.”

Consider the testcase below. You can define two scenarios, scn_a and scn_b, both of which have their own sets of constraints. The variables generated in scn_b are a multiple of the values set when scn_a was generated previously – in this case through the variable “ratio”. For more details on how VMM scenario generators work, refer to the VMM user guide.

SystemVerilog testcase:


class packet extends vmm_data;
   rand int sa;
   rand int da;

   `vmm_data_member_begin(packet)
      `vmm_data_member_scalar(sa, DO_ALL)
      `vmm_data_member_scalar(da, DO_ALL)
   `vmm_data_member_end(packet)
endclass

`vmm_channel(packet)
`vmm_scenario_gen(packet, "packet")

class a_scenario extends packet_scenario;
   int unsigned scn_a;
   rand int ratio;

   function new();
      scn_a = define_scenario("scn_a", 5);
   endfunction

   constraint cst_a {
      $void(scenario_kind) == scn_a -> {
         foreach (items[i]) {
            this.items[i].sa inside {[0:100]};
            this.items[i].da inside {[0:100]};
         }
         ratio inside {[1:5]};
      }
   }
endclass

class b_scenario extends packet_scenario;
   int unsigned scn_b;

   function new();
      scn_b = define_scenario("scn_b", 10);
   endfunction

   constraint cst_b {
      $void(scenario_kind) == scn_b -> {
         foreach (items[i]) {
            this.items[i].sa inside {[100:300]};
            this.items[i].da inside {[100:300]};
         }
      }
   }
endclass

class hier_scenario extends packet_scenario;
   rand a_scenario scn_a;
   rand b_scenario scn_b;

   function new();
      this.scn_a = new();
      this.scn_b = new();
   endfunction

   virtual task apply(packet_channel channel, ref int unsigned n_insts);
      this.scn_a.apply(channel, n_insts);
      this.scn_b.apply(channel, n_insts);
   endtask

   constraint cst_hier {
      foreach (scn_b.items[i]) {
         // Create a scenario 'scn_b' depending upon 'scn_a'
         scn_b.items[i].sa inside {[scn_a.ratio*100:scn_a.ratio*800]};
         scn_b.items[i].da inside {[scn_a.ratio*100:scn_a.ratio*800]};
      }
   }
endclass

program automatic test;
   packet_scenario_gen scn_gen;
   packet_channel      pkt_chan;
   packet              pkt;
   hier_scenario       scn_hier;

   initial begin
      scn_hier = new();
      scn_gen  = new("scn_gen", -1, pkt_chan);
      scn_gen.register_scenario("scn_hier", scn_hier);

      $display("Size of the scn is %0d", scn_gen.scenario_set.size());
      scn_gen.stop_after_n_insts = 15;
      scn_gen.start_xactor();

      while (1) begin
         #10 scn_gen.out_chan.get(pkt);
         $display("id is %0d, %0d %0d", pkt.data_id, pkt.sa, pkt.da);
      end
   end
endprogram

Posted in Reuse, Stimulus Generation, VMM | Comments Off

Exclusive Access of VMM Channel

Posted by Shankar Hemmady on 4th September 2009

Rahul V. Shah
Director of Customer Solutions, eInfochips

It is always challenging when it comes to controlled randomization. Constraints may be an easier way to think about it, but at the chip level we are often interested in generating a few scenarios that are controlled in specific sequences. However, we still don’t want to develop scenarios that are very directed.

Let’s consider an example: we have an AHB bus interface with different masters. We have a DMA controller on the bus along with a few other masters. A chip-level stress scenario might include multiple masters performing data transfers to the memory interface on the bus. The transactions can be completely random. To make the scenario more interesting, we may want to add random reads from the status register and random reads of some read-only registers, along with the other data transfers.

One such scenario can include handling an error/exception, where we want to read the status register, followed by the interrupt register, followed by a write transfer to clear the interrupt. Normally, we can generate such a scenario in a directed fashion, but that would take away the random behavior.

Earlier, such scenarios were implemented in a much more complex fashion, as it was difficult to create a random scenario while getting exclusive access whenever required. Here I describe a mechanism to get exclusive access to a channel when required, while retaining the benefits of randomness.

Scenario generators are used to generate a sequence of transactions. Multiple scenario generators may be connected to the same output channel, but such a connection does not prevent other generators from concurrently injecting transactions into that channel. A scenario is thus not guaranteed exclusive access to an output channel: multiple threads in the same multi-stream scenario, multiple single-stream scenarios, or any transactor may inject transactions into the same channel.

If the requirement is to generate a sequence of transactions without any interruption from another generator or transactor, exclusive access to the channel can be obtained and later released when it is no longer required. Let’s consider the scenario below:

01. class my_scenario extends vmm_ms_scenario;
02. rand atm_cell atm_cell_inst;
03. atm_cell_channel atm_out_chan;
04. int MSC = this.define_scenario("MY SCENARIO", 0);
05. local bit [7:0] id;
07. function new();
09. atm_cell_inst = new;
10. endfunction: new
12. task execute(ref int n);
13. $cast(atm_out_chan, this.get_channel("ATM_SCENARIO_CHANNEL"));
14. atm_out_chan.grab(this);
15. repeat (10) begin
16. atm_out_chan.put(atm_cell_inst, .grabber(this));
17. repeat (10) @ (posedge clk);
18. end
19. atm_out_chan.ungrab(this);
20. endtask: execute
21. endclass: my_scenario

If a scenario requires exclusive access to a channel to ensure the uninterrupted execution of a sequence of transactions, it can grab the channel as shown in line 14, atm_out_chan.grab(this). Once grabbed, access to the atm_out_chan channel will not be given to any other scenario until it is explicitly ungrabbed. As shown in lines 15-18, the scenario sends 10 sequential transactions with a delay of 10 clock cycles after grabbing the channel. To inject transactions into the grabbed channel, a reference to the scenario currently injecting the transaction must be provided to the put method, as shown in line 16, atm_out_chan.put(atm_cell_inst, .grabber(this)). After completion of the sequence of transactions, the channel is ungrabbed at line 19, atm_out_chan.ungrab(this).

When the channel is grabbed by one scenario and other scenarios try to put transactions into the same channel, their put() calls block until the channel is ungrabbed by the scenario that grabbed it. To avoid this blocking, the status of the channel can first be checked with the vmm_channel::is_grabbed() function, which returns “1” if the channel is currently grabbed.
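As a sketch, a scenario could use this check to defer its injection instead of blocking; the task below assumes the atm_out_chan and atm_cell_inst members from the example above:

```systemverilog
// Hypothetical helper, reusing atm_out_chan and atm_cell_inst from the
// example above. Rather than blocking inside put(), it first checks
// whether another scenario currently holds the channel.
task try_inject();
   if (atm_out_chan.is_grabbed()) begin
      // Channel is held by another grabber: wait, retry later, or drop
      // the transaction, depending on the test intent.
      $display("Channel grabbed by another scenario; deferring injection");
   end
   else begin
      // No grabber is active, so put() will not block on a grab
      // (it may still block if the channel is full).
      atm_out_chan.put(atm_cell_inst);
   end
endtask: try_inject
```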

Posted in Communication, Stimulus Generation, Transaction Level Modeling (TLM), Tutorial | Comments Off

Multi-stream Scenario Generator (MSS)

Posted by Amit Sharma on 15th May 2009

Amit Sharma, CAE Manager, Synopsys

Multi-stream scenario generation is an important feature recently added to VMM. It targets generation and coordination of stimulus across multiple interfaces. It also allows hierarchical layering of scenarios and the reuse of block level scenarios at system level.

Multi-stream scenarios are described by extending the vmm_ms_scenario class. They can be composed of individual transactions, procedural code, existing single-stream scenarios, as well as other multi-stream scenarios. Depending on your requirements, a multi-stream scenario can be single-threaded, multi-threaded, or reactive.

In a nutshell, you simply need to provide an implementation of the vmm_ms_scenario::execute() task; whatever this method does is what defines the multi-stream scenario. It can execute one or more transactions or scenarios, run procedural code, read a file, or do anything else SystemVerilog code can do.

Let’s take a look at the example that comes with the VMM 1.1 release (available in sv/examples/std_lib/mss_simple directory). This example has a multi-stream scenario controlling the execution of two independent single stream scenarios: my_atm_scenario and my_cpu_scenario. Let’s see how the multi-stream scenario called my_scenario controls the execution of these two scenarios.

1. class my_scenario extends vmm_ms_scenario;
2.    my_atm_cell_scenario atm_scenario;
3.    my_cpu_scenario cpu_scenario;
4.    atm_cell_channel atm_chan;
5.    cpu_channel cpu_chan;

7.    int MSC = this.define_scenario("Multistream SCENARIO", 0);
8.    local bit [7:0] id;

10.   function new(bit [7:0] id);

12.       atm_scenario = new(id);
13.       cpu_scenario = new();
14. = id;
15.   endfunction: new

17.   task execute(ref int n);
18.       fork
19.         begin
20.            atm_cell_channel atm_chan;
21.            int unsigned nn = 0;
22.            $cast(atm_chan, this.get_channel("ATM_SCENARIO_CHANNEL"));
23.            atm_scenario.randomize with {length == 1;};
24.            atm_scenario.apply(atm_chan, nn);
25.            n += nn;
26.         end
27.         begin
28.            cpu_channel cpu_chan;
29.            int unsigned nn = 0;
30.            $cast(cpu_chan, this.get_channel("CPU_SCENARIO_CHANNEL"));
31.            cpu_scenario.randomize with {length == 1;};
32.            cpu_scenario.apply(cpu_chan, nn);
33.            n += nn;
34.         end
35.       join
36.   endtask: execute
37. endclass: my_scenario

In lines 1-3, my_scenario extends vmm_ms_scenario and declares its single-stream scenarios. If these single-stream scenario members were prefixed with the rand keyword, they would be implicitly randomized before execution.

In lines 17-36, the virtual execute() method is overridden with the user code that implements the scenario. In this example, atm_scenario and cpu_scenario are randomized and executed in parallel threads. If VMM single-stream scenarios are used (as in this example), the apply() method of each scenario is called to send its transactions through the appropriate VMM channel. There is thus a very close correlation between the procedural apply() method of single-stream scenarios and the execute() method of multi-stream scenarios. If multi-stream sub-scenarios had been instantiated in the above scenario, the execute() method of the child scenarios would be invoked instead of the apply() method.

A multi-stream scenario can connect to any channel interface in the testbench environment. In lines 22 and 30, the get_channel() method is used to get the appropriate vmm_channel handle by specifying the logical names “ATM_SCENARIO_CHANNEL” and “CPU_SCENARIO_CHANNEL”. All you need to do is register the different channels with the multi-stream generator under these logical names.

Now that we have created our multi-stream scenario, let’s see how to use it in our verification environment. The following code snippet shows how channels and scenarios are registered with the multi-stream scenario generator, how they are started and used by other transactors.

1. my_scenario sc0 = new(0);
2. ...
3. atm_cell_channel atm_chan = new("ATM CELL CHANNEL", "TEST");
4. cpu_channel cpu_chan = new("CPU CELL CHANNEL", "TEST");
5. ...
6. ...
7. gen.register_channel("ATM_SCENARIO_CHANNEL", atm_chan);
8. gen.register_channel("CPU_SCENARIO_CHANNEL", cpu_chan);
9. gen.register_ms_scenario("SCENARIO_0", sc0);
10. ...
11. ...
12. gen.stop_after_n_scenarios = 10;

14. ...
15. gen.start_xactor();
16. gen.notify.wait_for(vmm_ms_scenario_gen::DONE);
17. ...

In lines 1-4, the multi-stream scenario and the channels are created.

In lines 7-8, channels are registered with the generator through the register_channel() method under the logical names “ATM_SCENARIO_CHANNEL” and “CPU_SCENARIO_CHANNEL”. Scenarios will subsequently get the channel handles through the get_channel() method by specifying these names as discussed earlier.

In line 9, the scenario is registered with the generator through the register_ms_scenario() method. Any number of scenarios can be registered and their order of execution can be controlled. The number of scenarios to be executed is controlled through the stop_after_n_scenarios field of the generator.

With these simple steps, you can quickly implement interesting multi-stream scenarios to stress your DUT interfaces. Thanks to this functionality, you can easily reuse existing single-stream scenarios and/or other multi-stream scenarios hierarchically: you instantiate other scenarios and ‘execute’ them. For a single-stream scenario, its apply() method is called after it is randomized; for a multi-stream scenario, its execute() method is called after randomization. Any registered multi-stream scenario can be obtained by another scenario through its logical name, so a hierarchical scenario library of increasing complexity can easily be built.
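As a minimal sketch of this hierarchical reuse, a parent multi-stream scenario could look up and execute the sc0 scenario registered above under “SCENARIO_0”. The get_ms_scenario() lookup is assumed here to follow the same by-logical-name pattern as get_channel(); the class name my_top_scenario is hypothetical, so check the vmm_ms_scenario documentation of your VMM release:

```systemverilog
// Hypothetical parent scenario. "SCENARIO_0" matches the logical name
// used with register_ms_scenario() in the snippet above.
class my_top_scenario extends vmm_ms_scenario;
   int TOP = this.define_scenario("TOP SCENARIO", 0);

   task execute(ref int n);
      vmm_ms_scenario child = this.get_ms_scenario("SCENARIO_0", "");
      // A multi-stream sub-scenario is randomized and then execute()'d,
      // just as a single-stream scenario is randomized and then apply()'d.
      void'(child.randomize());
      child.execute(n);
   endtask: execute
endclass: my_top_scenario
```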

Last but not least, multi-stream scenarios can be vertically reused, which is a must-have when coordination between various generators is required. VMM allows a multi-stream scenario generator to be registered with another generator and have that top-level generator execute the scenarios of its sub-generators. This lets the top-level generator control and synchronize the execution of the other generators’ scenarios. I’ll address hierarchical multi-stream scenarios in my next blog. Stay tuned.

Posted in Reuse, Stimulus Generation | 13 Comments »