Verification Martial Arts: A Verification Methodology Blog

Archive for the 'Scoreboarding' Category

Using the VMM Datastream Scoreboard in a UVM environment

Posted by Amit Sharma on 2nd February 2012

Implementing the response-checking mechanism in a self-checking environment remains one of the most time-consuming tasks. The VMM Data Stream Scoreboard package facilitates verifying the correct transformation, destination and ordering of ordered data streams. This package is intuitively applicable to packet-oriented designs, such as modems, routers and protocol interfaces. It can also be used to verify any design that transforms and moves sequences of data items, such as DSP data paths and floating-point units. Out of the box, the VMM data stream scoreboard can be used to verify single-stream designs that do not modify the data flowing through them, for example FIFOs, Ethernet media access controllers (MACs) and bridges.

The VMM data stream scoreboard can also be used to verify multi-stream designs with user-defined data transformation and input-to-output stream routing. The transformation from input data items into expected data items is not limited to one-to-one: an input data item may be transformed into multiple expected data items (e.g. segmenters) or none (e.g. reassemblers). In comparison, the functionality available through the UVM in-order comparator or the algorithmic comparator is significantly more limited, so users might want access to the VMM DS scoreboard's functionality in a UVM environment. Using the UBUS example available in $VCS_HOME/doc/examples/uvm/integrated/ubus as a demo vehicle, this article shows how simple adapters can be used to integrate the VMM DS scoreboard into a UVM environment and thus gain access to more advanced scoreboarding functionality.

The UBUS example uses an example scoreboard to verify that the slave agent is operating as a simple memory. It extends the uvm_scoreboard class and implements a memory_verify() function that makes the appropriate calls and comparisons needed to verify a memory operation. A uvm_analysis_export is explicitly created and its 'write' implementation defined. In the top-level environment, the analysis export is connected to the analysis port of the slave monitor:

ubus0.slaves[0].monitor.item_collected_port.connect(scoreboard0.item_collected_export);

The simple scoreboard, with its explicit implementation of the comparison routines, suffices for verifying the basic operations, but would have to be enhanced significantly to provide the more detailed information a user might need. For example, take the 'test_2m_4s' test, where the environment is configured to have 2 masters and 4 slaves. Depending on how the slave memory map is configured, different slaves respond to different transfers on the bus. Now, if we want information on how many transfers went into the scoreboard for a specific combination (e.g. Master 1 to Slave 3), how many were verified to be processed correctly, and so on, it is fair to conclude that the existing scoreboarding scheme will not suffice.

Hence, it was felt that the Data Stream Scoreboard, with its advanced functionality and support for data transformation, data reordering, data loss, and multi-stream data routing, should be available to verification environments not necessarily based on VMM. From VCS 2011.12-1, this integration has been made very simple. The VMM DS scoreboard implements a generic data stream scoreboard that accepts parameters for the input and output packet types. A single instance of this class is used to check the proper transformation, multiplexing and ordering of multiple data streams. The scoreboard class now leverages a policy-based design and parameterized specializations to accept any 'Packet' class, be it VMM, UVM or OVM.

The central element in policy-based design is a class template (called the host class, which in this case is the VMM DS scoreboard) taking several type parameters as input. These are specialized with types selected by the user (called policy classes), each implementing a particular implicit method (called a policy) and encapsulating some orthogonal (or mostly orthogonal) aspect of the behavior of the instantiated host class. In this case, the 'policies' implemented by the policy classes are the 'compare' and 'display' routines.

By supplying the host class with a set of different, canned implementations of each policy, the VMM DS scoreboard can support all the different behavior combinations, resolved at compile time and selected by mixing and matching the supplied policy classes in the instantiation of the host class template. Additionally, by writing a custom implementation of a given policy, a policy-based library can be used in situations requiring behaviors unforeseen by the library implementor.

So, let's go through a set of simple steps to see how you can use the VMM DS scoreboard in a UVM environment.

Step 1: Create the policy class for UVM and define its 'policies'

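A minimal sketch of such a policy class follows, assuming the host class expects static 'compare' and 'display' methods. The exact prototypes required by vmm_sb_ds_typed are assumptions here; consult the DS scoreboard documentation in your VCS installation for the authoritative interface.

```systemverilog
// Sketch only: the method names/prototypes vmm_sb_ds_typed expects
// are assumptions; check the DS scoreboard documentation shipped
// with VCS for the exact policy interface.
class uvm_object_policy;
   // 'compare' policy: delegate to UVM's built-in compare
   static function bit compare(uvm_object actual, uvm_object expected);
      return actual.compare(expected);
   endfunction

   // 'display' policy: render the object using UVM field automation
   static function string display(uvm_object obj);
      return obj.sprint();
   endfunction
endclass
```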

Step 2: Replace the UVM scoreboard with a VMM one extended from "vmm_sb_ds_typed" and specialize it with the ubus_transfer type and the previously created uvm_object_policy.

class ubus_example_scoreboard extends vmm_sb_ds_typed #(ubus_transfer, ubus_transfer, uvm_object_policy);

`vmm_typename(ubus_example_scoreboard)

endclass: ubus_example_scoreboard

Once this is done, you can either declare a VMM TLM analysis export to connect to the bus monitor in the UBUS environment, or use the pre-defined one in the VMM DS scoreboard:

vmm_tlm_analysis_export #(ubus_example_scoreboard,ubus_transfer) analysis_exp;

Given that for any configuration one master and one slave would be active, define the appropriate streams in the constructor. Though this is not required when there is only a single stream, we define them explicitly so that this can scale up to multiple input and expect streams for different tests:

this.define_stream(0, "Slave 0", EXPECT);
this.define_stream(0, "Master 0", INPUT);

Step 2.a: Create the 'write' implementation for the analysis export

Since we are verifying the operation of the slave as a simple memory, we just add the appropriate logic to insert a packet into the scoreboard on a 'WRITE', and to expect/check when the transfer is a 'READ' from an address that has already been written to.

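A sketch of that 'write' implementation is shown below. The id argument follows the usual VMM TLM analysis-export callback convention, and addr_written() is a hypothetical helper that remembers which addresses have been written; both are assumptions, while insert() and expect_in_order() are the DS scoreboard's posting and in-order checking methods.

```systemverilog
// Sketch: field names (read_write, addr) follow the UBUS example;
// 'addr_written()' is a hypothetical bookkeeping helper.
virtual function void write(int id, ubus_transfer trans);
   if (trans.read_write == WRITE)
      this.insert(trans);                    // queue as expected data
   else if (trans.read_write == READ && this.addr_written(trans.addr))
      void'(this.expect_in_order(trans));    // check observed data in order
endfunction
```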

Step 2.b: Implement the stream_id() method

You can use this method to determine which stream a specific 'transfer' belongs to based on the packet's content, such as a source or destination address. In this case, the bus monitor updates the 'slave' property of the collected transfer according to where the address falls in the slave memory map.

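With the 'slave' property already filled in by the monitor, the stream selection can be a one-liner. The exact stream_id() virtual-method prototype in vmm_sb_ds is an assumption in this sketch:

```systemverilog
// Sketch: one logical stream per addressed slave; the real
// vmm_sb_ds stream_id() prototype may carry additional arguments.
virtual function int stream_id(ubus_transfer tr);
   return tr.slave;   // set by the bus monitor from the slave memory map
endfunction
```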

Step 3: Create the UVM Analysis to VMM Analysis Adapter

The uvm_analysis_to_vmm_analysis adapter is used to connect any UVM component with an analysis port to any VMM component via an analysis export. The adapter converts each incoming UVM transaction to a VMM transaction and drives the converted transaction to the VMM component through the analysis port-export connection. If you are using the VMM-UVM interoperability library, you do not have to create the adapter, as it is available in the library.


Create the ‘write’ implementation for the analysis export in the adapter

The write method, called via the analysis export, simply posts the received UBUS transfer from the UVM analysis port to the VMM analysis port.

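Putting Step 3 together, such an adapter might be sketched as follows. The VMM TLM port constructor argument order and the write() callback shape follow common usage but are assumptions; the interoperability library's shipped adapter is the reference. No data conversion is needed here since both sides carry ubus_transfer.

```systemverilog
// Sketch of a UVM-analysis-to-VMM-analysis adapter for ubus_transfer.
class uvm_analysis_to_vmm_analysis extends uvm_component;
   uvm_analysis_imp #(ubus_transfer, uvm_analysis_to_vmm_analysis) analysis_export;
   vmm_tlm_analysis_port #(uvm_analysis_to_vmm_analysis, ubus_transfer) analysis_port;

   function new(string name, uvm_component parent);
      super.new(name, parent);
      analysis_export = new("analysis_export", this);
      analysis_port   = new(this, "analysis_port");
   endfunction

   // Called by the UVM monitor's analysis port; forward to the VMM side
   function void write(ubus_transfer t);
      analysis_port.write(t);
   endfunction
endclass
```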

Step 4: Make the TLM connections

In the original example, the item_collected_port of the slave monitor was connected to the analysis export of the example scoreboard. Here, the DataStream scoreboard has an analysis export which expects a VMM transaction. Hence, we need the adapter created above to mediate between the analysis port of the UVM bus monitor and the analysis export of the VMM DS scoreboard.

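The resulting connections might be sketched like this. The instance names adapter and scoreboard0 are placeholders, and tlm_bind() is assumed to be the VMM TLM port-to-export binding call:

```systemverilog
// Sketch: UVM monitor -> adapter -> VMM DS scoreboard
function void connect_phase(uvm_phase phase);
   // UVM side: monitor's analysis port to the adapter's analysis imp
   ubus0.slaves[0].monitor.item_collected_port.connect(adapter.analysis_export);
   // VMM side: adapter's analysis port to the scoreboard's analysis export
   adapter.analysis_port.tlm_bind(scoreboard0.analysis_exp);
endfunction
```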

Step 5: Define Additional streams if required for multi-master multi-slave configurations

This step is not required for a single-master/single-slave configuration. However, you would need to create additional streams to verify correctness across all the different master/slave permutations in tests like "test_2m_4s".

In this case, the following is added in the test_2m_4s connect_phase():

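For the two-master/four-slave configuration, the extra streams could be defined with a couple of loops, following the define_stream() calls shown earlier (the stream-kind enum scoping is assumed to match that earlier usage):

```systemverilog
// Sketch: one INPUT stream per master, one EXPECT stream per slave
for (int m = 0; m < 2; m++)
   scoreboard0.define_stream(m, $sformatf("Master %0d", m), INPUT);
for (int s = 0; s < 4; s++)
   scoreboard0.define_stream(s, $sformatf("Slave %0d", s), EXPECT);
```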

Step 6: Add appropriate options to your compile command and analyze your results

Change the Makefile by adding -ntb_opts rvm to the command line and add +define+UVM_ON_TOP:

vcs -sverilog -timescale=1ns/1ns -ntb_opts uvm-1.1+rvm +incdir+../sv ubus_tb_top.sv -l comp.log +define+UVM_ON_TOP

And that is all; you are ready to go and validate your DUT with a more advanced scoreboard with loads of built-in functionality. This is what you will get when you execute the "test_2m_4s" test.

Thus, not only do you have stream-specific information now, but you also have access to much more functionality, as mentioned earlier. For example, you can model transformations, check for out-of-order matches, allow for dropped packets, and iterate over the different streams to access specific transfers. Again, depending on your requirements, you can use the simple UVM comparator for your basic checks and switch over to the DS scoreboard for the more complex scenarios with the flip of a switch in the same setup. This is what we did for a UVM PCIe VIP we developed earlier (From the Magician's Hat: Developing a Multi-methodology PCIe Gen2 VIP) so that users have access to all the information they require. Hopefully, this will keep you going until a more powerful UVM scoreboard arrives in a subsequent UVM version.

Posted in Communication, Interoperability, Reuse, Scoreboarding, UVM, VMM infrastructure | 2 Comments »

Using VMM template Generator to ramp up your testbench development

Posted by Amit Sharma on 25th October 2010

Amit Sharma, Synopsys
'vmmgen', the template generator for creating robust, extensible VMM-compliant environments, has been available for a long time with VMM, and it was upgraded significantly with VMM 1.2. Though the primary purpose of 'vmmgen' is to help minimize the VIP and environment development cycle by providing detailed templates for developing VMM-compliant verification environments, a lot of folks also use it to quickly understand how different VMM base classes can be used in different contexts, since the templates use a rich set of the latest VMM features to ensure the appropriate base classes and their features are picked up optimally.

Given its wide user-interaction mechanism, which presents the available features and options, users can pick the modes most relevant to their requirements. It also gives them the option to provide their own templates, adding a rich layer of customization. Based on the need, one can generate individual templates for different verification components, or generate a complete verification environment which comes with a 'Makefile' and an intuitive directory structure, thus propelling them on their way to catching the first set of bugs in their DUTs. I am sure all of you know where to pick up 'vmmgen' from: it is available in the <VMM_HOME>/Shared/bin area or in $VCS_HOME/bin.

Some of the rich set of features available now include:

• Template Options:

– Complete environment generation

– Individual templates generation

• Options to create Explicitly phased environments or Implicitly phased environment or to mix Implicitly phased components and Explicitly phased components

• Usage of VMM Shorthand macros

• Creating RAL based environment, and providing Multiplexed domain support if required

• Hooking up VMM Performance Analyzer at the appropriate interfaces

• Hooking up the DS Scoreboard at the relevant interfaces (with options to choose from a range of integration options, e.g. through callbacks, through TLM 2.0 analysis ports, or directly through transactors, channels or notifications)

• Ability to hook up different generators (atomic, scenario, Multistream generators) at different interfaces

• Creating a scenario library and Multistream scenario creation

• Multi-driver generator support for different kinds of transactions in the same environment

• Factory support for transactions, scenarios and multi-stream scenarios, with a sample factory testcase that explains the usage of transaction override from a testcase

• ‘RTL config’ support for drivers and receivers.

• Various types of unidirectional and bi-directional TLM connections between generator and driver.

• Analysis ports/exports OR parameterized notify observers to broadcast the information from monitor to scoreboard and coverage collectors.

• Multi test concatenation support and management to run the tests

• Creating portable Interface wrapper object, and setting up interface connections to the testbench components using vmm_opts::set_object/get_object_obj

• Creating a Generic slave component

• Option to use default names or user provided names for different components

As you can see, the above list is itself quite comprehensive, and let me tell you that it is not exhaustive, as there are many more features in vmmgen.

With respect to usage as well, there are multiple flavors. In the default mode, the user is taken through multiple detailed choices/options while creating and connecting the different components of the verification environment. However, some folks might want to use 'vmmgen' within their own wrapper script/environment, and for them there are options to generate the environments by providing all the required options on the command line or through a configuration file. Some of these switches include:

-SE [y/n] : Generates a complete environment with sub-environments

-RAL [y/n] : Create RAL based verification environments

-RTL [y/n] : Generates RTL configuration for the environment

-ENV <name>, -TR <name> : Provide the name for the environment class and transaction classes. Names for multiple transaction classes can be provided as well:

vmmgen -l sv -TR tr1+tr2

-cfg_file <file_name> : Option to provide a configuration file for the options

There is an option to generate an environment quickly by taking the user through the minimum number of questions (-q).

Additionally, the user can provide his or her own templates through the -L <template directory> option.
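Tying these switches together, a non-interactive invocation might look like the following. The environment name, transaction names and configuration file are placeholders, and the exact option spellings should be checked against your vmmgen version:

```
vmmgen -l sv -SE y -RAL y -ENV my_env -TR wr_txn+rd_txn -cfg_file my_cfg.txt
```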

As far as individual template generation goes, a template can be generated for each of the component types covered by the features listed above, from transactions and transactors through to scoreboards and complete sub-environments.

I am sure a lot of you have already been using 'vmmgen'. For those who haven't, I encourage you to try out its different options. I am sure you will find it immensely useful: it will not only help you create verification components and environments quickly, but will also make sure they are optimal and appropriate for your requirements.

Posted in Automation, Coding Style, Customization, Modeling Transactions, Organization, Performance Analyzer, Scoreboarding, Tools & 3rd Party interfaces, VMM infrastructure | Comments Off

WRED Verification using VMM

Posted by Amit Sharma on 22nd July 2010

Puja Sethia, ASIC Verification Technical Lead, eInfochips

With the increase in Internet usage, consumer appetite for high-bandwidth content such as streaming video and peer-to-peer file sharing continues to grow. The quality-of-service and throughput requirements of such content bring concerns about network congestion. WRED (Weighted Random Early Detection) is one of the network congestion avoidance mechanisms. WRED verification challenges span the injection of transactions, the creation of various test scenarios, the prediction and checking of WRED results, and report generation. To address this complexity, it is important to choose the right technique and methodology when architecting the verification environment.

WRED Stimulus Generation – Targeting real network traffic scenarios

There are two major WRED stimulus generation requirements:

1. Normal Traffic Generation – Interleaved for different traffic queues (classes) and targeting regions below the minimum threshold for the corresponding queue.

2. Congested Traffic Generation – Interleaved for different traffic queues and targeting regions below the minimum threshold, above the maximum threshold, and in the range between the minimum and maximum thresholds for the corresponding queue.

WRED stimulus generation can be implemented in three major steps:

1. Identify test requirements such as the traffic patterns for different regions

2. Identify packet requirements for each interface to generate the required test scenario

3. Generate packets as per the provided constraints and pass them on to the transactors

This demarcation helps ensure that the flow is intuitive, concise and flexible while planning the stimulus generation strategy. To achieve this hierarchical and stacked requirement, we used VMM scenarios. VMM single-stream scenarios, which create randomized lists of transactions based on constraints, map directly to the third step. For the first two steps, we need the capability to drive, control and access more than one channel, and we also need to create a hierarchy of scenarios. VMM multi-stream scenarios provide the capability to control and access more than one channel, and higher-level multi-stream scenarios can also instantiate other multi-stream scenarios. Separate multi-stream scenarios were thus defined to generate the different traffic patterns, and these are used in the test multi-stream scenario to interleave all the different types of traffic. As the requirement is to interleave different types of traffic coming from multiple multi-stream scenarios, there are multiple sources of transactions and a single destination. The VMM scheduler helps funnel transactions from multiple sources into one output channel, using a user-configurable scheduling algorithm to select and place an input transaction on the output channel.

In this way, traffic from CLASS_0 and CLASS_1 is easily interleaved, with each of them targeting a different region.
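The interleaving described above might be sketched as a top-level multi-stream scenario that forks its per-class sub-scenarios. The class names and fields here (class_0_traffic_sc, class_1_traffic_sc) are assumptions for illustration, and the execute() prototype follows common vmm_ms_scenario usage:

```systemverilog
// Hedged sketch: sub-scenario class names are placeholders.
class interleaved_traffic_sc extends vmm_ms_scenario;
   class_0_traffic_sc c0 = new();   // targets the below-minimum-threshold region
   class_1_traffic_sc c1 = new();   // targets the min-to-max-threshold region

   virtual task execute(ref int n);
      int n0, n1;
      fork
         c0.execute(n0);   // sub-scenarios run concurrently, each driving
         c1.execute(n1);   // its packets toward the shared output
      join
      n += n0 + n1;        // report total transactions generated
   endtask
endclass
```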

Predicting, Debugging and Checking WRED Results using VMM with-loss scoreboard

It is hard to predict whether and which packets might be lost, and this makes it difficult to debug or verify the WRED results. The VMM DataStream scoreboard helps address this: its "expect_with_losses" mode, combined with a prediction mechanism based on the configured drop probability, can overcome the challenge of predicting and verifying detailed WRED results.

The DS scoreboard identifies which packets are lost, in that they do not appear in the output stream, and posts the lost packets into a separate queue. At the same time, it verifies the correctness of all the other packets received at the output. It generates statistics based on the packets matched and lost, and this information can be used to represent the actual WRED results. By linking these results to the stimulus generator, the actual WRED results can be categorized by the passed/lost packet count for each region. The actual passed/lost packet count for each region can then be compared with the predicted counts, calculated from the configured drop probability for each region, to determine the pass/fail result.

For more information on the WRED verification challenges and the usage of VMM scenarios and VMM DS scoreboarding techniques to overcome them, refer to the paper “Verification of Network Congestion Avoidance Algorithm like WRED using VMM” at https://www.synopsys.com/news/pubs/snug/india2010/TA2.3_%20Sethia_Paper.pdf.

Posted in Scoreboarding, Stimulus Generation | Comments Off

Using the Datastream Scoreboard Iterators

Posted by Amit Sharma on 23rd February 2010

Varun S, CAE, Synopsys


Datastream scoreboard iterators are objects that know how to traverse and navigate the
implementation of the scoreboard. They provide high level methods for moving through the
scoreboard and modifying its content at the location of the iterator.

The actual data structure used to implement the datastream scoreboard is entirely private
to the implementation of the foundation classes. However, to implement user-defined
functionality, the entire content of the scoreboard should be available to the user so
that it can be searched and modified. Iterators let you do this without exposing the
underlying implementation.

Logical streams:

Datastream applications involve the transmission, multiplexing, prioritization, or transformation
of data items. A stream in general is a sequence of data elements. In VMM, a stream is composed of
transaction descriptors based on the vmm_data class. A stream of data elements flowing between two
transactors through a vmm_channel instance, or to the DUT through a physical interface, is a
structural stream. Through multiplexing, it may be composed of data elements from different inputs
or destined to different outputs. If there is a way to identify the original input a data element
came from or the destination it is going to, a structural stream can be said to be composed of
multiple logical substreams. It is up to you to define the logical streams based on the nature of
the application you are trying to verify.

For example, if you are verifying a router with 16 input and 16 output ports, all data packets
going from input port 0 to output port 0 can be viewed as belonging to the same logical stream,
giving rise to a total of 256 logical streams.
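Sticking with the router example, the packet-to-stream mapping might be computed as below; the port numbers would come from packet fields whose names depend on your transaction class:

```systemverilog
// Sketch: 16 inputs x 16 outputs -> 256 logical streams
function int unsigned logical_stream_id(int unsigned in_port,
                                        int unsigned out_port);
   return in_port * 16 + out_port;   // e.g. input 2 to output 5 -> stream 37
endfunction
```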

Kinds of iterators:

1. Scoreboard iterators (vmm_sb_ds_iter class) - These move from one logic stream of expected data to
   another. An instance is created using vmm_sb_ds::new_sb_iter() method.

2. Stream iterators (vmm_sb_ds_stream_iter class) - These move from one data element to another on a
   single logical data stream. An instance is created using vmm_sb_ds_iter::new_stream_iter() method.

		-----------------------------------------
		|  d00	|  d01	|  d02	|	|  d0n	|	^
		-----------------------------------------	|
								|
		-----------------------------------------	|
		|  d10	|  d11	|  d12	|	|  d1n	|	|
		-----------------------------------------	|
								|vmm_sb_ds_iter
		-----------------------------------------	|
		|	|	|	|	|	|	|
		-----------------------------------------	|
								|
		-----------------------------------------	|
		|  dm0	|  dm1	|  dm2	|	|  dmn	|	v
		-----------------------------------------

			vmm_sb_ds_stream_iter

The figure above shows 'm' logical streams, each stream consisting of 'n' data elements. As previously
mentioned, logical streams are user-defined based on the application and within the scoreboard they
correspond to data queues into which data elements are stored. Once the queues are populated, if the
user wishes to modify a data element or delete a few of them, he can do so by the use of iterators.

A set of high level methods have been implemented within the iterator classes, which aid in navigating
through the data queued up in the scoreboard and modify it, if necessary. The methods first(), next(),
last(), prev(), length() are some of the methods that have been implemented within both the iterator
classes.

Note: Logical streams only exist if they contain (or have contained in the past) expected data,
and the iterator will only iterate over logical streams that exist.

Sample code:

     class my_scoreboard extends vmm_sb_ds;

           // Delete, on every stream, all packets queued before 'pkt'
           function void delete_before(vmm_data pkt);
               /* creates a scoreboard iterator that iterates over different streams of data */
               vmm_sb_ds_iter iter_over_streams = this.new_sb_iter();

               for(iter_over_streams.first(); iter_over_streams.is_ok(); iter_over_streams.next()) begin

                   /* a stream iterator that scans the data within the current stream */
                   vmm_sb_ds_stream_iter scan_stream = iter_over_streams.new_stream_iter();

                   if(scan_stream.find(pkt)) begin
                       repeat(scan_stream.pos()) begin
                           scan_stream.prev();
                           scan_stream.delete();
                       end
                   end
               end
           endfunction
     endclass

The sample code shows a very simple scoreboard. Here, a scoreboard iterator "iter_over_streams" is
instantiated and set to iterate over all the streams using the for loop. 

The method first() sets the scoreboard iterator on the first stream in the scoreboard.

The method is_ok() returns TRUE if the iterator is currently positioned on a valid stream, returns
FALSE if the iterator has moved beyond the valid streams.

The method next() moves the iterator to the next applicable stream. A stream iterator "scan_stream"
is then created to iterate over the stream on which "iter_over_streams" is positioned. 

The find() method used in the if-statement is used to locate the next packet in the stream matching
the specified packet "pkt". The repeat loop is used to delete all the packets before the found packet. 

The method pos() returns the current position of the iterator, prev() moves the iterator to the
previous packet and delete() deletes the packet at the position where the iterator is positioned.

Hence, you can see that you can easily traverse the different streams to find or alter any
specific packets that you need to. Hopefully you will find this useful and will use these
iterators to make your verification tasks simpler.

For more information about the complete list of methods, see the VMM Datastream Scoreboard User Guide.

Posted in Reuse, Scoreboarding | 1 Comment »

Negative Verification

Posted by Janick Bergeron on 20th December 2008

The objective of negative verification is to check that an error is detected when present. It is actually one of the most important aspects of verification, because undetected errors do not “exist”. It is thus important to make sure that your monitors properly issue an error message when the errors they are supposed to detect actually occur.

But how do you distinguish between an error that is supposed to occur (and is not an error) and one that is unexpected (and actually is an error)? How do you let the former pass without masking the latter? And how do you make sure that the expected error message indeed came out? That’s where the message demotion and catching capabilities of the VMM message service come into play. And we use these capabilities in verifying the VMM library itself!

Take the VMM Data Stream Scoreboard, for example. It has three pre-defined methods for checking an observed response against the expected response. How would you know if your integration of it worked properly unless you injected an error in your testcase? Or if you decide to write your own “expect()” method, how would you verify that it detects errors?

Let me illustrate how that could be done using the APB bus example located in $VMM_HOME/sv/examples/sv/apb_bus in the OpenSource distribution.

The example provides a simple positive test that does not expect any errors. If you invoke “make”, you should see the following message:

Simulation PASSED on /./ (/./) at 6010 (0 warnings, 0 demoted errors & 0 demoted warnings)

To check that the scoreboard would indeed identify a bad transaction, should it see one, we need to break the correct functionality of the environment. So let’s create a test that corrupts a transaction in each of the streams. That can easily be accomplished by registering, after the scoreboard integration callback, a callback that modifies the transaction before it is executed. For example, let’s corrupt the address of the Nth transaction targeting slave #N, for each of the three slaves:

class apb_master_to_sb_inj extends apb_master_cbs;
   int n[3] = '{0, 0, 0}; 

   virtual task pre_cycle(apb_master xactor,
                          apb_rw     cycle,
                          ref bit    drop);
      // Corrupt the address on the Nth transaction on slave #N
      if (this.n[cycle.addr[9:8]]++ == cycle.addr[9:8]) begin
         cycle.addr[7:0] = ~cycle.addr[7:0];
      end
   endtask: pre_cycle 

endclass: apb_master_to_sb_inj

We then inject the error in a new test:

initial
begin
   tb_env env = new;
   env.build();
   begin
      apb_master_to_sb_inj inj = new;
      env.mst.append_callback(inj);
   end
   env.run();
   $finish();
end

and voila! Three streams, three failures:

Simulation *FAILED* on /./ (/./) at 6010: 3 errors, 0 warnings

It is a good idea to include this test in your regression suite. But how do you make it pass when it really needs to fail? The new VMM message catcher to the rescue! Simply define a handler that will trap and mask the following error message:

!ERROR![FAILURE] on Data Stream Scoreboard(Master->Slave) at                   70:
    In-order packet was not expected:
       Actual: APB WRITE @ 0x000000db = 0xd612ef36
       Expect: APB WRITE @ 0x00000024 = 0xd612ef36.

First, let’s define a message catcher that turns all caught messages into normal note messages. For safety reasons, it is good practice to count the number of such demotions, to ensure that only the expected messages were demoted:

class handler extends vmm_log_catcher; 

   int n = 0; 

   virtual function void caught(vmm_log_msg msg);
      msg.effective_typ = vmm_log::NOTE_TYP;
      msg.effective_severity = vmm_log::NORMAL_SEV;
      this.n++;
      this.throw(msg);
   endfunction 

endclass

then register this message catcher with the appropriate message service interface instance, specifying the appropriate message content. At the end of the test, simply check that only three messages were demoted:

initial
begin
   tb_env env = new;
   handler hdlr = new;
   env.build();
   env.m2s.log.catch(hdlr, .text("/In-order packet was not expected/"));
   begin
      apb_master_to_sb_inj inj = new;
      env.mst.append_callback(inj);
   end
   env.run();
   if (hdlr.n != 3) begin
      `vmm_error(log, "The scoreboard did not properly detect errors");
   end
   $finish();
end

And now the negative test will successfully report that the scoreboard detects errors:

Normal[NOTE] on Data Stream Scoreboard(Master->Slave) at                   70:
    In-order packet was not expected:
       Actual: APB WRITE @ 0x000000db = 0xd612ef36
       Expect: APB WRITE @ 0x00000024 = 0xd612ef36.
Normal[NOTE] on Data Stream Scoreboard(Master->Slave) at                 1090:
    In-order packet was not expected:
       Actual: APB WRITE @ 0x000000ca = 0xe6aa5052
       Expect: APB WRITE @ 0x00000035 = 0xe6aa5052.
Normal[NOTE] on Data Stream Scoreboard(Master->Slave) at                 2030:
    In-order packet was not expected:
       Actual: APB READ @ 0x000000bf = 0xxxxxxxxx
       Expect: APB READ @ 0x00000040 = 0x57879754.
Simulation PASSED on /./ (/./) at 6010 (0 warnings, 0 demoted errors & 0 demoted warnings)

Posted in Callbacks, Scoreboarding | 4 Comments »
