Verification Martial Arts: A Verification Methodology Blog

Accessing Virtual Registers in RAL

Posted by Amit Sharma on August 5th, 2010

Amit Sharma, Synopsys

In one of my previous posts on Virtual Registers, I talked about how you use RAL to model virtual registers and fields, which are an efficient means of implementing a large number of registers in a memory or RAM instead of in individual flip-flops. I also mentioned that they are implemented as arrays associated with a memory.

In this post, I will talk about how you access these registers through RAL. Normal registers can be accessed using their hierarchical names in RAL, for which RAL generates the required offsets and addresses. For virtual registers, however, you need to provide an index along with the virtual register name to refer to them in read and write operations. RAL then generates the offset address based on the index of the virtual register.

Consider the following example:

[Figure: RALF specification defining the DMA_BFRS virtual register array]
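Since the figure is not reproduced here, the following is a minimal RALF sketch consistent with the address arithmetic used later in this post (a byte-wide memory and a virtual register array based at offset 'h1000 with an increment of 4); the array size and field name are illustrative guesses, not the original specification:

memory ram {
   size 8192; bits 8; access rw;
}

block blk {
   bytes 4;
   memory ram @0x0000
   virtual register DMA_BFRS[16] ram @0x1000 +4 {
      bytes 4;
      field buffer_ptr { bits 32; };
   }
}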

The above RALF specification would translate into the following SV classes in the RAL model:

 

[Figure: the generated SystemVerilog RAL model classes, including 'ral_vreg_dut_DMA_BFRS']

‘DMA_BFRS’, the instance of the virtual register class ‘ral_vreg_dut_DMA_BFRS’, is not an array in the RAL model, and is thus different from a typical register array modeled in RAL.

Now, how do you access the individual registers mapped to the memory? You specify the index as an argument to the read/write methods. For example, to access the 4th index of this array, the access would look like:

blk.DMA_BFRS.write(4, status, 'hFF);

RAL ultimately accesses the underlying RAM to service such a call. Hence, the following are functionally equivalent:

blk.DMA_BFRS.write(4, status, 'hFF);
and
blk.ram.write(4 * 'h0004 + 'h1000, status, 'hFF);

The following illustration explains this point:

[Figure: the two access paths, virtual register vs. direct memory, both resolving to the underlying RAM]

You can see that you now have the option of accessing these registers in two different ways, but both eventually go through the RAM. You can leverage the virtual register callbacks or the RAL memory callbacks for additional customization and extensibility, on top of the other capabilities you get when using VMM RAL.

Hope this was useful. Do comment on any other specific functionality you would like to see covered when modeling these kinds of registers.


Posted in Register Abstraction Model with RAL | Comments Off


Low Power Verification for telecom designs with VMM-LP

Posted by Paul Kaunds on August 2nd, 2010

Paul Kaunds, founder, Kacper Technologies

Today, power has become a dominant factor for most applications, including hand-held portable devices, consumer electronics, and communications and computing devices. Market requirements are shifting focus from traditional constraints like area, cost, performance and reliability to power consumption.

For efficient power management, power intent has to be reviewed at each phase of the design flow: starting from RTL, through synthesis, and into place and route in physical design. The power specification of the design remains common throughout the flow. In order to implement a uniform power flow, we need a common format which is easily understood by all tools used along the flow, is reusable, and can be declared independently of the RTL. The creation of UPF (Unified Power Format) as an IEEE standard has been the best solution for specifying power intent for design implementation as well as verification. UPF sits beside design and verification, and establishes the relationship between the design specification and the low power specification.

Understanding these driving factors, designers have opted for advanced low power design techniques like power gating, Multi-Supply Multi-Voltage (MSMV), retention and isolation. Such low power techniques have increased design complexity, which has a direct impact on verification. This makes the verification engineer's job more challenging, as power intent must be verified along with functionality.

Some of the low-power design verification challenges are:

  • Power switch off/on duration
  • Transitions between different power modes and states
  • Interfaces between power domains
  • Missing level shifters, isolation and retention
  • Legal and illegal transitions
  • Clock enabling/toggling
  • Verifying retention register, isolation cell and level shifter strategies

To tackle these challenges, we need a structured and reusable verification environment which is power-aware and encapsulates best practices for verifying such complex low power designs. To address the low power requirements of one of our complex telecom designs, we made use of the Verification Methodology Manual for Low Power (VMM-LP) for SONET/SDH verification (http://www.vmmcentral.org/vmmlp/vmmlp.html).

We developed a framework for accelerating the verification of low power designs. Our verification environment included VMM along with UPF, RAL, and a power state manager controlled by a power management unit.


Generation of Power control signals

Using the extensive features provided by VMM, we developed a VMM-based power generator that enables the user to generate power control signals in random, constrained-random and directed fashion to verify power management blocks and to exercise low power strategies like retention registers and isolation cells. This power generator was designed so that it can be hooked into any VMM-based environment. It gives the user the ability to generate power signals without wrestling with the complex sequencing of such control signals.
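To give a flavor of what such a generator might randomize, here is a minimal sketch of a power-control transaction. The class and field names are our own illustration (not Kacper Technologies' actual code), built on the standard vmm_data shorthand macros:

class power_ctrl_trans extends vmm_data;
   typedef enum {POWER_UP, POWER_DOWN, RETAIN, ISOLATE} kind_e;
   rand kind_e kind;
   rand int unsigned off_duration;  // cycles the domain stays powered off
   rand int unsigned iso_lead_time; // isolation asserted before power-down

   // Keep the sequencing legal: isolation must lead the power switch
   constraint legal_sequence {
      iso_lead_time inside {[1:16]};
      off_duration  inside {[10:1000]};
   }

   `vmm_data_new(power_ctrl_trans)
   `vmm_data_member_begin(power_ctrl_trans)
      `vmm_data_member_enum(kind, DO_ALL)
      `vmm_data_member_scalar(off_duration, DO_ALL)
      `vmm_data_member_scalar(iso_lead_time, DO_ALL)
   `vmm_data_member_end(power_ctrl_trans)
endclass

A generator can then randomize such transactions, honoring the sequencing constraints, and drive the corresponding power control signals.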

 

Power Scenario generation

Power scenario generation eased the verification of power-aware designs. To generate power signals, a user simply enters ranges in a scenario file, based on the power specification. The same generator can be used with any other VMM environment, without modification, for conventional verification. Configurable aspects of a scenario include varying retention, isolation and power signal widths. Additionally, other key capabilities were enabled:

  • Adheres to the standard power sequence as specified in VMM-LP
  • All power signals and power domains are parameterizable
  • The power generator can be hooked into any VMM environment
  • Gives a well-defined structure for defining power sequences
  • Avoids overlapping of control signals
  • Allows multiple save/restore operations in each sequence

The power state manager takes care of state transition operations by defining power domains, states, modes and their dependencies. It is the keeper of all power transitions and can monitor every transition dynamically.


Automated Power Aware Assertion Library

Another mechanism for addressing the challenges in low power verification is the use of powerful LP assertions. Low power assertions work in conjunction with the design's logic data checkers. VMM-LP assertion rules and coverage techniques helped us achieve a comprehensive low power verification solution, and offered handy recommendations for a consistent and portable implementation of the methodology.

To increase the efficiency of our low power telecom design verification, we also developed an automated power-aware assertion library which is generic to any domain and can be hooked to any design. The assertion library is built on top of VMM-LP rules and guidelines with standard power sequencing. Key features of our assertion library include (see the sketch after this list):

  • Assertions to verify access of software-addressable registers in on/off conditions
  • Clock toggling during power on/off
  • Reset during power off
  • Power control signal sequencing, etc.
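As an illustration of the clock-related checks above, here is a minimal sketch of a "no clock activity during power off" assertion, sampled on an always-on reference clock. All signal names (ref_clk, rst_n, pwr_on, clk_en) are hypothetical:

// clk_en gates the domain clock; checked on an always-on reference clock
property p_no_clk_when_off;
   @(posedge ref_clk) disable iff (!rst_n)
      !pwr_on |-> !clk_en;
endproperty

a_no_clk_when_off: assert property (p_no_clk_when_off)
   else $error("Domain clock enabled while power is off");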


More details on our flow and its usage can be found in the paper we recently presented at the Synopsys Users Group meeting (SNUG) in Bangalore:

https://www.synopsys.com/news/pubs/snug/india2010/TA3.1_Kaunds_Paper.pdf
https://www.synopsys.com/news/pubs/snug/india2010/TA3.1_kaunds_pres.pdf


Posted in Low Power, Register Abstraction Model with RAL | 3 Comments »


Controlling transaction generation timing from the driver using ‘PULL’ mode

Posted by Amit Sharma on July 30th, 2010

Sadiya Tarannum Ahmed, Senior CAE, Synopsys

In the default flow, transaction-level communication through VMM channels operates in “PUSH” mode, i.e., the process is initiated by the producer, which randomizes a transaction and pushes it into the channel when the channel is empty. The process repeats whenever the channel empties or the consumer retrieves a transaction from it. However, in specific cases you might not want the generator to create stimulus before the bus protocol is ready, or until stimulus is requested by the bus protocol. In such cases, you may want to use the “PULL” mode of VMM channels.

In “PULL” mode, the consumer initiates the process by requesting transactions; the generator (producer) then responds by putting a transaction into the channel.

 

[Figure: producer and consumer communicating through a vmm_channel in PULL mode]

The following steps show how the communication can operate in “PULL_MODE”.

Step 1: In the generator code, call the vmm_channel::wait_for_request() method before randomizing and putting the transaction into the channel.

By default, vmm_channel::wait_for_request() does not block (“PUSH” mode). In “PULL” mode, it blocks until get(), peek() or activate() is invoked by the consumer.

class cpu_rand_scenario extends vmm_ms_scenario;
   …
   cpu_trans blueprint;
   `vmm_scenario_new(cpu_rand_scenario)
   …
   virtual task execute(ref int n);
       blueprint = cpu_trans::create_instance(this, "blueprint");
       chan.wait_for_request();
       assert(blueprint.randomize())
          else `vmm_fatal(log, "cpu_trans randomization failed");
       chan.put(blueprint.copy());
       n++;
   endtask
   …
`vmm_class_factory(cpu_rand_scenario)
endclass

Step 2: Set the mode of the channel.

By default, all channels are configured to work in “PUSH_MODE”; they can be set to work in “PULL_MODE” either statically or dynamically.

  • Static setting: set the mode of the channel in the testbench environment or in your testcases:

      gen_to_drv_chan.set_mode(vmm_channel::PULL_MODE);

  • Dynamic setting: use the runtime switch +vmm_opts+pull_mode_on

Since the mode can be changed through vmm_opts, hierarchical or instance-based settings for any channel can also be made at runtime. This brings a lot of flexibility: the same channel can be made to work in different modes for different tests, or even within the same simulation.

The pre-defined atomic and scenario generators now support this feature, which can be enabled either by runtime control or by setting the associated channel mode to PULL_MODE in the environment.
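For completeness, here is a minimal sketch of the consumer side, assuming a hypothetical driver and a cpu_trans channel type created with the `vmm_channel macro. In PULL_MODE, the blocking get() below is the request that unblocks wait_for_request() in the generator shown above:

class cpu_driver extends vmm_xactor;
   cpu_trans_channel in_chan; // assumed created via `vmm_channel(cpu_trans)

   function new(string inst, cpu_trans_channel chan);
      super.new("cpu_driver", inst);
      this.in_chan = chan;
   endfunction

   virtual task main();
      cpu_trans tr;
      super.main();
      forever begin
         in_chan.get(tr); // unblocks the generator's wait_for_request()
         // ... drive tr onto the bus ...
      end
   endtask
endclass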

Thus you now have the flexibility to configure your transaction level communication easily based on your requirements.


Posted in Communication, Stimulus Generation | 2 Comments »


Modeling Virtual Registers and Fields in RAL

Posted by Amit Sharma on July 27th, 2010

Amit Sharma, Synopsys

Typically, fields and registers are assumed to be implemented in individual, dedicated hardware structures with a constant and permanent physical location, such as a set of D flip-flops. However, for some registers, which are typically large in number, implementing them in a memory or RAM instead of in individual flip-flops is more efficient. These are virtual fields and virtual registers; because they are implemented in a RAM, their physical location and layout are defined by an agreement between the hardware and the software, not by their physical implementation.

Virtual fields and registers can be modeled in RAL by creating a logical overlay on a RAL memory model, which can then be accessed as if it contained real physical fields and registers. Virtual fields are contained in virtual registers.

 

[Figure: virtual registers and fields overlaid on a RAL memory model]

Virtual registers are defined as contiguous locations within a memory and can span multiple memory locations. They are always composed of entire memory locations, never fractions of them, and are modeled as arrays associated with a memory.

The association of a virtual register array with a memory can be static or dynamic, but the structure of these registers should be specified in the RALF file.

********************************************************************************

Static virtual registers are associated with a specific memory and are located at specific offsets within this memory. The association is specified in the RALF file, as shown in the following example; it is permanent and cannot be broken at runtime.

Static Virtual Register Array:

Example 1:

            memory ram1 {
              size 32; bits 8; access rw; initial 0;
            }

            block dut {
               bytes 2;
               memory ram1 @0x0000
               virtual register vreg[3] ram1 @0x5 +2 {
                   bytes 2;
                   field f1 { bits 4; };
                   field f2 { bits 4; };
                   field f3 { bits 8; };
               }
            }

        An array of 3 virtual registers is associated with the memory ‘ram1’, starting at offset 5. The increment value is specified as 2 because a minimum of 2 memory locations is required to implement one virtual register. The 3 virtual registers are associated with ram1 as shown:

    vreg[0] is associated at offset 0x5 of ram1.
    vreg[1] is associated at offset 0x7 (0x5 + 2) of ram1.
    vreg[2] is associated at offset 0x9 (0x7 + 2) of ram1.

********************************************************************************

Dynamic virtual registers are associated with a user-specified memory and are located at user-specified offsets within that memory; the association is made at runtime. The dynamic allocation of virtual register arrays can also be performed randomly by a Memory Allocation Manager instance.

The number of virtual registers in the array and their association with the memory are specified in the SystemVerilog code and must be correctly implemented by the user. Dynamic virtual register arrays can be relocated or resized at runtime.

Dynamic Virtual Register Array:

Example 2:

            memory ram1 {
              size 64; bits 8; access rw; initial 0;
            }

            virtual register vreg {
             bytes 2;
             field f1 { bits 4; };
             field f2 { bits 4; };
             field f3 { bits 8; };
            }

            block dut {
             bytes 2;
             memory ram1 @0x0000
             virtual register vreg=vreg1;
             virtual register vreg=vreg2;
            }

           Here, the virtual register structure is specified in the RALF file, but the association is made at runtime (in the env or testcase) in one of the following ways:

a. Dynamic specification:

   env.ral_model.dut.vreg1.implement(4, env.ral_model.dut.ram1, 'h20, 2);

   A virtual register array of size 4 is implemented starting at offset 'h20 of the memory ram1, with an increment value of 2. The virtual registers will be associated as follows:

    vreg1[0] is associated at offset 0x20 of ram1.
    vreg1[1] is associated at offset 0x22 (0x20 + 2) of ram1.
    vreg1[2] is associated at offset 0x24 (0x22 + 2) of ram1.
    vreg1[3] is associated at offset 0x26 (0x24 + 2) of ram1.

b. Randomly implemented dynamic specification:  

   env.ral_model.dut.vreg2.allocate(5, env.ral_model.dut.ram1.mam);

   A virtual register array of size 5 is allocated randomly by the Memory Allocation Manager (MAM). The allocated region is randomly selected within the address space managed by the specified MAM.
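Once implemented or allocated, dynamic virtual registers are accessed like any others, by passing the index to the read/write methods. For example (the index and data values here are arbitrary):

   env.ral_model.dut.vreg1.write(2, status, 'hABCD);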

********************************************************************************

I hope this post helps you model and verify the registers and fields implemented in the DUT more efficiently. Please look out for the next post, where I will talk about the different modes through which you can access these registers and fields.


Posted in Register Abstraction Model with RAL | Comments Off


WRED Verification using VMM

Posted by Amit Sharma on July 22nd, 2010

Puja Sethia, ASIC Verification Technical Lead, eInfochips

With the increase in Internet usage, consumer appetite for high-bandwidth content such as streaming video and peer-to-peer file sharing continues to grow. The quality-of-service and throughput requirements of such content raise concerns about network congestion. WRED (Weighted Random Early Detection) is one of the network congestion avoidance mechanisms. WRED verification challenges span the injection of transactions, the creation of various test scenarios, the prediction and checking of WRED results, and report generation. To address this complexity, it is important to choose the right techniques and methodology when architecting the verification environment.

WRED Stimulus Generation – Targeting real network traffic scenarios

There are two major WRED stimulus generation requirements:

1. Normal traffic generation – interleaved across different traffic queues (classes) and targeting the region below the minimum threshold for the corresponding queue.

2. Congested traffic generation – interleaved across different traffic queues and targeting the regions below the minimum threshold, above the maximum threshold, and between the minimum and maximum thresholds for the corresponding queue.

[Figure: WRED stimulus generation flow, from test requirements to packet generation]

As shown in the diagram above, WRED stimulus generation requirements can be implemented in three major steps:

1. Identify test requirements such as the traffic patterns for different regions

2. Identify packet requirements for each interface to generate the required test scenario

3. Generate packets as per the provided constraints and pass them on to the transactors

This demarcation helps ensure that the flow is intuitive, concise and flexible when planning the stimulus generation strategy. To achieve this hierarchical, stacked requirement, we used VMM scenarios. VMM single-stream scenarios, which create randomized lists of transactions based on constraints, map directly to the third step. For the first two steps, we need the capability to drive, control and access more than one channel, and we also need to create a hierarchy of scenarios. VMM multi-stream scenarios provide the capability to control and access more than one channel, and higher-level multi-stream scenarios can instantiate other multi-stream scenarios. Separate multi-stream scenarios were thus defined to generate the different traffic patterns, and these are used in the test multi-stream scenario to interleave all the different types of traffic. As the requirement is to interleave different types of traffic coming from multiple multi-stream scenarios, there are multiple sources of transactions and a single destination. The VMM scheduler helps funnel transactions from multiple sources into one output channel; it uses a user-configurable scheduling algorithm to select and place an input transaction on the output channel.

[Figure: code snippet interleaving CLASS_0 and CLASS_1 traffic in a multi-stream scenario]
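Since the snippet is not reproduced here, the following sketch suggests what such a top-level multi-stream scenario might look like (the class names are ours, not the paper's):

class wred_traffic_msc extends vmm_ms_scenario;
   class0_msc c0; // hypothetical sub-scenario: CLASS_0 traffic for one region
   class1_msc c1; // hypothetical sub-scenario: CLASS_1 traffic for another

   `vmm_scenario_new(wred_traffic_msc)

   virtual task execute(ref int n);
      int n0, n1;
      // Run both traffic classes concurrently so their packets interleave;
      // sub-scenario construction and channel registration are elided
      fork
         c0.execute(n0);
         c1.execute(n1);
      join
      n += n0 + n1;
   endtask
endclass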

The snippet shows how traffic from CLASS_0 and CLASS_1 is easily interleaved, with each targeting a different region.

Predicting, Debugging and Checking WRED Results using VMM with-loss scoreboard

It is hard to predict whether and which packets might be lost, and this makes it difficult to debug and verify WRED results. The VMM DataStream scoreboard helps address this: its “expect_with_losses” option, combined with a prediction mechanism based on the configured drop probability, can overcome the challenge of predicting and verifying detailed WRED results.

[Figure: DataStream scoreboard separating matched packets from lost packets]

As shown in the diagram above, the DS scoreboard identifies packets as lost when they do not appear in the output stream, and posts the lost packets into a separate queue. At the same time, it verifies the correctness of all the other packets received at the output. It generates statistics based on the packets matched and lost, and this information can be used to represent the actual WRED results. By linking these results back to the stimulus generator, the actual WRED results can be categorized by passed/lost packet count for each region. These actual per-region counts can then be compared with the predicted counts, calculated from the configured drop probability for each region, to arrive at a pass/fail result.

For more information on the WRED verification challenges and on how VMM scenarios and VMM DS scoreboarding techniques can overcome them, refer to the paper “Verification of Network Congestion Avoidance Algorithm like WRED using VMM” at https://www.synopsys.com/news/pubs/snug/india2010/TA2.3_%20Sethia_Paper.pdf.


Posted in Scoreboarding, Stimulus Generation | Comments Off


Using vmm_opts to create a configurable environment

Posted by S. Prashanth on July 19th, 2010

S. Prashanth, Verification & Design Engineer, LSI Logic

To accommodate changing specifications and to support different clusters/subsystems (which may have multiple processors/memories connected through a bridge), I am building a reusable environment which can support any number of masters and slaves of any standard bus protocol. The environment requires a high level of configurability, since it should work for different DUTs. So I decided to use vmm_opts not just to set switches globally or hierarchically, but also to specify ranges (like the address ranges used in scenarios) from the testcases or the command line.

I created a list of controls/switches that need to be provided as global options (like simulation timeout, scoreboard/coverage enable, etc.) and as hierarchical options that control instance-specific behavior (like the number of transactions to be generated, burst enables, transaction speed, and the address ranges to be generated by a specific instance or set of instances). Let's see, through examples, how these options can be used and what is required to use them.

Global Options

Step 1:
Declare an integer, say simulation_timeout, in the environment class (or wherever it is required) and use the vmm_opts::get_int(..) method as shown below. Specify a default value as well, in case the variable is not set anywhere.

simulation_timeout = vmm_opts::get_int("TIMEOUT", 10000);

Step 2:
Then, either in the testcase or on the command line (or both), I can override the default value using vmm_opts::set_int or the +vmm_opts+ runtime option.

Override from the test case.

vmm_opts::set_int("%*:TIMEOUT", 50000);

Override from the command line.

./simv +vmm_opts+TIMEOUT=80000

Options can also be overridden from a command file if it is difficult to specify all options on the command line. Options can be strings or Booleans as well.

Hierarchical Options

In order to use hierarchical options, I am building the environment with a parent/child hierarchy, which is a very useful feature of vmm_object. This is required because the hierarchy specified externally (from the testcase/command line) will be used to map onto the hierarchy of the environment.

Step 1:
Declare the options, say burst_enable and num_of_trans, in a subenv class and use vmm_opts::get_object_bit(..) and vmm_opts::get_object_int(..) to retrieve the appropriate values, passing the current hierarchy and the default values as arguments. Likewise, for setting ranges, declare min_addr and max_addr and use get_object_range().

class master_subenv extends vmm_subenv;
  bit burst_enable;
  int num_of_trans;
  int min_addr;
  int max_addr;

  function void configure();
    bit is_set; // to determine if the default value is overridden
    burst_enable = vmm_opts::get_object_bit(is_set, this, "BURST_ENABLE");
    num_of_trans = vmm_opts::get_object_int(is_set, this, "NUM_TRANS", 100);
    vmm_opts::get_object_range(is_set, this, "ADDR_RANGE", min_addr, max_addr, 0, 32'hFFFF_FFFF);
    ....
  endfunction
endclass

Step 2:
Build the environment with a parent/child hierarchy, either by passing the parent handle through the constructor of every child or by using the vmm_object::set_parent_object() method. This is easy, as almost all base classes are extended from vmm_object by default.

class dut_env extends vmm_env;
  virtual function build();
    .....
    mst0 = new("MST0", ....);
    mst0.set_parent_object(this);
    mst1 = new("MST1", ....);
    mst1.set_parent_object(this);
    ....
  endfunction
endclass

Step 3:
From the testcase, I can override the default values using vmm_opts::set_bit or vmm_opts::set_int(..), specifying the hierarchy. I can use pattern matching as well, to avoid specifying the option for every instance:

vmm_opts::set_int("%*:MST0:NUM_TRANS", 50);
vmm_opts::set_int("%*:MST1:NUM_TRANS", 100);
vmm_opts::set_bit("%*:BURST_ENABLE");
vmm_opts::set_range("%*:MST1:ADDR_RANGE", 32'h1000_0000, 32'h1000_FFFF);

I can also override the default values from the command line:
./simv +vmm_opts+NUM_TRANS=50@%*:MST0+NUM_TRANS=100@%*:MST1+BURST_ENABLE@%*


In summary, vmm_opts provides a powerful and user-friendly way of configuring the environment from the command line or the testcase. Users can provide a description of each option when calling the get_object_*() and get_*() methods, and these descriptions are displayed when vmm_opts::get_help() is called.


Posted in Coding Style, Configuration, Tutorial | Comments Off


Generating microcode stimuli using a constrained-random verification approach

Posted by Shankar Hemmady on July 15th, 2010

As microprocessor designs have grown considerably in complexity, generating microcode stimuli has become increasingly challenging.  An article by AMD and Synopsys engineers in EE Times explores using a hierarchical constrained-random approach to accelerate generation and reduce memory consumption, while providing optimal distribution and biasing to hit corner cases using the Synopsys VCS constraint solver.

You can find the full article in PDF here.


Posted in Stimulus Generation, SystemVerilog | Comments Off


Fantasy Cache: The Role of the System Level Designer in Verification

Posted by Andrew Piziali on July 12th, 2010

Andrew Piziali, Independent Consultant

As is usually the case, staying ahead of the appetite of a high-performance processor with the memory technology of the day was a major challenge. This processor consumed instructions and data far faster than the PC memory systems of the time could supply them. Fortunately, spatial and temporal locality (the tendency for memory accesses to cluster near common addresses and around the same time) were on our side. These could be exploited by a cache that would present a sufficiently fast memory interface to the processor while dealing with the sluggish PC memory system behind the scenes. However, this cache would require a three-level, on-chip memory hierarchy that had never been seen before in a microprocessor. Could it be done?

[Figure: the processor's three-level on-chip cache hierarchy]

The system level designer responsible for the cache design (let's call him “Ambrose”) managed to meet the performance requirements, yet with an exceedingly complex cache design. It performed flawlessly as a C model running application address traces, rarely stalling the processor on memory accesses. Yet when its RTL incarnation was unleashed on the verification engineers, it stumbled, and stumbled badly. Each time a bug was found and fixed, performance took a hit while another bug was soon exposed. Before long we finally had a working cache, but it unfortunately starved the processor. Coupled with the processor missing its clock frequency target and with schedule slips, this product never got beyond the prototype stage, after burning through $35 million and 200 man-years of labor. Ouch! What can we learn about the role of the system level designer from this experience?

The system level designer faces the challenge of evaluating all of the product requirements and choosing among the implementation trade-offs that necessarily arise. The architectural requirements of a processor cache include block size, hit time, miss penalty, access time, transfer time, miss rate and cache size. The cache shares physical requirements with other blocks, such as area, power and cycle time. However, of particular interest to us are its verification requirements, such as simplicity, a limited state space and determinism.

The cache must be as simple as possible while meeting all of its other requirements. Simplicity translates into a shorter verification cycle: the specification is less likely to be misinterpreted (fewer bugs), there are fewer boundary conditions to explore (a smaller search space and smaller coverage models), fewer simulation cycles are required for coverage closure, and there are fewer properties to prove. A limited state space also leads to a smaller search space and coverage model. Determinism means that, from the same initial conditions and given the same cycle-by-cycle input, the response of the cache is always identical from one simulation run to the next. Needless to say, this makes it far easier to isolate a bug than to chase an ephemeral glitch that cannot be reproduced on demand. These all add up to cost savings in functional verification.

Ambrose, while skilled in processor cache design, was wholly unfamiliar with the design-for-verification requirements we just discussed. The net result was a groundbreaking, novel, high-performance three-level cache that could not be implemented.


Posted in Coverage, Metrics, Organization, Verification Planning & Management | 2 Comments »


Automating Coverage Closure

Posted by Janick Bergeron on July 5th, 2010

“Coverage closure” is the process of reaching 100% of your coverage goals. In a directed test methodology, it is simply the process of writing all of the testcases outlined in the verification plan. In a constrained-random methodology, it is the process of adding constraints, defining scenarios or writing directed tests to hit the uncovered areas in your functional and structural coverage model. In the latter case, the process is time-consuming and challenging: you must reverse-engineer the design and verification environment to determine what specific stimulus must be generated to hit those uncovered areas.

Note that your first strategy should be to question the relevance of the uncovered coverage point. A coverage point describes an interesting and unique condition that must be verified. If that condition is already represented by another coverage point, or it is not that interesting (if no one on your team is curious about or looking forward to analyzing a particular coverage point, it is probably not that interesting), then get rid of the coverage point rather than trying to cover it.

Something that is challenging and time-consuming is an ideal candidate for automation. In this case, the Holy Grail is the automation of the feedback loop between the coverage metrics and the constraint solver. The challenge in automating that loop is correlating those metrics with the constraints.

[Figure: Coverage Convergence]

For input coverage, this correlation is obvious. For every random variable in a transaction description there is usually a corresponding coverage point, plus cross coverage points for various combinations of random variables. VCS includes automatic input coverage convergence: it automatically generates the coverage model based on the constraints in a transaction descriptor, then automatically tightens the constraints at run time to reach 100% coverage in very few runs.
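To make the input coverage case concrete, here is a minimal sketch of the kind of coverage model meant here; the transaction and its fields are hypothetical:

class bus_trans;
   rand bit [3:0]  kind;
   rand bit [31:0] addr;

   // One coverpoint per random variable, plus a cross of their combinations
   covergroup cg;
      cp_kind     : coverpoint kind;
      cp_addr     : coverpoint addr { bins range[4] = {[0:$]}; }
      kind_x_addr : cross cp_kind, cp_addr;
   endgroup

   function new();
      cg = new;
   endfunction
endclass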

For internal or output coverage, the correlation is a lot more obscure. How can a tool determine how to tweak the constraints to reach a specific line in the RTL code, trigger a specific expression or produce a specific set of values in an output transaction? That is where newly acquired technology from Nusym will help. Their secret sauce traces the effect of random input values on expressions inside the design. From there, it is possible to correlate the constraints and the coverage points. Once this correlation is known, it is a relatively straightforward process to modify the constraints to target uncovered coverage points.

A complementary challenge to coverage closure is identifying coverage points that are unreachable. Formal tools, such as Magellan, can help identify structural coverage points that cannot be reached and thus further trim your coverage space. For functional coverage, that same secret sauce from Nusym can also be used to help identify why existing constraints are preventing certain coverage points from being reached.

Keep in mind that filling the coverage model is not the goal of a verification process: the goal is to find bugs! Ultimately, a coverage model is no different from a set of directed testcases: it will only measure the conditions you have thought of and consider interesting. The value of constrained-random stimulus is not just in filling those coverage models automatically, but also in creating conditions you did not think of. In a constrained-random methodology, the journey is just as interesting, and just as valuable, as the destination.


Posted in Coverage, Metrics, Creating tests, Verification Planning & Management | 4 Comments »


Learn about VMM adoption from customers – straight from SNUG India 2010

Posted by Srinivasan Venkataramanan on July 1st, 2010

…Reflections from the core engineering team of CVC, fresh from SNUG India 2010

Jijo PS, Thirumalai Prabhu, Kumar Shivam, Avit Kori, Praveen & Nikhil – TeamCVC www.cvcblr.com

Here is a quick collection of VMM updates from SNUG India 2010, as seen by TeamCVC. Expect to hear more on VMM 1.2 from us soon, as I now have a young team all charged up about VMM 1.2 (thanks to Amit @SNPS). All the papers and presentations can be accessed through: http://www.synopsys.com/Community/SNUG/India/Pages/IndiaConferenceataGlance.aspx

TI’s usage of VMM 1.2 & RAL

In one of the well-received papers, TI Bangalore talked about a “Pragmatic Approach for Reuse with VMM 1.2 and RAL”. The design is a complex digital display subsystem involving numerous register configurations. Not only is handling the register configurations a challenge, but so is reusing block-level subenvs at the system level with ease, minimal rework and reduced verification time. The authors presented their success in addressing these challenges with VMM 1.2 and RAL.

The key advanced VMM elements touched upon are:

· TLM 2.0 communication mechanism

· VMM Multi-Stream Scenario gen (MSS)

· VMM RAL

Automated Coverage Closure with ECHO

It is real and live: automated coverage closure is slowly becoming reality, at least in select designs and projects. Having been attempted by various vendors for a while (see: http://www.cvcblr.com/blog/?tag=acc), VCS has added this capability under its ECHO technology. At SNUG, TI presented their experience with ECHO and VMM RAL. In their paper titled “Automating Functional Coverage Convergence and Avoiding Coverage Triage with ECHO Technology”, TI described how an ECHO-based methodology in a VMM RAL environment can close the feedback loop, in an automated manner, when targeting coverage groups involving register configuration combinations, resulting in a significant reduction in verification time.

WRED Verification with VMM

In her paper on “WRED verification with VMM”, Puja shared her usage of advanced VMM capabilities for a challenging verification task. Specifically she touched upon:

· VMM Multi-Stream Scenario gen

· VMM Datastream Scoreboard with its powerful “with_loss” predictor engine

· VMM RAL to access direct & indirect RAMs & registers

What we really liked was seeing real applications of some of these advanced VMM features; we were taught all of these during our regular CVC trainings and have even tried many of them on our own designs. It feels great to hear from peers about similar usage, and to appreciate the value we derive out of VMM @CVC and the vibrant ecosystem that CVC creates around it.

System-Level verification with VMM

Ashok Chandran of Analog Devices presented their use of specialized VMM components in a system-level verification project. Specifically, he touched upon specialized VMM base classes like vmm_broadcast and vmm_scheduler.

In the end, the audience learnt about some of the unique challenges an SoC verification project can present. Even more interesting was the fact that the ever-growing VMM seems to have well-thought-out solutions for a wide variety of such problems. Kudos to the VMM developers!

Ashok also briefed us on his team's usage of relatively new features in VMM, such as vmm_record and vmm_playback, and how they help to quickly replay streams.

On the tool side, a very useful feature for regressions is the usage of the separate compile option in VCS.

VMM 1.2 for VMM users

Amit from SNPS gave a very useful and to-the-point update on VMM 1.2 for long-time VMM users. It was rejuvenating to listen to the VMM 1.2 run_tests feature and the implicit phasing techniques. Though they look like a little “magic”, these features are bound to improve our productivity, as there are fewer things to code and debug before moving on.

Amit also touched upon the use of TLM 2.0 ports and how they can be nicely used for integrating functional coverage, instead of using vmm_callbacks.

The hierarchical component creation and configuration in VMM 1.2 puts us on track for the emerging UVM, and it is very pleasing to see how the industry keeps moving toward more and more automation.

A truly vibrant ecosystem enabled by CVC, a VMM Catalyst member

A significant addition to this year's SNUG India was the DCE (Designer Community Expo), a genuine initiative by Synopsys to bring partners together, all under one roof, to serve the larger customer base better. CVC (www.cvcblr.com), being the most active VMM Catalyst member in this region, was invited to set up a booth showcasing its offerings. We gave away several books, including our popular VMM adoption book (http://systemverilog.us/?p=14) and the new SVA Handbook, 2nd edition (http://systemverilog.us/?p=16).

Here is a snapshot of CVC’s booth with our VMM and other offerings.

[Photo: CVC's booth at SNUG India 2010, showcasing VMM and other offerings]


Posted in Coverage, Metrics, Register Abstraction Model with RAL, VMM | 1 Comment »


Non-blocking Transport Communication in VMM 1.2

Posted by John Aynsley on June 24th, 2010

John Aynsley, CTO, Doulos

When discussing the TLM-2.0 transport interfaces, my posts on this blog have so far referred to the blocking transport interface alone. Now it is time to take a brief look at the non-blocking transport interface of the TLM-2.0 standard, which offers the possibility of much greater timing accuracy.

The blocking transport interface is restricted to exactly two so-called timing points per transaction, marked by the call to and the return from the b_transport method, and by convention corresponding to the start of the transaction and the arrival of the response. The non-blocking transport interface, on the other hand, allows a transaction to be modeled with any number of timing points so it becomes possible to distinguish between the start and end of an arbitration phase, address phase, write data phase, read data phase, response phase, and so forth.

[Figure: b_transport completes in a single call from initiator to target; nb_transport involves calls on both the forward and backward paths]

As shown in the diagram above, b_transport is only called in one direction from initiator to target, and the entire transaction is completed in a single method call. nb_transport, on the other hand, comes in two flavors: nb_transport_fw, called by the initiator on the so-called forward path, and nb_transport_bw, called by the target on the backward path. Whereas b_transport is blocking, meaning that it may execute a SystemVerilog event control, nb_transport is non-blocking, meaning that it must return control immediately to the caller. A single transaction may be associated with multiple calls to nb_transport in both directions, the actual number of calls (or phases) being determined by the protocol being modeled.

With just one call per transaction, b_transport is the simplest to use. nb_transport allows more timing accuracy, with multiple method calls in both directions per transaction, but is considerably more complicated to use. b_transport is fast, simple, but inaccurate. nb_transport is more accurate, supporting multiple pipelined transactions, but slower and more complicated to use.

In VMM the role of the TLM-2.0 non-blocking transport interface is usually played by vmm_channel, which allows multiple timing points per transaction to be implemented using the notifications embedded within the channel and the vmm_data transaction object. The VMM user guide still recommends vmm_channel for this purpose. nb_transport is provided in VMM for interoperability with SystemC models that use the TLM-2.0 standard.

Let’s take a quick look at a call to nb_transport, just so we can get a feel for some of the complexities of using the non-blocking transport interface:


class initiator extends vmm_xactor;
  `vmm_typename(initiator)
  vmm_tlm_nb_transport_port #(initiator, vmm_tlm_generic_payload) m_nb_port;
  ...
  begin: loop
    vmm_tlm_generic_payload tx;
    int                     delay;
    vmm_tlm::phase_e        phase;
    vmm_tlm::sync_e         status;

    // create and randomize tx here (elided in the original post)
    phase  = vmm_tlm::BEGIN_REQ;

    status = m_nb_port.nb_transport_fw(tx, phase, delay);
    if (status == vmm_tlm::TLM_UPDATED)
      ; // arguments updated by the target: an additional timing point
    else if (status == vmm_tlm::TLM_COMPLETED)
      ; // the transaction has jumped to its final phase
From the above, you will notice some immediate differences from b_transport. The call to nb_transport_fw takes a phase argument, to distinguish between the various phases of an individual transaction, and returns a status flag which signals how the values of the arguments are to be interpreted following the return from the method call. A status value of TLM_ACCEPTED indicates that the transaction, phase, and delay were unchanged by the call; TLM_UPDATED indicates that the return from the method call corresponds to an additional timing point, so the values of the arguments will have changed; and TLM_COMPLETED indicates that the transaction has jumped to its final phase.

Using nb_transport is not recommended except when interfacing to a SystemC model, because its rules are considerably more complicated than those of either b_transport or vmm_channel.


Posted in Communication, Interoperability, Reuse, Transaction Level Modeling (TLM) | Comments Off


Generic Payload Extensions in VMM 1.2

Posted by John Aynsley on June 22nd, 2010

John Aynsley, CTO, Doulos

In a previous post I described the TLM-2 generic payload as implemented in VMM 1.2. In this post I focus on the generic payload extension mechanism, which allows any number of user-defined attributes to be added to a generic payload transaction without any need to change its data type.

Like all things TLM-2, the motivation for the TLM-2.0 extension mechanism arose in the world of virtual platform modeling in SystemC. There were two requirements for generic payload extensions: firstly, to enable a transaction to carry secondary attributes (or meta-data) without having to introduce new transaction types, and secondly, to allow specific protocols to be modeled using the generic payload. In the first case, introducing new transaction types would have required the insertion of adapter components between sockets of different types, whereas extensions permit meta-data to be transported through components written to deal with the generic payload alone. In the second case, extensions enable specific protocols to be modeled on top of the generic payload, which makes it possible to create very fast, efficient bridges between different protocols.

Let us have a look at an example that adds a timestamp to a VMM generic payload transaction using the extension mechanism. The first task is to define a new class to represent the user-defined extension:

class my_extension extends vmm_tlm_extension #(my_extension);

int timestamp;

`vmm_data_new(my_extension)
`vmm_data_member_begin(my_extension)
`vmm_data_member_scalar(timestamp, DO_ALL)
`vmm_data_member_end(my_extension)

endclass


The user-defined extension class extends vmm_tlm_extension, which should be parameterized with the name of the user-defined extension class itself, as shown. The extension can contain any number of user-defined class properties; this example contains just one, the timestamp.

The initiator of the transaction will create a new extension object, set the value of the extension, and add the extension to the transaction before sending the transaction out through a port:


class initiator extends vmm_xactor;
  `vmm_typename(initiator)

  vmm_tlm_b_transport_port #(initiator, vmm_tlm_generic_payload) m_port;

  vmm_tlm_generic_payload randomized_tx;

  begin: loop
    vmm_tlm_generic_payload tx;
    int delay;
    my_extension ext = new;

    $cast(tx, randomized_tx.copy());
    ext.timestamp = $time;
    tx.set_extension(my_extension::ID, ext);
    m_port.b_transport(tx, delay);


Note the use of the extension ID in the call to the method set_extension: each extension class has its own unique ID, which is used as an index into an array-of-extensions within the generic payload transaction.

Any component that receives the transaction can test for the presence of a given extension type and then retrieve the extension object, as shown here:


class target extends vmm_xactor;
  `vmm_typename(target)

  vmm_tlm_b_transport_export #(target, vmm_tlm_generic_payload) m_export;

  task b_transport(int id = -1, vmm_tlm_generic_payload trans, ref int delay);

    my_extension ext;

    $cast(ext, trans.get_extension(my_extension::ID));

    if (ext != null)
      $display("Target received transaction with timestamp = %0d", ext.timestamp);


Note that once again the extension type is identified by using its ID in the call to method get_extension. If the given extension object does not exist, get_extension will return a null object handle. If the extension is present, the target can retrieve the value of timestamp and, in this example, print it out.

The neat thing about the extension mechanism is that a transaction can carry extensions of many types simultaneously, and those transactions can be passed to or through transactors that may not know of the existence of particular extensions.

[Figure: an extension set by the initiator passes through an interconnect unchanged, while the interconnect adds a second extension known only to itself]

In the diagram above, the initiator sets an extension that is passed through an interconnect, where the interconnect knows nothing of that extension. The interconnect adds a second extension to the transaction that is only known to the interconnect itself and is ignored by the other transactors.

And the point of all this? The generic payload extension mechanism in VMM will permit transactions to be passed to SystemC virtual platform models, where the TLM-2.0 extension mechanism is heavily used.


Posted in Communication, Interoperability, Reuse, Transaction Level Modeling (TLM) | Comments Off


The Generic Payload in VMM 1.2

Posted by John Aynsley on June 16th, 2010

John Aynsley, CTO, Doulos

In this post we turn to the generic payload, another feature from the SystemC TLM-2.0 standard that has been incorporated into VMM 1.2. The generic payload is a transaction data structure that contains common attributes found in current memory mapped bus protocols, as shown in the table below:

[Table: generic payload attributes (command, address, data, byte enables, streaming width, response status, etc.), with a column indicating whether each is modifiable]

You will immediately recognize attributes such as command, address and data, and can probably have a good guess at the meaning of most, if not all, the remaining attributes. The Modifiable column indicates whether the value of the attribute may be modified by a component other than the initiator that first generates the transaction.

In TLM-2.0 the generic payload serves two purposes: firstly, to provide an off-the-shelf way of representing memory mapped bus protocols at an abstract level, and secondly, to provide the basis for modeling specific protocols. As an abstraction, the generic payload provides interoperability between components where the precise details of the bus protocol are unimportant, thus allowing the construction of a set of generic components for simple peripherals and memory that can be plugged into any transaction-level platform model. On the other hand, for modeling specific protocols the generic payload offers an extension mechanism that can be used to add protocol-specific attributes. This extension mechanism has been tuned to provide very fast, efficient conversion in C++ between generic payloads that represent different protocols, where each payload has a different set of extensions.

So much for TLM-2.0 in the context of virtual platform modeling, but what about VMM? A native VMM test bench that drives transactions into an HDL model is unlikely to benefit from using the generic payload. The real benefit will come when driving transactions into a TLM-2.0-compliant SystemC model that already uses the generic payload. The VMM code to generate a generic payload transaction might look like this:


vmm_tlm_generic_payload  randomized_tx;

forever
begin: loop
  vmm_tlm_generic_payload  tx;
  int  delay;
  assert( randomized_tx.randomize() with {
      m_address >= 0 && m_address < 256;
      m_length == 4 || m_length == 8;
      m_data.size == m_length;          // Trick to randomize dynamic array
      m_byte_enable_length <= m_length;
      (m_byte_enable_length % 4) == 0;
      m_byte_enable.size == m_byte_enable_length;
      m_streaming_width == m_length;
    } )
  else `vmm_error(log, "tx.randomize() failed");

  $cast(tx, randomized_tx.copy());
  m_port.b_transport(tx, delay);
  assert( tx.m_response_status == vmm_tlm_generic_payload::TLM_OK_RESPONSE );
end


The initiator is obliged to set each of the attributes of the generic payload to a legal value before sending the transaction through the port, and to check the value of the m_response_status attribute on return. The m_command attribute may be READ, WRITE, or IGNORE. The m_length attribute gives the number of bytes in the m_data array and the m_byte_enable_length attribute the number of bytes in the m_byte_enable array, which could be 0. The m_address attribute gives the address of the first byte in the array.
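On the target side, the obligations are mirrored. Here is a minimal sketch of a target that executes the transfer and sets the response status; it assumes command enumeration literals mirroring the TLM-2.0 names, uses a hypothetical sparse byte memory, and ignores byte enables and streaming width for brevity (export binding is also elided):

class simple_mem_target extends vmm_xactor;
   byte mem [longint unsigned]; // hypothetical sparse byte memory

   function new(string inst);
      super.new("simple_mem_target", inst);
   endfunction

   task b_transport(int id = -1, vmm_tlm_generic_payload trans, ref int delay);
      if (trans.m_command == vmm_tlm_generic_payload::TLM_WRITE_COMMAND)
         foreach (trans.m_data[i]) mem[trans.m_address + i] = trans.m_data[i];
      else if (trans.m_command == vmm_tlm_generic_payload::TLM_READ_COMMAND)
         foreach (trans.m_data[i])
            trans.m_data[i] = mem.exists(trans.m_address + i)
                              ? mem[trans.m_address + i] : 0;
      trans.m_response_status = vmm_tlm_generic_payload::TLM_OK_RESPONSE;
   endtask
endclass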

In summary, a generic payload transaction has the effect of transferring a block of bytes between an initiator and a target. Because the generic payload is intended as an abstraction for a memory-mapped bus, that block of bytes has a single start address in a memory map, and may be either written to or read from the target. As a refinement, individual bytes may be enabled or disabled and successive blocks of bytes may be sent to the same address range by setting the m_streaming_width attribute accordingly. The generic payload can be used as an abstraction for any communication mechanism that transfers a block of bytes, and the extension mechanism is available when it becomes necessary to add new protocol-specific attributes to the transaction.

Look out for my next post on this blog, when I will explain the extension mechanism in more detail.


Posted in Transaction Level Modeling (TLM) | 1 Comment »


VMM 1.2.1 is now available

Posted by Janick Bergeron on June 15th, 2010

We in the VMM team have been so busy working on improving VMM that we only recently noticed it has been almost a full year since we released an Open Source distribution of VMM. With the release of VCS 2010.06, we took the opportunity to release an updated Open Source distribution that contains all of the new features and capabilities of VMM now available in the VCS distribution.

I am not going to repeat in detail what changed (you can refer to the RELEASE.txt file for that), but I will point out two of the most important highlights…

First, this version supports the VMM/UVM interoperability package (also available for download). This interoperability package allows you to use UVM verification assets in your VMM verification environment (and vice versa). Note that the VMM/UVM interoperability package is also included in the VCS distribution (along with the UVM-1.0EA release) in VCS 2010.06.

Second, many new features were added to the VMM Register Abstraction Layer (RAL). For example, RAL now supports automatic mirroring of registers, either by passively observing read/write transactions or by actively monitoring changes in the RTL code itself. Another important addition is the ability to perform sub-register accesses when fields are located in individual byte lanes.

The Open Source distribution is the exact same source code as the VMM distribution included in VCS. You can therefore trust the robustness it has acquired in the field through many hundreds of successful verification projects.


Posted in Announcements, Interoperability, Register Abstraction Model with RAL | 1 Comment »


TLM vs. Channels

Posted by JL Gray on June 9th, 2010

JL Gray, Consultant, Verilab, Austin, Texas, and Author of Cool Verification


Confused about the difference between using channels and TLM to connect your testbench components? Check out this short video for a simple explanation of the mechanics of the two approaches.


Posted in Transaction Level Modeling (TLM), Tutorial | 2 Comments »


Diagnosing Transaction-Level Connections in VMM 1.2

Posted by John Aynsley on June 7th, 2010

John Aynsley, CTO, Doulos

In my previous post on this blog I discussed hierarchical transaction-level connections in VMM 1.2. In this post I show how to remove connections, and also discuss the various diagnostic methods that can help when debugging connection issues.

Actually, removing transaction-level connections is very straightforward. Let’s start with a consumer having an export that permits multiple bindings:

class consumer extends vmm_xactor;

  vmm_tlm_b_transport_export #(consumer, vmm_tlm_generic_payload) m_export;

  function new (string inst, vmm_object parent = null);
    super.new(get_typename(), inst, -1, parent);
    m_export = new(this, "m_export", 4);
  endfunction: new

Passing the value 4 as the third argument to the constructor permits up to four different ports to be bound to this one export. These ports can be distinguished using their peer id, as described in a previous blog post.

function void start_of_sim_ph;
vmm_tlm_port_base #(vmm_tlm_generic_payload) q[$];
m_export.get_peers(q);
m_export.tlm_unbind(q[0]);

TLM ports have a method get_peer (singular) that returns the one-and-only export bound to that particular port. TLM exports have a similar method get_peers (plural) that returns a SystemVerilog queue containing all the ports bound to that particular export. The method tlm_unbind can then be called to remove a particular binding, as shown above.

There are several other methods that can be helpful when diagnosing connection problems. For example, the method get_n_peers returns the number of ports bound to a given export:

$display("get_n_peers() = %0d", m_export.get_n_peers());

There are also methods for getting a peer id from a peer, and vice-versa, as shown in the following code which loops through the entire queue of peers returned from get_peers:

m_export.get_peers(q);
foreach (q[i])
begin: blk
   int id;
   id = m_export.get_peer_id(q[i]);
   $write("id = %0d", id);
   $display(", peer = %s", m_export.get_peer(id).get_object_hiername());
end

In addition to these low-level methods that allow you to interrogate the bindings of individual ports and exports, there are also methods to print and check the bindings for an entire transactor. The methods are print_bindings, check_bindings and report_unbound, which can be called as follows:

class tb_env extends vmm_group;

   virtual function void start_of_sim_ph;
      $display("\n-------- print_bindings --------");
      vmm_tlm::print_bindings(this);
      $display("-------- check_bindings --------");
      vmm_tlm::check_bindings(this);
      $display("-------- report_unbound --------");
      vmm_tlm::report_unbound(this);
      $display("--------------------------------");
   endfunction: start_of_sim_ph

endclass: tb_env

print_bindings prints information concerning the binding of every port and export below the given transactor. check_bindings checks that every port and export has been bound at least the specified minimum number of times. report_unbound generates warnings for any unbound ports or exports, regardless of the specified minimum.

In summary, VMM 1.2 allows you to easily check whether all ports and exports have been bound, to check whether a given port or export has been bound, to find the objects to which a given port or export is bound, and even to remove bindings when necessary.

Posted in Communication, Interoperability, Reuse, Transaction Level Modeling (TLM) | Comments Off

Implicit vs. Explicit Phasing

Posted by JL Gray on June 3rd, 2010

JL Gray, Consultant, Verilab, Austin, Texas, and Author of Cool Verification

Last week I discussed the concepts of phases and threads. This week, I’ll continue that theme by focusing on the difference between implicit and explicit phasing.

Quick quiz: If you had to remember just one thing about the benefits of implicit phasing, what would that one thing be?

Posted in Phasing, Tutorial | Comments Off

Verification in the trenches: TLM2.0 or vmm_channel? Communicating with other components using VMM1.2

Posted by Ambar Sarkar on June 1st, 2010

Dr. Ambar Sarkar, Chief Verification Technologist, Paradigm Works Inc.

Did life get easier with the availability of TLM2.0 style communication in VMM1.2?

Or did it make things harder? Are you asking yourself: should I use a vmm_tlm port, or just stick to the tried-and-trusted vmm_channel as my communication method within the verification environment? You are not alone.

As you are aware, VMM1.2 provides the following interfaces for exchanging transactions between components:

  1. vmm_channel (Pre VMM1.2)
  2. vmm_tlm based blocking and non-blocking interfaces (TLM based)
  3. vmm_analysis_port (TLM based)
  4. vmm_callback (Pre VMM1.2)

Which option is the best for communicating between components? Under what circumstances?

There are two real requirements here:

  1. Maintain the ability to work with existing VMM (pre VMM 1.2) components.
  2. Be forward compatible with components created with TLM based ports, as is expected as the industry moves toward UVM (Yes, the early adopter version was released recently!).

TLM2.0-based communication offers flexible, efficient, and well-defined semantics for communication between two components, and the industry as a whole is moving towards a TLM-based approach. For these reasons, I recommend that, going forward, any new VIP use only TLM2.0-based communication.

Since VMM1.2 provides complete interoperability between vmm_channel and TLM2.0-style ports, you are guaranteed that such a component will keep working with vmm_channel-based counterparts:

class subenv extends vmm_group;
   initiator i0;
   target t0;

   virtual function void connect_ph();
      vmm_connect #(.D(my_trans))::tlm_bind(
         t0.in_chan,  // Channel
         i0.b_port,   // TLM port
         vmm_tlm::TLM_BLOCKING_EXPORT);
   endfunction: connect_ph

endclass: subenv

Similarly, any vmm_notify based callback events can be communicated using tlm_analysis ports. For details, see example 5-48 in the Advanced Usage section in the user guide.

Creating components that depend solely on TLM-style communication will greatly facilitate interoperability and the migration of VIP implementations to the evolving UVM standard.

The following summarizes my recommendations:

Type of interaction                          Recommended mechanism
Issuing and receiving transactions           vmm_tlm based blocking interface
Issuing notifications to passive observers   vmm_analysis_port

For issuing transactions to other components in a point-to-point manner, typically seen in master-slave communication, use the vmm_tlm based blocking port interface.

For slave-like transactors which expect to receive transactions from other transactors, use the vmm_tlm based blocking export interface.

For reactive transactors, define an additional vmm_tlm based blocking export interface.

For broadcast transactions to be communicated to multiple consumers, such as scoreboards, functional coverage models, etc., use vmm_analysis_port.
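As a minimal sketch of the broadcast case, a monitor can publish every observed transaction through a vmm_tlm_analysis_port, and any number of scoreboards or coverage collectors can bind their analysis exports to it. The transactor, transaction, and handle names below are hypothetical:

class my_monitor extends vmm_xactor;
   vmm_tlm_analysis_port #(my_monitor, my_trans) obs_ap;

   function new(string inst, vmm_object parent = null);
      super.new(get_typename(), inst, -1, parent);
      obs_ap = new(this, "obs_ap");
   endfunction

   // Called whenever the monitor finishes assembling a transaction
   function void publish(my_trans tr);
      obs_ap.write(tr); // broadcast to every bound observer
   endfunction
endclass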

So, did life get easier with TLM? I believe so, especially if you want to be forward compatible with the new and upcoming methodologies.

This article is the 7th in the Verification in the trenches series. Hope you found this article useful. If you would like to hear about any other related topic, please comment or drop me a line at ambar.sarkar@paradigm-works.com. Also, if you are starting out fresh, please check out the free VMM1.2 environment generator at http://resourceworks.paradigm-works.com/svftg/vmm .

Posted in Communication, Transaction Level Modeling (TLM), Tutorial | Comments Off

Phases and Threads

Posted by JL Gray on May 27th, 2010

JL Gray, Consultant, Verilab, Austin, Texas, and Author of Cool Verification

Back in the fall, I wrote about the differences between Phases and Threads, and how that relates to implicit and explicit phasing in the VMM.  In this video, I’ve taken a step back to describe phases, threads, and timelines using a real-world analogy based on the seasons of the year.

As you're watching this video, one thing to keep in mind is that threads are dealt with by vmm_xactor-based objects, and phases are handled by vmm_timeline- and vmm_group-based objects.

Posted in Phasing, Tutorial | 1 Comment »

Simplifying test writing with MSSG and the constraint switching feature of SystemVerilog

Posted by Tushar Mattu on May 25th, 2010

Pratish Kumar KT  & Sachin Sohale, Texas Instruments
Tushar Mattu,  Synopsys

In an ideal world, one would want maximum automation and minimal effort. The same holds true when you want to uncover bugs in your DUV. You want to provide your team members with an environment in which they can focus their efforts on debugging issues in the design rather than spend time writing hundreds of lines of testcase code. Here we want to share some simple techniques to make the test writer's life easier: achieving the required objectives with a minimum of test code, and with more automation and reuse. Along with automation, it is important to ensure that the test logic remains easy to understand. In one recent project, where the DUV was an image processing block, the following were some of the requirements:

- The relevant configuration for each block had to be driven first on a configuration port before driving the frame data on another port

- The configuration had to be written to the registers in individual blocks and each block had its own unique configuration.

- Several such blocks together constituted the overall sub-system which required an integrated testbench

- Because all these blocks had to be verified in parallel, the requirement was to have a generic register model which could not only work with the blocks in parallel but also be reused at the sub-system level

- Another aspect was to provide as much reuse of the testcases as possible from block to system level

Given the requirements, we decided to proceed in the manner described below, using a clever mix of multi-stream scenarios (MSS), named constraint blocks, and RAL:

image

Here is a brief description of the different components and the flow represented in the illustration above:

Custom RAL model

As the RAL register model forms the configuration stimulus space, we decided to put all test-specific constraints for register configuration in the top-level RAL block itself and have them switched off by default. As seen below, the custom RAL block extended from the generated model holds all the test-specific constraints, switched off by default.

image
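In sketch form, the custom block might look like this. The generated base class 'ral_block_dut', register 'CTRL', field 'MODE', and constraint name are all hypothetical, and the constructor arguments of the generated class are omitted:

class dut_ral_model extends ral_block_dut;
   // Test-specific constraint, switched off by default
   constraint low_latency_cfg_c {
      CTRL.MODE.value == 2'b01;
   }

   function new();
      super.new(); // ctor args of the generated base class omitted
      low_latency_cfg_c.constraint_mode(0); // a test scenario turns this on
   endfunction
endclass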

Basic Multistream Scenario

For each block, one basic scenario ("BASIC SCENARIO") was always registered with the MSSG. This procedural scenario governs the test flow, which is:

- Randomize the RAL configuration with default constraints,

- Drive the configuration to the configuration port

- Put the desired number of frame data into the driver BFM input channel

The following snippet shows the actual "BASIC SCENARIO":

image
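In outline, such a scenario might look like this. This is a sketch: 'dut_ral_model', 'my_frame', and the channel handle are the hypothetical names used above, and the frame count is arbitrary:

class basic_scenario extends vmm_ms_scenario;
   dut_ral_model    ral;         // custom RAL model shown earlier
   my_frame_channel bfm_in_chan; // driver BFM input channel

   // Hook overridden by test-specific scenarios to switch constraints on
   virtual function void set_config_constraints();
   endfunction

   virtual task execute(ref int n);
      vmm_rw::status_e status;
      set_config_constraints();   // enable any test-specific constraints
      void'(ral.randomize());     // randomize the configuration
      ral.update(status);         // drive the configuration to the DUT
      repeat (10) begin           // push frame data into the driver BFM
         my_frame fr = new();
         void'(fr.randomize());
         bfm_in_chan.put(fr);
      end
      n++;
   endtask
endclass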

Scenario Library

With the basic infrastructure in place, the strategy for creating a scenario library for the test-specific requirements was simple. Each extended scenario class was meant to change the configuration generation through the RAL model of the different blocks. Thus, for each block's scenario library, each independent scenario extended the basic scenario and overrode only the required virtual method for configuration change, as shown below:

image
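Continuing the sketch from above, a derived scenario then reduces to a few lines:

class low_latency_scenario extends basic_scenario;
   // Only the configuration step changes: enable the matching
   // named constraint before the base scenario randomizes the model
   virtual function void set_config_constraints();
      ral.low_latency_cfg_c.constraint_mode(1);
   endfunction
endclass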

Test

Each test would then select the scenario corresponding to the test and register it with the MSSG:

image
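In sketch form, assuming the environment exposes its multi-stream scenario generator through a hypothetical handle 'mssg':

// Inside the test's configuration/build code:
low_latency_scenario scen = new();
env.mssg.register_ms_scenario("low_latency", scen);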

At the block level, we were able to reuse this approach across all block level tests consistently and effectively.

Later, we were able to reuse all block-level tests in the sub-system level testbench, as tests were written in terms of constraints in the RAL model itself. At the sub-system level, we would then turn on the specific constraints per block using the corresponding block-level scenarios. This ensures maximum reuse and a test flow that is consistent across levels.

image

At higher levels, the scenarios extend the "basic_system_scenario". The test flow managed by the system-level scenarios has a slightly different execution flow than the block-level tests, but the configuration generation is reused consistently and efficiently from block to system. That means modifications to block-level test constraints do not require any modification of the sub-system level tests using that configuration.

And voila! Scenario and test generation were automated at the block and sub-system levels using simple Perl scripting. The test-specific constraints for each block were captured in a spreadsheet, which a script takes as input to generate the scenario and test at the block/sub-system level. The scenario execution flow is predetermined and defined in the block/sub-system basic scenario. Furthermore, the user retains the flexibility within this automation to create multiple such scenarios and to reuse existing test constraints.

Posted in Creating tests, Register Abstraction Model with RAL, Tutorial, VMM | 3 Comments »

Using RAL callbacks to model latencies within complex systems

Posted by Amit Sharma on May 20th, 2010

Varun S, CAE, Synopsys

In very complex systems there may be cases where a write to a register is delayed by blocks internal to the system, while the block sitting on the periphery signals a write complete. The RAL BFM is generally connected to the peripheral block, and as soon as it sees the write complete from the peripheral block it updates RAL's shadow registers. But due to latencies within the internal blocks, the state of the physical register may not have changed yet. The value in the real register and the shadow register will therefore not match for some duration. This can cause problems if another component happens to read the register in that window: the shadow register reflects the new value, which hasn't reached the register yet, whereas the DUT register still holds the old value.

To prevent such mismatches, one can use the callback methods of the vmm_rw_xactor_callbacks class and model the delay within the relevant callback task. The following methods are available in the callback class:

1. pre_single()  – This task is invoked before execution of a single cycle i.e., before execute_single().
2. pre_burst()   – This task is invoked before execution of a burst cycle i.e., before execute_burst().
3. post_single() – This task is invoked after execution of a single cycle i.e., after execute_single().
4. post_burst()  – This task is invoked after execution of a burst cycle i.e., after execute_burst().

In the scenario described above, it would be ideal to introduce a delay such that the update of RAL's shadow register coincides with the state change of the real register. For this we need a monitor that sits very close to the register and correctly indicates its state changes. Within the post_single() method of the callback we can wait till the monitor senses a change and then exit the task. This delays the shadow register update until the real register changes its state. The sample code below illustrates how this is done.

//----------------------------------------------------------------------

class my_rw_xactor_callbacks extends vmm_rw_xactor_callbacks;

   my_monitor_channel mon_activity_chan;

   function new(my_monitor_channel mon_activity_chan);
      this.mon_activity_chan = mon_activity_chan;
   endfunction

   virtual task post_single(vmm_rw_xactor xact, vmm_rw_access tr);
      my_monitor_transaction mon_tr;

      // Only writes need to wait for the physical register to change
      if (tr.kind != vmm_rw::WRITE) return;

      // Block until the monitor observes activity at the same address
      forever begin
         this.mon_activity_chan.get(mon_tr);
         if (mon_tr.addr == tr.addr)
            break;
      end
   endtask

endclass : my_rw_xactor_callbacks

//----------------------------------------------------------------------

In the example code above, the callback class receives a handle to the monitor's activity channel, which is then used within the post_single() task to wait till the monitor observes a state change on the register. The register address is checked to confirm that the state change was for the register in question. Real scenarios can get much more complex, with multiple domains, drivers, etc.; such additional complexity can be handled within the post_single() callback task using the monitor(s) observing the interfaces through which the register is written.
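For completeness, the callback instance still has to be registered with the RAL BFM, along these lines (the handle names are hypothetical):

my_rw_xactor_callbacks cb = new(mon_activity_chan);
ral_bfm.append_callback(cb); // ral_bfm is the vmm_rw_xactor-derived BFM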

RAL also has a built-in mechanism known as auto mirroring to automatically update the mirror whenever the register changes inside the DUT through a different interface. One can refer to the latest RAL User Guide to get more details on its usage.

Posted in Register Abstraction Model with RAL, Reuse | Comments Off

Hierarchical Transaction-Level Connections in VMM 1.2

Posted by John Aynsley on May 19th, 2010

John Aynsley, CTO, Doulos

In a previous blog post I described ports and exports in VMM 1.2, and explored the issue of binding a port of one transactor to the export of another transactor, where the two transactors are peers. Now let us look at how we can make hierarchical connections between ports and exports in the case where one transactor is nested within another.

Let’s start with ports. Suppose we have a producer that sends transactions out through a port and is nested inside another transactor:

class producer extends vmm_xactor;

   vmm_tlm_b_transport_port #(producer, vmm_tlm_generic_payload) m_port;

   virtual task run_ph;
      vmm_tlm_generic_payload tx;
      int delay;
      ...
      m_port.b_transport(tx, delay);
   endtask: run_ph

This transactor calls the b_transport method to send a transaction out through a port. So far, so good. Now let’s look at the parent transactor:

class producer_parent extends vmm_xactor;

   vmm_tlm_b_transport_port #(producer_parent, vmm_tlm_generic_payload) m_port;

   producer m_producer;

   virtual function void build_ph;
      m_producer = new( "m_producer", this );
   endfunction: build_ph

The producer’s parent also has a port, through which it wishes to send the transaction out into its environment. The port on the producer must be bound to the port on the parent. This is done in the connect phase:

virtual function void connect_ph;
   m_producer.m_port.tlm_import( this.m_port );
endfunction: connect_ph

Note the use of the method tlm_import in place of tlm_bind to perform the child-to-parent port binding. Why is this particular method named tlm_import? I am tempted to say “Don’t ask!” Whatever name had been selected, somebody would have found it confusing. Of course, the idea is that something is being imported. In this case it is actually the address of the b_transport method that is effectively being imported from the parent (this.m_port) to the child (m_producer.m_port). tlm_import is being called in the sense CHILD.tlm_import( PARENT), which makes sense to me, anyway.

So much for the producer. On the consumer side the situation is very similar so I will cut to the chase:

class consumer_parent extends vmm_xactor;

   vmm_tlm_b_transport_export #(consumer_parent, vmm_tlm_generic_payload) m_export;

   consumer m_consumer;

   virtual function void connect_ph;
      m_consumer.m_export.tlm_import( this.m_export );
   endfunction: connect_ph

In this case, the address of the b_transport method is effectively being passed up from the child (m_consumer.m_export) to the parent (this.m_export). In other words, b_transport is being exported rather than imported, but note that it is still the tlm_import method that is being used to perform the binding in the direction CHILD.tlm_import( PARENT ).

Now for the top-level environment, where we instantiate the parent transactors:

class tb_env extends vmm_group;

   producer_parent m_producer_1;
   producer_parent m_producer_2;
   consumer_parent m_consumer;

   virtual function void build_ph;
      m_producer_1 = new( "m_producer_1", this );
      m_producer_2 = new( "m_producer_2", this );
      m_consumer   = new( "m_consumer",   this );
   endfunction: build_ph

   virtual function void connect_ph;
      m_producer_1.m_port.tlm_bind( m_consumer.m_export, 0 );
      m_producer_2.m_port.tlm_bind( m_consumer.m_export, 1 );
   endfunction: connect_ph

endclass: tb_env

This is straightforward. We just use the tlm_bind method to perform peer-to-peer binding between a port and an export at the top level. Note that we are binding two distinct ports to a single export; as explained in a previous blog post, a VMM transactor can accept incoming transactions from multiple sources, distinguished using the value of the second argument to the tlm_bind method.

So, in summary, it is possible to bind TLM ports and exports up, down, and across the transactor hierarchy. Use tlm_bind for peer-to-peer bindings, and tlm_import for child-to-parent bindings.

Posted in Communication, Reuse, Transaction Level Modeling (TLM) | 1 Comment »

Early Adopter release of UVM now available!

Posted by Janick Bergeron on May 17th, 2010

As I am sure most of you already know, Accellera has been working on a new industry-standard verification methodology. That new standard, the Universal Verification Methodology, is still a work in progress: there are many features that need to be defined and implemented before its first official release, such as a register abstraction package and OSCI-style TLM2 interfaces.

However, an Early Adopter release is now available if you wish to start using the UVM today. The UVM distribution kit contains a User Guide, a Reference Manual, and an open source reference implementation of the work completed to date. It is supported by VCS (you will need version 2009.12-3 or later, or version 2010.06 or later).

Synopsys is fully committed to supporting UVM. As such, a version of the UVM library, augmented to support additional features provided by VCS and DVE, will be included in the VCS distribution, starting with the 2009.12-7 and 2010.06-1 patch releases. It will have the same simple use model as VMM: just specify the "-ntb_opts uvm" command-line argument and voila: the UVM library will be automatically compiled, ready to be used.
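For example, a compile line might look like this (illustrative; your source list will differ):

% vcs -sverilog -ntb_opts uvm my_testbench.sv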

But if you can’t wait for these patch releases, you can download the Early Adopter kit from the Accellera website and use it immediately.

What about VMM?

Synopsys continues to be fully committed to VMM! All of the powerful applications that continue to make you more productive will be available on top of the UVM base class and methodology.

A UVM/VMM interoperability kit is available from Synopsys (and will be included in the mentioned VCS patch releases) that will make it easy to integrate UVM verification IP into a VMM testbench (or vice-versa). Contact your VMM support AC/CAE to obtain the UVM/VMM interoperability package.

What about support?

If you need help adopting UVM with VCS, our ACs will be more than happy to help you: they have many years of expertise supporting customers using advanced verification methodologies.

UVM is an Accellera effort. If you have a great idea for a new feature, you should submit your request to the Technical Subcommittee. Better yet! You should join the Subcommittee and help us implement it!

Posted in Announcements, Interoperability | 1 Comment »

Required and Provided Interfaces in VMM 1.2

Posted by John Aynsley on May 14th, 2010

John Aynsley, CTO, Doulos

Before diving into more technical detail concerning VMM 1.2, let’s take some time to review a basic concept of transaction-level communication that often causes confusion, particularly for people more familiar with HDLs like Verilog and VHDL than with object-oriented software programming. This is the idea of the transaction-level interface.

A transaction-level interface is a software interface that permits software components to communicate using a specific set of function calls (also known as method calls). In the case of VMM, the software components in question are VMM transactors, and the function calls are the VMM TLM methods such as b_transport, introduced in previous posts on this blog. Such transaction-level interfaces are often depicted diagrammatically as shown here:

image

Ports and exports are depicted as if they were pins on the periphery of a component, which is accurate in a metaphorical sense, but misleading if taken too literally. A port is a structured way of representing the fact that the Producer transactor above makes a call to a specific function, and thus requires an implementation of that function in order to compile and run. On the other side, an export is a structured way of representing the fact that the Consumer transactor provides an implementation of a specific function. So although the diagram may appear to show two components with a structural connection between them, it actually shows the Producer making a call to a function implemented by the Consumer. What may appear to be a hardware connection turns out to be an object-oriented software dependency between Producer and Consumer.

When it comes to combining multiple transactors, the types of the transaction-level interfaces have to be respected. The declarations of ports and exports are each parameterized with the type of the transaction object to be passed as a function argument:

vmm_tlm_b_transport_port #(Producer, transaction) m_port;

vmm_tlm_b_transport_export #(Consumer, transaction) m_export;

The port, which requires a transaction-level interface of a given type, must be bound to an export that provides an interface of the same type. The type in question is given by the second parameter, transaction. The tlm_bind method effectively seals a contract between the transactor that requires the interface and the provider of the interface:

producer.m_port.tlm_bind( consumer.m_export );

One benefit of transaction-level interfaces is that this connection is strongly typed, so the SystemVerilog compiler will catch any mismatch between the types of the port and the export.
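For example, attempting to bind a port and an export parameterized with different transaction types (the types trans_A and trans_B below are hypothetical) is rejected at compile time:

vmm_tlm_b_transport_port   #(Producer, trans_A) m_port;   // requires trans_A
vmm_tlm_b_transport_export #(Consumer, trans_B) m_export; // provides trans_B

producer.m_port.tlm_bind( consumer.m_export ); // compile-time error: trans_A != trans_B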

As well as binding a port to an export peer-to-peer, it is also possible to bind chains of ports or exports going up or down the component hierarchy, as shown diagrammatically below:

image

Child-to-parent port bindings carry the function call up through the component hierarchy to the left, while parent-to-child export bindings carry the function call down through the component hierarchy to the right. A port-to-export binding is only permitted at the top level.

At run-time, a method call to the appropriate function is made through the child port:

port.b_transport(tx, delay);

This will result in the corresponding function implementation being called directly, with no intervening channel to store the transaction en route. Transaction-level interfaces are fast, robust, and simple to use, which is why they have been incorporated into VMM.

Posted in Communication, Transaction Level Modeling (TLM), VMM infrastructure | Comments Off

Verification in the trenches: Configuring your environment using VMM1.2

Posted by Ambar Sarkar on May 7th, 2010

Dr. Ambar Sarkar, Chief Verification Technologist, Paradigm Works Inc.

Let’s start with a quick and easy quiz.
Imagine you are verifying an NxN switch where N is configurable, and it supports any configuration from 1x1 to NxN. Assume that for each port, you instantiate one set of verification components such as monitors, BFMs, and scoreboards. For example, if this were a PCIe switch with 2 ingress and 3 egress ports, you would instantiate 2 ingress and 3 egress instances of PCIe verification components. So, for N = 5, how many environments should you actually write code for and maintain?

The answer had better be 1. Maintaining NxN = 25 distinct environments would be quixotic at best. What you want is to write code that creates the desired number and configuration of component instances based on some runtime parameters. This is an example of what is known as "structural" configuration, where you are configuring the structure of your environment.

In VMM1.2 parlance, this means that you want to make sure you can communicate to your build_ph() phase the required number of ingress and egress verification component instances. The build phase can then construct the configured number of components. This way, you get to reuse your verification environment code in multiple topologies, and reduce the number of unique environments that need to be separately created and maintained.

Hopefully, this establishes why structural configuration is important. So the questions that arise next are:

How do you declare the structural configuration parameters?

How do you specify the structural configuration values?

How do you use the structural configuration values in your code?

Declaring the configurable parameters

Structural configuration declarations should sit in a class that derives from vmm_unit. A set of convenient macros is provided, with the prefix `vmm_unit_config_xxx, where xxx stands for the data type of the parameter. For now, xxx can be int, boolean, or string.

For example, for integer parameters, you have:

`vmm_unit_config_int (
   <name of the int data member>,
   <description of the purpose of this parameter>,
   <default value>,
   <name of the enclosing class derived from vmm_unit>
);

If you want to declare the number of ports as configurable, you can declare a variable num_ports as shown below:

class switch_env extends vmm_group;
   `vmm_typename(switch_env)
   int num_ports;  // Will be configured at runtime
   vip bfm[$];     // Will need to create num_ports instances

   function new(string inst, vmm_unit parent = null);
      super.new(get_typename(), inst, parent); // NEVER FORGET THIS!!
   endfunction

   // Declare the number of ingress ports. Default is 1.
   `vmm_unit_config_int(num_ports, "Number of ingress ports", 1, switch_env);

endclass

Specifying the configuration values

There are two main options:

1.  From code using vmm_opts.

The basic rule is that you need to specify the value *before* the build phase gets called, where the construction of the components takes place. A good place to do so is in vmm_test::set_config().

function my_test::set_config();
   ...
   // Override the default with 4. my_env is an instance of the switch_env class.
   vmm_opts::set_int("my_env:num_ports", 4);
endfunction

2.  From the command line or an external option file. Here is how the number of ingress ports could be set to 5:

./simv +vmm_opts+num_ports=5@my_env

The command line supersedes options set within code as shown in option 1.
Note that these options can be specified for specific instances, or hierarchically using regular expressions.

Using the configuration values

This is the easy part. As long as you declare the parameter using `vmm_unit_config_xxx , you are all set. It will be set to the correct value when the containing object derived from vmm_unit is created.

function switch_env::build_ph();
   // Just use the value of num_ports
   for (int i = 0; i < num_ports; i++) begin
      bfm[i] = new ($psprintf("bfm%0d", i), this);
   end
endfunction

This article is the 6th in the Verification in the trenches series. Hope you found this article useful. If you would like to hear about any other related topic, please comment or drop me a line at ambar.sarkar@paradigm-works.com. Also, if you are starting out fresh, please check out the free VMM1.2 environment generator at http://resourceworks.paradigm-works.com/svftg/vmm .

Posted in Configuration, Tutorial | Comments Off
