Verification Martial Arts: A Verification Methodology Blog

Archive for the 'Reuse' Category

Avoiding Redundant Simulation Cycles in your UVM VIP based Simulation with a Simple Save-Restore Strategy

Posted by paragg on 6th March 2014

In many verification environments, you reuse the same configuration cycles across different testcases. These cycles might involve writing to and reading from different configuration and status registers, loading program memories, and other similar tasks required to set up a DUT for the targeted stimulus. In many of these environments, the time taken by these configuration cycles is very long. Also, there is a lot of redundancy, as verification engineers have to run the same set of verified configuration cycles for different testcases, leading to a loss in productivity. This is especially true for complex verification environments with multiple interfaces which require different components to be configured.

The Verilog language provides an option of saving the state of the design and the testbench at a particular point in time, so that you can restore the simulation to the same state and continue from there. This can be done by adding the appropriate built-in system calls to the Verilog code. VCS provides the same options from the Unified Command-line Interpreter (UCLI).

However, it is not enough for you to restore the simulation from the saved state. For different simulations, you may want to apply different random stimulus to the DUT. In the context of UVM, you would want to run different sequences from a saved state: compile once, run the common configuration cycles, save the simulation state, and then restore that state multiple times, each time executing a different test sequence.

Apart from the last step, which varies from run to run, the rest of the steps need to be established only once and require no iteration.

Here we explain how to achieve the above strategy with the simple existing UBUS example available in the standard UVM installation. Simple changes are made in the environment to show what needs to be done to bring in this additional capability. Within the existing set of tests, two of them, namely "test_read_modify_write" and "test_r8_w8_r4_w4", differ only w.r.t. the master sequence being executed – i.e. "read_modify_write_seq" and "r8_w8_r4_w4_seq" respectively.

Let's say that we have a scenario where we want to save a simulation once the reset_phase is done and then start executing different sequences after the reset_phase in the restored simulations. To demonstrate a similar scenario through the UBUS tests, we introduced a delay in the reset_phase of the base test (in a real test, this may correspond to PLL lock, DDR initialization, or basic DUT configuration).

The following snippet shows how the existing tests are modified to bring in the capability of running different tests in different ‘restored’ simulations.
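A sketch of the modified test is shown below. This is illustrative rather than the verbatim UBUS code: the sequence name comes from a hypothetical +SEQ_NAME plusarg, the sequence is created by name through the UVM 1.1 global factory handle, and it is started explicitly on the master sequencer at the beginning of the main phase – one way of realizing the default_sequence change described below.

class ubus_example_base_test extends uvm_test;
  . . .
  virtual task main_phase(uvm_phase phase);
    string            seq_name;
    uvm_object        obj;
    uvm_sequence_base seq;

    // Pick up the sequence name from the command line of the (restored) run
    if (!$value$plusargs("SEQ_NAME=%s", seq_name))
      seq_name = "read_modify_write_seq";

    // Create the sequence by name and start it on the relevant sequencer
    obj = factory.create_object_by_name(seq_name);
    if (!$cast(seq, obj))
      `uvm_fatal("SEQ", {"Unknown sequence: ", seq_name})

    phase.raise_objection(this);
    seq.start(ubus_example_tb0.ubus0.masters[0].sequencer);
    phase.drop_objection(this);
  endtask
endclass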

As evident in the code, we made two major modifications:

  • Shifted the setting of the phase default_sequence from the build phase to the start of the main phase.
  • Get the name of the sequence as an argument from the command-line and process the string appropriately in the code to execute the sequence on the relevant sequencer.

As you can see, the changes are kept to a minimum. With this, the above generic framework is ready to be simulated. In VCS, the save/restore flow can be enabled in several ways; one of them is shown below.
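The following is one possible flow. Treat the exact UCLI command spellings, the checkpoint name and the do-file contents as assumptions to be checked against the VCS/UCLI documentation for your release.

# Run the common reset/configuration portion once and save a checkpoint
./simv +UVM_TESTNAME=ubus_example_base_test -ucli -do save.do
#   save.do:    run 100ns ; save cfg_done.chk ; quit

# Restore the checkpoint as many times as needed, each run picking up a
# different sequence from the command line
./simv -ucli -do restore.do +SEQ_NAME=read_modify_write_seq
./simv -ucli -do restore.do +SEQ_NAME=r8_w8_r4_w4_seq
#   restore.do: restore cfg_done.chk ; run ; quit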

Thus, the above strategy helps in optimal utilization of compute resources with simple changes in your verification flow. Hope this was useful and that you can easily make the changes in your verification environment to adopt this flow and avoid redundant simulation cycles.

Posted in Automation, Coding Style, Configuration, Creating tests, Customization, Optimization/Performance, Organization, Reuse, Stimulus Generation, SystemVerilog, Tutorial, UVM, Uncategorized, Verification Planning & Management | 1 Comment »

SNUG-2012 Verification Round Up – Miscellaneous Topics

Posted by paragg on 29th March 2013

In my final installment of the series of blogs summing up the various SNUG verification papers of 2012, I try to cover the user papers on Design IP/Verification IP and SystemC/SystemVerilog co-simulation. Please find my earlier blogs on the other domains here: SystemVerilog Language, Methodologies & VCS technologies.

DesignWare core USB3.0 Controller (DWC_usb3) can be configured as a USB3.0 Device Controller. When verifying a system that comprises a DWC_usb3 Device Controller, the verification environment is responsible for bringing up the DWC_usb3 Device Controller to its proper operation mode to communicate with the USB3.0 Host. The paper Integrating DesignWare USB3.0 Device Controller In a UVM-based Testbench from Ning Guo of Paradigm Works describes the process of configuring and driving the DWC_usb3 Device Controller in a UVM based verification environment using the Discovery USB 3.0 Verification IP. This paper describes how the verification environment needs to be created so that it’s highly configurable and reusable.

The AMBA 4 ACE specification enables system-level cache coherency across clusters of multicore processors, such as the ARM Cortex-A15 and Cortex-A7 MPCore™ processors. This ensures optimum performance and power efficiency of complex SoC designs. However, the design complexity associated with these capabilities is also higher, and it throws up new verification challenges. In the paper "Creating AMBA4 ACE Test Environment With Discovery VIP", Whitney Huang and Sean Chou of MediaTek Inc. demonstrate how to tackle these complex verification challenges and increase verification productivity by using the Synopsys Discovery AMBA ACE VIP.

The paper "Verification Methodology of Dual NIC SOC Using VIPs" by A.V. Anil Kumar, Mrinal Sarmah and Sunita Jain of Xilinx India Technology Services Pvt. Ltd. talks about how various features of the Synopsys PCIe and Ethernet Verification IPs can be exploited to help in the efficient verification of the DUT across various traffic configurations. The paper explores how the VIP Application Programming Interfaces (APIs) can be leveraged in the test cases to reach high functional coverage numbers in a very short duration. They also show how a dual NIC verification environment can effectively use Ethernet VIP APIs to test various Media Access Control (MAC) features. Finally, they conclude with how the implementation can be reused across future revisions of their design.

The ability to analyze the performance of the SoC at an early stage of the design can make a significant difference to the end product, leading to a more accurate and earlier estimate of the expected performance. Dayananda Yaraganalu Sadashivappa, Igal Mariasin and Jayaprakash Naradasi of SanDisk India Device Design Centre Pvt. Ltd., in the paper "Generic MLM environment for SoC Performance Enhancement", outline the solution that was found by using the Synopsys VIP models. The VIPs were used in conjunction with the interconnect, which in this case is a Multi-Layer Matrix (MLM). The environment was built leveraging the VMM base classes. The VMM multiple-stream scenario (vmm_ms_scenario) base class was used to create the traffic across the matrix, and the performance meters were constructed using the base classes. Callbacks were leveraged appropriately to help in collating the statistics. Multiple knobs were used to make the environment generic and configurable. The approach helped in finding multiple performance bugs which could not have been easily found using conventional verification.

In the paper "User Experience Verifying Ethernet IP Core", Puneet Rattia of Altera Corporation presents his experience verifying the Altera® 40-100Gbps Ethernet IP core using a VMM environment while integrating the Ethernet VIP from Synopsys. He explains how he created a full suite of system- and block-level regression tests and then goes on to show how he utilizes the coverage mapping capabilities of VCS to merge the results across these various testbenches and produce meaningful reports. Besides showing how to reuse the verification infrastructure at the SoC level, the paper also demonstrates how they went in for horizontal reuse by integrating the reference SystemC-based models developed and prototyped in the early phase of the project.

UVM 1.x includes support for the communication interfaces defined by the SystemC TLM-2.0 standard. This enables integration of SystemC TLM-2.0 IP into a SystemVerilog UVM verification environment. Dr David Long, John Aynsley and Doug Smith of Doulos, in the paper "A Beginner's Guide to Using SystemC TLM-2.0 IP with UVM", describe how this is best done. They point out that the connection between SystemC and SystemVerilog currently requires a tool-specific interface such as the Synopsys Transaction Level Interface (TLI). The paper begins with a brief overview of TLM-2.0 aimed at novice users. It then discusses the steps required to add a SystemC TLM-2.0 model into a SystemVerilog UVM environment and simulate it with VCS. At each step, issues that users will face are explored and suggestions made for practical fixes, showing the relevant pieces of code. Finally, the paper gives a summary of areas where the UVM implementation of TLM-2.0 differs from the SystemC standard and proposes workarounds to ensure correct communication between the SystemVerilog and SystemC domains.

There is an inherent need to enable the horizontal reuse of components created during the architecture and exploration stage. Subhra S Bandyopadhyay and Pavan N M of Intel Technology India Pvt. Ltd, in "Integrating SystemC OSCI TLM 2.0 Models to OVM based System Verilog Verification Environments", talk about how their architecture team creates SystemC models for early performance analysis and accelerated software development. In the OVM-based verification environment, the objective was to reuse this model as a reference model and thus reduce the overall environment bring-up time. The challenge was not only to integrate the SystemC model in the OVM-based verification environment but also to be able to efficiently send transactions from SV to SystemC and vice versa. This paper explores the successful integration of SystemC TLM-2.0 components in OVM-based verification environments and also highlights how the VCS TLI (Transaction Level Interface) adapters help TLM-2.0 sockets in SystemC communicate with those in SV and vice versa.

Truly, I feel overwhelmed by the number of papers and the interesting use of technology across a variety of domains on which users share their experiences at the various SNUG conferences. As we speak, the SNUG events for 2013 have started, and the stage is all set for a new set of very informative and interesting sessions. I am sure most of you will be attending the SNUG conference in your area. You can find the detailed schedule here.

Posted in Announcements, Automation, Callbacks, Coding Style, Communication, Reuse, Structural Components, SystemC/C/C++, SystemVerilog, Transaction Level Modeling (TLM), Tutorial, UVM, VMM | Comments Off

Coverage coding: simple tips and gotchas

Posted by paragg on 28th March 2013

Author - Bhushan Safi (E-Infochips)

Functional coverage has been the most widely accepted way by which we track the completeness of any constrained-random testbench. However, does achieving 100% functional coverage mean that the DUV is bug free? Certainly not, but it boosts the confidence of the verification engineer and the management team.

Based on my experience of defining functional covergroups for different projects, I realized that coverage constructs and options in the SystemVerilog language have their own nuances that one needs to keep an eye out for. These "gotchas" have to be understood so that coverage can be used optimally and the results stay in correct alignment with the desired intent. Let me talk about some of these finer aspects of coverage so that you can use the constructs more productively.

Usage of ignore_bins

The 'ignore_bins' construct is meant to exclude a collection of bins from coverage. While using this particular construct, you might end up with multiple 'shapes' issues (by 'shapes' I mean "Guard_OFF" and "Guard_ON", which appear in the report whenever 'ignore_bins' is used). Let's look at a simple usage of ignore_bins, shown below.
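The following is a reconstruction of the figure 1 code from this post; the covergroup, field and configuration names are illustrative.

covergroup trans_cg @(sample_event);
  KIND: coverpoint tr.kind {
    bins legal[]    = {[0:2]};
    // try to drop the bin for value 1 when coverage for it is disabled
    ignore_bins dis = {1} iff (cfg.disable_cov);
  }
endgroup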

Looking at the code above, we would assume that since we have set "cfg.disable_cov = 1", the bin with value 1 would be excluded from the generated coverage report. Here we use the 'iff' condition to try to match our intent of not creating a bin for the variable under the said condition. However, in simulations where sample_event is never triggered, we see that we still end up with an instance of our covergroup which expects both bins to be hit. Why does this happen? If you dig deep into the semantics, you will understand that the "iff" condition comes into action only when the event sample_event is triggered. So if we are writing 'ignore_bins' for a covergroup which may or may not be sampled in each run, then we need to look for an alternative. Indeed there is a way to address this requirement, and that is through the usage of the ternary operator. The code below shows how the ternary operator is used to model the same intent.
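Again a reconstruction of the original figure 2, with illustrative names. The ignored value is picked when the covergroup is constructed, so it no longer depends on the covergroup ever being sampled.

covergroup trans_cg @(sample_event);
  KIND: coverpoint tr.kind {
    bins legal[]    = {[0:2]};
    // when coverage for value 1 is disabled, ignore it; otherwise ignore
    // 2'b11, a value outside the legal set, so nothing valid is lost
    ignore_bins dis = { cfg.disable_cov ? 1 : 2'b11 };
  }
endgroup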

Now the report is as you expect!!!

Using the above-mentioned coding style, we make sure that the bin which is not desired under specific conditions is ignored irrespective of whether or not the covergroup is sampled. Also, we use the value "2'b11" to make sure that we don't end up ignoring a valid value for the variable concerned.

Using detect_overlap

The coverage option called “detect_overlap” helps in issuing a warning if there is an overlap between the range list (or transition list) of two bins of a coverpoint. Whenever we have plenty of ranges to be covered, and there is a possibility of overlap, it is important to use this option.

Why is it important and how can you be impacted if you don’t use it? You might actually end up with incorrect and unwanted coverage results!
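As an illustration, consider a coverpoint whose range bins unintentionally overlap at the boundaries (the values here are made up for the example):

covergroup addr_cg @(sample_event);
  ADDR: coverpoint tr.addr {
    option.detect_overlap = 1;   // warns about the overlaps at 25, 50 and 75
    bins q1 = {[0:25]};
    bins q2 = {[25:50]};
    bins q3 = {[50:75]};
    bins q4 = {[75:100]};
  }
endgroup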

In this scenario, if a value of 25 is generated, the coverage score reported would be 50% when the desired outcome would ideally have been 25%. This is because the value 25 contributes to two bins out of the four when that was probably not wanted. The usage of 'detect_overlap' would have warned you about this, and you could have fixed the bins to make sure that such a scenario doesn't occur.

Coverage coding for crosses and assigning weight

What does the LRM (Table 19-1—Instance-specific coverage options) say about the ’weight’ attribute?  

"If set at the covergroup syntactic level, it specifies the weight of this covergroup instance for computing the overall instance coverage of the simulation. If set at the coverpoint (or cross) syntactic level, it specifies the weight of a coverpoint (or cross) for computing the instance coverage of the enclosing covergroup. The specified weight shall be a non-negative integral value."

What kinds of surprises can a combination of cross and option.weight create?

The SystemVerilog LRM shows a very simple way of writing a cross. Let’s look at the code below.
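The code below is a reconstruction of the example the post discusses (variable and label names chosen to match the report described afterwards):

covergroup check_cg @(posedge clk);
  CHECK_A: coverpoint check_4_a { option.weight = 0; }
  CHECK_B: coverpoint check_4_b { option.weight = 0; }
  A_X_B  : cross check_4_a, check_4_b;   // crossed on the variables, not the labels
endgroup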

The expectation here is that for a single simulation (expecting one of the bins to be hit), we will end up with 25% coverage, as we have set the weight of the individual coverpoints to zero. However, what essentially happens is the following: two internal coverpoints for check_4_a and check_4_b are generated, which are used to compute the coverage score of the 'crossed' coverpoint. So you end up with a total of four coverpoints, two of which have option.weight set to 0 (i.e. CHECK_A and CHECK_B) and two of which are coverpoints with option.weight of 1 (i.e. check_4_a and check_4_b). Thus, for a single simulation, you will not get the 25% coverage desired.

With the resulting report, we see the following issues:

  • We see four coverpoints while the expectation was only two.
  • The weights of the internally generated coverpoints are not zero, even though option.weight was set to '0' on the labeled coverpoints.
  • The overall coverage numbers are not what we desired.

In order to avoid such undesired results, we need to take care of the following aspects:

  • Use type_option.weight = 0 instead of option.weight = 0.
  • Use the coverpoint labels instead of the coverpoint variable names to specify the cross (see the corrected example below).
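Putting both fixes together, the cross might be written as follows (again a sketch, not code from the original article):

covergroup check_cg @(posedge clk);
  CHECK_A: coverpoint check_4_a { type_option.weight = 0; }
  CHECK_B: coverpoint check_4_b { type_option.weight = 0; }
  A_X_B  : cross CHECK_A, CHECK_B;   // cross the labels, not the variables
endgroup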

Hope my findings will be useful for you and you will use these options/attributes appropriately to get the best value out of your coverage metrics (without losing any sleep or debug cycles to figure out why they didn’t behave as you expected them to)!

Posted in Coding Style, Coverage, Metrics, Customization, Reuse, SystemVerilog, Tutorial | 9 Comments »

SNUG-2012 Verification Round Up – Language & Methodologies – I

Posted by paragg on 25th February 2013

As in the previous couple of years, last year's SNUG (Synopsys User Group) showcased an amazing number of useful user papers leveraging the capabilities of the SystemVerilog language and verification methodologies centered on it.

I am always excited when I see this plethora of useful papers, and I try to ensure that I set aside some time to go through all these user experiences. Now, as we wait for SNUG Silicon Valley to kick-start the SNUG events for this year, I want to look back at some of the very interesting and useful papers from the different SNUGs of the year 2012. Let me start by talking about a few papers in the area of the SystemVerilog language and SV methodologies.

Papers leveraging the SystemVerilog language and constructs

Hillel Miller of Freescale, in the paper "Using covergroups and covergroup filters for effective functional coverage", uncovers the mechanisms available for carving out the coverage goals. In the P1800-2012 version of the SystemVerilog LRM, new constructs are provided just for doing this. The construct focused on is the "with" construct, which provides the ability to carve a sub-range of goals out of a multidimensional range of possibilities. This is very relevant in a "working" or under-development setup that requires frequent reprioritization to meet tape-out goals.

The paper "Taming Testbench Timing: Time's Up for Clocking Block Confusions" by Jonathan Bromley and Kevin Johnston of Verilab reviews the key features and purpose of clocking blocks and then examines why they continue to be a source of confusion and unexpected behavior for many verification engineers. Drawing from the authors' project and mentoring experience, it highlights typical usage errors and how to avoid them. They clarify the internal behavior of clocking blocks to help engineers understand the reasons behind common problems, and show techniques that allow clocking blocks to be used productively and with confidence. Finally, they consider some areas that may cause portability problems across simulators and indicate how to avoid them.

Inference of latches and flops based on coding styles has always been a topic that creates multiple viewpoints. There are other such scenarios of synthesis and simulation mismatches that one typically comes across. To address all such ambiguity, language developers have provided different constructs that allow an explicit resolution based on the intent. To help us gain a deeper understanding of the topic, Don Mills of Microchip Technology Inc. presented the related concepts in the paper "Yet Another Latch and Gotchas Paper" at SNUG Silicon Valley. This paper discusses and provides solutions to issues that designers using SystemVerilog for design come across, such as: case expression issues for casez and casex, latches generated when using unique case or priority case, SRFF coding style problems with synthesis, and the SystemVerilog 2009 definition of logic.

Gabi Glasser from Intel presented the paper "Utilizing SystemVerilog for Mixed-Signal Validation" at SNUG Israel, where he proposed a mechanism for simplifying analysis and increasing coverage for mixed-signal simulations. The method proposed was to take advantage of SystemVerilog capabilities that enable defining a hash (associative) array with unlimited size. During the simulation, vectors are created for the required analog signals, allowing them to be analyzed within the testbench during or at the end of the simulation, without requiring these signals to be saved into a file. The flow change enables launching a large-scale mixed-signal regression while allowing easier analysis of coverage data.

A design pattern is a general reusable solution to a commonly recurring problem within a given context. The benefit of using design patterns is clear: they give designers a common language when approaching a problem, and a widely used set of tools to solve issues as they come up. The paper "Design Patterns In Verification" by Guy Levenbroun of Qualcomm explores several common problems which might arise during the development of a testbench, and how we can use design patterns to solve them. The patterns covered in the paper are categorized into three major areas: creational (e.g. factory), structural (e.g. composite) and behavioral (e.g. template).

Arik Shmayovitsh, Avishay Tvila and Guy Lidor of Sigma Designs, in their paper "Truly reusable Testbench-to-RTL connection for System Verilog", present a novel approach to connecting the DUT and testbench using consistent semantics while reusing the testbench. This is achieved by abstracting the connection layer of each testbench using the SystemVerilog 'bind' construct. This ensures that the only thing required to reuse the testbench for a new DUT is to identify the instance of the corresponding DUT.

In the paper "A Mechanism for Hierarchical Reuse of Interface Bindings", Thomas Zboril of Qualcomm (Canada) explores another method to instantiate SV interfaces, connect them to the DUT and wrap the virtual interfaces for use in the test environment. This method allows the reuse of all the code when the original block-level DUT becomes a lower-level instance in a larger subsystem or chip. The method involves three key mechanisms: hierarchical virtual interface wrappers, a novel approach of using hierarchical instantiation of SV interfaces, and another novel approach of automatic management of hierarchical references via SV macros.

Thinh Ngo & Sakar Jain of Freescale Semiconductor, in their paper "100% Functional Coverage-Driven Verification Flow", propose a coverage-driven verification flow that can efficiently achieve 100% functional coverage during simulation. The flow targets varied functionality, focuses at the transaction level, measures coverage during simulation, and fails a test if 100% of the expected coverage is not achieved. This flow maps stimulus coverage to functional coverage, with every stimulus transaction being associated with an event in the coverage model and vice versa. This association is derived from the DUT specification and/or the DUT model. Expected events generated along with stimulus transactions are compared against actual events triggered in the DUT. The comparison results are used to pass or fail the test. 100% functional coverage is achieved via 100% stimulus coverage. The flow enables every test with its targeted functionality to meet 100% functional coverage provided that it passes.

Papers on Verification Methodology

In the paper "Top-down vs. bottom-up verification methodology for complex ASICs", Paul Lungu & Zygmunt Pasturczyk of Ciena (Canada) cover the simulation methodology used for two large ASICs requiring block-level simulations. A top-down verification methodology was used for one of the ASICs, while a larger version needed an expanded bottom-up approach using extended simulation capabilities. Some techniques and verification methods, such as chaining of sub-environments from block to top level, are highlighted along with challenges and solutions found by the verification team. The paper presents a useful technique of passing a RAL (Register Abstraction Layer) mirror to the C models which are used as scoreboards in the environment. The paper also presents a method of generating stable clocks inside the "program" block.

The paper "Integration of Legacy Verilog BFMs and VMM VIP in UVM using Abstract Classes" by Santosh Sarma of Wipro Technologies (India) presents an alternative approach where legacy BFMs written in Verilog and not implemented using classes are hooked up to higher-level class-based components to create a standard UVM VIP structure. The paper also discusses an approach where existing VMM transactors that are tied to such legacy BFMs can be reused inside the UVM VIP with the help of the VCS-provided UVM-VMM interoperability library. The implementation makes use of abstract classes to define functions that invoke the BFM APIs. The abstract class is then concretized using derived classes which give the actual implementation of the functions in the abstract class. The concrete class is then bound to the Verilog instance of the BFM using the SystemVerilog "bind" concept. The concrete class handle is then used by the UVM VIP and the VMM transactor to interact with the underlying Verilog BFM. Using this approach, the UVM VIP can be made truly reusable by using run-time binding of the Verilog BFM instance to the VIP instead of using hardcoded macro names or procedural calls.

"A Unified Self-Check Infrastructure - A Standardized Approach for Creating the Self-Check Block of Any Verification Environment" by John Sotiropoulos, Matt Muresa and Massi Corba of Draper Laboratories, Cambridge, MA, USA, presents a structured approach for developing a centralized "Self-Check" block for a verification environment. The approach is flexible enough to work with various testbench architectures and is portable across different verification methodologies. Here, all of the design's responses are encapsulated under a common base class, providing a single "Self-Check" interface for any checking that needs to be performed. This abstraction, combined with a single centralized scoreboard and a standardized set of components, provides the consistency needed for faster development and easier code maintenance. It expands the concept of 'self-check' to incorporate white-box monitors (tracking internal DUT state changes etc.) and temporal models (reacting to wire changes) along with traditional methodologies for enabling self-checking.

For VMM users looking at migrating to UVM, there is another paper, "Transitioning to UVM from VMM" from Courtney Schmitt of Analog Devices, Inc., which discusses the process of transitioning to a UVM-based environment from VMM. Differences and parallels between the two verification methodologies are presented to show that updating to UVM is mostly a matter of getting acquainted with a new set of base classes. Topics include UVM phases, agents, TLM ports, configuration, sequences, and register models. Best practices and reference resources are highlighted to make the transition from VMM to UVM as painless as possible.

Posted in Announcements, Coverage, Metrics, Creating tests, Customization, Modeling, Optimization/Performance, Reuse, SystemVerilog, UVM, Uncategorized, VMM, VMM infrastructure | 3 Comments »

Using the VMM Datastream Scoreboard in a UVM environment

Posted by Amit Sharma on 2nd February 2012

Implementing the response checking mechanism in a self-checking environment remains the most time-consuming task. The VMM Data Stream Scoreboard package facilitates the implementation of verifying the correct transformation, destination and ordering of ordered data streams. This package is intuitively applicable to packet-oriented design, such as modems, routers and protocol interfaces. This package can also be used to verify any design transforming and moving sequences of data items, such as DSP data paths and floating-point units. Out-of-the-box, the VMM data stream scoreboard can be used to verify single-stream designs that do not modify the data flowing through them. For example, it can be used to verify FIFOs, Ethernet media access controllers (MACs) and bridges.

The VMM data scoreboard can also be used to verify multi-stream designs with user-defined data transformation and input-to-output stream routing. The transformation from input data items into expected data items is not limited to one-to-one transformation. An input data item may be transformed into multiple expected data items (e.g. segmenters) or none (e.g. reassemblers). Compared to this, the functionality available through the UVM in-order comparator or the algorithmic comparator is significantly less. Thus, users might want to have access to the functionality provided by the VMM DS scoreboard in a UVM environment. Using the UBUS example available in $VCS_HOME/doc/examples/uvm/integrated/ubus as a demo vehicle, this article shows how simple adapters are used to integrate the VMM DS scoreboard in a UVM environment and thus get access to more advanced scoreboarding functionality within the UVM environment.

The UBUS example uses an example scoreboard to verify that the slave agent is operating as a simple memory. It extends the uvm_scoreboard class and implements a memory_verify() function to make the appropriate calls and comparisons needed to verify a memory operation. A uvm_analysis_export is explicitly created and an implementation for 'write' defined. In the top-level environment, the analysis export is connected to the analysis port of the slave monitor.

ubus0.slaves[0].monitor.item_collected_port.connect(scoreboard0.item_collected_export);

The simple scoreboard, with its explicit implementation of the comparison routines, suffices for verifying the basic operations, but would require significant enhancement to provide the more detailed information a user might need. For example, let's take the 'test_2m_4s' test. Here, the environment is configured to have 2 masters and 4 slaves. Depending on how the slave memory map is configured, different slaves respond to different transfers on the bus. Now, if we want to get some information on how many transfers went into the scoreboard for a specific combination (e.g. Master 1 to Slave 3), how many were verified to be processed correctly, etc., it is fair to conclude that the existing scoreboarding scheme will not suffice.

Hence, it was felt that the Data Stream Scoreboard, with its advanced functionality and support for data transformation, data reordering, data loss, and multi-stream data routing, should be available for verification environments not necessarily based on VMM. From VCS 2011.12-1, this integration has been made very simple. The VMM DS scoreboard implements a generic data stream scoreboard that accepts parameters for the input and output packet types. A single instance of this class is used to check the proper transformation, multiplexing and ordering of multiple data streams. The scoreboard class now leverages a policy-based design and parameterized specializations to accept any 'Packet' class, be it VMM, UVM or OVM.

The central element in policy-based design is a class template (called the host class, which in this case is the VMM DS scoreboard), taking several type parameters as input, which are specialized with types selected by the user (called policy classes), each implementing a particular implicit method (called a policy), and encapsulating some orthogonal (or mostly orthogonal) aspect of the behavior of the instantiated host class. In this case, the 'policies' implemented by the policy classes are the 'compare' and 'display' routines.

By supplying a host class combined with a set of different, canned implementations for each policy, the VMM DS scoreboard can support all different behavior combinations, resolved at compile time, and selected by mixing and matching the different supplied policy classes in the instantiation of the host class template. Additionally, by writing a custom implementation of a given policy, a policy-based library can be used in situations requiring behaviors unforeseen by the library implementor.

So, let's go through a set of simple steps to see how you can use the VMM DS scoreboard in the UVM environment.

Step 1: Create the policy class for UVM and define its 'policies'

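A minimal sketch of such a policy class, assuming the scoreboard expects static compare() and display() hooks (check the VMM DS scoreboard documentation for the exact prototypes it requires); here they simply delegate to the standard uvm_object methods:

class uvm_object_policy;

  // 'compare' policy: delegate to the built-in uvm_object comparison
  static function bit compare(uvm_object actual, uvm_object expected);
    return actual.compare(expected);
  endfunction

  // 'display' policy: delegate to the built-in uvm_object printing
  static function string display(uvm_object obj);
    return obj.sprint();
  endfunction

endclass: uvm_object_policy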

Step 2: Replace the UVM scoreboard with a VMM one extended from "vmm_sb_ds_typed" and specialize it with the ubus_transfer type and the previously created uvm_object_policy.

class ubus_example_scoreboard extends vmm_sb_ds_typed #(ubus_transfer,ubus_transfer, uvm_object_policy);

`vmm_typename(ubus_example_scoreboard)

endclass: ubus_example_scoreboard

Once this is done, you can either declare a VMM TLM analysis export to connect to the bus monitor in the UBUS environment or use the pre-defined one in the VMM DS scoreboard.

vmm_tlm_analysis_export #(ubus_example_scoreboard,ubus_transfer) analysis_exp;

Given that for any configuration one master and one slave would be active, define the appropriate streams in the constructor. (Though this is not required if there are only single streams, we define this explicitly so that it can scale up to multiple input and expect streams for different tests.)

this.define_stream(0, "Slave 0", EXPECT);
this.define_stream(0, "Master 0", INPUT);

Step 2.a: Create the 'write' implementation for the analysis export

Since, we are verifying the operation of the slave as a simple memory, we just add in the appropriate logic to insert a packet to the scoreboard when we do a ‘WRITE’ and an expect/check when the transfer is a ‘READ’ with an address that has already been written to.

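A sketch of that 'write' implementation is shown below. insert() and expect_in_order() are the posting and in-order checking calls of the DS scoreboard base class; the associative array used to remember written addresses is an illustrative addition.

bit written [bit [15:0]];

function void write(int id, ubus_transfer trans);
  if (trans.read_write == WRITE) begin
    this.insert(trans);                  // post the expected data
    written[trans.addr] = 1;
  end
  else if (trans.read_write == READ && written.exists(trans.addr))
    this.expect_in_order(trans);         // check the observed read data
endfunction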

Step 2.b: Implement the stream_id() method

You can use this method to determine which stream a specific 'transfer' belongs to, based on the packet's content, such as a source or destination address. In this case, the bus monitor updates the 'slave' property of the collected transfer according to where the address falls in the slave memory map.

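Conceptually the implementation just maps a transfer to a stream index; the prototype below is an assumption, so check the vmm_sb_ds documentation for the actual argument list.

virtual function int stream_id(ubus_transfer trans);
  // The bus monitor has already set trans.slave from the memory map,
  // so the slave index doubles as the stream identifier.
  return trans.slave;
endfunction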

Step 3: Create the UVM Analysis to VMM Analysis Adapter

The uvm_analysis_to_vmm_analysis adapter is used to connect any UVM component with an analysis port to any VMM component via an analysis export. The adapter will convert all incoming UVM transactions to a VMM transaction and drive this converted transaction to the VMM component through the analysis port-export pair. If you are using the VMM-UVM interoperability library, you do not have to create the adapter, as it is available in the library.


Create the ‘write’ implementation for the analysis export in the adapter

The write method, called via the analysis export, would just post the received UBUS transfer from the UVM analysis port to the VMM analysis port.

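An illustrative sketch of what such an adapter could look like follows; the actual class ships with the interoperability library, so treat the names and details as assumptions. Since the DS scoreboard here is specialized directly with ubus_transfer, the transfer is simply passed through rather than converted.

class uvm_analysis_to_vmm_analysis extends uvm_component;

  uvm_analysis_imp #(ubus_transfer, uvm_analysis_to_vmm_analysis) analysis_export;
  vmm_tlm_analysis_port #(uvm_analysis_to_vmm_analysis, ubus_transfer) m_vmm_ap;

  function new(string name, uvm_component parent);
    super.new(name, parent);
    analysis_export = new("analysis_export", this);
    m_vmm_ap        = new(this, "m_vmm_ap");
  endfunction

  // Forward every transfer seen on the UVM analysis port to the VMM side
  function void write(ubus_transfer t);
    m_vmm_ap.write(t);
  endfunction

endclass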

Step 4: Make the TLM connections

In the original example, the item_collected_port of the slave monitor was connected to the analysis export of the example scoreboard. Here, the DataStream scoreboard has a VMM analysis export. Hence, we need the adapter created above to mediate between the analysis port of the UVM bus monitor and the analysis export of the VMM DS scoreboard.

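The hookup in the connect phase might then look like the following (instance names are hypothetical, based on the UBUS example hierarchy):

function void connect_phase(uvm_phase phase);
  // UVM monitor -> adapter (UVM analysis connection)
  ubus0.bus_monitor.item_collected_port.connect(adapter0.analysis_export);
  // adapter -> VMM DS scoreboard (VMM analysis binding)
  adapter0.m_vmm_ap.tlm_bind(scoreboard0.analysis_exp);
endfunction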

Step 5: Define Additional streams if required for multi-master multi-slave configurations

This step is not required for a single master/slave configuration. However, you would need to create additional streams so that you can verify the correctness of all the different master/slave permutations in tests like "test_2m_4s".

In this case, the following is added in test_2m_4s in the connect_phase():

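For example, to add the streams for a second master/slave pair (stream identifiers, names and the enum scoping are illustrative and should be adapted to your setup):

scoreboard0.define_stream(1, "Slave 1",  vmm_sb_ds::EXPECT);
scoreboard0.define_stream(1, "Master 1", vmm_sb_ds::INPUT);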

Step 6: Add appropriate options to your compile command and analyze your results

Change the Makefile by adding -ntb_opts uvm-1.1+rvm and +define+UVM_ON_TOP to the compile command:

vcs -sverilog -timescale=1ns/1ns -ntb_opts uvm-1.1+rvm +incdir+../sv ubus_tb_top.sv -l comp.log +define+UVM_ON_TOP

And that is all, as far as the changes go; you are ready to validate your DUT with a more advanced scoreboard with loads of built-in functionality, as you will see when you execute the "test_2m_4s" test.

Thus, not only do you have stream-specific information now, but you also have access to much more functionality, as mentioned earlier. For example, you can model transformations, check for out-of-order matches, allow for dropped packets, and iterate over different streams to get access to the specific transfers. Again, depending on your requirements, you can use the simple UVM comparator for your basic checks and switch over to the DS scoreboard for the more complex scenarios with the flip of a switch in the same setup. This is what we did for a UVM PCIe VIP we developed earlier (From the Magician's Hat: Developing a Multi-methodology PCIe Gen2 VIP) so that users have access to all the information they require. Hopefully, this will keep you going until a more powerful UVM scoreboard arrives in a subsequent UVM version.

Posted in Communication, Interoperability, Reuse, Scoreboarding, UVM, VMM infrastructure | 2 Comments »

Building & Configuring Coverage Model – VMM Style – Part-III

Posted by paragg on 25th June 2011

Parag Goel, Senior Corporate Application Engineer, Synopsys

In the final blog of this coverage modeling with VMM series, we focus on error coverage. Negative scenario testing is an integral part of verification. But again, we have this question – have I covered all negative scenarios?

So it is important to ensure that the generic coverage model tracks all the error scenarios.

Let's see how a specific mechanism provided in VMM, in the form of the vmm_log_catcher, helps to track error coverage efficiently and effectively. The VMM log catcher is able to identify/catch a specific string in any of the messages issued through the VMM reporting mechanism.

Typically, the verification environment issues messages to STDOUT when the DUT responds to an error scenario. These messages can be 'caught' by the log catcher to update the appropriate coverage groups. Let's see how this is done in detail.

The verification environment would respond to each negative scenario by issuing a message with a unique text specific to that error.

In the context of the AXI framework, we can introduce a wide range of error scenarios and test whether the DUT responds correctly or not; examples include protocol violations on the individual channels and error responses such as SLVERR on the write response channel.


However, all the scenarios may not always be applicable, and hence configurability is required to enable only the required set of coverpoints tied to the relevant negative scenarios. Thus, we should have the same kind of configurability for error coverage as I talked about in the earlier blogs.

Let’s see how we can catch the relevant responses and sample the appropriate covergroups.

As mentioned earlier, in the example below, we make use of the unique message issued as a result of a negative scenario.

This is how we use the VMM Log catcher.

1. The error coverage class is extended from vmm_log_catcher – VMM base class.

2. The caught() API is utilized as a means to qualify the covergroup sampling.

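A minimal sketch of such a catcher is shown below (class, covergroup and field names are illustrative, and the exact vmm_log_msg prototype should be checked against your VMM release):

class axi_wr_resp_err_cov extends vmm_log_catcher;

  bit slverr_seen;

  covergroup wr_resp_err_cg;
    SLVERR: coverpoint slverr_seen { bins hit = {1}; }
  endgroup

  function new();
    wr_resp_err_cg = new();
  endfunction

  // Invoked by the log service for every message matching the installed filter
  virtual function void caught(vmm_log_msg msg);
    slverr_seen = 1;
    wr_resp_err_cg.sample();
  endfunction

endclass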

In the code above, whenever a message with the text "AXI_WRITE_RESPONSE_SLVERR" is issued from anywhere in the verification environment, the 'caught' method is invoked, which in turn samples the appropriate covergroup. Additionally, you can specify more parameters in the catching API to restrict which 'scenarios' should be caught.

vmm_log_catcher::caught(
  string name = "",
  string inst = "",
  bit recurse = 0,
  int typs = ALL_TYPS,
  int severity = ALL_SEVS,
  string text = "");

The above API installs the specified message handler to catch any message of the specified type and severity, issued by the message service interface instances specified by the name and instance arguments, which contains the specified text. By default, this method catches all messages issued by this message service interface instance.

Hope this set of articles is relevant and useful to you. I have made an attempt to leverage some of the built-in capabilities of the SV language and the VMM base classes to target some of the challenges in creating configurable coverage models. These techniques can be improved further to make them more efficient and scalable. I would be waiting to hear from you about any inputs you have in this area.

Posted in Automation, Configuration, Coverage, Metrics, Reuse, Structural Components, VMM, VMM infrastructure | Comments Off

Building & Configuring Coverage Model – VMM Style – Part-II

Posted by paragg on 25th June 2011

Parag Goel, Senior Corporate Application Engineer, Synopsys

In the previous post, we looked at how you can enable/disable different types of coverage encapsulated in the coverage model wrapper class. In this post, let's look at how we can easily create an infrastructure to pass different inputs to the wrapper class so as to be able to configure the coverage collection based on the user's needs. The infrastructure ensures that these values percolate down to the sub-coverage model groups.

The following are some of the key inputs that need to be passed to the different coverage component classes:

1. SV Virtual Interfaces so that different signal activity can be accessed

2. The Transactions observed and collected by the physical level monitors

3. The ‘Configuration’ information


Let's look at how we can easily pass the signal-level information to the coverage model.

Step I: Encapsulation of the interface in the class wrapper.

class intf_wrapper extends vmm_object;

virtual axi_if v_if ;

function new (string name, virtual axi_if mst_if);
super.new(null, name);
this.v_if = mst_if;
endfunction

endclass: intf_wrapper

Step II: In the top-level class/environment, set this object using the vmm_opts API.

class axi_env extends vmm_env;
`vmm_typename(axi_env)
intf_wrapper mc_intf;

function void build_ph();
mc_intf = new("Master_Port", tb_top.master_if_p0);
// Set the master port interface
vmm_opts::set_object("VIP_MSTR:vip_mstr_port", mc_intf, env);
endfunction:build_ph
endclass: axi_env

Step III: Connecting in the coverage class.

A. Get the object containing the interface in the coverage model class using vmm_opts.

assert($cast(this.mst_port_obj, vmm_opts::get_object_obj(is_set, this, "vip_mstr_port")));

B. Connect the local virtual interface to the one contained in the object.

this.cov_vif = mst_port_obj.v_if;

Now, we need to pass the collected transaction object from the monitor to the coverage collector. This can be conveniently done in VMM using TLM communication, specifically through the vmm_tlm_analysis_port, which establishes the communication between a subscriber and an observer.

class axi_transfer extends vmm_data;

. . .

class axi_bus_monitor extends vmm_xactor;

  vmm_tlm_analysis_port #(axi_bus_monitor, axi_transfer) m_ap;

  task collect_trans();
    // Write to the analysis port.
    m_ap.write(trans);
  endtask
endclass

class axi_coverage_model extends vmm_object;

  vmm_tlm_analysis_export #(axi_coverage_model, axi_transfer) m_export;

  function new (string inst, vmm_object parent = null);
    m_export = new(this, "m_export");
  endfunction

  function void write(int id, axi_transfer trans);
    // Sample the appropriate covergroup once the transaction
    // is received in the write function.
  endfunction

endclass

To set up the TLM Connections in the agent/environment, we need to do the following:

class axi_subenv extends vmm_group;

  // Instantiate the component classes and create them.
  axi_bus_monitor    mon;
  axi_coverage_model cov;
  . . .

  virtual function void build_ph;
    mon = new("mon", this);
    cov = new("cov", this);
  endfunction

  virtual function void connect_ph;
    // Bind the TLM ports via tlm_bind
    mon.m_ap.tlm_bind(cov.m_export);
  endfunction

endclass

To make the coverage model truly configurable, we need to look at some of the other key requirements as well, at different levels of granularity. These can be summarized as the ability to do the following:

1. Enable/disable coverage collection for each covergroup defined. Every covergroup should be created only if the user wishes to do so, so there should be a configuration parameter which restricts the creation of the covergroup altogether. This parameter should also be used to control the sampling of the covergroup.

2. The user must be able to configure the limits on the individual values being covered in the coverage model within a legal set of values. Say, for example, the transaction field BurstLength – the user should be able to guide the model on the limits of this field for which coverage is desired, within the legal range of '1' to '16' as per the AXI spec. Providing lower and upper limits for transaction parameters like burst size, burst length, address, etc. makes the model reusable. These limits should be modeled as variables which can be overwritten dynamically.

3. The user should be able to control the number of bins to be created, for example for fields like address. The auto_bin_max option can be exploited to achieve this when the user hasn't explicitly defined bins.

4. The user must be able to control the number of hits for which a bin can be considered covered. option.at_least can be used for this purpose, and its input can be a user-defined parameter.

5. The user should also have the control to specify the coverage goal, i.e. when the coverage collector should show the covergroup as "covered" even though the coverage is not 100%. This can be achieved by using option.goal, where the goal is again a user-defined parameter (see the sketch below).
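A sketch of how these knobs might drive a covergroup is shown below; the field and parameter names are illustrative, not taken from the original article.

covergroup axi_tr_cg (coverage_cfg cfg) with function sample(int unsigned len, bit [31:0] addr);
  option.per_instance = 1;
  option.goal         = cfg.cov_goal;        // requirement 5
  option.at_least     = cfg.cov_at_least;    // requirement 4

  BURST_LEN: coverpoint len {
    // requirement 2: only the user-configured legal range is covered
    bins legal[] = { [cfg.burst_len_min : cfg.burst_len_max] };
  }
  ADDR: coverpoint addr {
    // requirement 3: cap the number of automatically created bins
    option.auto_bin_max = cfg.addr_max_bins;
  }
endgroup

// requirement 1: in the coverage class constructor, create (and later
// sample) the covergroup only when it is enabled
if (cfg.enable_axi_tr_cov) axi_tr_cg = new(cfg);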

All the parameters required to meet the above requirements can be encapsulated in a class (i.e. a coverage configuration class), and this can be set and retrieved in a fashion similar to that described for setting and getting the interface wrapper class using the vmm_opts APIs.

class coverage_cfg extends vmm_object;
  int disable_wr_burst_len;
  . . .
  function new(vmm_object parent=null, string name);
    super.new(parent, name);
  endfunction
endclass

// In the coverage model class, the configuration object is retrieved as follows:
coverage_cfg cfg;

function new(vmm_object parent=null, string name);
  bit is_set;
  super.new(parent, name);
  $cast(cfg, vmm_opts::get_object_obj(is_set, this, "COV_CFG_OBJ"));
endfunction

Wei Hua presents another cool mechanism for collecting these parameters using the vmm_notification mechanism in this earlier blog:

A Generic Functional Coverage Solution Based On vmm_notify

Hope you found this useful. I will be talking about how to track Error Coverage in my next blog, so stay tuned!

Posted in Communication, Configuration, Coverage, Metrics, Reuse, Structural Components, VMM, VMM infrastructure | Comments Off

Building & Configuring Coverage Model – VMM Style – Part-I

Posted by paragg on 24th June 2011

Parag Goel, Senior Corporate Application Engineer, Synopsys

To minimize wasted effort, coverage is used as a guide for directing verification resources by identifying tested and untested portions of the design.”

- IEEE Standard for System Verilog (IEEE Std. 1800-2009)

Configurability and reusability are the buzz words in chip verification, and they are enabled to a large extent by present-day verification methodologies. Through a set of blogs, I plan to show how we can create configurable coverage models in VMM-based environments. Given that AMBA AXI is one of the most commonly used protocols in the industry for communication amongst SoC peripherals, I chose an AXI-based framework for my case study.

The idea here is to create a configurable coverage model leveraging some of the base classes provided in the methodology so that we can make it completely reusable as we move from block to system level or across projects. Once we enable that, we can move the coverage model inside the sub-environment modeled by vmm_group or vmm_subenv, which are the units of reuse.


Primary Requirements of Configuration Control:

Two important requirements that are needed to be met to ensure that the coverage model is made a part of reusable components are:

1. Ability to enable/disable the coverage model whenever required.

2. Ability to turn ON/OFF different subgroups at the desired granularity. For example, a user may not always want the error coverage to be enabled, unless under specific circumstances.

To meet the above requirements, we make use of the VMM global and hierarchical configurations.

Through the vmm_opts base class, VMM provides a mechanism to control the configuration parameters of a verification environment. This can be done in a hierarchical as well as in a global manner, using the vmm_opts set_*/get_* APIs (for example set_int/get_int and set_object/get_object), as the examples below show.


In the environment, coverage_enable is set to 0 by default, i.e. coverage is disabled.

coverage_enable = vmm_opts::get_int("coverage_enable", 0);

Now, the user can enable the coverage via either of the two mechanisms.

1. From user code using vmm_opts.

The basic rule is that you need to 'set' an option *before* the 'get' is invoked, and during the time when the construction of the components takes place. As a general recommendation, for the construction of structural configuration, the build phase is the most appropriate place.
function void axi_test::build_ph();
  // Enable Coverage.
  vmm_opts::set_int("@%*:axi_subenv:enable_coverage", 1);
endfunction

2. From the command line or an external option file. The option is specified using the command-line +vmm_<name> or +vmm_opts+<name> switches.
./simv +vmm_opts+enable_coverage=1@axi_env.axi_subenv

The command line supersedes the option set within code as shown in 1.

Users can also specify options for specific instances or hierarchically using regular expressions.


Now let’s look at the typical classification of a coverage model.

From the perspective of AXI protocol, we can look at the 4 sub-sections.

Transaction coverage: coverage definition on the user-controlled parameters usually defined in the transaction class & controlled through sequences.

Error coverage: coverage definition on the pre-defined error injection scenarios

Protocol coverage: This is protocol specific (AXI handshake coverage). In the case of AXI, it is mainly coverage on the handshake signals, i.e. READY & VALID, on all five channels.

Flow coverage: This is again protocol specific and for AXI it covers various features like, outstanding, inter-leaving, write data before write address etc…


At this point, let's look at how these different sub-groups within the complete coverage model can be enabled or disabled. Once the coverage configuration class is built and passed on to the main coverage model, we need fine-grained control to enable/disable individual coverage models. The code shows how the user can control all the coverage models in the build phase of the main coverage class.

Here too, we can see how vmm_opts is used to meet the requirement of controlling individual parameters.

vmm_opts::set_int("@%*:disable_transaction_coverage", 0);
vmm_opts::set_int("@%*:disable_error_coverage", 0);
vmm_opts::set_int("@%*:disable_axi_handshake_coverage", 0);
vmm_opts::set_int("@%*:disable_flow_coverage", 0);

In my next blog, I will show how hierarchical VMM configurations are used to dynamically pass signal-level and other configuration-related information to the coverage model. We shall also discuss the usage of the VMM TLM features towards fulfilling the goal of a configurable coverage model. Stay tuned!

Posted in Configuration, Coverage, Metrics, Organization, Reuse, SystemVerilog, VMM, VMM infrastructure | Comments Off

Reusing Your Block Level Testbench

Posted by JL Gray on 12th November 2010

JL Gray, Vice President, Verilab, Austin, Texas, and Author of Cool Verification

Building reusable testbench components is a desirable, but not always achievable goal on most verification projects. Engineers love the idea of not having to rewrite the same code over and over again, and managers love the idea that they can get more work out of their existing team by simply reusing code within a project and/or between projects. It is interesting, then, that I frequently encounter code written by engineers that cannot work in a reusable way.

Consider the following scenario. You’ve just coded up a new block level testbench to verify an OCP to AHB bridge. As any good engineer knows, your environment must be self-checking, so you create a VMM scenario that does the following:

class my_dma_scenario extends vmm_scenario;

  rand ocp_trans ocp;
  ahb_trans ahb;

  // ... 

  virtual task execute(ref int n);
    vmm_channel ocp_chan = this.get_channel("OCP");
    vmm_channel ahb_chan = this.get_channel("AHB");
    this.ocp.randomize();
    ocp_chan.put(this.ocp.copy());

    // Wait for the transaction to complete...

    ahb_chan.get(ahb);

    // Compare actual and expected values
    // in the stimulus instead of the scoreboard.
    my_compare_function(ocp, ahb);

  endtask

endclass

Or perhaps you decide to get fancy and create a situation where you add the expected transaction from the scenario directly to a scoreboard, which compares the results with transactions on the AHB interface as observed by a monitor. Either way, you go merrily about your business verifying the bridge. You get excellent coverage numbers, and eventually, with the help of the block's designer, you feel you've fully verified the module at the block level. However, things are not going well in the full chip environment. Full chip tests are failing in unexpected ways, and other engineers debugging the issue feel it must be a bug in the bridge! They start assigning bugs to you and the block designer. If only you had some checkers operating at the full chip level you could prove your module was operating correctly…

Unfortunately, you have a problem. In your block level testbench, your checkers only work in the presence of stimulus, and in the full chip environment the stimulus components cannot be used. And at this stage of the project, there is no time to go back and create the additional infrastructure (monitor(s) and possibly a new scoreboard) required to run your self-checking module-level testbench at the full chip level.

Fortunately, there is another way to build a module level testbench so that it will be guaranteed to work at the full chip. Follow these simple rules:

  1. Always create a monitor, driver, and scenario generator for every testbench component.
  2. Never populate a scoreboard from a driver or scenario. Always pull in this information from a passive monitor.

In general, when writing your testbench components always assume they will need to run in the absence of testbench stimulus. Yes – this means additional up-front work. Writing checkers that work with passive monitors instead of the information you have available to you in the driver can be time consuming. Often, though, the amount of additional work required is much less than the time you will spend debugging issues without checkers at the full chip level. That being said, sometimes it is too difficult to implement purely passive checkers. You will have a good idea of whether or not this is the case based on the upfront work you did writing a comprehensive testplan. You do have a testplan, right?

Posted in Reuse, VMM, Verification Planning & Management | 3 Comments »

Blocking/Non-blocking Transport Adaption in VMM 1.2

Posted by John Aynsley on 1st September 2010

John Aynsley, CTO, Doulos

One neat feature of the SystemC TLM-2.0 standard is the ability provided by the so-called simple target socket to perform automatic adaption between the blocking and non-blocking transport calls; an initiator that calls b_transport can be connected to a target that implements nb_transport, and vice-versa. VMM 1.2 provides similar functionality using the vmm_connect utility.

vmm_connect serves four distinct purposes in VMM 1.2.

•    Connecting one channel port to another channel port, taking account of whether each channel port is null or actually refers to an existing channel object
•    Connecting a notify observer to a notification
•    Binding channels to transaction-level ports and exports
•    Binding a transaction-level port to a transaction-level export where one uses the blocking transport interface and the other the non-blocking transport interface. This is the case we are considering here.

When making transaction-level connections I think of vmm_connect as covering the “funny cases”.

To illustrate, suppose we have a transactor that acts as an initiator and calls b_transport. The following code fragment also serves as a reminder of how to use the blocking transport interface in VMM:

class initiator extends vmm_xactor;
`vmm_typename(initiator)

vmm_tlm_b_transport_port  #(initiator, vmm_tlm_generic_payload) m_b_port;

vmm_tlm_generic_payload randomized_tx = new;

virtual task run_ph;
int delay;
forever begin: loop
vmm_tlm_generic_payload tx;

// Create valid generic payload
assert( randomized_tx.randomize() with {…} )
else `vmm_error(log, "tx.randomize() failed");

$cast(tx, randomized_tx.copy());

// Send copy through port
m_b_port.b_transport(tx, delay);

// Check response status
assert( tx.m_response_status == vmm_tlm_generic_payload::TLM_OK_RESPONSE );
end
endtask: run_ph
endclass: initiator

Further, suppose we have another transactor that acts as a target for nb_transport calls, and which therefore implements nb_transport_fw to receive the request and subsequently calls nb_transport_bw to send the response. The code fragment below is just an outline, but it does show the main idea that non-blocking transport allows timing points to be modeled by having multiple method calls in both directions (as opposed to a single method call in one direction for b_transport):

class target extends vmm_xactor;
`vmm_typename(target)

vmm_tlm_nb_transport_export #(target, vmm_tlm_generic_payload) m_nb_export;

vmm_tlm_generic_payload tx;   // request saved for the response phase
event ev;

// Implementation of nb_transport method
virtual function vmm_tlm::sync_e nb_transport_fw(
    int id=-1, vmm_tlm_generic_payload trans,
    ref vmm_tlm::phase_e ph, ref int delay);

  tx = trans;                 // remember the request
  -> ev;                      // wake the response process
  return vmm_tlm::TLM_ACCEPTED;
endfunction : nb_transport_fw

// Process to send response by calling nb_transport on backward path
virtual task run_ph;
  forever begin: loop
    vmm_tlm::phase_e phase = vmm_tlm::BEGIN_RESP;
    vmm_tlm::sync_e  status;
    int              delay;
    @(ev);
    tx.m_response_status = vmm_tlm_generic_payload::TLM_OK_RESPONSE;
    status = m_nb_export.nb_transport_bw(tx, phase, delay);
  end
endtask: run_ph

Now for the main point. We can use vmm_connect to bind the two components together, despite the fact that one uses blocking calls and the other non-blocking calls:

virtual function void build_ph;
m_initiator = new( "m_initiator", this );
m_target    = new( "m_target",    this );
endfunction: build_ph

virtual function void connect_ph;

vmm_connect #(.D(vmm_tlm_generic_payload))::tlm_transport_interconnect(
m_initiator.m_b_port, m_target.m_nb_export, vmm_tlm::TLM_NONBLOCKING_EXPORT);

endfunction: connect_ph

That’s all there is to it! Notice that tlm_transport_interconnect is a static method of vmm_connect so its name is prefixed by the scope resolution operator and an instantiation of the vmm_connect parameterized class with the appropriate transaction type, namely the generic payload. The first argument is the port, the second argument the export, and the third argument indicates that it is the export that is non-blocking.
It is also possible to connect a non-blocking initiator to a blocking target: you would simply replace TLM_NONBLOCKING_EXPORT with TLM_NONBLOCKING_PORT. The objective of this post has been to show that VMM has all the bases covered when it comes to connecting together transaction-level ports.
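
For illustration, a sketch of that reverse case might look like this (m_nb_initiator and m_b_target are hypothetical transactors with a non-blocking port m_nb_port and a blocking export m_b_export respectively):

virtual function void connect_ph;

  // Non-blocking initiator bound to a blocking target: the third argument
  // now identifies the port as the non-blocking side of the connection.
  vmm_connect #(.D(vmm_tlm_generic_payload))::tlm_transport_interconnect(
    m_nb_initiator.m_nb_port, m_b_target.m_b_export, vmm_tlm::TLM_NONBLOCKING_PORT);

endfunction: connect_ph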

Posted in Interoperability, Reuse, SystemC/C/C++, Transaction Level Modeling (TLM) | Comments Off

Vaporware, Slideware or Software?

Posted by Andrew Piziali on 27th August 2010

The Role of the Technical Marketing Engineer in Verification

by Andrew Piziali, Independent Consultant

In our previous blog posts on the subject of verification for designers we addressed the role of the architect, software engineer and system level designer. We now turn our attention to perhaps the least understood—and oftentimes most vilified—member of the design team, the technical marketing engineer. But, before we explain why, what is the role of the technical marketing engineer? After all, not all companies have such a position.

The technical marketing engineer is responsible for determining customer product requirements and ensuring that these requirements are satisfied in the delivered product. They typically build product prototypes—initially slideware and later rudimentary functional code—that are evaluated by the customer while refining requirements. With each iteration the customer and engineer come closer to understanding precisely what the product must do and the constraints under which it must operate. This role differs from other traditional marketing functions such as inbound and outbound communication. Given that such a position exists, how does the technical marketing engineer contribute to the functional verification of a design?

Functional Verification Process

Although there are many definitions of functional verification, my favorite is one I recorded at Convex Computer Corporation some twenty years ago: “Demonstrate that the intent of a design is preserved in its implementation.” It is short, colloquial and simple:

  • Demonstrate — Illustrate but not necessarily prove
  • Intent — What behaviors are desired?
  • Design — The various representations of the product before it is ultimately realized and shipped
  • Preserve — Prevent corruption, scrambling and omission
  • Implementation — Final realization of the product for the customer

The technical marketing engineer plays a key role in this process as demonstrated by one personal experience of mine in this role.

One company I worked for offered a product that assisted in producing clean, efficient, bug-free code through pre-compilation analysis. In this environment you could navigate through your code within its module (file) structure, as well as within its object structure. However, the programming language supported not only objects (language elements that are defined, inherited and instantiated) but also aspects (language elements that group object extensions into common concerns). I proposed adding an aspect browser to the product as a natural extension of its existing navigation facilities. The challenge was inferring the aspect structure of a program from its file and object structure, because an aspect is not explicitly identified as such in the program.

Slide Presentation Model Use

I put together a slide presentation that illustrated the two existing navigation paradigms, as well as the proposed aspect navigation. The feature looked great from a user interface perspective, but could it be implemented? Since heuristics could be employed to identify each aspect, I also illustrated each heuristic and its application by way of sample code and its structure. This animated slide presentation served as the first prototype demonstration of the aspect browser for the product. When it was reviewed with existing users, they were able to provide valuable feedback about the new feature, its utility and its limitations. When the programmer implementing the feature subsequently referenced it, it served as a rough executable specification.

Returning to the vilified technical marketing engineer, why are some poor souls subject to this criticism? More often than not, the marketing engineer promised more capability, performance or features than the developers could deliver. It is easy to “PowerPoint” a feature that cannot be implemented, so the marketing engineer must walk far enough down the implementation path to understand what is feasible. Those who do will likely avoid this charge and remain a perceived asset to the design team. Moreover, their employer will retain a reputation for delivering quality “product-ware,” not vaporware or slideware!

Posted in Organization, Reuse, Verification Planning & Management | Comments Off

TLM and VMM – What is it all for?

Posted by John Aynsley on 18th August 2010

John Aynsley, CTO, Doulos

Having discussed some of the technical details of the implementation of the TLM-2 standard in VMM 1.2 in previous posts, it is time to stand back and ask what it is all for, especially considering that VMM must now co-exist alongside UVM in some projects. Fortunately, using the OSCI TLM standard as a basis for communication takes us a long way down the road toward interoperability, both between VMM and UVM and between VMM and SystemC.

We have seen that VMM 1.2 now supports the concept of ports and exports borrowed from SystemC and the closely related concept of sockets from the TLM-2 standard. Armed with these new concepts, VMM now offers two alternative communication mechanisms: communication between a producer and a consumer through an intermediate channel (the vmm_channel), and communication using direct function calls from producer to consumer (and vice versa) through ports and sockets. So what? Well, the benefits of direct communication were reviewed in an earlier post, but in summary, the simplicity of direct communication can speed up simulation by reducing the number of context switches between processes and can provide an easy-to-follow completion model for knowing when a transaction is over. VMM users now have the choice between channel-based communication and direct communication. Channel-based communication works just fine, and may be preferred in native VMM environments. Direct communication is closer to the communication model used in UVM and in SystemC TLM-2.0, so may be preferred when working in a mixed environment, such as when integrating VMM and UVM components or when driving SystemC models from a VMM test bench.

We have also seen that VMM 1.2 supports the generic payload and extension mechanism from the SystemC TLM-2.0 standard. Frankly, this addition is unlikely to be of much interest in native VMM environments, but could be critical when it comes to including SystemC reference models in a VMM test bench. Say you have an existing SystemC reference model written to the TLM-2.0 standard, and you want to drive it from a VMM test bench. The new features in VMM 1.2 would allow you to construct a transaction within the VMM environment that has precisely the attributes expected by the TLM-2.0 standard generic payload, and then to pass that transaction over the fence from SystemVerilog to SystemC. It is possible to code the SystemVerilog-to-SystemC interface yourself using the DPI (for details see http://www.doulos.com/knowhow/sysverilog/DVCon10_dpi_paper) or to use the TLI (Transaction Level Interface) provided by Synopsys. I will say more about the TLI in a later post.

I have heard some VMM users say that they like the idea of being able to use the industry standard blocking transaction-level interface in VMM. Indeed, given the relatively small size of the overall hardware verification community and the cost of maintaining multiple standards, the sharing of ideas between standards in this way has to be a good thing. The early adopter release of UVM, the Universal Verification Methodology from Accellera, uses communication based on the SystemC TLM-1 standard. Even as I write this post, Accellera are considering the best way to incorporate the TLM-2-style interfaces as used in VMM into UVM, which should help make transaction-level communication between the two methodologies a little easier.

Posted in Reuse, SystemC/C/C++, Transaction Level Modeling (TLM) | Comments Off

Non-blocking Transport Communication in VMM 1.2

Posted by John Aynsley on 24th June 2010

John Aynsley, CTO, Doulos

When discussing the TLM-2.0 transport interfaces my posts on this blog have referred to the blocking transport interface alone. Now it is time to take a brief look at the non-blocking transport interface of the TLM-2.0 standard, which offers the possibility of much greater timing accuracy.

The blocking transport interface is restricted to exactly two so-called timing points per transaction, marked by the call to and the return from the b_transport method, and by convention corresponding to the start of the transaction and the arrival of the response. The non-blocking transport interface, on the other hand, allows a transaction to be modeled with any number of timing points so it becomes possible to distinguish between the start and end of an arbitration phase, address phase, write data phase, read data phase, response phase, and so forth.

image

As shown in the diagram above, b_transport is only called in one direction from initiator to target, and the entire transaction is completed in a single method call. nb_transport, on the other hand, comes in two flavors: nb_transport_fw, called by the initiator on the so-called forward path, and nb_transport_bw, called by the target on the backward path. Whereas b_transport is blocking, meaning that it may execute a SystemVerilog event control, nb_transport is non-blocking, meaning that it must return control immediately to the caller. A single transaction may be associated with multiple calls to nb_transport in both directions, the actual number of calls (or phases) being determined by the protocol being modeled.

With just one call per transaction, b_transport is the simplest to use: it is fast and simple, but timing-inaccurate. nb_transport offers greater timing accuracy and supports multiple pipelined transactions, with multiple method calls in both directions per transaction, but it is slower and considerably more complicated to use.

In VMM the role of the TLM-2.0 non-blocking transport interface is usually played by vmm_channel, which allows multiple timing points per transaction to be implemented using the notifications embedded within the channel and the vmm_data transaction object. The VMM user guide still recommends vmm_channel for this purpose. nb_transport is provided in VMM for interoperability with SystemC models that use the TLM-2.0 standard.

Let’s take a quick look at a call to nb_transport, just so we can get a feel for some of the complexities of using the non-blocking transport interface:


class initiator extends vmm_xactor;
`vmm_typename(initiator)
vmm_tlm_nb_transport_port #(initiator, vmm_tlm_generic_payload) m_nb_port;

virtual task run_ph;
  forever begin: loop
    vmm_tlm_generic_payload tx;
    int                     delay;
    vmm_tlm::phase_e        phase;
    vmm_tlm::sync_e         status;

    // Create and randomize tx, then start the request phase
    phase  = vmm_tlm::BEGIN_REQ;

    status = m_nb_port.nb_transport_fw(tx, phase, delay);
    if (status == vmm_tlm::TLM_UPDATED) begin
      // The target updated the phase/delay arguments: an extra timing point
    end
    else if (status == vmm_tlm::TLM_COMPLETED) begin
      // The transaction has jumped to its final phase; nothing more to do
    end
  end
endtask: run_ph

From the above, you will notice some immediate differences with b_transport. The call to nb_transport_fw takes a phase argument to distinguish between the various phases of an individual transaction and returns a status flag which signals how the values of the arguments are to be interpreted following the return from the method call. A status value of TLM_ACCEPTED indicates that the transaction, phase, and delay were unchanged by the call, TLM_UPDATED indicates that the return from the method call corresponds to an additional timing point and so the values of the arguments will have changed, and TLM_COMPLETED indicates that the transaction has jumped to its final phase.
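
Note also that because the target sends its response by calling nb_transport_bw, an initiator using the non-blocking interface must implement that method as well. A minimal sketch, mirroring the nb_transport_fw signature shown earlier (the response handling itself is left as a comment):

// Sketch: backward-path implementation inside the initiator
virtual function vmm_tlm::sync_e nb_transport_bw(
    int id = -1, vmm_tlm_generic_payload trans,
    ref vmm_tlm::phase_e ph, ref int delay);

  // e.g. check trans.m_response_status and mark the transaction as finished
  return vmm_tlm::TLM_COMPLETED;   // accept the response; transaction is over
endfunction : nb_transport_bw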

Using nb_transport is not recommended except when interfacing to a SystemC model, because its rules are considerably more complicated than those of either b_transport or vmm_channel.

Posted in Communication, Interoperability, Reuse, Transaction Level Modeling (TLM) | Comments Off

Generic Payload Extensions in VMM 1.2

Posted by John Aynsley on 22nd June 2010

John Aynsley, CTO, Doulos

In a previous post I described the TLM-2 generic payload as implemented in VMM 1.2. In this post I focus on the generic payload extension mechanism, which allows any number of user-defined attributes to be added to a generic payload transaction without any need to change its data type.

Like all things TLM-2, the motivation for the TLM-2.0 extension mechanism arose in the world of virtual platform modeling in SystemC. There were two requirements for generic payload extensions: firstly, to enable a transaction to carry secondary attributes (or meta-data) without having to introduce new transaction types, and secondly, to allow specific protocols to be modeled using the generic payload. In the first case, introducing new transaction types would have required the insertion of adapter components between sockets of different types, whereas extensions permit meta-data to be transported through components written to deal with the generic payload alone. In the second case, extensions enable specific protocols to be modeled on top of the generic payload, which makes it possible to create very fast, efficient bridges between different protocols.

Let us have a look at an example that adds a timestamp to a VMM generic payload transaction using the extension mechanism. The first task is to define a new class to represent the user-defined extension:

class my_extension extends vmm_tlm_extension #(my_extension);

int timestamp;

`vmm_data_new(my_extension)
`vmm_data_member_begin(my_extension)
`vmm_data_member_scalar(timestamp, DO_ALL)
`vmm_data_member_end(my_extension)

endclass


The user-defined extension class extends vmm_tlm_extension, which should be parameterized with the name of the user-defined extension class itself, as shown. The extension can contain any number of user-defined class properties; this example contains just one, the timestamp.

The initiator of the transaction will create a new extension object, set the value of the extension, and add the extension to the transaction before sending the transaction out through a port:


class initiator extends vmm_xactor;
`vmm_typename(initiator)

vmm_tlm_b_transport_port #(initiator, vmm_tlm_generic_payload) m_port;

vmm_tlm_generic_payload randomized_tx;

virtual task run_ph;
  forever begin: loop
    vmm_tlm_generic_payload tx;
    int delay;
    my_extension ext = new;

    $cast(tx, randomized_tx.copy());
    ext.timestamp = $time;
    tx.set_extension(my_extension::ID, ext);
    m_port.b_transport(tx, delay);
  end
endtask: run_ph


Note the use of the extension ID in the call to the method set_extension: each extension class has its own unique ID, which is used as an index into an array-of-extensions within the generic payload transaction.

Any component that receives the transaction can test for the presence of a given extension type and then retrieve the extension object, as shown here:


class target extends vmm_xactor;
`vmm_typename(target)

vmm_tlm_b_transport_export #(target, vmm_tlm_generic_payload) m_export;

virtual task b_transport(int id = -1, vmm_tlm_generic_payload trans, ref int delay);

  my_extension ext;

  $cast(ext, trans.get_extension(my_extension::ID));

  if (ext != null)
    $display("Target received transaction with timestamp = %0d", ext.timestamp);
endtask: b_transport


Note that once again the extension type is identified by using its ID in the call to method get_extension. If the given extension object does not exist, get_extension will return a null object handle. If the extension is present, the target can retrieve the value of timestamp and, in this example, print it out.

The neat thing about the extension mechanism is that a transaction can carry extensions of many types simultaneously, and those transactions can be passed to or through transactors that may not know of the existence of particular extensions.

image

In the diagram above, the initiator sets an extension that is passed through an interconnect, where the interconnect knows nothing of that extension. The interconnect adds a second extension to the transaction that is only known to the interconnect itself and is ignored by the other transactors.
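
As a rough sketch of how the interconnect might do this, using the same macros as the timestamp example above (the interconnect_extension class and its route_id field are purely illustrative):

// Illustrative only: a second extension type, private to the interconnect
class interconnect_extension extends vmm_tlm_extension #(interconnect_extension);

int route_id;

`vmm_data_new(interconnect_extension)
`vmm_data_member_begin(interconnect_extension)
`vmm_data_member_scalar(route_id, DO_ALL)
`vmm_data_member_end(interconnect_extension)

endclass

// Inside the interconnect, before forwarding the transaction:
//   interconnect_extension ic_ext = new;
//   ic_ext.route_id = chosen_output_port;   // private bookkeeping
//   trans.set_extension(interconnect_extension::ID, ic_ext);
// The initiator's my_extension travels through untouched, and transactors
// that know nothing about interconnect_extension simply ignore it.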

And the point of all this? The generic payload extension mechanism in VMM will permit transactions to be passed to SystemC virtual platform models, where the TLM-2.0 extension mechanism is heavily used.

Posted in Communication, Interoperability, Reuse, Transaction Level Modeling (TLM) | Comments Off

Diagnosing Transaction-Level Connections in VMM 1.2

Posted by John Aynsley on 7th June 2010

John Aynsley, CTO, Doulos

In my previous post on this blog I discussed hierarchical transaction-level connections in VMM 1.2. In this post I show how to remove connections, and also discuss the various diagnostic methods that can help when debugging connection issues.

Actually, removing transaction-level connections is very straightforward. Let’s start with a consumer having an export that permits multiple bindings:

class consumer extends vmm_xactor;

vmm_tlm_b_transport_export #(consumer, vmm_tlm_generic_payload) m_export;

function new (string inst, vmm_object parent = null);
super.new(get_typename(), inst, -1, parent);
m_export = new(this, "m_export", 4);
endfunction: new

Passing the value 4 as the third argument to the constructor permits up to four different ports to be bound to this one export. These ports can be distinguished using their peer id, as described in a previous blog post.

function void start_of_sim_ph;
  vmm_tlm_port_base #(vmm_tlm_generic_payload) q[$];
  m_export.get_peers(q);
  m_export.tlm_unbind(q[0]);
endfunction: start_of_sim_ph

TLM ports have a method get_peer (singular) that returns the one-and-only export bound to that particular port. TLM exports have a similar method get_peers (plural) that returns a SystemVerilog queue containing all the ports bound to that particular export. The method tlm_unbind can then be called to remove a particular binding, as shown above.
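
For example, on the port side of a binding you could report where a given port is bound (a small sketch; m_port here stands for any bound transport port):

// Sketch: m_port is assumed to be a bound vmm_tlm_b_transport_port
$display("m_port is bound to %s",
         m_port.get_peer().get_object_hiername());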

There are several other methods that can be helpful when diagnosing connection problems. For example, the method get_n_peers returns the number of ports bound to a given export:

$display("get_n_peers() = %0d", m_export.get_n_peers());

There are also methods for getting a peer id from a peer, and vice-versa, as shown in the following code which loops through the entire queue of peers returned from get_peers:

m_export.get_peers(q);
foreach(q[i])
begin: blk
int id;
id = m_export.get_peer_id(q[i]);
$write("id = %0d", id);
$display(", peer = %s", m_export.get_peer(id).get_object_hiername());
end

In addition to these low-level methods that allow you to interrogate the bindings of individual ports and exports, there are also methods to print and check the bindings for an entire transactor. The methods are print_bindings, check_bindings and report_unbound, which can be called as follows:

class tb_env extends vmm_group;

virtual function void start_of_sim_ph;
$display("\n-------- print_bindings --------");
vmm_tlm::print_bindings(this);
$display("-------- check_bindings --------");
vmm_tlm::check_bindings(this);
$display("-------- report_unbound --------");
vmm_tlm::report_unbound(this);
$display("--------------------------------");
endfunction: start_of_sim_ph

endclass: tb_env

print_bindings prints information concerning the binding of every port and export below the given transactor. check_bindings checks that every port and export has been bound at least the specified minimum number of times. report_unbound generates warnings for any unbound ports or exports, regardless of the specified minimum.

In summary, VMM 1.2 allows you easily to check whether all ports and exports have been bound, whether a given port or export has been bound, to find the objects to which a given port or export has been bound, and even to remove the bindings when necessary.

Posted in Communication, Interoperability, Reuse, Transaction Level Modeling (TLM) | Comments Off

Using RAL callbacks to model latencies within complex systems

Posted by Amit Sharma on 20th May 2010

Varun S, CAE, Synopsys

In very complex systems there may be cases where a write to a register is delayed by blocks internal to the system, while the block sitting on the periphery signals a write complete. The RAL BFM is generally connected to the peripheral block, and as soon as it sees the write complete from the peripheral block it updates RAL’s shadow registers. But due to latencies within the internal blocks, the state of the physical register may not have changed yet, so the value in the real register and the shadow register will not match for some time. This could cause problems if another component happens to read from the register: there would be a mismatch, as the shadow register reflects the new value which has not yet reached the register, whereas the DUT register still holds the old value.

To prevent such mismatches one can use the callback methods of the vmm_rw_xactor_callbacks class and model the delay within the relevant callback task. The following methods are present within the callback class:

1. pre_single()  – This task is invoked before execution of a single cycle i.e., before execute_single().
2. pre_burst()   – This task is invoked before execution of a burst cycle i.e., before execute_burst().
3. post_single() – This task is invoked after execution of a single cycle i.e., after execute_single().
4. post_burst()  – This task is invoked after execution of a burst cycle i.e., after execute_burst().

In the scenario described above it would be ideal if we introduced a delay such that the update of RAL’s shadow register is simultaneous with the state change of the real register. For this we need a monitor that sits very close to the register and correctly indicates its state changes. Within the post_single() method of the callback we can wait until the monitor senses a change and then exit the task. This delays the shadow register from being updated until the real register changes its state. The sample code below gives an illustration of how this is done.

//———————————————————————————————————————-

class my_rw_xactor_callbacks extends vmm_rw_xactor_callbacks;

my_monitor_channel mon_activity_chan;

function new(my_monitor_channel mon_activity_chan);
this.mon_activity_chan = mon_activity_chan;
endfunction

virtual task post_single(vmm_rw_xactor xact, vmm_rw_access tr);

  my_monitor_transaction mon_tr;

  while (1) begin
    mon_activity_chan.get(mon_tr);   // wait for the monitor to observe an access
    if (mon_tr.kind == vmm_ral::WRITE)
      if (mon_tr.addr == tr.addr)    // state change seen for the register being written
        break;
  end
endtask

endclass : my_rw_xactor_callbacks

//———————————————————————————————————————-

In the example code above the callback class receives a handle to the monitor’s activity channel, which is then used within the post_single() task to wait until the monitor observes a state change on the register. A check is made using the address of the register to confirm that the state change was for the register in question. Real scenarios can get much more complex, with multiple domains, drivers, and so on; the additional complexity can be handled within the post_single() callback task using the monitor or monitors that observe the interfaces through which the register is written.

RAL also has a built-in mechanism known as auto mirroring to automatically update the mirror whenever the register changes inside the DUT through a different interface. One can refer to the latest RAL User Guide to get more details on its usage.

Posted in Register Abstraction Model with RAL, Reuse | Comments Off

Hierarchical Transaction-Level Connections in VMM 1.2

Posted by John Aynsley on 19th May 2010

John Aynsley, CTO, Doulos

In a previous blog post I described ports and exports in VMM 1.2, and explored the issue of binding a port of one transactor to the export of another transactor, where the two transactors are peers. Now let us look at how we can make hierarchical connections between ports and exports in the case where one transactor is nested within another.

Let’s start with ports. Suppose we have a producer that sends transactions out through a port and is nested inside another transactor:

class producer extends vmm_xactor;
vmm_tlm_b_transport_port #(producer, vmm_tlm_generic_payload) m_port;

virtual task run_ph;

  vmm_tlm_generic_payload tx;
  int delay;

  m_port.b_transport(tx, delay);
endtask: run_ph

This transactor calls the b_transport method to send a transaction out through a port. So far, so good. Now let’s look at the parent transactor:

class producer_parent extends vmm_xactor;

vmm_tlm_b_transport_port #(producer_parent, vmm_tlm_generic_payload) m_port;

producer  m_producer;

virtual function void build_ph;
m_producer = new( "m_producer", this );
endfunction: build_ph

The producer’s parent also has a port, through which it wishes to send the transaction out into its environment. The port on the producer must be bound to the port on the parent. This is done in the connect phase:

virtual function void connect_ph;
m_producer.m_port.tlm_import( this.m_port );
endfunction: connect_ph

Note the use of the method tlm_import in place of tlm_bind to perform the child-to-parent port binding. Why is this particular method named tlm_import? I am tempted to say “Don’t ask!” Whatever name had been selected, somebody would have found it confusing. Of course, the idea is that something is being imported. In this case it is actually the address of the b_transport method that is effectively being imported from the parent (this.m_port) to the child (m_producer.m_port). tlm_import is being called in the sense CHILD.tlm_import( PARENT), which makes sense to me, anyway.

So much for the producer. On the consumer side the situation is very similar so I will cut to the chase:

class consumer_parent extends vmm_xactor;

vmm_tlm_b_transport_export #(consumer_parent, vmm_tlm_generic_payload) m_export;

consumer  m_consumer;

virtual function void connect_ph;
m_consumer.m_export.tlm_import( this.m_export );
endfunction: connect_ph

In this case, the address of the b_transport method is effectively being passed up from the child (m_consumer.m_export) to the parent (this.m_export). In other words, b_transport is being exported rather than imported, but note that it is still the tlm_import method that is being used to perform the binding in the direction CHILD.tlm_import( PARENT ).

Now for the top-level environment, where we instantiate the parent transactors:

class tb_env extends vmm_group;

producer_parent  m_producer_1;
producer_parent  m_producer_2;
consumer_parent  m_consumer;

virtual function void build_ph;
m_producer_1 = new( "m_producer_1", this );
m_producer_2 = new( "m_producer_2", this );
m_consumer   = new( "m_consumer",   this );
endfunction: build_ph

virtual function void connect_ph;
m_producer_1.m_port.tlm_bind( m_consumer.m_export, 0 );
m_producer_2.m_port.tlm_bind( m_consumer.m_export, 1 );
endfunction: connect_ph

endclass: tb_env

This is straightforward. We just use the tlm_bind method to perform peer-to-peer binding between a port and an export at the top level. Note that we are binding two distinct ports to a single export; as explained in a previous blog post, a VMM transactor can accept incoming transactions from multiple sources, distinguished using the value of the second argument to the tlm_bind method.
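
The implementation side can then tell the two producers apart through the id argument of b_transport, which carries the peer id given to tlm_bind. A minimal sketch (assuming the peer id is passed through the hierarchical binding to the consumer’s implementation):

// Sketch: distinguishing transaction sources by peer id
virtual task b_transport(int id = -1, vmm_tlm_generic_payload trans, ref int delay);
  if (id == 0)
    `vmm_note(log, "Transaction from m_producer_1");
  else if (id == 1)
    `vmm_note(log, "Transaction from m_producer_2");
  // ... process the transaction ...
endtask: b_transport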

So, in summary, it is possible to bind TLM ports and exports up, down, and across the transactor hierarchy. Use tlm_bind for peer-to-peer bindings, and tlm_import for child-to-parent bindings.

Posted in Communication, Reuse, Transaction Level Modeling (TLM) | 1 Comment »

Stimulus

Posted by JL Gray on 5th May 2010

by Asif Jafri, verification engineer, Verilab

Atomic Generators

Atomic generators select and randomize transactions based on user constraints. If you do not care about what sequence the transactions are generated, then atomic generators are a good choice.

The code below shows a pcie_config class which defines the read and write transactions. It also has two macros defined that create a channel and the atomic generator to drive the channel.

// filename: pcie_config.sv

class pcie_config extends vmm_data;
typedef enum {Read, Write} kind_e;
rand kind_e instruction;
rand bit [31:0] address;
rand bit [31:0] data;

endclass: pcie_config
`vmm_channel(pcie_config)
`vmm_atomic_gen(pcie_config, "PCIE Configuration Atomic Generator")
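
As a rough sketch of how the macro-generated classes might then be used (the names pcie_config_channel and pcie_config_atomic_gen follow the macro naming convention; the constructor arguments and the surrounding pcie_env class are assumptions for illustration):

// Sketch only: instantiate the generated channel and atomic generator
class pcie_env extends vmm_env;

pcie_config_channel    gen2drv_chan;   // channel filled by the generator
pcie_config_atomic_gen gen;

virtual function void build();
super.build();
gen2drv_chan = new("pcie_config_channel", "gen2drv");
gen = new("pcie_atomic_gen", 0, gen2drv_chan);
gen.stop_after_n_insts = 20;           // generate 20 random transactions, then stop
endfunction: build

endclass: pcie_env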

If you are measuring functional coverage, there are often corner cases, or sequences of events spanning multiple simulation cycles, that need to be covered. The time taken to reach these specific points in the state space can be drastically reduced by using scenario generators to generate the specific sequence of events.

Scenario Generators

Scenario generators are useful if you need to constrain your generator to generate a specific sequence of transactions.

Let’s say to configure your PCIE device you need to follow a series of steps.

1.    Read the status register
2.    Read the control register and
3.    Write to the control register

In this case you can use your pcie_config class once again but now define a macro for its scenario generator instead of the atomic generator.

// filename: pcie_config.sv

`vmm_channel(pcie_config)
`vmm_scenario_gen(pcie_config, "PCIE Configuration Scenario Generator")

Here again a single channel will be connected to this scenario generator. We now need to define the scenario as shown below.

// filename: pcie_gen.sv

class pcie_control_scenario extends pcie_config_scenario;
// Variable to identify the scenario
int pcie_control_scenario_id;

// status_address and control_address are assumed to be register address constants
constraint pcie_control_scenario_items {
  if ($void(scenario_kind) == pcie_control_scenario_id) {
    // number of elements in the scenario
    length == 3;
    // do not repeat the scenario
    repeated == 0;
    foreach (items[i]) {
      if (i == 0) {
        items[i].instruction == pcie_config::Read;
        items[i].address     == status_address;
      } else if (i == 1) {
        items[i].instruction == pcie_config::Read;
        items[i].address     == control_address;
      } else if (i == 2) {
        items[i].instruction == pcie_config::Write;
        items[i].address     == control_address;
      }
    }
  }
}

endclass: pcie_control_scenario

These are also called single-stream scenarios. Scenario generators work well for block-level testbenches, but if we need to control multiple blocks in a system-level testbench to generate specific interactions and improve coverage, multi-stream scenario generators should be used.

Multi-stream Scenario (MSS) Generators

In a typical chip there will be more than one type of peripheral, e.g. PCIE, USB, and Ethernet. Each needs to be controlled separately with its own scenario generator. But there are often times when the interfaces on the PCIE and the USB need to be controlled together. This is where MSS generators are useful. MSS generators can feed transactions to multiple channels, unlike single-stream scenario generators and atomic generators, which can feed only one.

Figure 1: Multi-Stream Scenarios

image

Let’s try to create a MSS with the PCIE scenario and the USB scenario together.

1.    Use the macros to generate the pcie_config channel and scenario

// filename: pcie_config.sv

`vmm_channel(pcie_config)
`vmm_scenario_gen(pcie_config, "PCIE Configuration Scenario Generation")

2.    Use the macros to generate the usb_config channel and scenario

// filename: usb_config.sv

`vmm_channel(usb_config)
`vmm_scenario_gen(usb_config, "USB Configuration Scenario Generation")

3.    Define scenarios for the PCIE and USB that you want to control. An example for the PCIE control scenario is shown in the previous section.

4.    Next, extend vmm_ms_scenario to put the scenarios defined above together, and in the execute() task define how they are used. Directed stimulus for the ETH module can be reused from the block-level testbench by encapsulating it in the MSS: the directed stimulus can be cut-and-pasted into the MSS, or the MSS can directly call an external function to execute the stimulus.

// filename: my_ms_scenario.sv
class my_ms_scenario extends vmm_ms_scenario;

pcie_control_scenario pcie_control_scenario;
usb_control_scenario  usb_control_scenario;

task execute();
  fork
    begin
      pcie_config_channel out_chan;
      // out_chan is assumed to reference the channel registered with the
      // MSS generator under the name "PCIE_SCENARIO_CHANNEL"
      pcie_control_scenario.apply(out_chan);
    end
    begin
      usb_config_channel out_chan;
      // similarly assumed to be the registered "USB_SCENARIO_CHANNEL"
      usb_control_scenario.apply(out_chan);
    end
    begin
      // Directed test code goes here
    end
  join
endtask: execute
endclass: my_ms_scenario

5.    Of course in your top level testbench don’t forget to instantiate your multi-stream scenario and register the various channels that it will use to talk to the PCIE and USB peripherals. You will also need to register the multi-stream scenario generator.

// filename: top_tb.sv

vmm_ms_scenario_gen mss_gen;
my_ms_scenario my_ms_scenario;

pcie_config_channel pcie_config_chan;
usb_config_channel  usb_config_chan;

initial begin
  mss_gen = new("Multi-stream scenario generator");
  my_ms_scenario = new();
  pcie_config_chan = new("PCIE CONFIGURATION CHANNEL", "pcie_config_chan");
  usb_config_chan  = new("USB CONFIGURATION CHANNEL",  "usb_config_chan");

  mss_gen.register_channel("PCIE_SCENARIO_CHANNEL", pcie_config_chan);
  mss_gen.register_channel("USB_SCENARIO_CHANNEL", usb_config_chan);
  mss_gen.register_ms_scenario("MSS_SCENARIO", my_ms_scenario);

  mss_gen.stop_after_n_scenarios = 10;
end

As can be seen in Figure 1, you can combine scenarios to make multi-stream scenarios, or you can also have higher level multi-stream scenario generators calling other multi-stream scenario generators and directed tests.

Posted in Reuse, Stimulus Generation, Tutorial | Comments Off

Sharing RTL Configuration with the Testbench

Posted by JL Gray on 3rd May 2010

by Jason Sprott, CTO, Verilab

Oftentimes we find ourselves with some configurable RTL to verify. The amount of configuration can vary from a few bus width parameters, to a configurable IP block with optional features and performance-related control parameters, to a whole chip with optional interfaces. This can make our life as verification engineers that bit more complicated. We not only have to verify the multitude of scenarios for a specific implementation, we have to somehow handle building and running a testbench for the various RTL configurations. Especially in the case of standalone IP, it’s often the case that different configurations are included in our verification space.

Configurations may affect the physical interface between the testbench and RTL, for example:
•    Bus widths
•    Number of interrupts
•    Number of instances of external interfaces such as USB or Ethernet

Or, the configurations might control internal behavior, which doesn’t affect the physical interface, but does affect the way the test bench has to interact with the DUT functionally. Examples might include:
•    FIFO depth – which may affect data pattern generation to hit corner cases, or performance-related watermarks
•    QoS algorithms – where different algorithm implementations are selected depending on requirements, which could affect traffic generation, checking, or functional coverage

The VMM has a new RTL configuration feature which can make life a bit easier. RTL configurations can now be encapsulated in a testbench object that can be used to generate an output file. This output file can be shared between the RTL and testbench. The steps in the process go something like this:

STEP 1: Compile the RTL (with some default configuration) and testbench. This is needed to run the configuration file generation only. Tests will not be run in the simulation.

STEP 2: Run simulation to generate configuration file

./simv +vmm_rtl_config=<PREFIX> +vmm_gen_rtl_config …

The format of the output file can be customized by extending a companion format class. This determines the output format written and also the parsing of the file back into the testbench. The (simple) format that comes out of the box looks like this:

num_of_mems : 4
has_buffer  : 1

This obviously isn’t RTL code, so if we want our Verilog to understand it, for example converting it to assign parameter values, we have to perform the next step.

STEP 3: Convert output configuration file to verilog params file (or format of your choice, such as IP-XACT)

A script is provided as a demonstration, in the memsys_cntrlr std_lib VMM example.

./cfg2param <config_file>
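
Purely for illustration (the exact output depends on your cfg2param script and on how the parameters are fed to the compile, e.g. via the VCS -parameters switch), the converted file might end up as parameter assignments such as:

// Illustrative only: converted parameter values from the configuration above
parameter int num_of_mems = 4;
parameter bit has_buffer  = 1'b1;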

STEP 4: Re-compile the RTL using the new parameters generated by the configuration, e.g. using -parameters switch in VCS.

STEP 5: Run the simulation executing tests, passing the configuration into the testbench

./simv +vmm_rtl_config=<PREFIX> …

The configuration parameters are encapsulated in the testbench using the vmm_rtl_config class. The output of each variable to the configuration file is implemented using macros. The example below shows the implementation for the class member variables.

class my_cfg extends vmm_rtl_config;
rand int num_of_mems;
rand bit has_buffer;
constraint valid_c {
num_of_mems inside {1,2,4,8};
}

`vmm_rtl_config_begin(my_cfg)
`vmm_rtl_config_int(num_of_mems,num_of_mems)
`vmm_rtl_config_boolean(has_buffer,has_buffer)
`vmm_rtl_config_end(my_cfg)

endclass:my_cfg

On the testbench side the configuration is set using vmm_opts::set_object() in the initial block of the program block of your testbench, and can be retrieved through vmm_opts::get_object().

The beauty of using this method for RTL configuration is that constrained randomization and functional coverage collection can be applied to RTL configurations. In projects where highly configurable IP has to be verified, it’s a convenient way to control and observe the progress of verification, not only across the modes of operation, but also across those modes relating to specific RTL configurations.

Posted in Automation, Configuration, Reuse | 3 Comments »

Combining TLM Ports with VMM Channels in VMM 1.2

Posted by John Aynsley on 23rd April 2010


John Aynsley, CTO, Doulos

In previous posts I described the TLM ports and exports introduced with VMM 1.2. Of course, the vmm_channel still remains one of the primary communication mechanisms in VMM. In this article I explore how TLM ports can be used with VMM channels.

TLM ports and exports support a style of communication between transactors in which a b_transport call made by a producer is implemented within a consumer, thereby allowing a transaction to be passed directly from producer to consumer without any intervening channel. On the other hand, a vmm_channel sits as a buffer between a producer and a consumer, with the producer putting transactions into the tail of the channel and the consumer getting transactions from the head of the channel. The channel deliberately isolates the producer from the consumer.

Each style of communication has its advantages. The b_transport style of communication is fast, direct, and the end-of-life of the transaction is handled independently from the contents and behavior of the transaction object. On the other hand, vmm_channel provides a lot more functionality and flexibility, including the ability to process transactions while they remain in the channel and to support a range of synchronization models between producer and consumer.

In VMM 1.2, support for TLM ports and exports is now built into the vmm_channel class. It is possible to bind a TLM port to a VMM channel such that the channel provides the implementation of b_transport. The goal is to get the best of both worlds: the clean semantics of a b_transport call in the producer, and the convenience of using the active slot in the vmm_channel in the consumer.

An example is shown below. First we need a transaction class and a corresponding channel type:

class my_tx extends vmm_data;   // User-defined transaction

typedef vmm_channel_typed #(my_tx) my_channel;

The producer sends transactions using a b_transport call, knowing that by the time the call returns, the transaction will have been completed:

class producer extends vmm_xactor;

vmm_tlm_b_transport_port #(producer, my_tx) m_port;

virtual task run_ph;
  my_tx tx;
  int delay;

  m_port.b_transport(tx, delay);
endtask: run_ph

The consumer manipulates the transaction while leaving it in the active slot of a vmm_channel and executing the standard notifications:

class consumer extends vmm_xactor;

my_channel m_chan;

my_tx tx;

m_chan.activate(tx);   // Peek at the transaction in the channel
m_chan.start();        // Notification vmm_data::STARTED

m_chan.complete();     // Notification vmm_data::ENDED
m_chan.remove();       // Unblock the producer

The channel is instantiated and connected in the surrounding environment:

class tb_env extends vmm_group;

my_channel  m_tx_chan;
producer    m_producer;
consumer    m_consumer;

virtual function void build_ph;

m_producer = new( "m_producer", this );
m_consumer = new( "m_consumer", this );

m_tx_chan  = new( "my_channel", "m_tx_chan" );
m_tx_chan.reconfigure(1);

endfunction

function void connect_ph();

  vmm_connect #(.D(my_tx))::tlm_bind( m_tx_chan, m_producer.m_port,
                                      vmm_tlm::TLM_BLOCKING_EXPORT );
  m_consumer.m_chan = m_tx_chan;

endfunction: connect_ph

endclass: tb_env

There are two key points to note in the above. Firstly, the channel is reconfigured to have a full level of 1. This ensures that the blocking transport call does indeed block. If the full level is greater than 1, the first call to b_transport will return immediately before the transaction has completed, which would defeat the purpose.

Secondly, the transport port and the vmm_channel are bound together using the vmm_connect utility. This connect utility must be used when binding VMM TLM objects to channels, and can also be used to bind TLM ports and exports of differing interface types (e.g. a blocking port to a non-blocking export). The third argument to tlm_bind indicates that the connection is being made from the port in the producer to a blocking export within the channel. I will discuss other uses for this method in later posts.

Posted in Communication, Interoperability, Reuse, Transaction Level Modeling (TLM) | Comments Off

Changing Functionality: The Factory Service #3

Posted by JL Gray on 21st April 2010

by Jason Sprott

Jason Sprott is CTO at Verilab.

In Factory Service #1 and in Factory Service #2 we discussed what a Factory is and what it can do for us. In this post we’ll take a look at the two general types of objects we might want to replace using the Factory. Typically objects in testbenches fall into two categories:

  • Dynamic: These are objects that get created and garbage collected continually during normal operation. An example might be a randomized transaction generated by a scenario, or even the scenario itself.
  • Structural: These objects get created once at the beginning and live throughout the simulation. An example might be a vmm_xactor or a vmm_group.

The location within the VMM Phasing scheme where the factory override is performed differs between the two. A factory override has to be done before any of the objects affected by the override are instantiated (using <class>::create_instance()). Structural components in the testbench are typically instantiated earlier than dynamic ones.

In the case of overriding dynamic objects, we need to perform the override before the test starts running the simulation. A good place to perform dynamic object overrides is in the vmm_test::configure_test_ph() method. This is executed in the test phase (see the “Constructing and Controlling Environments > Composing Implicitly Phased Environments/Sub-Environments” section of the VMM User Guide), before test execution is started by the vmm_test::start_of_sim_ph() method, so no dynamic objects are likely to have been created yet. The following example shows where we override a transaction in a testcase:

class my_dynamic_factory_test extends vmm_test;

virtual function void configure_test_ph();

my_trans::override_with_new("@%*",
custom_trans::this_type(),
log, `__FILE__, `__LINE__);
endfunction:configure_test_ph

endclass

Structural object overrides, on the other hand, potentially give us a bit of a problem, because these objects are instantiated in the pre-test “build” phase. Performing the factory override in vmm_test::configure_test_ph() would be too late, as it happens after the objects have already been instantiated. Instead, structural object factory overrides must be performed in vmm_test::set_config(), which is executed before the Pre-Test timeline, as long as test concatenation is not being done. The following is an example where we override a VIP (encapsulated in a vmm_group) in the testcase:

class my_structural_factory_test extends vmm_test;

virtual function void set_config();

my_vip::override_with_new("@%*",
my_extended_vip::this_type(),
log, `__FILE__, `__LINE__);
endfunction:set_config

endclass

The figure below illustrates the phases where overrides should be implemented, with respect to the associated objects types:

image

Another thing to be aware of when using a Factory is test concatenation. In general, if you want to be able to concatenate a test, it’s not a good idea to change components in the environment using factory overrides. Reprogramming the environment, whether structural or dynamic objects, will affect subsequent concatenated tests. If the changes can be undone, for example in the cleanup phase of the test, it may be OK, but changes to structural objects are difficult to undo. Any change to the environment may affect the validity of a test. For this reason the VMM will not execute vmm_test::set_config() if test concatenation is being performed. That might not be enough though. For reuse, test concatenation is an important consideration that test developers and users of the tests need to be aware of. Factory overrides incompatible with test concatenation will not necessarily cause a noticeable side effect or failure, so misuse could go unnoticed. This is not a problem with the Factory; it’s just a consideration when using factory overrides with concatenation of tests.

Posted in Configuration, Reuse, VMM infrastructure | Comments Off

Changing Functionality: The Factory Service #2

Posted by JL Gray on 19th April 2010

by Jason Sprott

Jason Sprott is CTO at Verilab

In a previous post we took a look at what the VMM Factory is and what it could do for us. In this post, we take a look at the problem the Factory solves. In order to understand when we should use a Factory, we really need to understand a bit more about the problem it solves, so we can spot situations where a Factory might be appropriate. Let’s start by looking at how we might do things in SystemVerilog without using VMM Factories. Typically we might instantiate a class of type Foo like this:

Foo my_foo;

my_foo = new(…);

The problem with that is that my_foo will always create an object of type Foo, no matter what. We can’t change that. We might use this class Foo in lots of places around our testbench code. If we want to replace Foo with a new class that maybe adds more variables, or replaces a method, we could have a problem. We’d have to go around our testbench looking for everywhere Foo is used, to see if there was a way it could be replaced. Anywhere Foo is constructed using new(), the replacement is highly likely to involve modifying the original code. This is very undesirable, especially if the code may be part of a tested IP library. In general, modifying the original source is a Bad Thing.

In VMM we can avoid this problem using a different way to create an instance of the class with the factory. It would look something like this:

Foo my_foo;

my_foo = Foo::create_instance(…);

Although this looks similar to an instantiation with new(), something much more sophisticated is at play. If we factory-enable the class Foo, we never call new() to instantiate it again. We now call the Factory’s static method for generating instances of the object, create_instance(). Since the method is static, it can be called without having an instance of Foo, therefore it can be used to create itself. What’s the point?

Now that we’ve encapsulated the task of creating an instance in a method, we can change what that method does. We can tell create_instance() to return something different. This might be a new type (with some modifications), derived from Foo, or the same type populated with different values for the variables. What’s more, we can do this easily anywhere Foo is used in the code. We can pick specific instances to be replaced, or replace multiple instances globally.

The two methods used to reprogram the factory are:
•    override_with_new() – tells the factory, when creating new instances, to return a brand new instance of the type specified.
•    override_with_copy() – tells the factory, when creating new instances, to return a copy of the instance specified.

This replacement can be done anywhere in the code, as long as it happens before the instances you want to replace are created. Let’s say a test needs to replace our type Foo, with a new class, FooWithAttitude, derived from Foo. Here’s what that might look like:

class my_factory_test extends vmm_test;

virtual function void configure_test_ph();
Foo::override_with_new(         // Foo’s factory is being overridden
"@%*",                        // instances matching this pattern will be replaced
FooWithAttitude::this_type(), // they will be replaced with this type of class
log, `__FILE__, `__LINE__);   // some generic log and debug stuff
endfunction:configure_test_ph

endclass:my_factory_test

As can be seen, the Factory override uses regular expression pattern matching to specify which instances will be targeted. The syntax is expressive enough to identify single instances, multiple instances (e.g. by hierarchy), or all instances (as in this example). More information on the pattern matching syntax can be found in the “Common Infrastructure and Services > Simple Match Patterns” section of the VMM User Guide.

This next example shows how the factory can be programmed to return a copy of the class we’ve modified with some values.

class test_read_back2back extends vmm_test;

virtual function void configure_test_ph();
FooBusTrans tr = new();        // create a template for the override copy
tr.address = 'habcd_1234;      // special value we want to set up for override
tr.address.rand_mode(0);       // you might want to protect value during randomization
FooBusTrans::override_with_copy(
"@top:foobus0:*",           // instances matching this pattern will be replaced
tr,                         // they will be replaced by a copy of this instance
log, `__FILE__, `__LINE__); // some generic log and debug stuff
endfunction

endclass

The above example shows a Factory override in the configure test phase of the simulation timeline. Exactly when a Factory replacement is done is quite important. As previously mentioned, the replacement has to be done before any instances of the class have been created. This depends on the type of class being replaced. For example, dynamic objects (such as transactions), created many times during normal operation of the testbench, are likely to be created after the test has started. However, structural objects (such as instances of a VIP) are likely to be created once when the testbench is built. The details of where to put Factory overrides are covered in another post.

Posted in Configuration, Reuse, VMM infrastructure | Comments Off

Changing Functionality: The Factory Service #1

Posted by JL Gray on 16th April 2010

by Jason Sprott

Jason Sprott is CTO at Verilab.

Building a testbench using SystemVerilog, an object-oriented testbench language, does not automatically make the end solution reusable, or easily extensible. In SystemVerilog we can certainly implement the kind of object-oriented principles and design patterns that enable reuse, but this requires significant programming skills and experience in understanding the requirements of building substantial reusable software solutions. Also, when users inevitably need to change the original functionality, the mechanisms in place to allow those changes would have to be well documented and understood.

Two common changes we might need to make are:
•    Swap one type of class with another (which may replace or add new constraints, variables and methods).
•    Add new functionality to an existing method without replacing it

Fortunately the VMM provides some standard solutions for handling these types of changes. This post takes a look at how the Factory Service helps with the first of the two requirements, swapping classes.

There are many cases where we might want to swap one class for another in a testbench. For example, a typical requirement in constrained random testbenches is to replace one randomizable class with a derived version, adding or modifying constraints. We might also have various derived versions of VIP, supporting slightly different versions of a protocol, or injecting different types of errors. When we first develop some VIP, or a testbench, it’s almost impossible to predict what people will want and need from our solution in the future. We can, however, provide users with a standard way to replace our implementation with something slightly different. This could be a derived version (an extension with additions or modifications) of what we originally implemented, or a copy with modified values.

Although this sounds quite trivial, testbenches that do not have this capability can be very hard to change without modifying the original source. Changing an original implementation to add new functionality is undesirable and sometimes impossible. Access to original source code is quite often restricted, or at least controlled. The original code may be well tested and proven, and changing the original source could affect other users of the code. Testbench components implementing the Factory are easily replaced at runtime without modifying the original source. Such modification can even be done on a per test basis if required.

However, we don’t get a Factory for free; there’s a bit of up-front thought required. We have to care enough about this capability to implement a Factory in the first place, obviously. This seems obvious, but in fact many times I come across a class that was in dire need of a factory that wasn’t there, simply because the original developer didn’t think to put one in. I try not to judge too harshly, because we’ve all developed code that didn’t quite meet downstream requirements at some point in time. There’s also a bit of effort (a small amount of additional coding) required. So why should we bother?

The VMM Factory Service (Factory) is a standard mechanism for changing functionality by replacing classes. VMM has always recommended the use of Factories, but the Factory Service was introduced in VMM 1.2 to ease implementation. The Factory provides the following API and utilities (a short sketch follows the list):
•    Short-hand macros to make implementing a Factory simple. The macros implement the Factory API methods for a given class.
•    API method to create instances of the class – replacing the use of the new() constructor method.
•    API method to change the instance generated by the Factory to a new derived type.
•    API method to change the instance generated by the Factory to a copy of a class of the same type.
•    Specific and regular-expression based selection of component factories to override.
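As a minimal sketch of what this looks like in a class (assuming the VMM 1.2 short-hand macros, and using the hypothetical foo_tx from earlier), the macros add the Factory API to the class, and create_instance() is then used wherever new() would have been called:

class foo_tx extends vmm_data;
  `vmm_typename(foo_tx)          // registers the type name with the Factory
  rand bit [7:0] addr;
  // constructor and other members omitted for brevity
  `vmm_class_factory(foo_tx)     // implements the Factory API for this class
endclass

// Inside a transactor or generator, instances are requested from the Factory
// rather than new()'d directly, so the type can later be overridden from a test:
// foo_tx tx = foo_tx::create_instance(this, "tx", `__FILE__, `__LINE__);

The small cost of the macro and the create_instance() call is what buys the ability to swap types later without touching this code again.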

As an example, imagine a Bus VIP. The VIP has quite a lot going on inside. It has a master, a slave, some monitors, coverage and checking. All use a particular transaction type. So if we wanted to replace that transaction with a derived type, to maybe inject some errors, it would affect multiple components in the VIP. If any of the components using the transaction create an instance using new(), there’s a pretty good chance we cannot replace the transaction type without modifying that source. This may not be an option, or at the very least not desirable.

If we use the Factory, we can not only do the replacement, we can choose which instances of the VIP in a testbench we want to perform the replacement on. Maybe not all nodes on the bus have to inject errors. In our test, or new testbench environment, we might decide to say something like:
•    When creating new classes of type Foo, instead instantiate my new class FooWithAttitude. Using the Factory in VMM, that might look like:
Foo::override_with_new(           // Foo's factory is being overridden
  "@%*",                          // match all instances
  FooWithAttitude::this_type(),   // they will be replaced with this type of class
  log, `__FILE__, `__LINE__);     // some generic log and debug stuff
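Roughly speaking (and assuming Foo was made factory-enabled as sketched earlier), any code that subsequently asks the Factory for a Foo now gets the derived type without being edited:

// Inside some component that builds Foo objects via the Factory:
Foo f;
f = Foo::create_instance(this, "f", `__FILE__, `__LINE__);
// With the override registered above, f now refers to a FooWithAttitude object,
// so its added constraints and behavior apply wherever Foo was already used.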

Or, maybe we want to make sure some variables in a configuration object are replaced with something specific. We can arrange for a copy of a class to be instantiated, with some specific values set. We might decide to say something like:
•    When creating new classes of type FooBusTrans, instead instantiate my copy of that class (in this case we have created our own instance, tr), with some values set and randomization turned off:

FooBusTrans::override_with_copy(
  "@top:foobus0:*",             // instances matching this pattern will be replaced
  tr,                           // they will be replaced by a copy of this instance
  log, `__FILE__, `__LINE__);   // some generic log and debug stuff
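How tr is prepared before the override is registered is up to the test. A rough sketch (the field name is hypothetical) might pin a value and disable its randomization before the override_with_copy() call shown above:

FooBusTrans tr;
tr = FooBusTrans::create_instance(null, "tr");   // or new(), if preferred
tr.addr = 8'h3C;            // force the value we care about
tr.addr.rand_mode(0);       // stop it being re-randomized downstream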

To understand when we should use a Factory, we really need to understand a bit more about the problem it solves. We’ll discuss this in another post.

Posted in Configuration, Reuse, VMM infrastructure | Comments Off

Bruce Kenner: Programmer

Posted by Andrew Piziali on 8th April 2010

by Andrew Piziali and Gary Stringham

Once upon a time in a cubicle far, far away there was a brilliant software engineer named Bruce Kenner. He had designed an elegant compiler for a processor with an exposed pipeline, requiring compiler-scheduled instructions for a new instruction set architecture. No, this was not the pedestrian VLIW machine you may be familiar with but a machine having a programmer specified number of delay slots following each branch instruction. In order to manage resource hazards, Bruce had furnished the compiler with an oracle that had a full pipeline model of the processor. The oracle advised the scheduler about resources and their availability.

[Figure: Orca Compiler Flow]

Despite Bruce’s obvious talent, his business card simply read “Bruce Kenner: Programmer.” Since Bruce was a software engineer who designed and implemented exceedingly complex machinery, we might ask what the role of the software engineer is in the verification process. How did Bruce verify the oracle, scheduler and compiler he was designing? What role in general does the software engineer play in the verification of a modern system? To address these questions and others, I asked embedded software developer Gary Stringham to join me in this discussion. Gary is the founder and president of Gary Stringham & Associates, LLC, specializing in embedded systems development.

The typical SoC today contains a dozen or more processors of various flavors: general purpose, DSP, graphics, audio, encryption, etc. These processors are distinguished from their digital hardware brethren in that they are generally pre-verified cores that faithfully perform whatever tasks the programmer specifies. Hence, the DUV (design under verification) becomes the code written by the programmer rather than the hardware on which it executes. We must demonstrate this code implementation preserves the original intent of the architect. To do so requires the contribution of the software engineer to the verification process. We illustrate the cost of bringing the software engineer into the design cycle too late with this story from Gary’s experience. He writes:

I was having trouble getting my device driver to work with a block in an SoC for a printer, so I went to the hardware engineer for help. We studied what my device driver was doing and it seemed fine. Then we compared it to what the corresponding verification test was doing, and both were writing the same values to the same registers. However, when we examined the device driver more closely, we discovered that it was programming the registers in a different order than the test case, and that different order exposed a problem.

I had my reasons for writing those registers in the way that I did, having to do with the overall architecture of the software. This was based upon the order that the device driver received information during the printing process. The hardware engineer had no rhyme nor reason to write to the registers in the order he did. He did not—nor was he expected to—know the nuances of the software architecture that required the driver to do it the way it did. But, with the order he happened to write those registers in his verification test, he obscured a defect. At this point in the design cycle the SoC was already in silicon so I figured out a work-around in my device driver.

Now, in this particular case, there was no reason to believe there would be order sensitivity of register writes; the driver was setting up some configuration registers before launching the task. But there was a sensitivity. If I, the software engineer, had been involved months earlier with the design of the verification test suite, I might have suggested writing the registers in the order the device driver needed to write them, and that would have exposed the order-dependent error.

With this in mind, let’s examine the software engineering roles we recommend. The initial role of the software engineer should be collaborating with the system architect to ensure that required hardware resources are available for the algorithms delegated to the software.[1] Parametric specifications and version dependencies should also be examined and verified. Often the software engineer is the only one intimately familiar with the software limitations of legacy blocks reused in the design. Hence, they need to examine these in light of the requirements of the current design. What space/time/performance/power trade-offs become apparent as various hardware components are considered?

Next, the software engineer should review early specifications with an eye toward implementation challenges. What requirements and features lead to code complexities that might jeopardize the implementation or verification schedule? Are obscure use cases addressed? What ambiguities exist in the specification that could lead to different interpretations by the design, verification and software engineers? What hardware resources do software engineers need to debug problems, such as debug hooks and ports?

At the same time, the software engineer should be contributing to the verification planning process[2] during either top-down or bottom-up specification analysis. During the more common top-down analysis, where each designer explains their understanding of the features and their interactions, the software engineer will be asking questions that aid the extraction of design features requiring software implementation. Likewise, the software engineer will be answering questions posed by verification engineers and designers about their understanding of the specification. As the verification plan comes together, the software engineer should be a periodic reviewer.

Finally, during hardware/software integration, the software engineer plays an integral role, having a first-hand understanding (usually!) of their code and its intended behavior. Both the designer and software engineer will reference the specification and verification plan to disambiguate results. Each will learn from the other as they observe their respective components contribute to system features.

Summarizing, the software engineer must be involved early and throughout the design cycle to ensure design intent is preserved, yet properly partitioned between hardware and software, in the final implementation. Make sure software engineering is represented during the specification and verification planning processes, all the way through final system integration to maximize your potential for success. Or, as one of my earliest finger(1) .plan files (you remember those, right?) used to read: “Fully functional first pass silicon.”

——————-
[1] Hardware/Firmware Interface Design: Best Practices for Improving Embedded Systems Development, Stringham, Elsevier, 2010

[2] ESL Design and Verification, Bailey, Martin and Piziali, Elsevier, 2007

Posted in Interoperability, Reuse, Verification Planning & Management | Comments Off

Analysis Ports in VMM 1.2

Posted by John Aynsley on 3rd March 2010

John Aynsley, CTO, Doulos

Analysis ports are another feature from the SystemC TLM-2.0 standard that has been incorporated into VMM 1.2. Analysis ports provide a mechanism for distributing transactions to passive components in a verification environment, such as checkers and scoreboards.

Analysis ports and exports are a variant on the TLM ports and exports that I have discussed in previous blog posts. The main difference between analysis ports and regular ports is that a single analysis port can be bound to multiple exports, in which case the same transaction is sent to each and every export or “subscriber” or “observer” connected to the analysis port. The terms subscriber and observer are used interchangeably in the VMM documentation.

Let us take a look at an example:

class my_tx extends vmm_data;  // User-defined transaction class
  ...
endclass

class transactor extends vmm_xactor;
  vmm_tlm_analysis_port #(transactor, my_tx) m_ap;   // The analysis port
  ...
  virtual task main;
    my_tx tx;
    ...
    m_ap.write(tx);
  endtask
endclass

The transactor above sends a transaction tx out through an analysis port m_ap.

The type of the analysis port is parameterized with the type of the transactor and of the transaction my_tx. The call to write sends the transaction to any object that has registered itself with the analysis port. There could be zero, one, or many such observers registered with the analysis port.

To continue the example, let us look at one observer:

class observer extends vmm_object;
  vmm_tlm_analysis_export #(observer, my_tx) m_export;

  function new (string inst, vmm_object parent = null);
    ...
    m_export = new(this, "m_export");
  endfunction

  function void write(int id, my_tx tx);
    ...
  endfunction
endclass

The observer has an instance of an analysis export and must implement the write method that the export will provide to the transactors. Note that the observer extends vmm_object. Since an observer is passive, it need not extend vmm_xactor.
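As a rough sketch of what that write method might do, a scoreboard-style observer could simply queue each transaction it receives for later checking. The queue and its downstream use are assumptions, not part of the original example:

// Inside class observer: one possible implementation of write()
my_tx rx_q[$];                     // transactions collected for end-of-test checks

function void write(int id, my_tx tx);
  rx_q.push_back(tx);              // just record it; a checker drains rx_q later
endfunction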

The analysis port may be bound to any number of observers in the surrounding environment:

class tb_env extends vmm_group;
  transactor  m_transactor;
  observer    m_observer_1;
  another     m_observer_2;
  yet_another m_observer_3;

  virtual function void build_ph;
    m_transactor = new( "m_transactor", this );
    m_observer_1 = new( "m_observer_1", this );
    ... // m_observer_2 and m_observer_3 are constructed in the same way
  endfunction

  virtual function void connect_ph;
    m_transactor.m_ap.tlm_bind( m_observer_1.m_export );
    m_transactor.m_ap.tlm_bind( m_observer_2.m_export );
    m_transactor.m_ap.tlm_bind( m_observer_3.m_export );
  endfunction
endclass
Note the use of the predefined phase methods from VMM 1.2. Transactors are created during the build phase, and ports are connected during the connect phase.

Finally, let us compare analysis ports with VMM callbacks:

m_ap.write(tx);
versus
`vmm_callback(callback_facade, write(tx));

The effect is very similar, but there are differences. Unlike VMM callbacks, the name of the method called through an analysis port is fixed at write. A VMM callback method is permitted to modify the transaction object, whereas a transaction sent through an analysis port cannot be modified. When multiple callbacks are registered, the prepend_callback and append_callback methods allow you to determine the order in which the callbacks are made, whereas you have no control over the order in which write is called for multiple observers bound to an analysis port. Because of these differences, only VMM callbacks are appropriate for modifying the behavior of transactors. Analysis ports are only appropriate for sending transactions to passive components that will not attempt to modify the transaction object. On the other hand, that in itself is the feature and strength of analysis ports; they are only for analysis.

It can make sense to combine a VMM callback with an analysis port in the same transactor, using the callback to inject an error and the analysis port to send the modified transaction to a scoreboard, for example:

`vmm_callback(callback_facade, inject_error(tx));
m_ap.write(tx);

In this situation, the VMM recommendation is to make the analysis call after the callback, as shown here.

Posted in Communication, Reuse, SystemC/C/C++, Transaction Level Modeling (TLM), VMM infrastructure | Comments Off
