Verification Martial Arts: A Verification Methodology Blog

Archive for the 'SystemVerilog' Category

SystemVerilog

Avoiding Redundant Simulation Cycles with your UVM VIP based simulation with a Simple Save-Restore Strategy

Posted by paragg on 6th March 2014

In many verification environments, you reuse the same configuration cycles across different testcases. These cycles might involve writing to and reading from different configuration and status registers, loading program memories, and other similar tasks to set up a DUT for the targeted stimulus. In many of these environments, the time taken by these configuration cycles is very long. There is also a lot of redundancy, as verification engineers have to run the same set of verified configuration cycles for different testcases, leading to a loss in productivity. This is especially true for complex verification environments with multiple interfaces which require different components to be configured.

The Verilog language provides an option of saving the state of the design and the testbench at a particular point in time. You can restore the simulation to the same state and continue from there. This can be done by adding the appropriate built-in system calls to the Verilog code. VCS provides the same options from the Unified Command-line Interpreter (UCLI).

However, it is not enough to simply restore a simulation from the saved state. For different simulations, you may want to apply different random stimulus to the DUT. In the context of UVM, you would want to run different sequences from a saved state as shown below.

In the above example, apart from the last step, which varies to a large extent, the rest of the steps need no iteration once they are established.

Here we explain how to achieve the above strategy with the simple existing UBUS example available in the standard UVM installation. Simple changes are made in the environment to show what needs to be done to bring in this additional capability. Within the existing set of tests, two of them, namely "test_read_modify_write" and "test_r8_w8_r4_w4", differ only with respect to the master sequence being executed – i.e. "read_modify_write_seq" and "r8_w8_r4_w4_seq" respectively.

Let's say that we have a scenario where we want to save a simulation once the reset_phase is done, and then start executing different sequences after the reset_phase in the restored simulations. To demonstrate a similar scenario with the UBUS tests, we introduced a delay in the reset_phase of the base test (in a real test, this may correspond to PLL lock, DDR initialization, or basic DUT configuration).

The following snippet shows how the existing tests are modified to bring in the capability of running different tests in different ‘restored’ simulations.

As evident in the code, we made two major modifications:

  • Shifted the setting of the phase default_sequence from the build phase to the start of the main phase.
  • Get the name of the sequence as an argument from the command-line and process the string appropriately in the code to execute the sequence on the relevant sequencer.
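Below is a rough, hypothetical sketch of this idea (not the blog's actual snippet): the test name, the +SEQ_NAME plusarg and the sequencer path are assumptions based on the standard UBUS example, and here the selected sequence is started explicitly rather than through the phase default_sequence.

class ubus_restore_test extends ubus_example_base_test;
   `uvm_component_utils(ubus_restore_test)

   function new(string name = "ubus_restore_test", uvm_component parent = null);
      super.new(name, parent);
   endfunction

   virtual task main_phase(uvm_phase phase);
      string            seq_name;
      uvm_sequence_base seq;

      phase.raise_objection(this);

      // Pick the master sequence at run time, i.e. in each restored simulation
      if (!$value$plusargs("SEQ_NAME=%s", seq_name))
         seq_name = "read_modify_write_seq";

      // Map the string onto the sequence to execute on the relevant sequencer
      case (seq_name)
         "read_modify_write_seq": seq = read_modify_write_seq::type_id::create("seq");
         "r8_w8_r4_w4_seq":       seq = r8_w8_r4_w4_seq::type_id::create("seq");
         default: `uvm_fatal("SEQSEL", {"Unknown sequence: ", seq_name})
      endcase

      seq.start(ubus_example_tb0.ubus0.masters[0].sequencer);

      phase.drop_objection(this);
   endtask
endclass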

As you can see, the changes are kept to a minimum. With this, the above generic framework is ready to be simulated. In VCS, the save/restore flow can be enabled in one of several ways; one possibility is sketched below.
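As a purely illustrative, hypothetical sketch (the checkpoint name, command files, plusargs and the exact UCLI command usage are assumptions that may differ across VCS versions, so please consult the VCS/UCLI documentation), the flow could look something like this:

# 1) Run once through the expensive configuration/reset cycles and save an image
./simv +UVM_TESTNAME=ubus_example_base_test -ucli -i save.do
#    where save.do runs up to the point where configuration is done and then
#    issues the UCLI commands:  save cfg_done_checkpoint ; quit

# 2) Restore that image in as many simulations as needed, each selecting a
#    different sequence from the command line
./simv -ucli -i restore.do +SEQ_NAME=read_modify_write_seq
./simv -ucli -i restore.do +SEQ_NAME=r8_w8_r4_w4_seq
#    where restore.do issues:  restore cfg_done_checkpoint ; run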

Thus, the above strategy helps in the optimal utilization of compute resources with simple changes to your verification flow. I hope this was useful and that you can easily make the changes in your verification environment to adopt this flow and avoid redundant simulation cycles.

Posted in Automation, Coding Style, Configuration, Creating tests, Customization, Optimization/Performance, Organization, Reuse, Stimulus Generation, SystemVerilog, Tutorial, UVM, Uncategorized, Verification Planning & Management | 1 Comment »

SNUG-2012 Verification Round Up – Miscellaneous Topics

Posted by paragg on 29th March 2013

In my final installment of the series of blogs summing up the various SNUG verification papers of 2012, I try to cover the user papers on Design IP/Verification IP and on SystemC and SystemVerilog co-simulation. Please find my earlier blogs on the other domains here: SystemVerilog Language, Methodologies & VCS technologies.

DesignWare core USB3.0 Controller (DWC_usb3) can be configured as a USB3.0 Device Controller. When verifying a system that comprises a DWC_usb3 Device Controller, the verification environment is responsible for bringing up the DWC_usb3 Device Controller to its proper operation mode to communicate with the USB3.0 Host. The paper Integrating DesignWare USB3.0 Device Controller In a UVM-based Testbench from Ning Guo of Paradigm Works describes the process of configuring and driving the DWC_usb3 Device Controller in a UVM based verification environment using the Discovery USB 3.0 Verification IP. This paper describes how the verification environment needs to be created so that it’s highly configurable and reusable.

The AMBA 4 ACE specification enables system-level cache coherency across clusters of multicore processors, such as the ARM Cortex-A15 and Cortex-A7 MPCore™ processors. This ensures optimum performance and power efficiency of complex SoC designs. However, the design complexity associated with these capabilities is also higher, and it throws up new verification challenges. In the paper "Creating AMBA4 ACE Test Environment With Discovery VIP", Whitney Huang and Sean Chou of MediaTek Inc. demonstrate how to tackle these complex verification challenges and increase verification productivity by using Synopsys Discovery AMBA ACE VIP.

The paper "Verification Methodology of Dual NIC SOC Using VIPs" by A.V. Anil Kumar, Mrinal Sarmah and Sunita Jain of Xilinx India Technology Services Pvt. Ltd. talks about how various features of the Synopsys PCIe and Ethernet Verification IPs can be exploited to help in the efficient verification of the DUT across various traffic configurations. The paper explores how the VIP Application Programming Interfaces (APIs) can be leveraged in the test cases to reach high functional coverage numbers in a very short duration. They also show how a dual NIC verification environment can effectively use Ethernet VIP APIs to test various Media Access Control (MAC) features. Finally, they conclude by showing how the implementation can be reused across future revisions of their design.

The ability to analyze the performance of the SoC at an early stage of the design can make a significant difference to the end product. It can lead to a more accurate and earlier estimate of the expected performance. Dayananda Yaraganalu Sadashivappa, Igal Mariasin and Jayaprakash Naradasi of SanDisk India Device Design Centre Pvt. Ltd., in the paper "Generic MLM environment for SoC Performance Enhancement", outline the solution that was found by using the Synopsys VIP models. The VIPs were used in conjunction with the interconnect, which in this case is a Multi-Layer-Matrix (MLM). The environment was built leveraging the VMM base classes. The VMM multiple-stream scenario (vmm_ms_scenario) base class was used to create the traffic across the matrix, and the performance meters were constructed using the base classes. The callbacks were leveraged appropriately to help in collating the statistics. Multiple knobs were used to make the environment generic and configurable. The approach helped in finding multiple performance bugs which could not have been easily found using conventional verification.

In the paper "User Experience Verifying Ethernet IP Core", Puneet Rattia of Altera Corporation presents his experience with verifying the Altera® 40-100Gbps Ethernet IP core utilizing a VMM environment while integrating the Ethernet VIP from Synopsys. He explains how he created a full suite of system- and block-level regression tests, and then goes on to show how he utilizes the coverage mapping capabilities of VCS to merge the results across these various testbenches and produce meaningful reports. Besides showing how to reuse the verification infrastructure at the SoC level, the paper also demonstrates how they went in for horizontal reuse by integrating the reference SystemC-based models developed and prototyped in the early phase of the project.

UVM 1.x includes support for the communication interfaces defined by the SystemC TLM-2.0 standard. This enables integration of SystemC TLM-2.0 IP into a SystemVerilog UVM verification environment. Dr David Long, John Aynsley and Doug Smith of Doulos, in the paper "A Beginner's Guide to Using SystemC TLM-2.0 IP with UVM", describe how this is done best. They point out that the connection between SystemC and SystemVerilog currently requires a tool-specific interface such as the Synopsys Transaction Level Interface (TLI). The paper begins with a brief overview of TLM-2.0 aimed at novice users. It then discusses the steps required to add a SystemC TLM-2.0 model into a SystemVerilog UVM environment and simulate it with VCS. At each step, issues that users will face are explored and suggestions made for practical fixes, showing the relevant pieces of code. Finally, the paper gives a summary of areas where the UVM implementation of TLM-2.0 differs from the SystemC standard and proposes workarounds to ensure correct communication between the SystemVerilog and SystemC domains.

There is an inherent need to enable the horizontal reuse of components created during the architecture and exploration stage. Subhra S Bandyopadhyay and Pavan N M of Intel Technology India Pvt. Ltd, in "Integrating SystemC OSCI TLM 2.0 Models to OVM based System Verilog Verification Environments", talk about how their architecture team creates SystemC models for early performance analysis and accelerated software development. In the OVM-based verification environment, the objective was to reuse this model as a reference model, thus reducing the overall environment bring-up time. The challenge was not only to integrate the SystemC model in the OVM-based verification environment, but also to be able to efficiently send transactions from SV to SystemC and vice versa. The paper explores the successful integration of SystemC TLM2 components in OVM-based verification environments and also highlights how the VCS TLI (Transaction Level Interface) adapters help TLM2.0 sockets in SystemC to communicate with those in SV and vice versa.

Truly, I feel overwhelmed by the number of papers and the interesting use of technology across the variety of domains on which users share their experiences at the various SNUG conferences. As we speak, the SNUG events for 2013 have started, and the stage is all set for a new set of very informative and interesting sessions. I am sure most of you will be attending the SNUG conference in your area. You can find the detailed schedule of those here.

Posted in Announcements, Automation, Callbacks, Coding Style, Communication, Reuse, Structural Components, SystemC/C/C++, SystemVerilog, Transaction Level Modeling (TLM), Tutorial, UVM, VMM | Comments Off

Coverage coding: simple tips and gotchas

Posted by paragg on 28th March 2013

Author - Bhushan Safi (E-Infochips)

Functional coverage has been the most widely accepted way by which we track the completeness of any constrained-random testbench. However, does achieving 100% functional coverage mean that the DUV is bug free? Certainly not, but it boosts the confidence of the verification engineer and the management team.

Based on my experience of defining functional covergroups for different projects, I realized that coverage constructs and options in the SystemVerilog language have their own nuances for which one needs to keep an eye out. These "gotchas" have to be understood so that coverage can be used optimally and the results stay aligned with the intent. Let me talk about some of these finer aspects of coverage so that you can use the constructs more productively.

Usage of ignore_bins

The 'ignore_bins' construct is meant to exclude a collection of bins from coverage. While using this particular construct, you might end up with multiple 'shapes' issues (by 'shapes' I mean "Guard_OFF" and "Guard_ON", which appear in the report whenever 'ignore_bins' is used). Let's look at a simple usage of ignore_bins, as shown in figure 1.

Looking at the code in figure 1, we would assume that since we have set "cfg.disable = 1", the bin with value 1 would be excluded from the generated coverage report. Here we use the 'iff' condition to try to match our intent of not creating a bin for the variable under the said condition. However, in simulations where the sample_event is not triggered, we see that we end up having an instance of our covergroup which still expects both bins to be hit (see the generated report in figure 1). Why does this happen? If you dig into the semantics, you will understand that the "iff" condition comes into action only when the event sample_event is triggered. So if we are writing 'ignore_bins' for a covergroup which may or may not be sampled in each run, then we need to look for an alternative. Indeed there is a way to address this requirement, and that is through the ternary operator. Look at the code in figure 2 to see how the ternary operator is used to model the same intent.
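The original figures are not reproduced here; the following is only a hedged reconstruction of the two styles being discussed, where the cfg fields and all other names are assumptions.

class cfg_t;
   bit       mode_disabled;   // when set, the 'mode == 1' bin should be ignored
   bit [1:0] mode;
endclass

class mode_coverage;
   cfg_t cfg;
   event sample_event;

   // Figure-1 style: 'iff' guard, evaluated only when the covergroup samples
   covergroup mode_cg_iff @(sample_event);
      cp_mode : coverpoint cfg.mode {
         bins        valid[]  = {0, 1};
         ignore_bins disabled = {1} iff (cfg.mode_disabled);
      }
   endgroup

   // Figure-2 style: ternary operator evaluated when the covergroup is
   // constructed; 2'b11 is an otherwise-unused value, so nothing valid is
   // ignored when the feature is enabled
   covergroup mode_cg_ternary @(sample_event);
      cp_mode : coverpoint cfg.mode {
         bins        valid[]  = {0, 1};
         ignore_bins disabled = {cfg.mode_disabled ? 2'b01 : 2'b11};
      }
   endgroup

   function new(cfg_t c);
      cfg             = c;
      mode_cg_iff     = new();
      mode_cg_ternary = new();
   endfunction
endclass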

Now the report is as you expect!!!

Using the above-mentioned coding style, we make sure that the bin which is not desired under specific conditions is ignored, irrespective of whether or not the covergroup is being sampled. Also, we use the value "2'b11" to make sure that we don't end up ignoring a valid value for the variable concerned.

Using detect_overlap

The coverage option called "detect_overlap" helps by issuing a warning if there is an overlap between the range lists (or transition lists) of two bins of a coverpoint. Whenever we have plenty of ranges to be covered and there is a possibility of overlap, it is important to use this option.

Why is it important and how can you be impacted if you don’t use it? You might actually end up with incorrect and unwanted coverage results!

Let's look at an example. In the scenario sketched below, if a value of 25 is generated, the coverage score reported would be 50% when the desired outcome would ideally have been 25%. This is because the value '25' contributes to two bins out of four when that was probably not wanted. The usage of 'detect_overlap' would have warned you about this, and you could have fixed the bins to make sure that such a scenario doesn't occur.
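A hypothetical example of the kind of overlap being described (the signal name and ranges are assumptions):

bit [6:0] data;
event     sample_ev;

covergroup range_cg @(sample_ev);
   option.detect_overlap = 1;     // warns that 'low' and 'mid' overlap at 25
   cp_data : coverpoint data {
      bins low  = {[0:25]};
      bins mid  = {[25:50]};      // 25 belongs to both 'low' and 'mid'
      bins high = {[51:75]};
      bins top  = {[76:100]};
   }
endgroup

range_cg range_cg_inst = new();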

Coverage coding for crosses and assigning weight

What does the LRM (Table 19-1—Instance-specific coverage options) say about the ’weight’ attribute?  

"If set at the covergroup syntactic level, it specifies the weight of this covergroup instance for computing the overall instance coverage of the simulation. If set at the coverpoint (or cross) syntactic level, it specifies the weight of a coverpoint (or cross) for computing the instance coverage of the enclosing covergroup. The specified weight shall be a non-negative integral value."

What kinds of surprises can a combination of cross and option.weight create?

The SystemVerilog LRM shows a very simple way of writing a cross. Let’s look at the code below.
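A hedged reconstruction of the pattern being discussed (the variable and label names are assumptions, not the LRM's exact example):

bit [1:0] check_4_a, check_4_b;
event     sample_ev;

covergroup check_cg @(sample_ev);
   CHECK_A : coverpoint check_4_a { option.weight = 0; }
   CHECK_B : coverpoint check_4_b { option.weight = 0; }
   A_X_B   : cross check_4_a, check_4_b;   // crossed via the variable names
endgroup

check_cg check_cg_inst = new();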

The expectation here is that for a single simulation (expecting one of the bins to be hit), we will end up with 25% coverage, as we have set the weight of the individual coverpoints to zero. However, what essentially happens is the following: two internal coverpoints for check_4_a and check_4_b are generated, and these are used to compute the coverage score of the 'crossed' coverpoint. So you end up having a total of four coverpoints, two of which have option.weight set to 0 (i.e. CHECK_A and CHECK_B) and two of which are coverpoints with option.weight of 1 (i.e. check_4_a and check_4_b). Thus, for a single simulation, you will not get the 25% coverage desired.

Now with this report we see the following issues:

  • We see four coverpoints, while the expectation was only two.
  • The weights of the labeled coverpoints are zero as expected (option.weight is set to '0'), yet the implicitly created coverpoints still carry the default weight of 1.
  • The overall coverage numbers are not what we desired.

In order to avoid such undesired results, we need to take care of the following aspects (a corrected sketch follows this list):

  • Use type_option.weight = 0 instead of option.weight = 0.
  • Use the coverpoint labels instead of the coverpoint (variable) names to specify the cross.
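A hedged sketch of the corrected version, reusing the signals from the earlier snippet:

covergroup check_cg_fixed @(sample_ev);
   CHECK_A : coverpoint check_4_a { type_option.weight = 0; }
   CHECK_B : coverpoint check_4_b { type_option.weight = 0; }
   A_X_B   : cross CHECK_A, CHECK_B;        // cross the labels, not the variables
endgroup

check_cg_fixed check_cg_fixed_inst = new();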

Hope my findings will be useful for you and you will use these options/attributes appropriately to get the best value out of your coverage metrics (without losing any sleep or debug cycles to figure out why they didn’t behave as you expected them to)!

Posted in Coding Style, Coverage, Metrics, Customization, Reuse, SystemVerilog, Tutorial | 9 Comments »

SNUG-2012 Verification Round Up: VCS Technologies

Posted by paragg on 15th March 2013

Continuing from my earlier blog posts about SNUG papers on the SystemVerilog language and verification methodologies, I will now go through some of the interesting papers that highlight core technologies in VCS which users can deploy to improve their productivity. We will walk through various stages of the verification cycle including simulation bring up, RTL simulation, gate-level simulation, regression and simulation debug which each benefit from different features and technologies in VCS.

Beating the SoC challenges

One of the biggest challenges that today's complex SoC architectures pose is the rigorous verification of SoC designs. Functional verification of the full system represented by these mammoth-scale (> 1 billion transistors per chip) designs calls for the verification environment to employ advanced methodologies, powerful tools and techniques. Constrained-random stimulus generation, coverage-driven completion criteria, assertion-based checking, faster triage and debug turnaround, C/C++/SystemC co-simulation, gate-level verification, etc. are just some of the methods and techniques which aid in tackling the challenge. Patrick Hamilton, Richard Yin, Bobjee Nibhanupudi and Amol Bhinge of Freescale, in their paper "SoC Simulation Performance: Bottlenecks and Remedies", discuss several simulation and debug bottlenecks experienced during the verification of a complex next-generation SoC; they describe how they gained knowledge of these bottlenecks and overcame them using VCS diagnostic capabilities, profile reports, VCS arguments, testbench modifications, smarter utilities, fine tuning of computing resources, etc.

A key challenge of the simulation environment is the sheer number of tests that are written and need to be managed. As more tests are added to the regressions, there is a quantifiable impact on several aspects of the project, including a dramatic and unsustainable increase in the overall regression time. As the regression time increases, the interval for collecting and analyzing results between successive regression runs shrinks. Overlaps arising from having multiple regressions in flight can mean design bugs go untracked for several snapshots, and can also make it impossible to ensure coverage tracking by design management tools. Given constantly shortening project timelines, this affects the time-to-market of core designs and their dependent SoC products. "Simulation-based Productivity Enhancements using VCS Save/Restore" by Scot Hildebrandt and Lloyd Cha of AMD, Inc. looks at using VCS's Save/Restore feature to capture binary images of sections of simulation. These "sections" consist of aspects replicated in all tests, like the reset sequence, or allow the skipping of specific phases of a failing test which are 'clean'. They further provide statistics on the reduction in regression time and in the memory footprint that the saved image typically enables. They also talk about how dynamic re-seeding of the test case with the stored images enabled them to leverage the full strength and capabilities of CRV methodologies.

The paper "SoC Compilation and Runtime Optimization using VCS" by Santhosh K.R. and Sreenath Mandagani of Mindspeed Technologies (India) talks about the Partition Compile flow and the associated methodology to improve the TAT (turnaround time) of SoC compilations. The flow leverages v2k configurations, parallel compilation and various performance optimization switches of VCS-MX. They further explain how an SoC can be partitioned into multiple functional blocks or clusters, and how each block can be selectively replaced with an empty shell if that particular functionality is not exercised in the desired tests. The paper also demonstrates how new tests can be added and run without requiring a recompile of the whole SoC. Thus, using the Partition Compile flow, only a subset of SoC or testbench blocks is recompiled, based on the dependencies across clusters. They share the productivity gains in compile TAT, the overall runtime gains for the current SoC, and the savings in overall disk space requirements. This is then shown to correlate with a reduction in license usage time and disk space, which leads to the desired savings.

By the way, there have now been further developments in the latest VCS release to help ensure isolation of switches between partitions in the SoC. This additional functionality helps reduce memory, decrease runtime, and reduce initial scratch compile time even further while maintaining the advantages of partition compile.

Addressing the X-optimism challenges – X-prop Technology

Gate simulations are onerous, and many of the risks normally mitigated by gate simulations can now be addressed by RTL lint tools, static timing analysis tools and logic equivalence checking. However, one risk that has persisted until now is the potential for optimism in the X semantics of RTL simulation. The semantics of the Verilog language can create mismatches between RTL and gate-level simulation due to X-optimism. Also, the semantics of X's in gate simulations are pessimistic, resulting in simulation failures that don't represent real bugs.

"Improved X-Propagation using the xProp technology" by Rafi Spigelman of Intel Corporation presents the motivation for having the relevant semantics for X-propagation. The process by which such semantics were validated and deployed on a major CPU design at Intel is also described. He delves into its merits and limitations, and comments on the effort required to enable such semantics in the RTL regressions.

Robert Booth of Freescale Inc., in the paper "X-Optimism Elimination during RTL Verification", explains how chips suffer from X-optimism issues that often conceal design bugs. The deployment of low-power techniques such as power shutdown in today's designs exacerbates these X-optimism issues. To address these problems, they show how to leverage the new simulation semantics in VCS that more accurately model non-deterministic values in logic simulation. The paper describes how X-optimism can be eliminated during RTL verification.

In the paper "X-Propagation: An Alternative to Gate Level Simulation", Adrian Evans, Julius Yam and Craig Forward of Cisco explore X-Propagation technology, which attempts to model X behavior more accurately at the RTL level. In this paper, they review the sources of X's in simulation and their handling in the Verilog language. They further describe their experience using this feature on design blocks from Cisco ASICs, including several simulation failures that did not represent RTL bugs. They conclude by suggesting how X-Propagation can be used to reduce and potentially eliminate gate-level simulations.

In the paper "Improved x-propagation semantics: CPU server learning", Peeyush Purohit, Ashish Alexander and Anees Sutarwala of Intel stress the need to model and simulate silicon-like behavior in RTL simulations. They bring out the fact that traditionally gate-level simulations have been used to fill that void, but at the cost of time and resources. They then explain the limitations of regular 4-value Verilog/SystemVerilog RTL simulation and cover the specifications for enhanced simulator semantics to overcome those limitations. They explain how design issues on their next-generation CPU server project were found using the enhanced semantics; the potential flow implications and a sample methodology implementing the new semantics are also provided.

Power-on-Reset (POR) is a key functional sequence for all SoC designs, and any bug not detected in this logic can lead to dead silicon. Complexities in reset logic pose increasing challenges for verification engineers to catch any such design issues during RTL/GL simulations. POR sequence simulations are often accompanied by 'X' propagation due to non-resettable flops and uninitialized logic. Generally, uninitialized and non-resettable logic is initialized to 0's, 1's or some random values using forces or deposits to bypass unwanted X propagation. Ideally, one would like stimulus that tries all possible combinations of initial values for such logic, but this is practically impossible given short design cycles and limited resources. This practical limitation can leave space for critical design bugs that remain undetected during the design verification cycle. Deepak Jindal of Freescale, India, in the paper "Gaps and Challenges with Reset Logic Verification", discusses these reset logic simulation challenges in detail and shares the experience of evaluating the new semantics in VCS technology, which can help to catch most of the POR bugs/issues during the RTL stage itself.

SNUG allows users to discuss their current challenges and the emerging solutions they are using to address them. You can find all SNUG papers online via SolvNet (a login is required, of course!).

Posted in SystemVerilog, Tools & 3rd Party interfaces, Tutorial | 1 Comment »

SNUG-2012 Verification Round Up – Language & Methodologies – I

Posted by paragg on 25th February 2013

As in the previous couple of years, last year's SNUG – Synopsys User Group – showcased an amazing number of useful user papers leveraging the capabilities of the SystemVerilog language and the verification methodologies centered on it.

I am always excited when I see this plethora of useful papers, and I try to ensure that I set aside some time to go through all these user experiences. Now, as we wait for SNUG Silicon Valley to kick-start the SNUG events for this year, I would like to look back at some of the very interesting and useful papers from the different SNUGs of 2012. Let me start by talking about a few papers in the area of the SystemVerilog language and SV methodologies.

Papers leveraging the SystemVerilog language and constructs

Hillel Miller of Freescale, in the paper "Using covergroups and covergroup filters for effective functional coverage", uncovers the mechanisms available for carving out coverage goals. The P1800-2012 revision of the SystemVerilog LRM provides new constructs just for doing this. The construct focused on is the "with" construct, which provides the ability to carve out a sub-range of goals from a multidimensional range of possibilities. This is very relevant in a "working" or under-development setup that requires frequent reprioritization to meet tape-out goals.

The paper "Taming Testbench Timing: Time's Up for Clocking Block Confusions" by Jonathan Bromley and Kevin Johnston of Verilab reviews the key features and purpose of clocking blocks and then examines why they continue to be a source of confusion and unexpected behavior for many verification engineers. Drawing from the authors' project and mentoring experience, it highlights typical usage errors and how to avoid them. They clarify the internal behavior of clocking blocks to help engineers understand the reasons behind common problems, and show techniques that allow clocking blocks to be used productively and with confidence. Finally, they consider some areas that may cause portability problems across simulators and indicate how to avoid them.

The inference of latches and flops based on coding styles has always been a topic that creates multiple viewpoints. There are other such scenarios of synthesis and simulation mismatches that one typically comes across. To address all such ambiguity, language developers have provided different constructs to allow an explicit resolution based on the intent. To help us gain a deeper understanding of the topic, Don Mills of Microchip Technology Inc. presented the related concepts in the paper "Yet Another Latch and Gotchas Paper" @ SNUG Silicon Valley. This paper discusses and provides solutions to issues that designers using SystemVerilog for design come across, such as: case expression issues for casez and casex, latches generated when using unique case or priority case, SRFF coding style problems with synthesis, and the SystemVerilog 2009 definition of logic.

Gabi Glasser from Intel presented the paper "Utilizing SystemVerilog for Mixed-Signal Validation" @ SNUG Israel, where he proposed a mechanism for simplifying analysis and increasing coverage for mixed-signal simulations. The method proposed was to take advantage of SystemVerilog capabilities that enable defining a hash (associative) array with unlimited size. During the simulation, vectors are created for the required analog signals, allowing them to be analyzed within the testbench during or at the end of the simulation, without requiring these signals to be saved into a file. The flow change enables the ability to launch a large-scale mixed-signal regression while allowing an easier analysis of coverage data.

A design pattern is a general reusable solution to a commonly recurring problem within a given context. The benefit of using design patterns is clear: they give designers a common language when approaching a problem, and a set of widely used tools to solve issues as they come up. The paper "Design Patterns In Verification" by Guy Levenbroun of Qualcomm explores several common problems which might arise during the development of a testbench, and how we can use design patterns to solve them. The patterns covered in the paper fall mainly into the following categories: creational (e.g. factory), structural (e.g. composite) and behavioral (e.g. template).

Arik Shmayovitsh, Avishay Tvila and Guy Lidor of Sigma Designs, in their paper "Truly reusable Testbench-to-RTL connection for SystemVerilog", present a novel approach to connecting the DUT and testbench using consistent semantics while reusing the testbench. This is achieved by abstracting the connection layer of each testbench using the SystemVerilog 'bind' construct. This ensures that the only thing required to reuse the testbench for a new DUT is to identify the instance of the corresponding DUT.

In the paper "A Mechanism for Hierarchical Reuse of Interface Bindings", Thomas Zboril of Qualcomm (Canada) explores another method to instantiate SV interfaces, connect them to the DUT and wrap the virtual interfaces for use in the test environment. This method allows the reuse of all the code when the original block-level DUT becomes a lower-level instance in a larger subsystem or chip. The method involves three key mechanisms: hierarchical virtual interface wrappers, a novel approach of using hierarchical instantiation of SV interfaces, and another novel approach of automatic management of hierarchical references via SV macros.

Thinh Ngo & Sakar Jain of Freescale Semiconductor, in their paper "100% Functional Coverage-Driven Verification Flow", propose a coverage-driven verification flow that can efficiently achieve 100% functional coverage during simulation. The flow targets varied functionality, focuses at the transaction level, measures coverage during simulation, and fails a test if 100% of the expected coverage is not achieved. The flow maps stimulus coverage to functional coverage, with every stimulus transaction being associated with an event in the coverage model and vice versa. This association is derived from the DUT specification and/or the DUT model. Expected events generated along with stimulus transactions are compared against actual events triggered in the DUT. The comparison results are used to pass or fail the test. 100% functional coverage is achieved via 100% stimulus coverage. The flow enables every test, with its targeted functionality, to meet 100% functional coverage provided that it passes.

Papers on Verification Methodology

In the paper "Top-down vs. bottom-up verification methodology for complex ASICs", Paul Lungu & Zygmunt Pasturczyk of Ciena, Canada cover the simulation methodology used for two large ASICs requiring block-level simulations. A top-down verification methodology was used for one of the ASICs, while a larger version needed an expanded bottom-up approach using extended simulation capabilities. Some techniques and verification methods, such as chaining of sub-environments from block to top level, are highlighted along with the challenges and solutions found by the verification team. The paper presents a useful technique of passing a RAL (Register Abstraction Layer) mirror to the C models which are used as scoreboards in the environment. The paper also presents a method of generating stable clocks inside the "program" block.

The paper "Integration of Legacy Verilog BFMs and VMM VIP in UVM using Abstract Classes" by Santosh Sarma of Wipro Technologies (India) presents an alternative approach where legacy BFMs written in Verilog and not implemented using 'classes' are hooked up to higher-level class-based components to create a standard UVM VIP structure. The paper also discusses an approach where existing VMM transactors that are tied to such legacy BFMs can be reused inside the UVM VIP with the help of the VCS-provided UVM-VMM Interoperability Library. The implementation makes use of abstract classes to define functions that invoke the BFM APIs. The abstract class is then concretized using derived classes which give the actual implementation of the functions in the abstract class. The concrete class is then bound to the Verilog instance of the BFM using the SystemVerilog "bind" concept. The concrete class handle is then used by the UVM VIP and the VMM transactor to interact with the underlying Verilog BFM. Using this approach, the UVM VIP can be made truly reusable through run-time binding of the Verilog BFM instance to the VIP instead of hardcoded macro names or procedural calls.

"A Unified Self-Check Infrastructure - A Standardized Approach for Creating the Self-Check Block of Any Verification Environment" by John Sotiropoulos, Matt Muresa and Massi Corba of Draper Laboratories, Cambridge, MA, USA presents a structured approach for developing a centralized "Self-Check" block for a verification environment. The approach is flexible enough to work with various testbench architectures and is portable across different verification methodologies. Here, all of the design's responses are encapsulated under a common base class, providing a single "Self-Check" interface for any checking that needs to be performed. This abstraction, combined with a single centralized scoreboard and a standardized set of components, provides the consistency needed for faster development and easier code maintenance. It expands the concept of 'self-check' to incorporate white-box monitors (tracking internal DUT state changes etc.) and temporal models (reacting to wire changes) along with traditional methodologies for enabling self-checking.

For VMM users looking at migrating to UVM, there is another paper, "Transitioning to UVM from VMM" by Courtney Schmitt of Analog Devices, Inc., which discusses the process of transitioning to a UVM-based environment from VMM. Differences and parallels between the two verification methodologies are presented to show that updating to UVM is mostly a matter of getting acquainted with a new set of base classes. Topics include UVM phases, agents, TLM ports, configuration, sequences, and register models. Best practices and reference resources are highlighted to make the transition from VMM to UVM as painless as possible.

Posted in Announcements, Coverage, Metrics, Creating tests, Customization, Modeling, Optimization/Performance, Reuse, SystemVerilog, UVM, Uncategorized, VMM, VMM infrastructure | 3 Comments »

Are You Afraid of Breakpoints?

Posted by Yaron Ilani on 17th May 2012

Are you afraid of breakpoints? Don’t worry, many of us have been.  After all, breakpoints are for software folks, not for us chip heads, right??
Well, not really… In many ways, chip verification is pretty much a software project.
Still, most of the people I know fall into one of these two categories – the $display person, or the breakpoints person.

The former doesn’t like breakpoints. He or she would rather fill up their code with $display’s and UVM_INFO’s and recompile their code every time around.
That’s cool.
The latter likes and appreciates breakpoints and uses them whenever possible instead of traditional $display commands.

So if you are the second type – here’s some great news from DVE that will help you be even more efficient.
And $display folks – stay tuned as it may be time for you to finally convert :-)

In this short video I show how to use Object IDs – a new infrastructure in DVE – to break on a specific class instance!

 

Want more ?

Go here to catch up with our latest DVE and UVM short videos:
http://www.youtube.com/playlist?list=PL06DA5270480FC7E1

Posted in Debug, SystemVerilog, UVM, Uncategorized | Comments Off

Namespaces, Build Order, and Chickens

Posted by Brian Hunter on 14th May 2012

As described in the video, vkits are our convenient method of lumping together reusable UVM packages with the interface(s) that they operate on. Because code within packages can only peek or poke wires that are contained by a virtual interface, it is often useful to wrap these together somehow, and vkits are our technique at Cavium for doing that.

What goes in a vkit? Anything that is reusable. From simple agents and the interfaces they work on to complete UVM environments that connect these agents together, including scoreboards, sequence libraries, types, and utility functions.

What does not go in a vkit are items that are bound to a specific testbench, including the tests themselves.

The video describes the wildcard import syntax as an “egregiously bad idea.” First and foremost, doing so can lead to namespace pollution, which comes about when one engineer independently adds types or classes to their package and only later finds out that they conflict with those of another package. Secondly, wildcard imports prevent our shorter naming conventions of having an agent_c, drv_c, env_c, etc., within each package.

Not described in the video are CSR packages that are auto-generated by RAL, IP-XACT, or your script of choice. These packages should be independent of your vkits, such that your vkits refer to them with their explicit scopes (i.e., chx_csr_pkg::PLUCKING_CFG_C).
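As a purely illustrative sketch (all package and class names here other than chx_csr_pkg::PLUCKING_CFG_C are made up), testbench code outside a vkit would then reference vkit and CSR classes with explicit scopes rather than wildcard imports:

class my_test_c extends uvm_test;
   `uvm_component_utils(my_test_c)

   uart_vkit_pkg::env_c        m_env;        // reusable environment from a vkit
   chx_csr_pkg::PLUCKING_CFG_C m_pluck_cfg;  // auto-generated CSR class, explicitly scoped

   function new(string name, uvm_component parent);
      super.new(name, parent);
   endfunction
endclass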

Future posts will go into more detail about how we architect UVM testbenches and some of our other conventions that work within this framework. Until then, I’ve got a lot of pies to eat.

PS. I’ll be at DAC this year! Come see me on Tuesday, June 5, during the “Industry Leaders Verify with Synopsys” lunch. Hopefully they’ll be serving some of my favorite foods!

Posted in Organization, Structural Components, SystemVerilog, Tutorial, UVM | 6 Comments »

Customizing UVM Messages Without Getting a Sunburn

Posted by Brian Hunter on 24th April 2012

The code snippets presented are available below the video.

my_macros.sv:

   `define my_info(MSG, VERBOSITY) \
      begin \
         if(uvm_report_enabled(VERBOSITY,UVM_INFO,get_full_name())) \
            uvm_report_info(get_full_name(), $sformatf MSG, 0, `uvm_file, `uvm_line); \
      end

  `define my_err(MSG)         \
      begin \
         if(uvm_report_enabled(UVM_NONE,UVM_ERROR,get_full_name())) \
            uvm_report_error(get_full_name(), $sformatf MSG, UVM_NONE, `uvm_file, `uvm_line); \
      end

   `define my_warn(MSG)        \
      begin \
         if(uvm_report_enabled(UVM_NONE,UVM_WARNING,get_full_name())) \
            uvm_report_warning(get_full_name(), $sformatf MSG, UVM_NONE, `uvm_file, `uvm_line); \
      end

   `define my_fatal(MSG)       \
      begin \
         if(uvm_report_enabled(UVM_NONE,UVM_FATAL,get_full_name())) \
            uvm_report_fatal(get_full_name(), $sformatf MSG, UVM_NONE, `uvm_file, `uvm_line); \
      end
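A hypothetical usage from inside a uvm_component method (the variable names here are made up): note the extra set of parentheses around the message, which lets $sformatf receive the whole argument list.

   `my_info(("received %0d packets on %s", npkts, port_name), UVM_MEDIUM)
   `my_err(("CRC mismatch: exp=%0h act=%0h", exp_crc, act_crc))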

my_init.sv:

  initial begin
      int unsigned fwidth, hwidth;
      my_report_server_c report_server = new("my_report_server");

      if($value$plusargs("fname_width=%d", fwidth)) begin
         report_server.file_name_width = fwidth;
      end
      if($value$plusargs("hier_width=%d", hwidth)) begin
         report_server.hier_width = hwidth;
      end

      uvm_pkg::uvm_report_server::set_server(report_server);

      // all "%t" shall print out in ns format with 8 digit field width
      $timeformat(-9,0,"ns",8);
   end

my_report_server.sv:

class my_report_server_c extends uvm_report_server;
   `uvm_object_utils(my_report_server_c)

   string filename_cache[string];
   string hier_cache[string];

   int    unsigned file_name_width = 28;
   int    unsigned hier_width = 60;

   uvm_severity_type sev_type;
   string prefix, time_str, code_str, fill_char, file_str, hier_str;
   int    last_slash, flen, hier_len;

   function new(string name="my_report_server");
      super.new();
   endfunction : new

   virtual function string compose_message(uvm_severity severity, string name, string id, string message,
                                           string filename, int line);
      // format filename & line-number
      last_slash = filename.len() - 1;
      if(file_name_width > 0) begin
         if(filename_cache.exists(filename))
            file_str = filename_cache[filename];
         else begin
            while(filename[last_slash] != "/" && last_slash != 0)
               last_slash--;
            file_str = (filename[last_slash] == "/")?
                       filename.substr(last_slash+1, filename.len()-1) :
                       filename;

            flen = file_str.len();
            file_str = (flen > file_name_width)?
                       file_str.substr((flen - file_name_width), flen-1) :
                       {{(file_name_width-flen){" "}}, file_str};
            filename_cache[filename] = file_str;
         end
         $swrite(file_str, "(%s:%6d) ", file_str, line);
      end else
         file_str = "";

      // format hier
      hier_len = id.len();
      if(hier_width > 0) begin
         if(hier_cache.exists(id))
            hier_str = hier_cache[id];
         else begin
            if(hier_len > 13 && id.substr(0,12) == "uvm_test_top.") begin
               id = id.substr(13, hier_len-1);
               hier_len -= 13;
            end
            if(hier_len < hier_width)
               hier_str = {id, {(hier_width - hier_len){" "}}};
            else if(hier_len > hier_width)
               hier_str = id.substr(hier_len - hier_width, hier_len - 1);
            else
               hier_str = id;
            hier_str = {"[", hier_str, "]"};
            hier_cache[id] = hier_str;
         end
      end else
         hier_str = "";

      // format time
      $swrite(time_str, " {%t}", $time);

      // determine fill character
      sev_type = uvm_severity_type'(severity);
      case(sev_type)
         UVM_INFO:    begin code_str = "%I"; fill_char = " "; end
         UVM_ERROR:   begin code_str = "%E"; fill_char = "_"; end
         UVM_WARNING: begin code_str = "%W"; fill_char = "."; end
         UVM_FATAL:   begin code_str = "%F"; fill_char = "*"; end
         default:     begin code_str = "%?"; fill_char = "?"; end
      endcase

      // create line's prefix (everything up to time)
      $swrite(prefix, "%s-%s%s%s", code_str, file_str, hier_str, time_str);
      if(fill_char != " ") begin
         for(int x = 0; x < prefix.len(); x++)
            if(prefix[x] == " ")
               prefix.putc(x, fill_char);
      end

      // append message
      return {prefix, " ", message};
   endfunction : compose_message
endclass : my_report_server_c

Posted in Debug, Messaging, SystemVerilog, UVM, Verification Planning & Management | 1 Comment »

Building & Configuring Coverage Model – VMM Style – Part-I

Posted by paragg on 24th June 2011

Parag Goel, Senior Corporate Application Engineer, Synopsys

"To minimize wasted effort, coverage is used as a guide for directing verification resources by identifying tested and untested portions of the design."

- IEEE Standard for SystemVerilog (IEEE Std. 1800-2009)

Configurability and reusability are the buzzwords in the verification of chips, and these are enabled to a large extent by present-day verification methodologies. Through a set of blogs, I plan to show how we can create configurable coverage models in VMM-based environments. Given that AMBA AXI is one of the most commonly used protocols in the industry for communication amongst SoC peripherals, I chose an AXI-based framework for my case study.

The idea here is to create a configurable coverage model leveraging some of the base classes provided in the methodology, so that we can make it completely reusable as we move from block to system level or across projects. Once we enable that, we can move the coverage model inside the sub-environment modeled by vmm_group or vmm_subenv, which are the units of reuse.


Primary Requirements of Configuration Control:

Two important requirements that are needed to be met to ensure that the coverage model is made a part of reusable components are:

1. Ability to enable/disable the coverage model whenever required.

2. Ability to turn ON/OFF different subgroups at the desired granularity. For example, a user may not always want Error Coverage to be enabled, unless under specific circumstances.

To meet the above requirements, we make use of the VMM Global and Hierarchical Configurations.

Through the vmm_opts base classes, VMM provides a mechanism to control the configuration parameters of a verification environment. This can be done in a hierarchical as well as in a global manner. These options are summarized below:

[Figure: summary of the vmm_opts global and hierarchical configuration options]

In the environment, coverage_enable is set to 0 by default, i.e. coverage is disabled.

coverage_enable = vmm_opts::get_int("enable_coverage", 0);

Now, the user can enable the coverage via either of the two mechanisms.

1. From user code using vmm_opts.

The basic rule is that you need to 'set' it *before* the 'get' is invoked, and during the time when the construction of the components takes place. As a general recommendation, the build phase is the most appropriate place for setting structural configuration.
function axi_test::build_ph();
   // Enable Coverage.
   vmm_opts::set_int("@%*:axi_subenv:enable_coverage", 1);
endfunction

2. From the command line or an external option file. The option is specified using +vmm_name or +vmm_opts+name on the command line.
./simv +vmm_opts+enable_coverage=1@axi_env.axi_subenv

The command line supersedes the option set within code as shown in 1.

Users can also specify options for specific instances or hierarchically using regular expressions.


Now let’s look at the typical classification of a coverage model.

From the perspective of the AXI protocol, we can look at 4 sub-sections:

Transaction coverage: coverage definition on the user-controlled parameters, usually defined in the transaction class and controlled through sequences.

Error coverage: coverage definition on the pre-defined error injection scenarios.

Protocol coverage: this is protocol specific (AXI handshake coverage). In the case of AXI, it is mainly coverage on the handshake signals, i.e. READY & VALID, on all 5 channels.

Flow coverage: this is again protocol specific; for AXI it covers various features like outstanding transactions, interleaving, write data before write address, etc.


At this point, let's look at how these different sub-groups within the complete coverage model can be enabled or disabled. Once the coverage configuration class is built and passed on to the main coverage model, we need fine-grained control to enable/disable the individual coverage models. The code below shows how the user can control all the coverage models in the build phase of the main coverage class.

Here too, we can see how vmm_opts is used to meet the requirement of controlling the individual parameters.

vmm_opts::set_int("@%*:disable_transaction_coverage", 0);
vmm_opts::set_int("@%*:disable_error_coverage", 0);
vmm_opts::set_int("@%*:disable_axi_handshake_coverage", 0);
vmm_opts::set_int("@%*:disable_flow_coverage", 0);
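As a hypothetical sketch (the class, covergroup and field names are assumptions, not the actual environment code), the coverage class could then consume these knobs along the following lines:

class axi_trans_coverage;
   bit [3:0] burst_len;      // protocol field being covered (assumed)
   bit       enabled;

   covergroup trans_cg;
      cp_burst_len : coverpoint burst_len;
   endgroup

   function new();
      trans_cg = new();
      // a default of 0 means "not disabled", i.e. this sub-group stays enabled
      enabled = (vmm_opts::get_int("disable_transaction_coverage", 0) == 0);
   endfunction

   function void sample(bit [3:0] len);
      if (enabled) begin
         burst_len = len;
         trans_cg.sample();
      end
   endfunction
endclass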

In my next blog, I will show how hierarchical VMM configurations are used to dynamically pass signal-level and other configuration-related information to the coverage model. We shall also discuss the usage of the VMM TLM features towards fulfilling the goal of a configurable coverage model. Stay tuned!

Posted in Configuration, Coverage, Metrics, Organization, Reuse, SystemVerilog, VMM, VMM infrastructure | Comments Off

Cool Things You Can Do With DVE – Part 1

Posted by Yaron Ilani on 3rd April 2011

Yaron Ilani, Apps. Consultant, Synopsys

A SystemVerilog testbench can get quite complex. Typical projects today have thousands of lines of code, and the number is constantly on the rise. However, standard base class libraries such as VMM and UVM can help you minimize the amount of code that needs to be written by providing a rich set of macros that substitute long stretches of code with a single line. For example, the simple line `vmm_channel(atm_cell) defines a standard VMM channel for an ATM cell with all the necessary fields and methods, all under the hood. All you have to do is instantiate the newly defined channel wherever you need it. A close cousin of the SV macro is the `include directive, which basically substitutes an entire file with a single line. This is a neat way to reuse files and enhance code clarity.
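For instance, assuming a user-defined atm_cell transaction class, the macro generates an atm_cell_channel class behind the scenes, which you then simply instantiate:

`vmm_channel(atm_cell)

atm_cell_channel gen_to_drv_chan = new("atm_cell_channel", "gen_to_drv");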

But what happens when you need to debug your source code? Indeed, macros and `includes allow for less clutter and enhanced readability, but at the same time they hide pieces of code you might actually need access to during debug. Fortunately, DVE ships with some cool new features that give you quick and easy access to any underlying code, taking the pain out of debugging a SystemVerilog testbench. Let's see some of them:

Macro Tooltips

Hover your mouse over a macro statement and a tooltip window will pop up displaying the underlying code – very useful for short macros. The tooltip’s height can be customized to your liking!

Macro Expand/Collapse

Macros can be expanded interactively so that the underlying source code is presented in the source file you are viewing. Very powerful!

Hyperlinks / Back / Forward

Clicking on a macro/include statement will take you to the original source code or file.

Don’t worry, you can always go back and forth using the browser-like Back/Forward buttons.

To sum up, DVE offers a really comfortable way to debug your SystemVerilog source code – be it plain code, macros or `included files. While keeping you focused on the important parts of your code, DVE provides quick and easy access to any underlying code. And thanks to the Back & Forward buttons, you can skip back and forth between macros, `included files and your main source file as smoothly as you would in your internet browser. This really takes the pain out of debugging a modern SystemVerilog testbench.

Posted in Debug, SystemVerilog | 3 Comments »

Blocking and Non-blocking Communication Using the TLI

Posted by John Aynsley on 31st March 2011

John Aynsley, CTO, Doulos

In the previous blog post I introduced the VCS TLI Adapters for transaction-level communication between SystemVerilog and SystemC. Now let’s look at the various coding styles supported by the TLI Adapters, and at the same time review the various communication options available in VMM 1.2.

We will start with the options for sending transactions from SystemVerilog to SystemC. VMM 1.2 allows transactions to be sent through the classic VMM channel or through the new-style TLM ports, which come in blocking and non-blocking flavors. Blocking means that the entire transaction completes in one function call, whereas non-blocking interfaces may require multiple function calls in both directions to complete a single transaction:

[Figure: options for sending transactions from SystemVerilog to SystemC]

On the SystemVerilog side, transactions can be sent out through blocking or non-blocking TLM ports, through VMM channels or through TLM analysis ports. On the SystemC side, transactions can be received by b_transport or nb_transport, representing the loosely-timed (LT) and approximately-timed (AT) coding styles, respectively, or through analysis exports. In the TLM-2.0 standard any socket supports both the LT and AT coding styles, although SystemVerilog does not offer quite this level of flexibility, and hence neither does the TLI.

Now we will look at the options for sending transactions from SystemC back to SystemVerilog. Not surprisingly, they mirror the previous case:

[Figure: options for sending transactions from SystemC back to SystemVerilog]

On the SystemC side, transactions can be sent out from LT or from AT initiators or through analysis ports. On the SystemVerilog side, transactions can be received by exports for blocking- or non-blocking transport, by vmm_channels, or by analysis subscribers.

Note the separation of the transport interfaces from the analysis interfaces in either direction. The transport interfaces are used for modeling transactions in the target application domain, whereas the analysis interfaces are typically used internally within the verification environment for coverage collection or checking.

In the SystemVerilog and SystemC source code, the choice of which TLI interface to use is made when binding ports, exports, or sockets to the TLI Adapter, for example:


// SystemVerilog
`include "tli_sv_bindings.sv"
import vmm_tlm_binds::*;           // For port/export
import vmm_channel_binds::*;       // For channel

tli_tlm_bind(m_xactor.m_b_port,    vmm_tlm::TLM_BLOCKING_EXPORT,    "sv_tlm_lt");
tli_tlm_bind(m_xactor.m_nb_port,   vmm_tlm::TLM_NONBLOCKING_EXPORT, "sv_tlm_at");
tli_tlm_bind(m_xactor.m_b_export,  vmm_tlm::TLM_BLOCKING_PORT,      "sc_tlm_lt");
tli_tlm_bind(m_xactor.m_nb_export, vmm_tlm::TLM_NONBLOCKING_PORT,   "sc_tlm_at");
tli_channel_bind(m_xactor.m_out_at_chan, "sv_chan_at", SV_2_SC_NB);

// SystemC
#include "tli_sc_bindings.h"
tli_tlm_bind_initiator(m_scmod->init_socket_lt, LT, "sc_tlm_lt", true);
tli_tlm_bind_initiator(m_scmod->init_socket_at, AT, "sc_tlm_at", true);
tli_tlm_bind_target   (m_scmod->targ_socket_lt, LT, "sv_tlm_lt", true);
tli_tlm_bind_target   (m_scmod->targ_socket_at, AT, "sv_tlm_at", true);
tli_tlm_bind_target   (m_scmod->targ_socket_chan_at, AT, "sv_chan_at", true);


Note how the tli_tlm_bind calls require you to specify in each case whether the LT or AT coding style is being used. The root cause of this inflexibility is certain language restrictions in SystemVerilog, in particular the lack of multiple inheritance, which makes it harder to create sockets that support multiple interfaces. Hence, in SystemVerilog, the blocking- and non-blocking interfaces get partitioned across multiple ports and exports. In the SystemC TLM-2.0 standard there is only a single kind of initiator socket and a single kind of target socket, each able to forward method calls of any of the core interfaces, namely, the blocking transport, non-blocking transport, direct memory, and debug interfaces.

In summary, the VCS TLI provides a simple and straightforward mechanism for passing transactions in both directions between SystemVerilog and SystemC by exploiting the TLM-2.0 standard.

Posted in SystemC/C/C++, SystemVerilog, Transaction Level Modeling (TLM) | 1 Comment »

You can also “C” with RAL.

Posted by S. Varun on 18th November 2010

The RAL C interface provides a convenient way for verification engineers to develop firmware code which can be debugged on an RTL simulation of the design. This interface provides a rich set of C APIs with which one can access the fields, registers and memories included within the RAL model. The developed firmware code can be used to interface with the RAL model running on a SystemVerilog simulator using DPI (the Direct Programming Interface), and it can also be re-used as application-level code to be compiled on the target processor in the final application, as illustrated in the figure below.

[Figure: firmware code shared between the RTL simulation (via DPI) and the target processor]

The “-gen_c” option of “ralgen” has to be used for this; it causes the generator to produce the files containing the APIs that interface to the C domain. The generated APIs can be used in one of two forms:

1. To interface to the SystemVerilog RAL model running on a SystemVerilog simulator using DPI, and

2. As pure standalone C code designed to compile on the target processor.

Typically, firmware runs a set of pre-defined sequences of writes and reads to the registers on a device, performing functions for boot-up, servicing interrupts, etc. You generally have these functions coded in C, and they need to access the registers in the DUT. Using the RAL C model, these functions can be written so that the firmware performs its register accesses through the SystemVerilog RAL model. This allows firmware and application-level code to be developed and debugged on a simulation of the design, and the same functions can later be used as part of the device drivers to perform the same tasks on the hardware.

In the first scenario, when executing the C code within a simulation, the C code must be invoked by the simulation, so the application software’s main() routine must be replaced by one or more entry points known to the simulation. All entry points must take at least one argument that receives the base address of the RAL model to be used by the C code. This C-side reference is then used by the RAL C API to access the required fields, registers or memories.


Consider a sequence of register accesses performed at boot time of a router, as defined in the function system_bootup() shown below.

File : router_test.c

#ifdef VMM_RAL_PURE_C_MODEL

/* When used as a pure C model for the target processor */
#include <stdlib.h>   /* calloc() */

void system_bootup(unsigned int blk);

int main() {
   void *blk = calloc(410024, 1);
   system_bootup((unsigned int)(((size_t)blk) >> 2) + 1);
   return 0;
}

#endif

void system_bootup(unsigned int blk) {
   unsigned int dev_id  = 0xABCD;
   unsigned int ver_num = 0x1234;

   /* Invoking the generated RAL C APIs */
   ral_write_DEV_ID_in_ROUTER_BLK(blk, &dev_id);
   ral_write_VER_NUM_in_ROUTER_BLK(blk, &ver_num);
}

In the above example, “ral_write_DEV_ID_in_ROUTER_BLK()” and “ral_write_VER_NUM_in_ROUTER_BLK()” are C APIs generated by “ralgen” to access the registers named “DEV_ID” and “VER_NUM” located in a block named “ROUTER_BLK”. In general, registers can be accessed using the “ral_read_<reg>_in_<blk>” and “ral_write_<reg>_in_<blk>” macros present within the RAL C interface.

The above C function can also be called from within a SystemVerilog testbench via DPI:

File : ral_boot_seq_test.sv

import "DPI-C" context task system_bootup(int unsigned ral_model_ID);

program boot_seq_test();
   initial begin
      router_blk_env env = new();

      // Configuring the DUT
      env.cfg_dut();

      // Calling the C function using DPI
      // 'ral_model' is the instance of the generated SV model in the env class
      system_bootup(env.ral_model.get_block_ID());
   end
endprogram

In the example above, system_bootup is the service entry point that is called from the SV simulation and is passed the RAL model reference. The C code then executes, with simulation time frozen, until one of the register accesses is made, at which point execution shifts to the SV side.


The entire execution in C happens in zero time on the simulation timeline. The RAL C API hides the physical addresses of registers and the position and size of fields. The hiding is performed by functions and macros rather than by an object-oriented structure like the native RAL model in SystemVerilog. This eliminates the need to compile a complete object-oriented model into embedded-processor object code with a limited amount of memory.

Thus, by using the RAL C interface, you can develop compact and efficient firmware code while preserving as much as possible of the abstraction offered by RAL. I hope this was useful; do let me know your thoughts if you plan to use this flow to develop and debug your firmware and application-level code on a simulation of the design.

Please refer to the RAL user guide for more information. You can also refer to the example in the VCS installation located at $VCS_HOME/doc/examples/vmm/applications/vmm_ral/C_api_ex.

Posted in Register Abstraction Model with RAL, SystemC/C/C++, SystemVerilog, VMM | Comments Off

Generating microcode stimuli using a constrained-random verification approach

Posted by Shankar Hemmady on 15th July 2010

As microprocessor designs have grown considerably in complexity, generating microcode stimuli has become increasingly challenging.  An article by AMD and Synopsys engineers in EE Times explores using a hierarchical constrained-random approach to accelerate generation and reduce memory consumption, while providing optimal distribution and biasing to hit corner cases using the Synopsys VCS constraint solver.

You can find the full article in PDF here.

Posted in Stimulus Generation, SystemVerilog | Comments Off

Managing VMM Log verbosity in a smart way

Posted by Srinivasan Venkataramanan on 1st March 2010

Srinivasan Venkataramanan, CVC Pvt. Ltd.

Vishal Namshiker, Brocade Communications

Any complex system requires debugging at some point or other. To ease the debug process, a good, proven coding practice is to add enough messages to aid the end user in debug. However, as systems mature, the messages tend to become too many, and users quickly feel the need to control them. VMM provides a comprehensive log scheme with enough flexibility to let users control what, how and when to see certain messages (See: http://www.vmmcentral.org/vmartialarts/?p=259).

As we know, using the `vmm_verbose/`vmm_debug macros requires the +vmm_log_default=VERBOSE run-time argument. However, with this setting there are also tons of messages coming from the VMM base classes, as they too use the VERBOSE/DEBUG severities. Users at Brocade preferred not to have these messages when debugging problems in user code: parsing through them and staying focused on the problem at hand was tedious unless post-processing of the log file was implemented. Sure, the messages from the VMM base classes are useful for one class of problems, but if the current problem is in user code, the user would like to be able to exclude them easily. An interesting problem of contradictory requirements, perhaps? Not really; the VMM base classes are well architected to handle this situation.

In VMM, there are two dimensions for controlling which messages the user sees. The verbosity level specifies the minimum severity to display: you see every message with a severity greater than or equal to it. The other dimension/classification is based on TYPE. There are several values for TYPE, such as NOTE_TYP, DEBUG_TYP, etc. Most relevant here is INTERNAL_TYP, a special type intended to be used exclusively by VMM base class code. All debug-related VMM library messages are classified under INTERNAL_TYP, and they can be turned off with the vmm_log::disable_types() method.

A quick example of doing this inside the user_env is shown below:

function void my_env::build();
   super.build();

   this.log.disable_types(.typs(vmm_log::INTERNAL_TYP),
                          .name("/./"), .inst("/./"));
endfunction : build

This is the typical usage if everyone in the team agrees to such a change. However, if a localized change is needed for only a few runs, one can use the power of VCS’s Aspect-Oriented Extensions (AOE) to SystemVerilog. In this case, the user supplies a separate file as shown below:

///////////  disable_vmm_msg.sv
extends disable_log(vmm_log);
   after function new(string name = "/./",
                      string instance = "/./",
                      vmm_log under = null);
      this.disable_types(.typs(vmm_log::INTERNAL_TYP));
   endfunction : new
endextends

Add this file to the compile list and voila! By the way, during the recent SystemVerilog extensions discussion at DVCon 2010, more users requested that AOP extensions be added to the LRM standard. With due process, a version of AOP is likely to be added to the LRM in the future (let’s hope in the “near future” :) ).

Posted in Debug, Messaging, SystemVerilog, VMM | Comments Off

class factory

Posted by Wei-Hua Han on 26th August 2009

Weihua Han, CAE, Synopsys

As a well-known object-oriented technique, the class factory has actually been applied in VMM since its inception. For instance, in the VMM atomic and scenario generators, assigning different blueprints to the randomized_obj and scenario_set[] properties lets these generators produce transactions with user-specified patterns. Using the class factory pattern, users create an instance with a pre-defined method (such as allocate() or copy()) instead of the constructor. This pre-defined method creates an instance from the factory, not merely an instance of the type of the variable being assigned.
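For reference, here is a minimal sketch of that blueprint idiom. The transaction class my_tr (a vmm_data subclass with the usual allocate()/copy() implemented), its extension my_err_tr, and the generator created elsewhere with `vmm_channel(my_tr) and `vmm_atomic_gen(my_tr, “my tr”) are assumptions for illustration, not code from this post.

program blueprint_demo;
   // Assumed: my_tr_atomic_gen was produced by `vmm_atomic_gen(my_tr, "my tr")
   my_tr_atomic_gen gen       = new("gen", 0);
   my_err_tr        blueprint = new();   // hypothetical corner-case extension of my_tr

   initial begin
      gen.randomized_obj = blueprint;    // the generator now randomizes and copies
                                         // my_err_tr instances instead of my_tr
      gen.start_xactor();
   end
endprogram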

VMM 1.2 now simplifies the application of the class factory pattern across the whole verification environment, so users can easily replace any kind of object, transaction, scenario or transactor with a similar object. Simply follow the steps below to apply the class factory pattern within the verification environment.

1. Define the “new”, “allocate” and “copy” methods for a class and create the factory for the class.

class vehicle_c extends vmm_object;

   // Define the constructor; each argument should have a default value
   function new(string name = "", vmm_object parent = null);
      super.new(parent, name);
   endfunction

   // Define the allocate and copy methods
   virtual function vehicle_c allocate();
      vehicle_c it;
      it = new(this.get_object_name(), get_parent_object());
      allocate = it;
   endfunction

   virtual function vehicle_c copy();
      vehicle_c it;
      it = new this;
      copy = it;
   endfunction

   // These two macros define the methods needed by the class factory
   // and create the factory for the class
   `vmm_typename(vehicle_c);
   `vmm_class_factory(vehicle_c);

endclass

`vmm_typename and `vmm_class_factory implement the methods needed to support the class factory pattern, such as get_typename(), create_instance(), override_with_new(), override_with_copy(), etc.

Users can also use the `vmm_data_member_begin and `vmm_data_member_end shorthand macros to implement the “new”, “copy” and “allocate” methods conveniently.
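As a hedged sketch of those shorthand macros on a hypothetical vmm_data transaction (my_pkt and its fields are made up for illustration, not code from this post):

class my_pkt extends vmm_data;
   rand bit [7:0] addr;
   rand bit [7:0] data;

   // The shorthand macros generate allocate(), copy(), compare(), psdisplay(),
   // byte_pack()/byte_unpack(), etc. from the listed members
   `vmm_data_member_begin(my_pkt)
      `vmm_data_member_scalar(addr, DO_ALL)
      `vmm_data_member_scalar(data, DO_ALL)
   `vmm_data_member_end(my_pkt)
endclass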

2. Create an instance using the pre-defined “create_instance()” method.

To use the class factory, the class instance should be created with the pre-defined create_instance() method instead of the constructor. For example:

class driver_c extends vmm_object;

   vehicle_c myvehicle;

   function new(string name = "", vmm_object parent = null);
      super.new(parent, name);
   endfunction

   task drive();
      // Create an instance with the create_instance method
      myvehicle = vehicle_c::create_instance(this, "myvehicle");
      $display("%s is driving %s(%s)", this.get_object_name(),
               myvehicle.get_object_name(),
               myvehicle.get_typename());
   endtask

endclass

program p1;
   driver_c Tom = new("Tom", null);
   initial begin
      Tom.drive();
   end
endprogram

For this example, the output is:

Tom is driving myvehicle(class $unit::vehicle_c)

3. Define a new class.

Let’s now define the following new class which is derived from the original class vehicle_c:

class sedan_c extends vehicle_c;

   `vmm_typename(sedan_c);

   function new(string name = "", vmm_object parent = null);
      super.new(name, parent);
   endfunction

   virtual function vehicle_c allocate();
      sedan_c it;
      it = new(this.get_object_name(), get_parent_object());
      allocate = it;
   endfunction

   virtual function vehicle_c copy();
      sedan_c it;
      it = new this;
      copy = it;
   endfunction

   `vmm_class_factory(sedan_c);

endclass

And we would like to create the myvehicle instance from this new class without modifying the driver_c class.

4. Override the original instance or type with the new class.

VMM 1.2 provides two methods for overriding the original instances or type:

  • override_with_new(string name, new_class factory, vmm_log log, string fname = "", int lineno = 0)

With this method, when create_instance() is called, a new instance of new_class is created through factory.allocate() and returned.

  • override_with_copy(string name, new_class factory, vmm_log log, string fname = "", int lineno = 0)

With this method, when create_instance() is called, a new instance of new_class is created through factory.copy() and returned.

For both methods, the first argument is the instance name, as specified in the create_instance() method, that users want to override with the type of new_class. Users can use the powerful name-matching mechanism defined in VMM to specify whether the override applies to a dedicated instance or to all instances of a class in the whole verification environment.

The code below will override all vehicle_c instances with sedan_c type in the environment:

program p1;

   driver_c Tom = new("Tom", null);
   vmm_log  log;

   initial begin
      // Override all vehicle_c instances with the sedan_c type
      vehicle_c::override_with_new("@%*", sedan_c::this_type, log);
      Tom.drive();
   end

endprogram

And the output of the above code is:

Tom is driving myvehicle(class $unit::sedan_c)

If users want to override only one dedicated instance with a copy of another instance, they can call override_with_copy as follows:

vehicle_c::override_with_copy("@%Tom:myvehicle", another_sedan_c_instance, log);

As the above example shows, with the class factory short-hand macros provided in VMM 1.2, users can easily use the class factory pattern to replace transactors, transactions and other verification components without modifying the testbench code. I find this very useful for increasing the reusability of verification components.

Posted in Configuration, Modeling, SystemVerilog, Tutorial, VMM, VMM infrastructure | 1 Comment »

How to connect your SystemC Reference Models to your verification VMM based framework

Posted by Shankar Hemmady on 17th August 2009


Nasib Naser, Phd, CAE, Synopsys

In this blog I will discuss the use model shown in Figure 1, where a VMM layered testbench is used to verify an RTL DUT against a SystemC transaction-level model. SystemVerilog allows the creation of reusable layered testbench architectures, and the VMM methodology provides the basis for such a layered architecture. With a layered approach, transaction-level reference models can easily be integrated at the appropriate level to provide self-checking functions. In this use model, the VMM function layer communicates with the SystemC model through the TLI mechanism to perform read/write transactions, and, using the same testbench scenarios, the VMM command layer drives the DUT at the pin level.


Figure 1 – VMM driving TLM and RTL with checking

Synopsys’ VCS functional verification solution addresses the challenge of this use model with its SystemC-SystemVerilog Transaction-Level Interface (TLI). Using TLI, SystemC interface methods can be invoked from SystemVerilog and vice versa. The value-add of using TLI is that the SystemVerilog DPI-based communication code that synchronizes both domains is automatically generated.

Let’s take a look at the various SystemC and SystemVerilog code components, based on the VMM methodology, that enable such a verification use model. In the following example we define the read and write transactions as SC interface methods.

class Buf_if : virtual public sc_interface {
public:
   // Pure virtual read()/write() function declarations
   virtual void read(unsigned int addr, unsigned int* data) = 0;
   virtual void write(unsigned int addr, unsigned int data) = 0;
};

The following code shows a VMM transactor invoking the SystemC read and write transactions at the function layer. The tb_master transactor communicates with the SystemC TLM model and with the RTL model through the vmm channels tb_master_out_ch1 and tb_master_out_ch2, as shown below:

class tb_master extends vmm_xactor;
   virtual tb_if.master ifc;
   tb_data_channel tb_master_in_ch;
   tb_data_channel tb_master_out_ch1;
   tb_data_channel tb_master_out_ch2;

   extern virtual task main();
endclass: tb_master

task tb_master::main();
   tb_data tr, tr_out;
   super.main();
   forever begin
      ...
      // Send the instruction to the SC reference model
      tb_master_out_ch2.put(tr_out);
      // Send the instruction to the RTL
      tb_master_out_ch1.put(tr_out);
   end
endtask: main

The following code shows the VMM reference transactor calling the SystemC transaction functions via the adapter alu_tl_if_adpt_vlog, which was automatically generated by the VCS TLI.

class tb_ref extends vmm_xactor;
   tb_data_channel     tb_ref_in_ch;
   alu_tl_if_adpt_vlog alu_tl_if_adpt_vlog_inst0;

   extern function new(string instance,
                       integer stream_id = -1,
                       tb_data_channel tb_ref_in_ch = null);
   extern virtual task main();
endclass: tb_ref

task tb_ref::main();
   super.main();
   forever begin
      case (tr_out.tb_data_type)
         SA_SB_OP_GO : begin
            alu_tl_if_adpt_vlog_inst0.write(addrA, a);
            alu_tl_if_adpt_vlog_inst0.write(addrB, b);
            alu_tl_if_adpt_vlog_inst0.write(addrOP, op);
            alu_tl_if_adpt_vlog_inst0.read(addrOut, d);
            tr_out.data_out = d;
         end
      endcase
   end
endtask

This VCS TLI use model provides a complete and easy way to integrate blocking and non-blocking SystemC reference models into a VMM-based multi-layer verification environment.

Posted in SystemC/C/C++, SystemVerilog, Transaction Level Modeling (TLM), Tutorial, VMM | Comments Off

Navigate through sea of log messages in a SoC env – smart application of vmm_log

Posted by Shankar Hemmady on 13th August 2009

Srinivasan Venkataramanan & Bagath Singh, CVC Pvt. Ltd.; Jaya Chelani, Quartics Technologies

Consider an SoC with several interfaces being verified. It is quite common to have each interface report its activity via display messages. Now take a case where a specific user is debugging, say, the AXI interface for a failure, performance analysis, etc. While the full log file provides the overall picture, it is quite easy to get lost in the sea of messages.

Wouldn’t it be nice to have AXI report all of its activity to its own log file? This would greatly reduce the analysis/debug time for a given task. However, making such a change inside the testbench via `ifdef, configuration switches, etc. is not good practice. Also, the change may be needed for only a few tests/runs, not the entire regression.

Such a common scenario, you wonder? Yes, and that’s why vmm_log has this capability built in. By default, loggers direct messages to STDOUT, but they can easily be asked to direct them to a specific file. The function vmm_log::log_start(file_pointer) does just this and can be used at will during run time.

1.  program automatic sep_log_files;
2.    `include "vmm.sv"
3.    vmm_log axi_logger, cvc_prop_if_logger;
4.    int axi_fp;
5.    initial begin : b1
6.      axi_fp = $fopen("axi.log", "w");
7.      axi_logger = new("AXI Log", "0");
8.      cvc_prop_if_logger = new("CVC Log", "0"); // Defaults to STDOUT
9.      axi_logger.log_start(axi_fp); // AXI alone is sent to a separate log file
10.     `vmm_note(axi_logger, "Message from AXI Interface");
11.     `vmm_note(cvc_prop_if_logger, "Messages from CVC Proprietary Interface");
12.   end : b1
13. endprogram : sep_log_files

Line 6 opens a file named “axi.log” for writing. Line 9 directs all messages from axi_logger to the new log file pointer (created in line 6). Note that one can reach any logger inside the env via its hierarchical path, so this can also be done at the testcase level, if desired.

A few additional notes:

1. As these methods work on a specific log instance, they should be used once the log instance is constructed, typically after the vmm_env::build() phase.

2. There is also a counterpart, vmm_log::log_stop(file_pointer), to stop logging to a separate file (messages continue to be sent to STDOUT).

3. By design, the vmm_log::log_start() sends a copy of the message to the specified file in addition to STDOUT. This is done so that the complete log (sent to STDOUT) is intact from all loggers in the environment.

4. If it is desired by the user to avoid the duplication of log messages, the user can use an explicit call to vmm_log::log_stop(STDOUT).

5. These two methods can also be useful for performance measurement during a specific time window in a simulation run. One can set up notifications to indicate the start and stop of the time window and run a parallel thread that correspondingly performs log_start() and log_stop(), as in the sketch below. More on this in another blog post.
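A minimal sketch of note 5, reusing axi_logger and axi_fp from the example above; the perf_notify service and the WINDOW_START/WINDOW_END notification IDs are assumptions for illustration, not part of the example.

// Somewhere in the env: a notification service marking the measurement window
vmm_notify perf_notify  = new(axi_logger);
int        WINDOW_START = perf_notify.configure();
int        WINDOW_END   = perf_notify.configure();

// Parallel thread: copy AXI messages to axi.log only during the window
fork
   begin
      perf_notify.wait_for(WINDOW_START);
      axi_logger.log_start(axi_fp);   // start redirecting AXI messages
      perf_notify.wait_for(WINDOW_END);
      axi_logger.log_stop(axi_fp);    // stop at the end of the window
   end
join_none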

Posted in Debug, Messaging, SystemVerilog, Tutorial | 2 Comments »

Introducing VMM 1.2

Posted by Fabian Delguste on 27th July 2009

Fabian Delguste / Synopsys Verification Group

I’m very pleased to announce that the VMM 1.2 beta is now available. You’re welcome to enroll in our VMM 1.2 beta program by signing up via the form on VMM Central at:
http://www.vmmcentral.org/cgi-bin/beta/reg1.cgi

As you know, the VMM methodology defines industry best practices for creating robust, reusable and scalable verification environments using SystemVerilog. Introduced in 2005, it is the industry’s most proven verification methodology for SystemVerilog, with over 500 successful tape outs and over 50 SNUG papers. The VMM methodology enables verification novices and experts alike to develop powerful transaction-level, constrained-random verification environments. A comprehensive set of guidelines, recommendations and rules help engineers avoid common mistakes while creating interoperable verification components. The VMM Standard Library provides the foundation base classes for building advanced testbenches, while VMM Applications like the Register Abstraction Layer, Performance Analyzer and Hardware Abstraction Layer provide higher-level functions for improved productivity.

We’ve gained valuable feedback and insight after working with customers on hundreds of production verification projects using VMM. From time to time, we incorporate these findings back into VMM so the broad VMM community can take advantage of them. Such is the case with VMM 1.2, where we’ve made some great enhancements to improve productivity and ease of use. These changes, like those in last year’s VMM 1.1 update are backward compatible with earlier releases so you won’t have to change your existing code to take advantage of the new features.

Here are some of the new features in VMM 1.2:

  • SystemC/SystemVerilog TLM 2.0 support
    • We have added TLM-2.0 support, which adds remote procedure call functionality between components and extends support to SystemC modules
    • TLM-2.0 can connect to VMM channels and notifications. The combination of both interfaces creates a robust completion model. You can now easily integrate reference models written in SystemC with the TLM-2.0 transport interface directly in your VMM testbench
  • Enhanced block-to-top reuse
    • Hierarchical phasing: we have added the concept of phasing and timelines for enhanced flexibility and reuse of verification components. You can now control execution order directly from transactors and have all phases coordinated in upper layers
    • Class factory: this allows for faster stimulus configuration and reuse. It’s now possible to replace any kind of transaction, scenario, transactor or class
  • Enhanced ease-of-use
    • Implicit phasing: each transactor comes with its own pre-defined phasing, and you can use these phases to ensure your transactors are fully controlled and follow pre-defined phase conventions. Each transactor controls its own status. Serial phasing supports multiple timelines within a simulation, improving regression productivity: you can run multiple tests one after the other in the same simulation run
    • Parameterization: VMM 1.2 adds new classes and concepts to provide additional functionality and flexibility. We have added parameterization support to many existing classes, including channels, generators, scoreboards and notify, plus new base classes that make it easy to connect transactors together
    • Configuration Options: VMM 1.2 adds a rich set of methods for controlling testbench functionality from the runtime command line
    • Common base class: VMM 1.2 makes possible multiple namespaces and hierarchical naming, along with enhanced search functionality. RTL configuration ensures the same configuration is shared between the RTL and the testbench.

We’ll cover these new features in more detail in subsequent blogs.

If you are at DAC, you can pick up a copy of the new Doulos VMM Golden Reference Guide, which contains more details on the VMM 1.2 enhancements. This will be available in the VCS suite at the Synopsys booth, at the Synopsys Standards booth while Doulos is presenting, and at the Synopsys Interoperability Breakfast on Wednesday morning.

VMM 1.2 is available today for beta review. You’re welcome to sign up for our beta program. Please note that the beta program runs from today until August 31, 2009.

I look forward to hearing what you think about the new VMM 1.2 features!


Posted in Announcements, SystemVerilog, VMM, VMM infrastructure | Comments Off

How to connect your SystemC reference model to your verification framework

Posted by Janick Bergeron on 8th July 2009

Nasib Naser, Phd, CAE, Synopsys

Architects and designers have many reasons to start using the SystemC language to describe the high-level specifications of their target System-on-Chip (SoC). In this blog I will not discuss why SystemC is selected, but how to bring these models into the team’s design verification flow.

Mixed abstractions come from the fact that designers often develop models at the transaction level to capture the correct architecture and analyze system performance, and do so fairly early in the design cycle. Components of an SoC developed in high-level languages such as C, C++, SystemC and SystemVerilog are increasingly in demand, for obvious reasons: they are faster to write and faster to simulate. High-level reference models have become increasingly important for verifying the implementation details at the RTL level.

Although SystemVerilog provides different methods (PLI, VPI, DPI, etc.) to support communication between the SystemVerilog and C/C++ domains, it is not so straightforward for SystemC interface methods. One important reason is that these SystemC methods can be “blocking”, i.e. they can consume time, and these blocking methods can return values as well. Users would otherwise have to build a very elaborate mechanism to maintain synchronization between the simulations running in the SystemVerilog and SystemC domains.

Synopsys’ VCS functional verification solution addresses this challenge with its SystemC-SystemVerilog Transaction-Level Interface (TLI). With TLI, VCS automatically generates adapters to be instantiated in each domain to support SystemVerilog calls of SystemC methods, and vice versa. These TLI adapters take care of the synchronization between both domains.

To demonstrate the ease of use of this unique interface, we start by identifying the SystemC interface methods that SystemVerilog requires access to. In the following example we define the Read and Write transactions as SC interface methods:

#include <systemc.h>

class Mem_if : virtual public sc_interface {
public:
   // Pure virtual function declarations
   virtual void rd_data(int addr, int &data) = 0;
   virtual void wr_data(int addr, int data) = 0;
};

We manually create an interface configuration file that declares the methods required for the interface; the methods themselves are declared in a file called Mem_if.h. The content of this configuration file is as follows:

interface Mem_if
direction sv_calls_sc
verilog_adapter Mem_if_adapt_vlog
systemc_adapter Mem_if_adapt_sc

#include "Mem_if.h"

task rd_data
input int addVal
inout& int dataVal

task wr_data
input int addVal
input int dataVal

#endif
};

VCS uses this configuration file to generate the necessary SystemC and SystemVerilog code that enables complete synchronization. The command line is as follows:

%syscan -idf Mem_if.idf

This operation creates the following helper files:

Mem_if_adapt_sc.cpp
Mem_if_adapt_sc.h
Mem_if_adapt_vlog.sv

The helper file Mem_if_adapt_vlog.sv contains a SystemVerilog Interface called Mem_if_adapt_vlog which declares the following tasks:

task wr_data(input int addVal, input int dataVal);
task rd_data(input int addVal, inout int dataVal);

These tasks could then be wrapped and used in the SystemVerilog testbench to access SystemC methods as demonstrated in the following code:

module testbench;
   reg clk;
   int data;
   int cnt;

   // Instantiate the helper interface
   Mem_if_adapt_vlog #(0) mem_if_adapt_vlog_inst();

   // The code below is where the SC TLI functions are called.
   // Verilog tasks are written as wrappers for readability and
   // are then used elsewhere in the Verilog code.

   // Calling SC TLI function wr_data()
   task sv_wr_data(input int addr, input int data);
      mem_if_adapt_vlog_inst.wr_data(addr, data);
   endtask

   // Calling SC TLI function rd_data()
   task sv_rd_data(input int addr, output int data);
      mem_if_adapt_vlog_inst.rd_data(addr, data);
   endtask

   . . .

   // Use the wrapper tasks
   always @(posedge clk)
   begin
      if (cnt < 100)
      begin
         #8 sv_wr_data(cnt, cnt + 100);
         cnt = cnt + 1;
      end
   end

   always @(posedge clk)
   begin
      if (cnt < 100)
      begin
         #3 sv_rd_data(cnt - 1, data);
      end
   end

endmodule

Posted in SystemC/C/C++, SystemVerilog, Tutorial, VMM | 1 Comment »

A generic functional coverage solution based on vmm_notify

Posted by Wei-Hua Han on 15th June 2009

Weihua Han, CAE, Synopsys

Functional coverage plays an essential role in Coverage Driven Verification. In this blog, I’ll explain a modular way of modeling and implementing  functional coverage models.

SystemVerilog users can take advantage of the “covergroup” construct to implement functional coverage. However, this is not enough: the VMM methodology provides some important design-independent guidelines on how to implement functional coverage in a reusable verification environment.

Here are a few guidelines  from the VMM book that I consider very important for implementing a functional coverage model:

“Stimulus coverage shall be sampled after submission to the design” because in some cases it is possible that not all generated transactions will be applied to DUT.

“Stimulus coverage should be sampled via a passive transactor stack” for the purpose of reuse so that the same implementation can be used in a different stimulus generation structure. For example in another verification environment, the stimulus may be generated by a design block instead of testbench component.

“Functional coverage should be associated with testbench components with a static lifetime” to avoid creating a large number of coverage group instances. So “Coverage groups should not be added to vmm_data class extensions”. These static testbench components include monitors, generators, self-checking structure, etc.

“The comment option in covergroup, coverpoint, and cross should be used to document the corresponding verification requirements”.
You can find some more details on these and other functional coverage related guidelines in the VMM book, pages 263-277.

In an earlier post “Did you notice vmm_notify?”, Janick showed how vmm_notify can be used to connect a transactor to a passive observer like a functional coverage model. Here, let me borrow his idea to implement the following VMM-based functional coverage example.

1.    Transaction class

1. class eth_frame extends vmm_data;
2.    typedef enum {UNTAGGED, TAGGED, CONTROL} frame_formats_e;
3.    rand frame_formats_e format;
4.    rand bit[3:0] src;
5.     …
6. endclass

There are some random properties defined in the transaction class. We would like to collect coverage information for these properties, for example “format” and “src”, in our coverage class.

2.    A generic subscribe class

1.  class subscribe #(type T = int) extends vmm_notify_callbacks;
2.     local T obs;
3.     function new(T obs, vmm_notify ntfy, int id);
4.        this.obs = obs;
5.        ntfy.append_callback(id, this);
6.     endfunction
7.     virtual function void indicated(vmm_data status);
8.          this.obs.observe(status);
9.       endfunction
10. endclass

The generic subscribe class specifies the behavior of the “indicated” method, which is called when the vmm_notify “indicate” method is invoked. Then, using this generic subscribe class, functional coverage and other observer models only need to implement the desired behavior in their “observe” method. Please note that in line 8 the vmm_data object is passed to the “observe” method through the vmm notification status information.

3.    Coverage class

1.   class eth_cov;
2.      eth_frame tr;

3.      covergroup cg_eth_frame(string cg_name, string cg_comment, int cg_at_least);
4.         type_option.comment = "eth frame coverage";
5.         option.at_least = cg_at_least;
6.         option.name = cg_name;
7.         option.comment = cg_comment;
8.         option.per_instance = 1;
9.         cp_format: coverpoint tr.format {
10.           type_option.weight = 10;
11.        }
12.        cp_src: coverpoint tr.src {
13.           illegal_bins ilg = {4'b0000};
14.           wildcard bins src0[] = {4'b0???};
15.           wildcard bins src1[] = {4'b1???};
16.           wildcard bins src2[] = (4'b0??? => 4'b1???);
17.        }
18.        format_src_crs: cross cp_format, cp_src {
19.           bins c1 = !binsof(cp_src) intersect {4'b0000, 4'b1111};
20.        }
21.     endgroup

22.     function new(string name = "eth_cov", vmm_notify notify, int id);
23.        subscribe #(eth_cov) cb = new(this, notify, id);
24.        cg_eth_frame = new("cg_eth_frame", "eth frame coverage", 1);
25.     endfunction
26.     function void observe(vmm_data tr);
27.        $cast(this.tr, tr);
28.        cg_eth_frame.sample();
29.     endfunction
30.  endclass

The covergroup is implemented in the coverage class eth_cov, and this coverage class is registered with a vmm_notify service through the subscribe class. The covergroup is sampled in the “observe” method, so when the notification is indicated the interesting properties are sampled.

4.    Monitor

1.   class eth_mon extends vmm_xactor;
2.      int OBSERVED;
3.      eth_frame tr; // Contains reassembled eth_frame transaction

4.      function new(…)
5.         OBSERVED = this.notify.configure();
6.         …
7.      endfunction
8.      protected virtual task main();
9.         forever begin
10.          tr = new;
11.          //catch transaction from interface
12.         …
13.         this.notify.indicate(OBSERVED,tr);
14.         …
15.       end
16.    endtask
17. endclass

The monitor extracts the transaction from the interface then it indicates the notification with the transaction as the status information (line 13).

5.    Connect monitor and coverage object

1.   class eth_env extends vmm_env;
2.      eth_mon mon;
3.      eth_cov cov;
4.      …
5.      virtual function void build();
6.         …
7.         mon = new(…);
8.         cov = new(“eth_cov”,mon.notify,mon.OBSERVED);
9.         …
10.    endfunction
11.     …
12. endclass

In the verification environment, the coverage object is created with the monitor notification service interface and the proper notification ID.

There are other ways to implement functional coverage in VMM based environments.  For example a callback-based implementation is used in the example located under sv/examples/std_lib/vmm_test in the VMM package which can be downloaded from www.vmmcentral.org.
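For comparison, here is a hedged sketch of what such a callback-based hookup could look like; the facade class eth_mon_callbacks and its post_rx() hook are assumptions for illustration (the monitor would invoke them with the `vmm_callback macro), not part of the example above.

// Hypothetical callback facade that eth_mon would invoke via
// `vmm_callback(eth_mon_callbacks, post_rx(this, tr));
class eth_mon_callbacks extends vmm_xactor_callbacks;
   virtual task post_rx(eth_mon xactor, eth_frame tr);
   endtask
endclass

// Coverage hook: sample the same eth_cov object from the callback
class eth_cov_cb extends eth_mon_callbacks;
   eth_cov cov;
   function new(eth_cov cov);
      this.cov = cov;
   endfunction
   virtual task post_rx(eth_mon xactor, eth_frame tr);
      cov.observe(tr);
   endtask
endclass

// In eth_env::build():  mon.append_callback(my_eth_cov_cb_instance);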

I haven’t discussed assertion coverage in this post, which is another important type of “functional coverage”. If you are interested in using assertions for functional coverage, please check out chapters 3 and 6 of the VMM book for suggestions.

Posted in Communication, Coverage, Metrics, Register Abstraction Model with RAL, Reuse, SystemVerilog, VMM | 4 Comments »

Protocol Layering Using Transactors

Posted by Janick Bergeron on 9th June 2009

Janick Bergeron
Synopsys Fellow

Bus protocols, such as AHB, are ubiquitous and often used in examples because they are simple to use: some control algorithm decides which address to read or write and what value to expect or to write. Pretty simple.

But data protocols can be made a lot more complex because they can often be layered arbitrarily. For example, an ethernet frame may contain an IP segment of an IP frame that contains a TCP packet which carries an FTP frame. Some ethernet frames in that same stream may contain HDLC-encapsulated ATM cells carrying encrypted PPP packets.

How would one generate stimulus for these protocol layers?

One way would be to generate a hierarchy of protocol descriptors representing the layering of the protocol. For example, for an ethernet frame carrying an IP frame, you could do:

class eth_frame extends vmm_data;
rand bit [47:0] da;
rand bit [47:0] sa;
rand bit [15:0] len_typ;
rand ip_frame payload;
rand bit [31:0] fcs;

endclass

class ip_frame extends vmm_data;
eth_frame transport;
rand bit [3:0] version;
rand bit [3:0] IHL;

rand bit [7:0] data;
endclass

That works if you have exactly one IP frame per ethernet frame. But what if your IP frame does not fit into the ethernet frame and it needs to be segmented? This approach works when you have a one-to-one layering granularity, but not when you have to deal with one-to-many (i.e. segmentation), many-to-one (i.e. reassembly, concatenation) or plesio-synchronous (e.g. justification) payloads.

This approach also limits the reusability of the protocol transactions: the ethernet frame above can only carry an IP frame. How could it carry other protocols? or random bytes? How could the IP frame above be transported by another protocol?

And let’s not even start to think about error injection…

One solution is to use transactors to perform the encapsulation. The encapsulator would have an input channel for the higher layer protocol and an output channel for the lower layer protocol.

class ip_on_ethernet extends vmm_xactor;
ip_frame_channel in_chan;
eth_frame_channel out_chan;

endclass

The protocol transactions are generic and may contain generic references to their payload or transport layers.

class eth_frame extends vmm_data;
vmm_data transport[$];
vmm_data payload[$];

rand bit [47:0] da;
rand bit [47:0] sa;
rand bit [15:0] len_typ;
rand bit [  7:0] data[];
rand bit [31:0] fcs;

endclass

class ip_frame extends vmm_data;

vmm_data transport[$];
vmm_data payload[$];

rand bit [3:0] version;
rand bit [3:0] IHL;

rand bit [7:0] data;
endclass

The transactor’s main() task simply waits for higher-layer protocol transactions, packs them into a byte stream, then lays the byte stream into the payload portion of new instances of the lower-layer protocol.

virtual task main();
   super.main();

   forever begin
      bit [7:0] bytes[];
      ip_frame ip;
      eth_frame eth;

      this.wait_if_stopped_or_empty(this.in_chan);
      this.in_chan.activate(ip);

      // Pre-encapsulation callbacks (for delay & error injection)...

      this.in_chan.start();
      ip.byte_pack(bytes, 0);
      if (bytes.size() > 1500) begin
         `vmm_error(log, "IP packet is too large for Ethernet frame");
         continue;
      end

      eth = new(); // Should really use a factory here

      eth.da      = ...;
      eth.sa      = ...;
      eth.len_typ = 'h0800;  // Indicate IP payload
      eth.data    = bytes;
      eth.fcs     = 32'h0000_0000;

      ip.transport.push_back(eth);
      eth.payload.push_back(ip);

      // Pre-tx callbacks (for delay and ethernet-level error injection)...

      this.out_chan.put(eth);
      eth.notify.wait_for(vmm_data::ENDED);

      this.in_chan.complete();

      // Post-encapsulation callbacks (for functional coverage)...

      this.in_chan.remove();
   end
endtask
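As a side note on the one-to-many case mentioned earlier, the oversize branch above could segment the packet instead of flagging an error. Here is a hedged sketch of what that branch might look like, reusing the bytes, ip, eth and out_chan names from the task above (MAX_PAYLOAD is an assumed constant, not part of the original code):

// Fragment intended for the body of the forever loop, in place of the
// `vmm_error/continue branch: one eth_frame per MAX_PAYLOAD-byte slice
int MAX_PAYLOAD = 1500;

for (int offset = 0; offset < bytes.size(); offset += MAX_PAYLOAD) begin
   int len = (bytes.size() - offset < MAX_PAYLOAD) ? bytes.size() - offset
                                                   : MAX_PAYLOAD;
   eth = new();              // should really use a factory here too
   eth.len_typ = 'h0800;     // still an IP payload
   eth.data = new [len];
   foreach (eth.data[i]) eth.data[i] = bytes[offset + i];

   ip.transport.push_back(eth);
   eth.payload.push_back(ip);

   this.out_chan.put(eth);
   eth.notify.wait_for(vmm_data::ENDED);
end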

When setting the header fields in the lower-layer protocol, you can use values from the higher-layer protocols (like setting the len_typ field to 0x0800 above, indicating an IP payload), you can use values configured in the encapsulator (e.g. a routing table), or they can be randomly generated with appropriate constraints:

if (!route.exists(ip.da)) begin
   bit [47:0] da = {$urandom, $urandom};  // $urandom is only 32-bit

   da[41:40] = 2'b00; // Unicast, global address
   route[ip.da] = da;
end
eth.da = route[ip.da];

The protocol layers observed by your DUT are then defined by the combination and order of these encapsulation transactors.

vmm_scheduler instances may also be used at various points in the layering to combine multiple streams (perhaps carrying different protocol stacks and layers) into a single stream, as in the sketch below.
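A hedged sketch of that idea, assuming two encapsulation stacks already producing eth_frame streams; the channel instance names are made up, and you should check your VMM release for the exact vmm_scheduler constructor and new_source() signatures.

eth_frame_channel from_ip_stack  = new("eth frame channel", "from_ip_stack");
eth_frame_channel from_atm_stack = new("eth frame channel", "from_atm_stack");
eth_frame_channel to_driver      = new("eth frame channel", "to_driver");

vmm_scheduler sched = new("eth sched", "mux", to_driver);

initial begin
   void'(sched.new_source(from_ip_stack));    // register each input stream
   void'(sched.new_source(from_atm_stack));
   sched.start_xactor();                      // interleave both onto to_driver
end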

Posted in Modeling, Modeling Transactions, Phasing, Structural Components, SystemVerilog, Tutorial | 3 Comments »

How VMM can help control transactors easily

Posted by Fabian Delguste on 29th May 2009

Fabian Delguste / Synopsys Verification Group

Controlling VMM transactors can sometimes be a bit hectic. A typical situation I see is when I have registered a list of transactors for driving some DUT interfaces but only want to start a few of them. Another common situation is when I want to turn off scenario generators and replay transactions directly from a file. Yet another task I often face is registering transactor callbacks without knowing where they are exactly located in the environment.

As you can see, there are many situations where fine-grain functional control of transactors is necessary.

Since VMM 1.1 came out, I have been using a new base class called vmm_xactor_iter that allows any transactor to be accessed directly by name. All I need to do is construct a vmm_xactor_iter with a regular expression and use the iterator to loop through all matching transactors.

To understand better how this base class works, I’ll show you a real-life example. The scope of this example is to show how to start generators only when vmm_channel playback has not been turned on. As you know, vmm_channel can be used to replay transactions directly from files recorded in a previous session. This can speed up simulation by avoiding constraint solving.

1. string match_xactors = (cfg.mode == tb_cfg::PLAYBACK) ? "/Drivers/" : "/./";
2.
3. `foreach_vmm_xactor(vmm_xactor, "/./", match_xactors)
4. begin
5.    `vmm_note(log, $psprintf("Starting %s", xact.get_instance()));
6.    xact.start_xactor();
7. end

  • In line 1, the match_xactors string takes the value "/Drivers/" when playback mode is selected; otherwise it takes "/./". In the first case only transactors whose name matches "Drivers" are selected; otherwise all transactors, including generators, match
  • In line 3, the `foreach_vmm_xactor macro is used to create a vmm_xactor_iter with the previous regular expression. The macro traverses all matching transactors, and the xact object gives access to each of them so they can be started

In case you’d like more control over vmm_xactor_iter, it’s possible to use its first() / next() / xactor() methods to traverse the matching transactors. It is also possible to check that the regular expression returns at least one transactor. Here is the same example written using these methods:

string match_xactors = (cfg.mode == tb_cfg::PLAYBACK) ? "/Drivers/" : "/./";
vmm_xactor_iter iter = new("/./", match_xactors);
vmm_xactor xact;

if (iter.xactor() == null)
   `vmm_fatal(log, $psprintf("No matching transactors for '%s'", match_xactors));

xact = iter.first();
while (xact != null) begin
   xact.start_xactor();
   xact = iter.next();   // advance to the next matching transactor
end

Should you need to reclaim the memory allocation required to store all transactors, it’s possible to enable garbage collection by invoking vmm_xactor::kill().

The good news is that vmm_xactor_iter allows me to:

  • Configure the transactor without knowing its hierarchy
  • Provide dynamic access to transactors
  • Reduce code for multiple configurations and callback extensions
  • Use powerful regular expressions for name matching
  • Reuse transactors: no need to modify code when changing the environment content

I hope you find vmm_xactor_iter, and all of the other VMM features, as useful as I do.

Posted in Configuration, Structural Components, SystemVerilog, Tutorial | Comments Off

The hidden pitfalls of type name hiding in a derived class

Posted by Wei-Hua Han on 9th May 2009

Weihua Han, CAE, Synopsys

Here I describe one major difference between virtual and non-virtual methods, and how type name hiding can collide with these methods. This is a commonly used OOP feature in SystemVerilog.

“Type name hiding” here refers to the situation where a type definition in the derived class uses the same type name as one defined in the base class, i.e., the new type definition in the derived class “hides” the type definition in the base class.

The SystemVerilog LRM (Language Reference Manual) does not forbid hiding a base class type definition in a derived class. Here’s a typical example:

class company_xactor_c;
   typedef enum { SOFT, HARD } reset_e;
   function void do_reset(reset_e rst);
      if (rst == SOFT)
         $display("Do SOFT reset");
      else
         $display("Do HARD reset");
   endfunction
endclass

class pci_prj_xactor_c extends company_xactor_c;
   typedef enum { SOFT, HARD, PCI } reset_e;
   function void do_reset(reset_e rst);
      if (rst == SOFT) ...
      else if (rst == HARD) ...
      else ...
   endfunction
endclass

Here we intend to have a new “reset_e” definition in the PCI project by adding a new PCI-specific label called “PCI”. We also rewrite the “do_reset” method, which now comes with reset behavior specific to the PCI project. Then, all transactors derived from “pci_prj_xactor_c” for this PCI project can use the “PCI” label to define PCI-specific behaviors.

It seems everything works fine so far. But we will see problems crop up when virtual functions (polymorphism) come into the picture.

Below is a simplified but realistic scenario.

As you know, VMM allows you to extend the base class library. It’s not uncommon for you to add some specific methods and to create your own base class library, such as:

class company_xactor extends vmm_xactor;
   virtual function void tb_start(int nth_run,
                                  reset_e reset_e_list[$]);
      ...
   endfunction
endclass

Here a company-wide “tb_start” method is added to the company-wide transactor class, and the “reset_e” used here is the one inherited from vmm_xactor. We may also have a project-specific transactor base class like:

class project_xactor extends company_xactor;
   typedef enum int {
      SOFT_RST,
      PROTOCOL_RST,
      FIRM_RST,
      HARD_RST,
      PON,
      PCIE
   } reset_e;
   virtual function void tb_start(int nth_run, reset_e reset_e_list[$]);
      ...
   endfunction
endclass

And we have a new “reset_e” definition with some new project-specific labels. “tb_start” is also redefined for the project.

Although this may seem fine, there is a fairly serious error in this code: virtual functions in SystemVerilog classes follow the common OOP polymorphism requirements.

These OOP requirements are described in SystemVerilog LRM. Here is an excerpt:

“Virtual methods provide prototypes for the methods that later override them, i.e., all of the information generally found on the first line of a method declaration: the encapsulation criteria, the type and number of arguments, and the return type if it is needed. Later, when subclasses override virtual methods, they shall follow the prototype exactly by having matching return types and matching argument names, types, and directions. It is not necessary to have matching default expressions, but the presence of a default shall match. ”

In the above code, the “reset_e” definition in the derived project_xactor class actually defines a new type, i.e., project_xactor::reset_e, which is a different type from the “reset_e” defined in vmm_xactor. The prototype of tb_start in company_xactor is therefore
      void company_xactor::tb_start(int nth_run, vmm_xactor::reset_e reset_e_list[$])
and in project_xactor it is
      void project_xactor::tb_start(int nth_run, project_xactor::reset_e reset_e_list[$]).
These are different, so they are not in compliance with the LRM. Sure, a user can omit making “tb_start” virtual; technically, this will work fine. However, in that case all the benefits of using polymorphism are lost.
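One LRM-compliant alternative is sketched below: keep the inherited vmm_xactor::reset_e in the virtual prototype and give the project’s reset labels a new type name (prj_reset_e is a made-up name for illustration), so nothing is hidden.

class project_xactor extends company_xactor;
   // New labels live under a new name instead of hiding vmm_xactor::reset_e
   typedef enum int { SOFT_RST, PROTOCOL_RST, FIRM_RST, HARD_RST, PON, PCIE } prj_reset_e;

   // Prototype now matches company_xactor::tb_start() exactly
   virtual function void tb_start(int nth_run, reset_e reset_e_list[$]);
      // project-specific start-up behavior
   endfunction
endclass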

Although type name hiding is allowed, we need to be careful in how it is used, since polymorphism is quite commonly employed in VMM and in most testbench environments. This is a very important aspect of reuse and extensibility.

In my next blog post, I will discuss more about what is overriding, what is hiding, and what gets overridden versus hidden in SystemVerilog.

Posted in SystemVerilog, VMM | 1 Comment »

VMM 1.1 is finally out

Posted by Janick Bergeron on 18th December 2008

Even though VMM 1.1 is only the second Open Source release of the VMM library, it follows in a long series of customer-based productivity enhancements that have been made to VMM since the original specification was published back in 2005.

I would like to thank all of the customers who kindly contributed to the requirement specification, reviews and beta-testing. The feedback from late beta customers and early adoptees has been very positive.

So, what’s new in VMM 1.1?

First, we have fixed all of the non-compliant language usage that was reported in VMM 1.0.1.

It also adds two new functional coverage models to the generated Register Abstraction model. One measures that all addresses have been accessed, the other that all interesting field values have been used.

It adds a new Performance Analysis package to measure coverage metrics that are more statistical in nature, compared to the simple singular counts of traditional functional coverage.

It adds a Multi-Stream Scenario Generator (MSSG) that can generate and coordinate stimulus across multiple channels, anywhere in the verification environment. Multi-Stream Scenarios (MSS) can be built of individual transactions, single-stream scenarios (used by the VMM Scenario Generator) or other multi-stream scenarios.

It adds a transactor iterator that brings the simple name-based controllability of vmm_log to vmm_xactor. It makes it possible to control and configure transactors regardless of where they are located in the verification environment.

It adds a message catching mechanism that combines, in a single handler, the capability of vmm_log::modify() and vmm_log::wait_for_msg(). It can catch any message, making it easy to react or mask exceptions and perform negative testing (i.e. making sure that your environment does catch errors when they are present).

It adds a command-line option manager that makes it easy to define, document and manage environment and test options. Options can be specified on the command line or in a series of option files. And the +vmm_help option will display the automatically-generated usage documentation of all the options available in an environment or test.

And finally, it adds a mechanism for run-time test selection for those who prefer to compile all of their tests into one simulation binary and then select, via the command line, which test to execute.

Examples are provided that demonstrate how to use each of these new productivity-enhancing capabilities.

There are other minor additions that are too numerous to mention here. Just refer to the file “RELEASE.txt” in the distribution for an exhaustive list.

VMM 1.1 requires the following versions of VCS: 2006.06-SP2-9(*), 2008.09-4 or 2008.12. As usual, we believe that it is implemented using IEEE-compliant SystemVerilog code — but should some non-compliant usage be identified, be assured that it is entirely unintentional and it will be fixed as soon as possible.

The Open Source distribution of VMM 1.1 can be downloaded immediately from vmmcentral.org but will also be included in VCS2009.06. If you wish to use this version of VMM with OpenVera and/or DesignWare VIPs that require OpenVera interoperability, you must download the “SvOv” version of the distribution and patch your VCS installation using the “patch_vcs” script. This latter version is identical except for some additional encrypted code that enables methodology-level interoperability between OpenVera and SystemVerilog.

Give it a try and let us know what you think. And keep those suggestions and requests coming! We have a healthy backlog of enhancement requests that are sure to keep VMM moving forward for years to come!

(*) See VCS2006.06-SP2.txt in the distribution. The `VCS2006_06 symbol MUST be defined to enable some work-arounds for unsupported SystemVerilog features.

Posted in Announcements, SystemVerilog, VMM, VMM infrastructure | 1 Comment »

Size does matter

Posted by Janick Bergeron on 6th July 2008

The VMM Register Abstraction Layer is documented with a 64-bit data value system. For example, the write() method in the vmm_ral_field class is documented as:

task vmm_ral_field::write(output vmm_rw::status_e status,
                          input  bit [63:0]       value,
                          ...);

However, to be able to handle fields (and registers and virtual registers) that are wider than 64 bits, the implementation uses the VMM_RAL_DATA_WIDTH macro to define the maximum size of a field (and register and virtual register):

class vmm_ral_field;
   local bit [`VMM_RAL_DATA_WIDTH-1:0] value;
   ...
   task vmm_ral_field::write(output vmm_rw::status_e              status,
                             input  bit [`VMM_RAL_DATA_WIDTH-1:0] value,
                             ...);
   ...
endclass

Of course, the macro is defined by default to “64”, unless you define it otherwise:

`ifndef VMM_RAL_DATA_WIDTH
   `define VMM_RAL_DATA_WIDTH 64
`endif

Normally, you don’t really need to worry about the value of this macro, as SystemVerilog will silently extend or trim the value assigned to the value argument (or returned via that same argument in the read() method). That works fine if you know a priori the size of the field and use an appropriately sized expression or variable for it.

However, when writing generic code that needs to work with unknown field sizes, you must be careful to declare any temporary or working variable using:

bit [`VMM_RAL_DATA_WIDTH-1:0] tmp;

to avoid inadvertently losing data. You can get the actual size of the field by using the vmm_ral_field::get_n_bits() method. See the pre-defined register tests in $VMM_HOME/sv/RAL/tests for examples.
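For instance, here is a minimal sketch of size-generic field handling, assuming a vmm_ral_field handle named fld and relying on the default values of read()’s remaining arguments:

vmm_rw::status_e              status;
bit [`VMM_RAL_DATA_WIDTH-1:0] tmp;
bit [`VMM_RAL_DATA_WIDTH-1:0] mask;

fld.read(status, tmp);                            // tmp is wide enough for any field
mask = '1;
mask >>= `VMM_RAL_DATA_WIDTH - fld.get_n_bits();  // ones only in the field's bit positions
tmp &= mask;                                      // work only with valid field bits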

All of the above is documented in Chapter 16 “Maximum Data Size” of the RAL User’s Guide.

But why not make the size of the field a parameter and do away with the macro altogether?

You mean beside the fact that VCS did not support parameterized class at the time RAL was created? :-)

There are a few reasons…

One could be the potential memory saving by making the data members holding the mirrored field value match the actual size of the field. However, experiments on a customer RAL model with hundreds of thousands of fields (with a VMM_RAL_DATA_WIDTH defined to 512) did not show a significant memory saving (at least with VCS).

The other is that the notion of an absolute maximum field size would still be required to be able to write generic code, independent of their respective sizes. It is impossible to operate on a field through a parameterized class unless its size is known a priori because you must specialize1 a parameterized class whenever you use it. It would thus still be necessary to create a size-generic field base class that can deal with any size of fields (so a function such as vmm_ral_reg::get_fields() can be provided). As a user, you would still need to know about and use the VMM_RAL_DATA_WIDTH macro.

Finally, the automatic value resizing only works for input, output and inout arguments. A ref argument (such as the ones use in the field pre/post_read/write callback methods) require that a variable of the exact same size be used. Again, this would require knowing, a priori, how large a field is before you could write code for it.

So my current line of thinking is that class parameters are fine for generic types, but not that useful for generic sizes. Or am I all wet?

1and that specialization must be defined at compile-time because you can’t use a run-time expression to specialize a class. For example, you could not call get_n_bits() to obtain the size of the field then use the result to specialize a parameterized class to access it.

Posted in Coding Style, Register Abstraction Model with RAL, Reuse, SystemVerilog | 4 Comments »