Verification Martial Arts: A Verification Methodology Blog

Avoiding Redundant Simulation Cycles in your UVM VIP based simulation with a Simple Save-Restore Strategy

Posted by paragg on March 6th, 2014

In many verification environments, you reuse the same configuration cycles across different testcases. These cycles might involve writing to and reading from different configuration and status registers, loading program memories, and similar tasks to set up a DUT for the targeted stimulus. In many of these environments, the time taken by these configuration cycles is very long. There is also a lot of redundancy, as the verification engineers have to run the same set of verified configuration cycles for different testcases, leading to a loss in productivity. This is especially true for complex verification environments with multiple interfaces, which require different components to be configured.

The Verilog language provides an option of saving the state of the design and the testbench at a particular point in time, so that you can restore the simulation to the same state and continue from there. This can be done by adding the appropriate built-in system tasks to the Verilog code. VCS provides the same options from the Unified Command Line Interface (UCLI).
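As an illustration, here is a minimal sketch of a Verilog-side checkpoint using the VCS-specific $save system task; the delay and file name are made up for this example:

initial begin
    #1000ns;                    // run through the configuration cycles first
    $save("config_done.chk");   // checkpoint the design + testbench state
end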

However, it is not enough to simply restore the simulation from the saved state. For different simulations, you may want to apply different random stimulus to the DUT. In the context of UVM, you would want to run different sequences from a saved state. In outline, the flow is: compile once; run the simulation through the configuration cycles; save the simulation state; then restore that state in multiple runs, each executing a different sequence.

Apart from the last step, which varies from run to run, the rest of the steps need to be established only once.

Here we explain how to achieve this strategy with the simple UBUS example available in the standard UVM installation. Simple changes are made in the environment to show what needs to be done to bring in this additional capability. Within the existing set of tests, two of them, “test_read_modify_write” and “test_r8_w8_r4_w4”, differ only with respect to the master sequence being executed – “read_modify_write_seq” and “r8_w8_r4_w4_seq” respectively.

Let’s say we have a scenario where we want to save a simulation once the reset_phase is done and then execute different sequences after the reset_phase in the restored simulations. To demonstrate a similar scenario through the UBUS tests, we introduced a delay in the reset_phase of the base test (in a real test, this may correspond to PLL lock, DDR initialization, or basic DUT configuration).

The following snippet shows how the existing tests can be modified to bring in the capability of running different tests in different ‘restored’ simulations.
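This is a minimal sketch assuming the hierarchy of the standard UBUS example (ubus_example_tb0.ubus0.masters[0].sequencer); the +SEQ_NAME plusarg and the test name are illustrative rather than the verbatim code of the original environment:

class ubus_restore_test extends ubus_example_base_test;
    `uvm_component_utils(ubus_restore_test)

    function new(string name = "ubus_restore_test", uvm_component parent = null);
        super.new(name, parent);
    endfunction

    // Select the default sequence at the start of the main phase (i.e. after
    // the restore point) instead of in the build phase, so that each restored
    // run can pick a different sequence from its own command line.
    virtual function void phase_started(uvm_phase phase);
        string seq_name;
        super.phase_started(phase);
        if (phase.get_name() != "main") return;
        if (!$value$plusargs("SEQ_NAME=%s", seq_name))
            seq_name = "read_modify_write_seq";   // fallback when no plusarg is given
        case (seq_name)
            "read_modify_write_seq":
                uvm_config_db#(uvm_object_wrapper)::set(this,
                    "ubus_example_tb0.ubus0.masters[0].sequencer.main_phase",
                    "default_sequence", read_modify_write_seq::type_id::get());
            "r8_w8_r4_w4_seq":
                uvm_config_db#(uvm_object_wrapper)::set(this,
                    "ubus_example_tb0.ubus0.masters[0].sequencer.main_phase",
                    "default_sequence", r8_w8_r4_w4_seq::type_id::get());
            default:
                `uvm_fatal("SEQSEL", {"Unknown sequence name: ", seq_name})
        endcase
    endfunction
endclass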

As evident in the code, we made two major modifications:

  • Shifted the setting of the phase default_sequence from the build phase to the start of the main phase.
  • Got the name of the sequence as an argument from the command line and processed the string in the code to execute the selected sequence on the relevant sequencer.

As you can see, the changes are kept to a minimum. With this, the generic framework is ready to be simulated. In VCS, the save/restore flow can be enabled from the UCLI as follows.
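The checkpoint and test names below are illustrative, and UCLI option details can vary between VCS versions, so treat this as an outline of the flow rather than a verbatim recipe:

# First run: simulate through the expensive configuration/reset stage,
# then checkpoint the simulation state from UCLI
% ./simv +UVM_TESTNAME=ubus_restore_test -ucli
ucli% run 1100ns              # run until the reset_phase is complete
ucli% save ubus_after_reset   # write the checkpoint
ucli% quit

# Subsequent runs: restore the checkpoint and continue, with each run
# selecting a different sequence through the plusarg
% ./simv +SEQ_NAME=read_modify_write_seq -ucli
ucli% restore ubus_after_reset
ucli% run

% ./simv +SEQ_NAME=r8_w8_r4_w4_seq -ucli
ucli% restore ubus_after_reset
ucli% run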

Thus, the above strategy helps in optimal utilization of compute resources with simple changes to your verification flow. I hope this was useful and that you can easily make the changes in your verification environment to adopt this flow and avoid redundant simulation cycles.


SNUG-2012 Verification Round Up – Miscellaneous Topics

Posted by paragg on March 29th, 2013

In my final installment of the series of blogs summing up the various SNUG verification papers of 2012, I try to cover the user papers on Design IP/Verification IP and SystemC/SystemVerilog co-simulation. Please find my earlier blogs on the other domains here: SystemVerilog Language, Methodologies & VCS technologies.

The DesignWare core USB 3.0 controller (DWC_usb3) can be configured as a USB 3.0 device controller. When verifying a system that comprises a DWC_usb3 device controller, the verification environment is responsible for bringing up the DWC_usb3 device controller to its proper operation mode to communicate with the USB 3.0 host. The paper “Integrating DesignWare USB3.0 Device Controller In a UVM-based Testbench” by Ning Guo of Paradigm Works describes the process of configuring and driving the DWC_usb3 device controller in a UVM-based verification environment using the Discovery USB 3.0 Verification IP. The paper describes how the verification environment needs to be created so that it is highly configurable and reusable.

The AMBA 4 ACE specification enables system-level cache coherency across clusters of multicore processors, such as the ARM Cortex-A15 and Cortex-A7 MPCore™ processors. This ensures optimum performance and power efficiency of complex SoC designs. However, the design complexity associated with these capabilities is also higher, and it throws up new verification challenges. In the paper “Creating AMBA4 ACE Test Environment With Discovery VIP”, Whitney Huang and Sean Chou of MediaTek Inc. demonstrate how to tackle these complex verification challenges and increase verification productivity by using Synopsys Discovery AMBA ACE VIP.

The paper “Verification Methodology of Dual NIC SOC Using VIPs” by A.V. Anil Kumar, Mrinal Sarmah and Sunita Jain of Xilinx India Technology Services Pvt. Ltd. talks about how various features of Synopsys PCIe and Ethernet Verification IPs can be exploited to help in the efficient verification of the DUT across various traffic configurations. The paper explores how the VIP Application Programming Interfaces (APIs) can be leveraged in the test cases to reach high functional coverage numbers in a very short duration. They also show how a dual NIC verification environment can effectively use Ethernet VIP APIs to test various Media Access Control (MAC) features. Finally, they conclude by showing how the implementation can be reused across future revisions of their design.

The ability to analyze the performance of an SoC at an early stage of the design can make a significant difference to the end product, leading to a more accurate and earlier estimate of the expected performance. Dayananda Yaraganalu Sadashivappa, Igal Mariasin and Jayaprakash Naradasi of SanDisk India Device Design Centre Pvt. Ltd., in the paper “Generic MLM environment for SoC Performance Enhancement”, outline the solution they arrived at using the Synopsys VIP models. The VIPs were used in conjunction with the interconnect, which in this case is a Multi-Layer Matrix (MLM). The environment was built leveraging the VMM base classes. The VMM multiple-stream scenario (vmm_ms_scenario) base class was used to create the traffic across the matrix, and the performance meters were constructed using the base classes. Callbacks were leveraged appropriately to help collate the statistics. Multiple knobs were used to make the environment generic and configurable. The approach helped in finding multiple performance bugs which could not have been easily found using conventional verification.

In the paper “User Experience Verifying Ethernet IP Core”, Puneet Rattia of Altera Corporation presents his experience verifying the Altera® 40-100Gbps Ethernet IP core using a VMM environment while integrating the Ethernet VIP from Synopsys. He explains how he created a full suite of system- and block-level regression tests, and then goes on to show how he utilized the coverage mapping capabilities of VCS to merge the results across these various testbenches and produce meaningful reports. Besides showing how to reuse the verification infrastructure at the SoC level, the paper also demonstrates how they went in for horizontal reuse by integrating the reference SystemC-based models developed and prototyped in the early phase of the project.

UVM 1.x includes support for the communication interfaces defined by the SystemC TLM-2.0 standard. This enables integration of SystemC TLM-2.0 IP into a SystemVerilog UVM verification environment. Dr. David Long, John Aynsley and Doug Smith of Doulos, in the paper “A Beginner’s Guide to Using SystemC TLM-2.0 IP with UVM”, describe how this is best done. They note that the connection between SystemC and SystemVerilog currently requires a tool-specific interface such as the Synopsys Transaction Level Interface (TLI). The paper begins with a brief overview of TLM-2.0 aimed at novice users. It then discusses the steps required to add a SystemC TLM-2.0 model into a SystemVerilog UVM environment and simulate it with VCS. At each step, issues that users will face are explored and suggestions made for practical fixes, showing the relevant pieces of code. Finally, the paper gives a summary of areas where the UVM implementation of TLM-2.0 differs from the SystemC standard and proposes workarounds to ensure correct communication between the SystemVerilog and SystemC domains.

There is an inherent need to enable the horizontal reuse of components created during the architecture and exploration stage. Subhra S. Bandyopadhyay and Pavan N. M. of Intel Technology India Pvt. Ltd., in “Integrating SystemC OSCI TLM 2.0 Models to OVM based System Verilog Verification Environments”, talk about how their architecture team creates SystemC models for early performance analysis and accelerated software development. In the OVM-based verification environment, the objective was to reuse this model as a reference model, which helped in reducing the overall environment bring-up time. The challenge was not only to integrate the SystemC model in the OVM-based verification environment but also to efficiently send transactions from SV to SystemC and vice versa. The paper explores the successful integration of SystemC TLM-2.0 components in OVM-based verification environments and also highlights how the VCS TLI (Transaction Level Interface) adapters help TLM-2.0 sockets in SystemC communicate with those in SV and vice versa.

Truly, I feel overwhelmed by the number of papers and the interesting use of technology across a variety of domains on which users share their experiences at the various SNUG conferences. As we speak, the SNUG events for 2013 have started, and the stage is all set for a new set of very informative and interesting sessions. I am sure most of you will be attending the SNUG conference in your area. You can find the detailed schedule here.


Coverage coding: simple tips and gotchas

Posted by paragg on March 28th, 2013

Author - Bhushan Safi (E-Infochips)

Functional coverage is the most widely accepted way to track the completeness of any constrained-random testbench. However, does achieving 100% functional coverage mean that the DUV is bug free? Certainly not, but it boosts the confidence of the verification engineer and the management team.

Based on my experience of defining functional covergroups for different projects, I realized that coverage constructs and options in the SystemVerilog language have their own nuances that one needs to keep an eye out for. These “gotchas” have to be understood so that coverage can be used optimally and the results correctly align with the intent. Let me talk about some of these finer aspects of coverage so that you can use the constructs more productively.

Usage of ignore_bins

The ‘ignore_bins’ construct is meant to exclude a collection of bins from coverage. While using this particular construct, you might end up with multiple ‘shapes’ issues (by ‘shapes’ I mean the “Guard_OFF” and “Guard_ON” entries which appear in the report whenever ‘ignore_bins’ is used). Let’s look at a simple usage of ignore_bins.
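A minimal sketch of this usage is shown below; the variable, event and bin names are illustrative:

bit       dis;           // configuration flag: when 1, the value 1 must not be counted
bit [1:0] mode;          // covered variable; the legal values are 0 and 1
event     sample_event;

covergroup cg_guarded @(sample_event);
    cp_mode : coverpoint mode {
        bins        valid[] = {0, 1};
        // the 'iff' guard is evaluated only when the covergroup is sampled
        ignore_bins dis_val = {1} iff (dis);
    }
endgroup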

Looking at this code, we would assume that since the configuration sets dis = 1, the bin with value 1 would be excluded from the generated coverage report. Here we use the ‘iff’ condition to try to express the intent of not creating a bin for the variable under the said condition. However, in simulations where sample_event is never triggered, we end up with an instance of our covergroup which still expects both bins to be hit. Why does this happen? If you dig into the semantics, you will see that the “iff” condition comes into action only when sample_event is triggered. So if we are writing ‘ignore_bins’ for a covergroup which may or may not be sampled in a given run, we need an alternative, and that alternative is the ternary operator. The covergroup below models the same intent with a ternary guard.
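A sketch of the ternary-guarded version, reusing the illustrative names from above:

covergroup cg_ternary @(sample_event);
    // remap the unwanted value to 2'b11, a value outside the valid set, so
    // the bin is excluded whether or not the covergroup is ever sampled
    cp_mode : coverpoint (dis ? 2'b11 : mode) {
        bins        valid[] = {0, 1};
        ignore_bins dis_val = {2'b11};
    }
endgroup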

Now the report is as you expect!!!

Using the above coding style, we make sure that the bin which is not desired under specific conditions is ignored irrespective of whether or not the covergroup is being sampled. Also, we use the value 2'b11 to make sure that we don’t end up ignoring a valid value of the variable concerned.

Using detect_overlap

The coverage option called “detect_overlap” helps in issuing a warning if there is an overlap between the range list (or transition list) of two bins of a coverpoint. Whenever we have plenty of ranges to be covered, and there is a possibility of overlap, it is important to use this option.

Why is it important and how can you be impacted if you don’t use it? You might actually end up with incorrect and unwanted coverage results!
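Consider the following sketch, with illustrative ranges chosen so that the value 25 falls into two bins:

bit [6:0] val;   // values in the range 0..100 are of interest

covergroup cg_ranges @(sample_event);
    cp_val : coverpoint val {
        option.detect_overlap = 1;   // warn when bin range lists overlap
        bins low  = {[0:25]};        // 25 falls here ...
        bins mid  = {[25:50]};       // ... and here: one sample hits two bins
        bins high = {[51:75]};
        bins top  = {[76:100]};
    }
endgroup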

In the scenario sketched above, if a value of 25 is generated, the coverage score reported would be 50% when the desired outcome would ideally have been 25%. This is because the value 25 contributes to two bins out of the four when that was probably not wanted. The usage of ‘detect_overlap’ would have warned you about this, and you could have fixed the bins to make sure that such a scenario doesn’t occur.

Coverage coding for crosses and assigning weight

What does the LRM (Table 19-1—Instance-specific coverage options) say about the ’weight’ attribute?  

“If set at the covergroup syntactic level, it specifies the weight of this covergroup instance for computing the overall instance coverage of the simulation. If set at the coverpoint (or cross) syntactic level, it specifies the weight of a coverpoint (or cross) for computing the instance coverage of the enclosing covergroup. The specified weight shall be a non-negative integral value.”

What kinds of surprises can a combination of cross and option.weight create?

The SystemVerilog LRM shows a very simple way of writing a cross. Let’s look at the code below.
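Here is a sketch of such a cross; the variable and label names are illustrative:

bit check_4_a;
bit check_4_b;

covergroup cg_cross @(sample_event);
    CHECK_A : coverpoint check_4_a { option.weight = 0; }
    CHECK_B : coverpoint check_4_b { option.weight = 0; }
    // crossing the variables rather than the labels CHECK_A/CHECK_B quietly
    // creates two implicit coverpoints (check_4_a, check_4_b) with weight 1
    A_X_B   : cross check_4_a, check_4_b;
endgroup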

The expectation here is that for a single simulation (expecting one of the cross bins to be hit), we will end up with 25% coverage, as we have set the weight of the individual coverpoints to zero. However, what actually happens is the following: two internal coverpoints for check_4_a and check_4_b are generated and used to compute the coverage score of the ‘crossed’ coverpoint. So you end up with a total of four coverpoints, two with option.weight set to 0 (i.e. CHECK_A and CHECK_B) and two implicit coverpoints with option.weight of 1 (i.e. check_4_a and check_4_b). Thus, for a single simulation, you will not get the desired 25% coverage.

Now with this report we see the following issues:

  • We see four coverpoints when the expectation was only two.
  • The individual coverpoints were expected to carry zero weight, as option.weight is set to ‘0’, yet the implicit coverpoints come with a weight of 1.
  • The overall coverage numbers are not the ones desired.

In order to avoid these undesired results, we need to take care of the following aspects (the corrected covergroup is sketched after the list):

  • Use type_option.weight = 0 instead of option.weight = 0.
  • Use the coverpoint labels instead of the coverpoint variable names to specify the cross.
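With both fixes applied, the earlier sketch becomes:

covergroup cg_cross_fixed @(sample_event);
    CHECK_A : coverpoint check_4_a { type_option.weight = 0; }
    CHECK_B : coverpoint check_4_b { type_option.weight = 0; }
    A_X_B   : cross CHECK_A, CHECK_B;   // cross the labels, not the variables
endgroup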

I hope my findings will be useful for you and that you will use these options/attributes appropriately to get the best value out of your coverage metrics (without losing sleep or debug cycles figuring out why they didn’t behave as you expected)!


SNUG-2012 Verification Round Up: VCS Technologies

Posted by paragg on March 15th, 2013

Continuing from my earlier blog posts about SNUG papers on the SystemVerilog language and verification methodologies, I will now go through some of the interesting papers that highlight core technologies in VCS which users can deploy to improve their productivity. We will walk through the various stages of the verification cycle, including simulation bring-up, RTL simulation, gate-level simulation, regression and simulation debug, each of which benefits from different features and technologies in VCS.

Beating the SoC challenges

One of the biggest challenges that today’s complex SoC architectures pose is the rigorous verification of SoC designs. Functional verification of the full system represented by these mammoth-scale (>1 billion transistors per chip) designs calls for the verification environment to employ advanced methodologies, powerful tools and techniques. Constrained-random stimulus generation, coverage-driven completion criteria, assertion-based checking, faster triage and debug turnaround, C/C++/SystemC co-simulation, and gate-level verification are just some of the methods and techniques which aid in tackling the challenge. Patrick Hamilton, Richard Yin, Bobjee Nibhanupudi and Amol Bhinge of Freescale, in their paper “SoC Simulation Performance: Bottlenecks and Remedies”, discuss the several simulation and debug bottlenecks experienced during the verification of a complex next-generation SoC; they describe how they identified these bottlenecks and overcame them using VCS diagnostic capabilities, profile reports, VCS arguments, testbench modifications, smarter utilities, fine-tuning of computing resources, etc.

A major challenge of the simulation environment is the sheer number of tests that are written and need to be managed. As more tests are added to the regressions, there is a quantifiable impact on several aspects of the project, including a dramatic and unsustainable increase in the overall regression time. As the regression time increases, the interval for collecting and analyzing results between successive regression runs shrinks. Overlaps arising from having multiple regressions in flight can cause failures to track design bugs across several snapshots, which can also result in the inability to ensure coverage tracking by design management tools. Given constantly shortening project timelines, this affects the time-to-market of core designs and their dependent SoC products. “Simulation-based Productivity Enhancements using VCS Save/Restore” by Scot Hildebrandt and Lloyd Cha of AMD, Inc. looks at using VCS’s Save/Restore feature to capture binary images of sections of simulation. These “sections” consist of aspects replicated in all tests, like the reset sequence, or allow skipping the specific phases of a failing test which are ‘clean’. They further provide statistics on the reduction in regression time and the memory footprint that the saved image typically enables. They also talk about how dynamic re-seeding of the test case with the stored images enabled them to leverage the full strength and capabilities of CRV methodologies.

The paper “SoC Compilation and Runtime Optimization using VCS” by Santhosh K.R. and Sreenath Mandagani of Mindspeed Technologies (India) talks about the Partition Compile flow and the associated methodology to improve the turnaround time (TAT) of SoC compilations. The flow leverages v2k configurations, parallel compilation and various performance optimization switches of VCS-MX. They further explain how an SoC can be partitioned into multiple functional blocks or clusters, and how each block can be selectively replaced with an empty shell if that particular functionality is not exercised in the desired tests. The paper also demonstrates how new tests can be added and run without recompiling the whole SoC: using the Partition Compile flow, only a subset of SoC or testbench blocks is recompiled based on the dependencies across clusters. They share the productivity gains in compile TAT, the overall runtime gains for the current SoC, and the savings in overall disk space, which in turn correlate with reduced license usage time and the desired cost savings.

By the way, there have now been further developments in the latest VCS release to help ensure isolation of switches between partitions in the SoC. This additional functionality helps reduce memory, decrease runtime, and cut initial scratch compile time even further while maintaining the advantages of partition compile.

Addressing the X-optimism challenges – X-prop Technology

Gate simulations are onerous, and many of the risks normally mitigated by gate simulations can now be addressed by RTL lint tools, static timing analysis tools and logic equivalence checking. However, one risk that has persisted until now is the potential for optimism in the X semantics of RTL simulation. The semantics of the Verilog language can create mismatches between RTL and gate-level simulation due to X-optimism. Also, the semantics of X’s in gate simulations are pessimistic, resulting in simulation failures that don’t represent real bugs.

“Improved X-Propagation using the xProp technology” by Rafi Spigelman of Intel Corporation presents the motivation for having the relevant semantics for X-propagation. The process of how such semantics were validated and deployed on a major CPU design at Intel is also described. He delves into its merits and limitations, and comments on the effort required to enable such semantics in the RTL regressions.

Robert Booth of Freescale Inc., in the paper “X-Optimism Elimination during RTL Verification”, explains how chips suffer from X-optimism issues that often conceal design bugs. The deployment of low power techniques such as power shutdown in today’s designs exacerbates these X-optimism issues. To address these problems, they show how to leverage the new simulation semantics in VCS that more accurately model non-deterministic values in logic simulation. The paper describes how X-optimism can be eliminated during RTL verification.

In the paper “X-Propagation: An Alternative to Gate Level Simulation”, Adrian Evans, Julius Yam and Craig Forward of Cisco explore X-propagation technology, which attempts to model X behavior more accurately at the RTL level. They review the sources of X’s in simulation and their handling in the Verilog language. They further describe their experience using this feature on design blocks from Cisco ASICs, including several simulation failures that did not represent RTL bugs. They conclude by suggesting how X-propagation can be used to reduce and potentially eliminate gate-level simulations.

In the paper “Improved x-propagation semantics: CPU server learning”, Peeyush Purohit, Ashish Alexander and Anees Sutarwala of Intel stress the need to model and simulate silicon-like behavior in RTL simulations. They bring out the fact that traditionally gate-level simulations have been used to fill that void, but at the cost of time and resources. They go on to explain the limitations of regular 4-value Verilog/SystemVerilog RTL simulation and also cover the specifications for enhanced simulator semantics to overcome those limitations. They explain how design issues found on their next-generation CPU server project used the enhanced semantics; potential flow implications and a sample methodology implementing the new semantics are also provided.

Power-on-Reset (POR) is a key functional sequence for all SoC designs, and any bug not detected in this logic can lead to dead silicon. Complexities in reset logic pose increasing challenges for verification engineers trying to catch such design issues during RTL/GL simulations. POR sequence simulations are often accompanied by ‘X’ propagation due to non-resettable flops and uninitialized logic. Generally, uninitialized and non-resettable logic is initialized to 0s, 1s or some random values using forces or deposits to bypass unwanted X propagation. Ideally, one would like stimulus that tries all possible combinations of initial values for such logic, but this is practically impossible given short design cycles and limited resources. This practical limitation can leave room for critical design bugs that remain undetected during the design verification cycle. Deepak Jindal of Freescale, India, in the paper “Gaps and Challenges with Reset Logic Verification”, discusses these reset logic simulation challenges in detail and shares the experience of evaluating the new semantics in VCS which can help catch most POR bugs/issues at the RTL stage itself.

SNUG allows users to discuss their current challenges and the emerging solutions they are using to address them. You can find all SNUG papers online via SolvNet (a login is required, of course!).


SNUG-2012 Verification Round Up – Language & Methodologies – II

Posted by paragg on March 3rd, 2013

In my previous post, we discussed papers that leveraged the SystemVerilog language and its constructs, as well as those that covered broad methodology topics. In this post, I will summarize papers that are focused on the industry-standard methodologies: the Universal Verification Methodology (UVM) and the Verification Methodology Manual (VMM).

Papers on Universal Verification Methodology (UVM)

Some users prefer not to use the base classes of a methodology directly. Adding a custom layer enables them to add capabilities specific to their requirements. This layer consists of a set of generic classes that extend the classes of the original methodology, and provides a convenient location to develop and share the processes that are relevant to an organization for reuse across different projects. Pierre Girodias of IDT (Canada), in the paper “Developing a re-use base layer with UVM”, focuses on the recommendations that adopters of these methodologies should follow while developing the desired ‘base’ layer. Typical problems and possible solutions encountered while developing this layer are also identified, including dealing with the lack of multiple inheritance and foraging through class templates.

UVM provides many features but does not define a reset methodology, forcing users to develop their own methodology within the UVM framework to test the ‘reset’ of their DUT. Timothy Kramer of The MITRE Corporation, in the paper “Implementing Reset Testing”, outlines several different reset strategies and enumerates the merits and disadvantages of each. As is the case for all engineering challenges, there are several competing factors to consider, and in this paper the different strategies are compared on flexibility, scalability, code complexity, efficiency, and how easily they can be integrated into existing testbenches. The paper concludes by presenting the reset strategy which proved to be the most optimal for their application.

The ‘factory’ concept in advanced OOP-based verification methodologies like UVM is something that has baffled many verification engineers. But is it all that complicated? Not necessarily, and this is what is explained by Clifford E. Cummings of Sunburst Design, Inc. in his paper “The OVM/UVM Factory & Factory Overrides – How They Work – Why They Are Important”. The paper explains the fundamental details of the OVM/UVM factory, how it works, and how overrides facilitate simple modification of testbench component and transaction structures on a test-by-test basis. It not only explains why the factory should be used but also demonstrates how users can create configurable UVM/OVM based environments without it.

A Register Abstraction Layer has always been an integral component of most of the HVL methodologies defined so far. Doug Smith of Doulos, in his paper “Easier RAL: All You Need to Know to Use the UVM Register Abstraction Layer”, presents a simple introduction to RAL. He distills the adoption of UVM RAL into a few easy and salient steps which are adequate for most cases. The paper describes the industry-standard automation tools for the generation of the register model. Additionally, the integration of the generated model, along with the front-door and back-door access mechanisms, is explained in a lucid manner.

The combination of SystemVerilog language features with the DPI and VPI language extensions can enable the testbench to generically react to value changes on arbitrary DUT signals (which might or might not be part of a standard interface protocol). Jonathan Bromley of Verilab, in “I Spy with My VPI: Monitoring signals by name, for the UVM register package and more”, presents a package which supports both value probing and value-change detection for signals identified at runtime by their hierarchical name, represented as a string. This provides a useful enhancement to the UVM register package, allowing the same string to be used for backdoor register access.

Proper testing of most digital designs requires that error conditions be stimulated to verify that the design either handles them in the expected fashion or ignores them, but in all cases recovers gracefully. How to do this efficiently and effectively is presented in “UVM Sequence Item Based Error Injection” by Jeffrey Montesano and Mark Litterick of Verilab. A self-checking constrained-random environment can be put to the test when injecting errors, because unlike the device-under-test (DUT), which can potentially ignore an error, the testbench is required to recognize it, potentially classify it, and determine an appropriate response from the design. This paper presents an error injection strategy using UVM that meets all of these requirements. The strategy encompasses both active and reactive components, with code examples provided to illustrate the implementation details.

The Universal Verification Methodology is a huge win for the hardware verification community, but does it have anything to offer Electronic System Level design? David C. Black of Doulos Inc. explores UVM on the ESL front in the paper “Does UVM make sense for ESL?”. The paper considers UVM and SystemVerilog enhancements that could make the methodology even more enticing.

Papers on Verification Methodology Manual (VMM)

Joseph Manzella of LSI Corp, in “Snooping to Enhance Verification in a VMM Environment”, discusses situations in which a verification environment may have to peek at internal RTL states and signals to enhance results, and provides guidelines on what is acceptable practice. The paper explains how the combination of vmm_log (the logger class of VMM) and +vmm_opts (the command-line utility for changing configurable values) helps in creating a configurable message wrapper for internal grey-box testing. The techniques show how different assertion failures can be re-routed through the VMM messaging interface. An effective and reusable snooping technique for robust checking is also covered.

At SNUG Silicon Valley, in “Mechanism to allow easy writing of test cases in a SystemVerilog Verification environment, then auto-expand coverage of the test case”, Ninad Huilgol of VerifySys addresses designers’ apprehension about using a class-based environment through a tool that leverages the VMM base classes. It automatically expands the scope of the original test case to cover a larger verification space around it, based on a user-friendly API that looks more like Verilog, hiding the complexity underneath.

Andrew Elms of Huawei, in “Verification of a Custom RISC Processor”, presents the successful application of VMM to the verification of a custom RISC processor. The challenges in verifying a programmable design, and the solutions to address them, are presented. Three topics are explored in detail: use of the verification planner, constrained-random generation of instructions, and coverage closure. The importance of the verification plan as the foundation for the verification effort is explored, and enhancements to the VMM generators are also examined. By default, VMM data generation is independent of the current design state, such as register values and outstanding requests; RAL and generator callbacks are used to address this. Finally, experiences with coverage closure are presented.

Stay tuned for more on varied verification topics in the upcoming blogs! Enjoy reading!


SNUG-2012 Verification Round Up – Language & Methodologies – I

Posted by paragg on February 25th, 2013

As in the previous couple of years, last year’s SNUG (Synopsys Users Group) showcased an amazing number of useful user papers leveraging the capabilities of the SystemVerilog language and the verification methodologies centered on it.

I am always excited when I see this plethora of useful papers, and I try to set aside some time to go through all these user experiences. Now, as we wait for SNUG Silicon Valley to kick-start this year’s SNUG events, I want to look back at some of the very interesting and useful papers from the different SNUGs of 2012. Let me start by talking about a few papers in the area of the SystemVerilog language and SV methodologies.

Papers leveraging the SystemVerilog language and constructs

Hillel Miller of Freescale, in the paper “Using covergroups and covergroup filters for effective functional coverage”, uncovers the mechanisms available for carving out coverage goals. The P1800-2012 SystemVerilog LRM provides new constructs just for doing this; the one focused on here is the “with” construct, which provides the ability to carve a sub-range of goals out of a multidimensional range of possibilities. This is very relevant in a “working” or under-development setup that requires frequent reprioritization to meet tape-out goals.

The paper “Taming Testbench Timing: Time’s Up for Clocking Block Confusions” by Jonathan Bromley and Kevin Johnston of Verilab reviews the key features and purpose of clocking blocks and then examines why they continue to be a source of confusion and unexpected behavior for many verification engineers. Drawing from the authors’ project and mentoring experience, it highlights typical usage errors and how to avoid them. They clarify the internal behavior of clocking blocks to help engineers understand the reasons behind common problems, and show techniques that allow clocking blocks to be used productively and with confidence. Finally, they consider some areas that may cause portability problems across simulators and indicate how to avoid them.

Inference of latches and flops based on coding styles has always been a topic that creates multiple viewpoints. There are other such scenarios of synthesis and simulation mismatch that one typically comes across. To address such ambiguity, language developers have provided different constructs for explicit resolution of the intent. To help us gain a deeper understanding of the topic, Don Mills of Microchip Technology Inc. presented the related concepts in the paper “Yet Another Latch and Gotchas Paper” @ SNUG Silicon Valley. This paper discusses, and provides solutions to, issues that designers using SystemVerilog come across, such as case expression issues for casez and casex, latches generated when using unique case or priority case, SRFF coding style problems with synthesis, and the SystemVerilog-2009 definition of logic.

Gabi Glasser from Intel presented the paper “Utilizing SystemVerilog for Mixed-Signal Validation” @ SNUG Israel, where he proposed a mechanism for simplifying analysis and increasing coverage for mixed-signal simulations. The method proposed is to take advantage of SystemVerilog’s ability to define a hash (associative) array of unlimited size. During the simulation, vectors are created for the required analog signals, allowing them to be analyzed within the testbench during or at the end of the simulation, without saving these signals into a file. The flow change makes it possible to launch a large-scale mixed-signal regression while allowing easier analysis of coverage data.

A design pattern is a general, reusable solution to a commonly recurring problem within a given context. The benefit of using design patterns is clear: they give designers a common language when approaching a problem, and a set of widely used tools to solve issues as they come up. The paper “Design Patterns In Verification” by Guy Levenbroun of Qualcomm explores several common problems which might arise during the development of a testbench, and how design patterns can solve them. The patterns covered in the paper fall mainly into the following areas: creational (e.g. factory), structural (e.g. composite) and behavioral (e.g. template).

Arik Shmayovitsh, Avishay Tvila and Guy Lidor of Sigma Designs, in their paper “Truly reusable Testbench-to-RTL connection for System Verilog”, present a novel approach for connecting the DUT and testbench with consistent semantics while reusing the testbench. This is achieved by abstracting the connection layer of each testbench using the SystemVerilog ‘bind’ construct. It ensures that the only thing required to reuse the testbench for a new DUT is to identify the instance of the corresponding DUT.

In the paper “A Mechanism for Hierarchical Reuse of Interface Bindings”, Thomas Zboril of Qualcomm (Canada) explores another method to instantiate SV interfaces, connect them to the DUT and wrap the virtual interfaces for use in the test environment. This method allows the reuse of all the code when the original block-level DUT becomes a lower-level instance in a larger subsystem or chip. It involves three key mechanisms: hierarchical virtual interface wrappers, a novel approach to hierarchical instantiation of SV interfaces, and another novel approach to automatic management of hierarchical references via SV macros.

Thinh Ngo and Sakar Jain of Freescale Semiconductor, in their paper “100% Functional Coverage-Driven Verification Flow”, propose a coverage-driven verification flow that can efficiently achieve 100% functional coverage during simulation. The flow targets varied functionality, focuses at the transaction level, measures coverage during simulation, and fails a test if 100% of the expected coverage is not achieved. This flow maps stimulus coverage to functional coverage, with every stimulus transaction being associated with an event in the coverage model and vice versa. This association is derived from the DUT specification and/or the DUT model. Expected events generated along with stimulus transactions are compared against actual events triggered in the DUT, and the comparison results are used to pass or fail the test. 100% functional coverage is achieved via 100% stimulus coverage: the flow enables every test, with its targeted functionality, to meet 100% functional coverage provided that it passes.

Papers on Verification Methodology

In the paper “Top-down vs. bottom-up verification methodology for complex ASICs”, Paul Lungu and Zygmunt Pasturczyk of Ciena (Canada) cover the simulation methodology used for two large ASICs requiring block-level simulations. A top-down verification methodology was used for one of the ASICs, while a larger version needed an expanded bottom-up approach using extended simulation capabilities. Techniques and verification methods such as chaining of sub-environments from block to top level are highlighted, along with challenges and solutions found by the verification team. The paper presents a useful technique of passing a RAL (Register Abstraction Layer) mirror to the C models which are used as scoreboards in the environment, as well as a method of generating stable clocks inside the “program” block.

In the paper “Integration of Legacy Verilog BFMs and VMM VIP in UVM using Abstract Classes”, Santosh Sarma of Wipro Technologies (India) presents an alternative approach where legacy BFMs written in Verilog, and not implemented using classes, are hooked up to higher-level class-based components to create a standard UVM VIP structure. The paper also discusses an approach where existing VMM transactors that are tied to such legacy BFMs can be reused inside the UVM VIP with the help of the VCS-provided UVM-VMM interoperability library. The implementation makes use of abstract classes to define functions that invoke the BFM APIs. The abstract class is then concretized using derived classes which give the actual implementation of the functions. The concrete class is bound to the Verilog instance of the BFM using the SystemVerilog “bind” concept, and its handle is then used by the UVM VIP and the VMM transactor to interact with the underlying Verilog BFM. Using this approach, the UVM VIP can be made truly reusable through run-time binding of the Verilog BFM instance to the VIP instead of hardcoded macro names or procedural calls.

“A Unified Self-Check Infrastructure - A Standardized Approach for Creating the Self-Check Block of Any Verification Environment” by John Sotiropoulos, Matt Muresa and Massi Corba of Draper Laboratories (Cambridge, MA, USA) presents a structured approach for developing a centralized “self-check” block for a verification environment. The approach is flexible enough to work with various testbench architectures and is portable across different verification methodologies. Here, all of the design’s responses are encapsulated under a common base class, providing a single “self-check” interface for any checking that needs to be performed. This abstraction, combined with a single centralized scoreboard and a standardized set of components, provides the consistency needed for faster development and easier code maintenance. It expands the concept of self-checking to incorporate white-box monitors (tracking internal DUT state changes, etc.) and temporal models (reacting to wire changes), along with traditional methodologies for enabling self-checking.

For VMM users looking at migrating to UVM, another paper, “Transitioning to UVM from VMM” by Courtney Schmitt of Analog Devices, Inc., discusses the process of transitioning to a UVM-based environment from VMM. Differences and parallels between the two verification methodologies are presented to show that updating to UVM is mostly a matter of getting acquainted with a new set of base classes. Topics include UVM phases, agents, TLM ports, configuration, sequences, and register models. Best practices and reference resources are highlighted to make the transition from VMM to UVM as painless as possible.


PST Verification – Best Practices

Posted by namitg on February 4th, 2013

When I look around, the most common electronic devices I see are smartphones, tablets, laptops and televisions. One common user need driving competition among these devices is to conserve battery life and, at the same time, save/restore the application. From the semiconductor perspective, this requirement translates to: designs should use advanced low power techniques to draw minimum power and, at the same time, save/restore critical design states.

For present and future chip designs, it is almost a necessity that the power intent is clear from the very beginning. This is the main reason that power intent description has become an integral part of RTL development. Are you concerned about power consumption during RTL development?

Using UPF 2.0, abstract power supply networks, also called “supply sets”, are defined. There is no ambiguity in defining the supply network, as it is part of planning the power architecture. But the most important step, which needs careful planning, is to define the valid combinations of power states. Let me go into more depth to explain this: once supplies are defined, explicitly using “create_supply_port” or implicitly using “create_supply_set”, the valid supply states are defined next using “add_port_state” and “add_power_state”. This means defining when a state will be “off”, “high_voltage”, “low_voltage”, etc. Many permutations and combinations of these supply states are possible, but not all are expected or needed for design functionality. So the valid combinations (or legal states) of these supplies are defined in a “power state table” (PST). How complex are the PSTs in your current designs?
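For illustration, here is a minimal UPF sketch along these lines; the port names, voltage values and state names are made up for this example:

# define the supplies
create_supply_port VDD1
create_supply_port VDD2

# enumerate the states each supply port can take
add_port_state VDD1 -state {HV 1.08} -state {OFF off}
add_port_state VDD2 -state {HV 1.08} -state {LV 0.81} -state {OFF off}

# only the legal combinations are listed in the power state table
create_pst my_pst -supplies {VDD1 VDD2}
add_pst_state State1 -pst my_pst -state {HV  HV}
add_pst_state State2 -pst my_pst -state {HV  LV}
add_pst_state State3 -pst my_pst -state {OFF OFF}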

The PST holds the key to which power domain crossovers need protection policies like isolation. Implementation tools use the PST as a reference to insert protection devices on power domain crossovers; similarly, static verification tools use the PST as a reference to validate the correctness of the power intent. But what if the PST has a bug, e.g. the user provided an invalid state in the PST while defining the UPF? In such cases, relying on static and implementation tools is not enough, and this is one of the many reasons why dynamic verification is needed. Based on the design (power) specification, the verification team will create a test plan with test vectors that put the design into all the specified valid states; since power-aware simulation corrupts crossovers during shutdown, a failing simulation will uncover the PST bug. Did you ever catch a PST bug in simulation? What if that bug was not caught and slipped through to implementation?

Consider the following example with two power domains, PD1 and PD2, where VDD1 is the primary supply of PD1 and VDD2 is the primary supply of PD2. As per the specification, the valid states are:

[Table: valid power-state combinations of VDD1 and VDD2]


Suppose that while writing the UPF the user forgets to define the power state “State3”. In such cases a static verifier such as MVRC may not be able to spot the problem, as the PST itself is the reference. But a verification engineer developing tests will create a test to cover “State3”, and VCS-NLP will spot the problem in simulation.

In the above example, consider instead a situation where “State3” is not a valid state and is not defined. In that situation, if an isolation policy defined for domain PD1 applies isolation on all outputs, a static verifier such as MVRC will immediately pinpoint that paths from VDD1 to VDD2 do not need isolation, as the two supplies get their ON and OFF states simultaneously.

[Figure: low power verification flow]

The above example illustrates a low power verification flow which guarantees smooth implementation of designs using advanced low power techniques.

As SoC sizes grow and IP usage increases, there is a strong need for a hierarchical methodology, and this applies to low power as well. In a hierarchical low power methodology, block-level power intent is captured in UPF, and once the blocks are verified, all block UPF files are loaded at the top level using “load_upf -scope”. This creates some interesting situations: there can be many supplies and power states at the block level, but not as many top-level supplies, which means many block supplies, along with their power states, are connected to the same supply at the top level. This is the basic need for PST merging, a concept where different power state tables need to be merged in a smart way to create the top-level PST.

Consider the following example:

[Figure: top-level design containing blocks block_b and block_c]

The PSTs defined in the block-level UPF of block_b and block_c are the following:

[Tables: block-level PSTs of block_b and block_c]

The PSTs are merged in a way that matches the top-level connections; for example, VDD_B2 and VDD_C1 are connected to the same supply at the top level, which is VDD2. Below is the merged PST for this example:

[Table: merged top-level PST]

Once dynamic verification/simulation is done, it is very important to review the low power specific coverage to ensure that all valid states mentioned in the PST are properly verified. How does your current low power verification flow ensure that a proper coverage mechanism is in place to cover the power intent? Does it cover PST states/transitions, power switch states/transitions, and supply net/set power state/simstate transitions?

Reviewing coverage results is the same as for non-LP coverage; from a PST verification perspective, it is important to review the illegal states reached during simulation. How do you ensure all legal state combinations are covered during simulation?

To summarize the best practices for implementing low power techniques in a design: the power network should be defined in UPF, along with careful addition of power states and the PST. From this point onwards, either manually define the protection policies and quickly verify them using a static checker like MVRC, or iterate with MVRC to complete the protection policies. The verification team creates a test plan based on the design specification, which includes tests for bringing the design into all legal PST states. After performing power-aware simulation using a tool like VCS-NLP, the usual simulation debug is done for failing tests, and it is important to ensure that the LP-specific coverage is met. At this point, the design is ready for implementation using a tool like Design Compiler. Are you using these best practices in your low power verification flow? Share your experience or best practices.


Reusable Sequences in UVM

Posted by haribali on November 12th, 2012

In this blog, I will share the necessary steps one has to take while writing a sequence to make sure it is reusable. Having developed sequences and tests in UVM while using Verification IPs, I think writing sequences is the most challenging part of verifying any IP. Careful planning is required to write sequences; without it, we end up writing one sequence for every scenario from scratch, which makes the sequences hard to maintain and debug.

As we know, sequences are made up of several data items, which together form an interesting scenario. Sequences can be hierarchical, thereby creating more complex scenarios. In its simplest form, a sequence should be a derivative of the uvm_sequence base class, specifying the request and response item types as parameters and implementing the body() task with the specific scenario you want to execute.

class usb_simple_sequence extends uvm_sequence #(usb_transfer);

    rand int unsigned sequence_length;
    constraint reasonable_seq_len { sequence_length < 10; }

    //Constructor
    function new(string name="usb_simple_sequence");
        super.new(name);
    endfunction

    //Register with the factory
    `uvm_object_utils(usb_simple_sequence)

    //the body() task holds the actual logic of the sequence
    virtual task body();
        repeat(sequence_length)
        `uvm_do_with(req, {
            //Setting the device_id to 2
            req.device_id == 8'd2;
            //Setting the transfer type to BULK ('type' is a SystemVerilog
            //keyword, so the field is named xfer_type)
            req.xfer_type == usb_transfer::BULK_TRANSFER;
        })
    endtask : body
endclass

In the above sequence, we are trying to send a USB bulk transfer to a device whose ID is 2. Test writers can invoke this by just assigning the sequence to the default sequence of the sequencer in the top-level test.

class usb_simple_bulk_test extends uvm_test;

    ...
    virtual function void build_phase(uvm_phase phase);
        ...
        uvm_config_db#(uvm_object_wrapper)::set(this,
            "sequencer_obj.main_phase", "default_sequence",
            usb_simple_sequence::type_id::get());
        ...
    endfunction : build_phase
endclass

So far, things look simple and straightforward. To make sure the sequence is reusable for more complex scenarios, we have to follow a few more guidelines.

  • First off, it is important to manage the end of test by raising and dropping objections in the pre_start and post_start tasks of the sequence class. This way, we raise and drop objections only in the topmost sequence instead of doing it for all the sub-sequences.

virtual task pre_start();
    if (starting_phase != null)
        starting_phase.raise_objection(this);
endtask : pre_start

virtual task post_start();
    if (starting_phase != null)
        starting_phase.drop_objection(this);
endtask : post_start


Note that starting_phase is set automatically only for a sequence which is started as the default sequence for a particular phase. If you start a sequence explicitly by calling its start() method, then it is the user's responsibility to set starting_phase.

class usb_simple_bulk_test extends uvm_test;

    usb_simple_sequence seq;
    ...
    virtual task main_phase(uvm_phase phase);
        ...
        seq = usb_simple_sequence::type_id::create("seq");
        //The user needs to set starting_phase, since the sequence's
        //start method is called explicitly to invoke the sequence
        seq.starting_phase = phase;
        seq.start(sequencer_obj);   //sequencer_obj: handle to the target sequencer
        ...
    endtask : main_phase

endclass

  • Use UVM configurations to get values from the top-level test. In the sequence above, no controllability is given to test writers, as the sequence does not use configurations to take values from the top-level test or sequence (which might use this sequence to build a more complex scenario). Below, the sequence is modified to give more control to the top-level test or sequence that uses it.

class usb_simple_sequence extends uvm_sequence #(usb_transfer);

    rand int unsigned sequence_length;
    constraint reasonable_seq_len { sequence_length < 10; }
    ...
    virtual task body();
        usb_transfer::type_enum local_type;
        bit[7:0] local_device_id;
        //Get the values for the variables in case the top-level
        //test/sequence sets them
        void'(uvm_config_db#(int unsigned)::get(null, get_full_name(),
            "sequence_length", sequence_length));
        void'(uvm_config_db#(usb_transfer::type_enum)::get(null,
            get_full_name(), "local_type", local_type));
        void'(uvm_config_db#(bit[7:0])::get(null, get_full_name(),
            "local_device_id", local_device_id));

        repeat(sequence_length)
        `uvm_do_with(req, {
            req.device_id == local_device_id;
            req.xfer_type == local_type;
        })
    endtask : body

endclass

With the above modifications, we have given the top-level test or sequence control over device_id, sequence_length and the transfer type. A few things to note here: the parameter type and string (third argument) used in uvm_config_db#()::set must match those used in uvm_config_db#()::get. Make sure to ‘set’ and ‘get’ with the exact datatype; otherwise the value will not get set properly, and debugging will become a nightmare.
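For completeness, here is a sketch of the matching ‘set’ side in the top-level test; the sequencer path follows the earlier example and all names are illustrative:

class usb_directed_test extends uvm_test;

    `uvm_component_utils(usb_directed_test)

    function new(string name = "usb_directed_test", uvm_component parent = null);
        super.new(name, parent);
    endfunction

    virtual function void build_phase(uvm_phase phase);
        super.build_phase(phase);
        //The parameter types and the key strings must exactly match the
        //corresponding get() calls inside the sequence
        uvm_config_db#(int unsigned)::set(this, "sequencer_obj.*",
            "sequence_length", 5);
        uvm_config_db#(usb_transfer::type_enum)::set(this, "sequencer_obj.*",
            "local_type", usb_transfer::BULK_TRANSFER);
        uvm_config_db#(bit[7:0])::set(this, "sequencer_obj.*",
            "local_device_id", 8'd2);
    endfunction : build_phase
endclass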

One problem with the above sequence: if the usb_transfer class has any constraints on device_id or the transfer type, they restrict the top-level test or sequence, which must stay within those constraints.

For example, if the usb_transfer class constrains device_id to be below 10, then the top-level test or sequence must keep it within this range. If the top-level test or sequence sets it to a value like 15 (which is over 10), you will see a constraint failure at runtime.

Sometimes the top-level test or sequence may need to take full control and may not want to enable the constraints defined inside the lower-level sequences or data items. One example where this is required is negative testing: the host wants to make sure devices do not respond to a transfer with a device_id greater than 10, and so wants to send a transfer with device_id 15. To give full control to the top-level test or sequence, we can modify the body task as shown below:

virtual task body();

    usb_transfer::type_enum local_type;
    bit [7:0] local_device_id;
    int status_seq_len = 0;
    int status_type = 0;
    int status_device_id = 0;

    status_seq_len = uvm_config_db#(int unsigned)::get(null,
        get_full_name(), "sequence_length", sequence_length);
    status_type = uvm_config_db#(usb_transfer::type_enum)::get(null,
        get_full_name(), "local_type", local_type);
    status_device_id = uvm_config_db#(bit [7:0])::get(null,
        get_full_name(), "local_device_id", local_device_id);

    // Create and randomize the request; if the status of a
    // uvm_config_db::get is true, override the corresponding field with
    // the value set by the top-level test or sequence. Assigning the
    // field directly (instead of constraining it) bypasses the class
    // constraints, which is the intent for negative testing.
    `uvm_create(req)
    if (!req.randomize())
        `uvm_error(get_type_name(), "req randomization failed")
    if (status_type)
        // Use the value set by the top-level test or sequence
        // instead of the random value.
        req.transfer_type = local_type;
    if (status_device_id)
        req.device_id = local_device_id;

    repeat (sequence_length)
        `uvm_send(req)

endtask : body

It is always good to be cautious while using `uvm_do_with, as the inline constraints are added on top of any existing constraints in the lower-level sequence or sequence item.
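For example, under the device_id-below-10 constraint assumed earlier, the following inline constraint cannot be satisfied and randomization fails at runtime:

// The inline constraint is solved together with the class constraints,
// not instead of them; with device_id constrained below 10 in
// usb_transfer, this randomization fails.
`uvm_do_with(req, { req.device_id == 15; })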

Also note that if you have many variables to 'set' and 'get', I recommend creating an object, setting the values in that object, and then passing the whole object through uvm_config_db from the top-level test/sequence (instead of setting each variable explicitly). This improves runtime performance: instead of searching the database for each variable with uvm_config_db::get, you get all the variables in one shot through the object.

virtual task body();

    usb_simple_sequence local_obj;
    int status = 0;
    status = uvm_config_db#(usb_simple_sequence)::get(null,
        get_full_name(), "local_obj", local_obj);

    // If the status of uvm_config_db::get is true, use
    // the values set in the object we received.
    if (status)
    begin
        `uvm_create(req)
        this.sequence_length = local_obj.sequence_length;
        // Copy the req object inside the object received
        // through uvm_config_db into the local req.
        req.copy(local_obj.req);
    end
    else
    begin
        // If we did not get the object from the top-level test/sequence,
        // create one and randomize it.
        `uvm_create(req)
        if (!req.randomize())
            `uvm_error(get_type_name(), "req randomization failed")
    end
    repeat (sequence_length)
        `uvm_send(req)

endtask : body

  • Always try to reuse the simple sequences by creating a top-level sequence for complex scenarios. For example, the sequence below sends a bulk transfer followed by an interrupt transfer to two different devices, reusing our usb_simple_sequence:

class usb_complex_sequence extends uvm_sequence #(usb_transfer);

    // Object of the simple sequence used for sending bulk transfers
    usb_simple_sequence simp_seq_bulk;
    // Object of the simple sequence used for sending interrupt transfers
    usb_simple_sequence simp_seq_int;
    …
    virtual task body();
        // Variable for getting the device_id for the bulk transfer
        bit [7:0] local_device_id_bulk;
        // Variable for getting the device_id for the interrupt transfer
        bit [7:0] local_device_id_int;
        // Variable for getting the sequence length for bulk
        int unsigned local_seq_len_bulk;
        // Variable for getting the sequence length for interrupt
        int unsigned local_seq_len_int;

        // Get the values for the variables in case the top-level
        // test/sequence sets them.

        uvm_config_db#(int unsigned)::get(null, get_full_name(),
            "local_seq_len_bulk", local_seq_len_bulk);

        uvm_config_db#(int unsigned)::get(null, get_full_name(),
            "local_seq_len_int", local_seq_len_int);

        uvm_config_db#(bit [7:0])::get(null, get_full_name(),
            "local_device_id_bulk", local_device_id_bulk);

        uvm_config_db#(bit [7:0])::get(null, get_full_name(),
            "local_device_id_int", local_device_id_int);

        // Set the values obtained from the uvm_config_db::get calls
        // above for the lower-level sequences/sequence items.
        // Setting the values for the bulk sequence
        uvm_config_db#(int unsigned)::set(null, {get_full_name(), ".",
            "simp_seq_bulk"}, "sequence_length", local_seq_len_bulk);
        uvm_config_db#(usb_transfer::type_enum)::set(null, {get_full_name(),
            ".", "simp_seq_bulk"}, "local_type", usb_transfer::BULK_TRANSFER);
        uvm_config_db#(bit [7:0])::set(null, {get_full_name(), ".",
            "simp_seq_bulk"}, "local_device_id", local_device_id_bulk);

        // Setting the values for the interrupt sequence
        uvm_config_db#(int unsigned)::set(null, {get_full_name(), ".",
            "simp_seq_int"}, "sequence_length", local_seq_len_int);
        uvm_config_db#(usb_transfer::type_enum)::set(null, {get_full_name(),
            ".", "simp_seq_int"}, "local_type", usb_transfer::INT_TRANSFER);
        uvm_config_db#(bit [7:0])::set(null, {get_full_name(), ".",
            "simp_seq_int"}, "local_device_id", local_device_id_int);

        // `uvm_do creates, randomizes and executes each sub-sequence, and
        // blocks until it completes, so the interrupt transfer follows
        // the bulk transfer.
        `uvm_do(simp_seq_bulk)
        `uvm_do(simp_seq_int)
    endtask : body

endclass

Note that in the above sequence we get the values from the top-level test or sequence using uvm_config_db::get, and then pass them down to the lower-level sequences using uvm_config_db::set. This is important: if we instead passed the values through `uvm_do_with inside the constraint block, they would be applied as additional constraints rather than simply setting the values.

I came across these guidelines and learned them, at times the hard way, so I am sharing them here. I hope they will come in handy when you use sequences that come pre-packed with VIPs to build more complex scenarios, and also when you write your own sequences from scratch. If you come across more such guidelines or rules of thumb for writing reusable, maintainable and debuggable sequences, please share them with me.


Posted in UVM, Uncategorized | Comments Off


Webinar on Deploying UVM Effectively

Posted by Shankar Hemmady on October 22nd, 2012

Over the past two years, several design and verification teams have begun using SystemVerilog testbenches with UVM. They are moving to SystemVerilog because coverage, assertions and object-oriented programming concepts like inheritance and polymorphism allow them to reuse code much more efficiently. This helps them not only find the bugs they expect, but also corner-case issues. Building testing frameworks that randomly exercise the stimulus state space of a design under test and analyzing completion through coverage metrics seems to be the most effective way to validate a large chip. UVM offers a standard method for abstraction, automation, encapsulation, and coding practice, allowing teams to quickly build effective, reusable testbenches that can be leveraged throughout their organizations.

However, for all of its value, UVM deployment has unique challenges, particularly in the realm of debugging. Some of these challenges are:

  • Phase management: objections and synchronization
  • Thread debugging
  • Tracing issues through automatically generated code, macro expansion, and parameterized classes
  • Default error messages that are verbose but often imprecise
  • Extended classes with methods that have implicit (and maybe unexpected) behavior
  • Object IDs that are distinct from object handles
  • Visualization of dynamic types and ephemeral classes

Debugging even simple issues can be an arduous task without UVM-aware tools. Here is a public webinar that reviews how to utilize VCS and DVE to most effectively deploy, debug and optimize UVM testbenches.

Link at http://www.synopsys.com/tools/verification/functionalverification/pages/Webinars.aspx


Posted in Coverage, Metrics, Debug, UVM | Comments Off


Using UVM Register Model

Posted by Vidyashankar Ramaswamy on October 22nd, 2012

When I was preparing for a customer presentation on UVM RAL, I could not understand what the UVM base class library says about updating the desired-value and mirrored-value registers. I also felt that the terms used do not exactly reflect the intent. After spending some time, I came up with a table which helps explain the behavior when the register model APIs are called.
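As a quick illustration of the APIs in question, here is a minimal sketch ('model.ctrl_reg' is a hypothetical register in a generated model, used from a task that can consume simulation time):

uvm_status_e   status;
uvm_reg_data_t value;

model.ctrl_reg.set('hFF);                     // change the desired value only
model.ctrl_reg.update(status);                // write the DUT if desired != mirrored
model.ctrl_reg.mirror(status, UVM_CHECK);     // read the DUT, update (and check) the mirror
value = model.ctrl_reg.get_mirrored_value();  // the model's view of the DUT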

For the complete post, please visit http://www.vip-central.org/blog/


Posted in Uncategorized | Comments Off


VCS Built-in TLI connectivity for UVM to SystemC TLM 2.0

Posted by vikasg on September 20th, 2012

Vikas Grover | Sr. Manager, Central Verification | AMD-India

One of the challenges faced in SoC verification is validating designs in mixed languages and at mixed abstraction levels. SystemC is a widely used language for defining system models at higher levels of abstraction. It is an IEEE-standard language for system-level modeling, rich with constructs for describing models at various abstraction levels: untimed, timed, transaction level, cycle accurate, and RTL. A transaction-level model simulates much faster than an RTL model, and OSCI defined the TLM 2.0 interface standard for SystemC, which enables SystemC model interoperability and reuse at the transaction level.

On the other side, SystemVerilog is a unified language for design and verification. It is effective for building advanced testbenches for both RTL and transaction-level models, since it has features like constrained randomization for stimulus generation, functional coverage, assertions, and object-oriented constructs (classes, inheritance, etc.). The early availability of standard methodologies (providing frameworks and testbench coding guidelines for reuse) like VMM, OVM and UVM enabled wide adoption of SystemVerilog in the industry. The UVM 1.0 base class library, released in February 2011, includes the OSCI TLM 2.0 socket interface to enable interoperability between UVM and SystemC. Essentially it allows a UVM testbench to include SystemC TLM 2.0 reference models. The UVM testbench can pass transactions to (or receive them from) SystemC models. The transactions passed between SystemVerilog and SystemC can be TLM 2.0 generic payloads or uvm_sequence_items. The implementation of UVM-to-SystemC TLM 2.0 communication is vendor dependent.

Starting with the 2011.03 release, VCS provides a new TLI adaptor which enables UVM TLM 2.0 sockets to communicate with SystemC TLM 2.0 based environments, passing transactions across the language domains. You can also check out a couple of earlier posts from John Aynsley (VMM-to-SystemC Communication Using the TLI, and Blocking and Non-blocking Communication Using the TLI) on SV-SystemC communication using the TLI. In this blog, I am going to describe the VCS TLI connectivity mechanism between UVM and SystemC. There are other advanced TLI features in VCS (such as direct data access, invoking tasks/functions across the SV and SC languages, message unification across UVM-SC, transaction debug techniques, and extending the TLI adaptor for user-defined interfaces other than VMM/UVM/TLM 2.0) which can be written about later.

With the support for TLM 2.0 interfaces in both UVM and VMM, the importance of OSCI TLM 2.0 across both SystemC and SystemVerilog is now apparent. UVM provides the following TLM 2.0 socket interfaces (for both blocking and non-blocking communication):

  • uvm_tlm_b_initiator_socket
  • uvm_tlm_b_target_socket
  • uvm_tlm_nb_initiator_socket
  • uvm_tlm_nb_target_socket
  • uvm_analysis_port
  • uvm_subscriber

SystemC TLM 2.0 provides the following interfaces:

  • tlm_initiator_socket
  • tlm_target_socket
  • tlm_analysis_port

The built-in TLI adaptor in VCS is a general-purpose solution that simplifies transaction passing between UVM and SystemC, as shown below. The transactions can be TLM 2.0 generic payloads or uvm_sequence_item objects. UVM 1.0 includes the TLM 2.0 generic payload class as well.
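On the UVM side, the sockets listed above are standard UVM TLM 2.0 classes. Here is a minimal sketch of a blocking initiator (the class name echoes the "initiator_udf" instance used later in this post; the vendor-specific TLI bind step is deliberately omitted):

class initiator_udf extends uvm_component;
    `uvm_component_utils(initiator_udf)

    uvm_tlm_b_initiator_socket #(uvm_tlm_generic_payload) socket;

    function new(string name, uvm_component parent);
        super.new(name, parent);
        socket = new("socket", this);
    endfunction

    task run_phase(uvm_phase phase);
        uvm_tlm_generic_payload gp = new("gp");
        uvm_tlm_time delay = new("delay");
        // Fill in gp (command, address, data) as required, then call the
        // target; b_transport blocks until the target side (here, the
        // SystemC model reached through the TLI adaptor) completes.
        socket.b_transport(gp, delay);
    endtask
endclass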

The built-in TLI adaptor is available as a precompiled library with VCS. The user needs to follow two simple steps to include the TLI adaptor in a verification environment.

  1. Include a header file in the SystemVerilog and SystemC code. The SystemVerilog header file provides a package which implements the bind function, parameterized on the uvm_sequence_item object.
  2. Invoke the bind function on the SystemVerilog and SystemC sides to connect each socket across the languages. The bind function has a string argument which must be unique for each socket connection across SystemVerilog and SystemC.

In the example (shown as a figure in the original post), the UVM initiator "initiator_udf" on the SystemVerilog side drives the SystemC target "target_udf" using the TLM blocking socket.

The TLI adaptor bind function uses the unique string "str_udf_pkt" to identify the socket connectivity across the SystemVerilog and SystemC domains. For multiple sockets, the user needs to invoke the TLI bind function once for each socket. The TLI adaptor supports both blocking and non-blocking transport interfaces for sockets communicating across SystemVerilog and SystemC.

Thus, the built-in UVM-SC TLI adaptor capability of VCS ensures that SystemC can be connected seamlessly into a UVM-based verification environment.


Posted in Communication, Interoperability, SystemC/C/C++, Tools & 3rd Party interfaces, Transaction Level Modeling (TLM) | 1 Comment »


Introducing VIP-central.org

Posted by Amit Sharma on August 17th, 2012

On August 8, Synopsys announced the launch of VIP-Central.org, a technical community site focused on system-on-chip (SoC) verification engineers and users of verification IP (VIP). This new resource is meant to provide relevant forums and blogs focused on the verification of industry-standard protocols and bus interfaces. Through the resources and blogs on this community-driven portal, I expect that engineers can accelerate their understanding of the intricacies and foibles of each protocol, and also effectively leverage industry-available verification IP to verify these protocols while employing methodologies such as UVM, OVM and VMM.

With the increasing number of complex protocols used in SoCs, verification engineers face a tough challenge to quickly acquire the protocol expertise needed to verify a SoC as well as all of its on-chip fabrics and off-chip interfaces. The challenge is made tougher by the frequent release of new and more sophisticated generations of protocols to improve performance, power and quality of service. Verification engineers must complete many tasks requiring both protocol and methodology expertise, including developing environments, integrating VIP, using and modifying test sequences, debugging complex protocol results and analyzing coverage data. The site will aggregate information from industry experts across the verification community, providing best practices and ideas for better verification performance, protocol debug, methodology, verification planning, coverage management and ease of use. I am sure most of the discussions and blogs on VIP-Central.org will be relevant to the folks who follow the Verification Martial Arts blog, and I would encourage everyone to register and start contributing at: http://www.vip-central.org


Posted in Uncategorized | Comments Off


Non blocking communication in TLM2.0

Posted by haribali on July 2nd, 2012

In this blog I want to introduce how non-blocking communication works in TLM 2.0. It works the same way in UVM, VMM and SystemC; only the names of the functions and arguments may differ.

As the name suggests, a non-blocking call returns immediately. Because of this, the target may or may not process the request at the moment the nb_transport_fw function is called. For this very reason there is a backward path, through which the target can notify the initiator once the request is processed, using the nb_transport_bw method. We can look at this as two function calls: one from initiator to target for sending the request, and a second from target to initiator for sending the response once the request is processed.

If it is as simple as that, why do we need an additional argument in the form of a phase, and a return type? Let's see what these additional arguments convey. I use the term "partner" (as in link partner) in this blog to refer to either the target or the initiator.

The phase argument in the nb_transport function calls communicates the phase of the transaction. The phase can be one of the following:

BEGIN_REQ: This is sent by the initiator along with the transaction to tell the target that this is a new request.

END_REQ: This is sent by the target to tell the initiator that the request has been accepted but not yet processed.

BEGIN_RESP: This is sent by the target to tell the initiator that the request has been processed and this is the response for the corresponding request.

END_RESP: This is sent by the initiator to tell the target that the response has been accepted. With this, the transaction is complete.

The return type of the nb_transport function calls tells the partner (initiator/target, whoever initiated the call) whether the transaction is processed at this stage or not. The return value can be one of the following:

TLM_ACCEPTED: This is used to tell the partner (initiator/target, whoever initiated the call) that the transaction is accepted but processing has not yet started. Whenever this is returned, we can simply ignore the arguments, as they are not changed.

Example:

  • The target returns TLM_ACCEPTED when it has accepted the request sent by the initiator but has not yet processed it. The initiator need not process the function arguments, as they are not changed by the target.

TLM_UPDATED: This is returned to tell the partner (initiator/target, whoever initiated the call) that the transaction is accepted and the function arguments are modified.

Example:

  • The target returns TLM_UPDATED when it has accepted the request sent by the initiator and modified the function arguments. The initiator needs to take action depending on the phase argument.

TLM_COMPLETED: This is returned to tell the partner (initiator/target, whoever initiated the call) that the transaction is completed and the function arguments are modified.

Example:

  • The target returns TLM_COMPLETED when it has accepted the request sent by the initiator and completed processing it. The initiator needs to take action depending on the phase argument.

In summary, the non-blocking interface has a protocol associated with it which makes sure each transaction is properly processed and its response communicated back. With the blocking interface this is not required, as the initiator waits for the target to complete the transaction. With non-blocking calls, without this protocol the initiator would have no way of knowing whether the target has processed the transaction.
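To make the handshake concrete, here is a minimal sketch of a target's forward path in UVM terms; the phase names above map directly onto UVM's uvm_tlm_phase_e values, and the return values onto uvm_tlm_sync_e. The payload is the standard generic payload, and the actual request queueing is left out:

class my_target extends uvm_component;
    `uvm_component_utils(my_target)

    uvm_tlm_nb_target_socket #(my_target) socket;

    function new(string name, uvm_component parent);
        super.new(name, parent);
        socket = new("socket", this);
    endfunction

    // Forward path: the initiator calls this with BEGIN_REQ.
    function uvm_tlm_sync_e nb_transport_fw(uvm_tlm_generic_payload t,
                                            ref uvm_tlm_phase_e p,
                                            input uvm_tlm_time delay);
        if (p == BEGIN_REQ) begin
            // Queue 't' here; later, call socket.nb_transport_bw with
            // BEGIN_RESP once the request has actually been processed.
            p = END_REQ;            // request accepted, not yet processed
            return UVM_TLM_UPDATED; // arguments (the phase) were modified
        end
        return UVM_TLM_ACCEPTED;    // nothing changed; caller ignores args
    endfunction
endclass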

Non-blocking coding style is generally used in coding accurate models in virtual prototypes. If you are planning to use TLM models from virtual platform to speed up your simulations or for any other reasons then you will end up connecting the non-blocking TLM models to your verification environment where non-blocking interface will be useful. I welcome you to share other use cases of non-blocking interface.


Posted in Transaction Level Modeling (TLM), UVM | Comments Off


Build your own code generator!

Posted by Amit Sharma on June 25th, 2012

One of the critical problems developers cope with during the SoC development process (architecture planning, RTL design, verification, software development) is the constant need to synchronize between different forms of the same data structure: the SoC registers database. The SoC registers database is used by the SoC architecture team (who write the SoC registers description document), design engineers (who implement the register structure in RTL code), verification engineers (who write the verification infrastructure, such as RAL code, and verification tests, such as exhaustive read/write tests of all registers), and software engineers (who use the register information to write embedded software).

Since the same information is used in different forms, it is only natural to have a single, central database which holds all SoC register data. You would ideally like to generate all the required output files (documentation, UVM or VMM Register Abstraction Model, RTL, C headers, …) from this central database. Different vendors and CAD teams provide different automation solutions for doing this.

The RALF specification contains all of the necessary information to generate RTL and testbench code that implements the specified registers. There are many ways to code and implement RTL, so it is not possible to provide a general-purpose solution. As far as the testbench abstraction model is concerned, there are multiple ways of customizing your model post-generation in both UVM and VMM: callbacks, factories and configurable RAL model attributes are some of the ways to bring in the desired customization. "The 'user' in RALF: get ralgen to generate 'your' code" highlights a very convenient way of bringing in SystemVerilog-compatible code which is copied as-is into the RAL model in the desired scope. When it comes to generating the RTL and the C headers, we cannot leave the customization to such a late stage. Also, different organizations and project groups have their own RTL and C-code coding styles, which means a generated output of a very generic nature might not be very helpful. For RTL generation, engineers want the generated code to be power- and gate-count-efficient. Similarly, C register header generation often needs to follow coding styles and match the CPU firmware API. How do we bring all these customizations to the end user?

Using the RALF C++ API, you have full access to the parsed RALF data (through C++ routines), which you can use to implement a customized RTL code generator, or any other feature that needs RALF data. You can use it to generate your C header files or HTML documentation, to translate the input RALF files to another register description format, or to create custom covergroups and coverage convergence sequences (DAC 2012 User Track poster 6U.8: Register Verification on a Fast Lane: Using Automation to Converge on UVM REG Coverage Models).

I have seen two instances of the need to generate a different register specification in the recent past, and that is one of the reasons I decided to put this down in a blog. Let me talk about the first instance.

One of the project groups was in the process of migrating from their own SV base classes to UVM. They had their own register description format from which they generated their register abstraction model. This was a proven flow.

So, when they migrated to UVM, they wanted a flow which would validate the changes they were making.

Given that they were moving to RALF and 'ralgen', they didn't want to create register specifications in the legacy format anymore, so they wanted some automation for generating the earlier format. How did they go about it? They took the RALF C++ APIs and used them to create the necessary automation to generate the legacy format from RALF in no time (from what I remember, it was half a day's work). Everyone was involved in doing what they were best at, and that helped in the overall scheme of things.

The other customer had their own format from which they were generating RTL, firmware code and HTML. They had the necessary automation to create RALF for generating the UVM register model, and they also had a mechanism in place to generate IPXACT from this format, as well as vice versa. So, to complete the traceability matrix, they wanted a RALF-to-IPXACT conversion. Again, the most logical approach was to take the RALF C++ APIs and use them to iterate through the parsed RALF data and generate IPXACT. Though this effort is not complete, it took just a day or so to generate a valid IPXACT 1.5 schema, and all that is required now is some additional work to smooth out the corners.

How do you start using these APIs and build your own code/HTML generators? You need to include "ralf.hpp" (which is in $VCS_HOME/include) in your 'generator' code. Then, to compile the code, you need to pick up the shared library libralf.so from the VCS installation:

$CPP_COMPILER $CFLAGS -I${VCS_HOME}/include -L${VCS_HOME}/lib -lralf your-file.cpp $LDFLAGS

#include <cstdio>   // fprintf
#include <cstdlib>  // exit
#include "ralf.hpp"

int main(int argc, char *argv[])
{
    // Check basic command line usage...
    if (argc < 3) {
        fprintf(stderr, "Error: Wrong Usage.\n");
        // Show correct usage ...
        exit(1);
    }

    /*
     * Parse command line arguments to get the essential
     * constructor arguments. See the documentation of class
     * ralf::vmm_ralf's constructor parameters. The argument
     * order assumed below is illustrative.
     */
    const char *ralf_file = argv[1];
    const char *top       = argv[2];
    const char *rtl_dir   = (argc > 3) ? argv[3] : ".";
    const char *inc_dir   = (argc > 4) ? argv[4] : ".";

    /*
     * Create a ralf::vmm_ralf object by passing in the proper
     * constructor arguments.
     */
    ralf::vmm_ralf ralf_data(ralf_file, top, rtl_dir, inc_dir);

    /*
     * Get the top-level object storing the parsed RALF
     * block/system data and traverse that, top-down, to get
     * access to the complete RALF data.
     */
    const ralf::vmm_ralf_blk_or_sys_typ *top_lvl_blk_or_sys
        = ralf_data.getTopLevelBlockOrSys();

#ifndef GEN_RTL_IN_SNPS_STYLE
    /*
     * TODO -- Traverse the parsed RALF data structure top-down,
     * starting from 'top_lvl_blk_or_sys', for complete access
     * to the RALF data, and then do whatever you want with it.
     * One typical usage of parsed RALF data is to generate RTL
     * code in your own style.
     */
    //
    // TODO -- Add your RTL generator code here.
    //
#else
    /*
     * As part of this library, Synopsys also provides a
     * default RTL generator, which can be invoked by calling
     * the 'generateRTL()' method of the 'ralf::vmm_ralf'
     * class, as demonstrated below.
     */
    ralf_data.generateRTL();
#endif

    return 0;
}

Essentially, you have a handle to the parsed database, and with the available APIs you can do whatever you want with it :) The API documentation ships with the VCS installation. Also, if you are like me and would rather hack away at existing code than start from scratch, you can check with Synopsys support for existing templates that dump out code in a specific format, and start modifying those for your requirements.


Posted in Automation, Coverage, Metrics, Customization, Interoperability, Register Abstraction Model with RAL, Tools & 3rd Party interfaces, UVM, VMM infrastructure | Comments Off


DVE and UVM on wheels

Posted by Yaron Ilani on May 22nd, 2012

Sometimes driving to work can be a little bit boring, so a few days ago I decided to take advantage of this time slot to introduce myself and tell you a little bit about the behind-the-scenes of my video blog. Hope you'll like it!

Hey if you’ve missed any of my short DVE videos – here they are:


Posted in Debug, UVM, Verification Planning & Management | Comments Off


Are You Afraid of Breakpoints?

Posted by Yaron Ilani on May 17th, 2012

Are you afraid of breakpoints? Don’t worry, many of us have been.  After all, breakpoints are for software folks, not for us chip heads, right??
Well, not really… In many ways, chip verification is pretty much a software project.
Still, most of the people I know fall into one of these two categories – the $display person, or the breakpoints person.

The former doesn’t like breakpoints. He or she would rather fill up their code with $display’s and UVM_INFO’s and recompile their code every time around.
That’s cool.
The latter likes and appreciates breakpoints and uses them whenever possible instead of traditional $display commands.

So if you are the second type – here’s some great news from DVE that will help you be even more efficient.
And $display folks – stay tuned as it may be time for you to finally convert :-)

In this short video I show how to use Object IDs, a new infrastructure in DVE, to break in a specific class instance!

 

Want more ?

Go here to catch up with our latest DVE and UVM short videos:
http://www.youtube.com/playlist?list=PL06DA5270480FC7E1


Posted in Debug, SystemVerilog, UVM, Uncategorized | Comments Off


Namespaces, Build Order, and Chickens

Posted by Brian Hunter on May 14th, 2012

As described in the video, vkits are our convenient method of lumping together reusable UVM packages with the interface(s) that they operate on. Because code within packages can only peek or poke wires that are contained by a virtual interface, it is often useful to wrap these together somehow, and vkits are our technique at Cavium for doing that.

What goes in a vkit? Anything that is reusable. From simple agents and the interfaces they work on to complete UVM environments that connect these agents together, including scoreboards, sequence libraries, types, and utility functions.

What does not go in a vkit are items that are bound to a specific testbench, including the tests themselves.

The video describes the wildcard import syntax as an "egregiously bad idea." First and foremost, it can lead to namespace pollution, which comes about when one engineer independently adds types or classes to their package and only later finds out that they conflict with those of another package. Secondly, wildcard imports preclude our shorter naming convention of having an agent_c, drv_c, env_c, etc., within each package.

Not described in the video are CSR packages that are auto-generated by RAL, IP-XACT, or your script of choice. These packages should be independent of your vkits, such that your vkits refer to them with their explicit scopes (i.e., chx_csr_pkg::PLUCKING_CFG_C).
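As a minimal, self-contained illustration (all names hypothetical, echoing the example above), the convention looks like this; the CSR package is referenced by explicit scope rather than wildcard-imported into the vkit:

package chx_csr_pkg;  // auto-generated CSR package
    class PLUCKING_CFG_C;
        bit [31:0] value;
    endclass
endpackage

package plucking_vkit_pkg;  // the vkit's own package
    class env_c;
        // Explicit scope: no namespace pollution, and a PLUCKING_CFG_C
        // in some other package can never silently collide with this one.
        chx_csr_pkg::PLUCKING_CFG_C cfg = new();
    endclass
endpackage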

Future posts will go into more detail about how we architect UVM testbenches and some of our other conventions that work within this framework. Until then, I’ve got a lot of pies to eat.

PS. I’ll be at DAC this year! Come see me on Tuesday, June 5, during the “Industry Leaders Verify with Synopsys” lunch. Hopefully they’ll be serving some of my favorite foods!


Posted in Organization, Structural Components, SystemVerilog, Tutorial, UVM | 6 Comments »


Customizing UVM Messages Without Getting a Sunburn

Posted by Brian Hunter on April 24th, 2012

The code snippets presented are available below the video.

my_macros.sv:

   `define my_info(MSG, VERBOSITY) \
      begin \
         if(uvm_report_enabled(VERBOSITY,UVM_INFO,get_full_name())) \
            uvm_report_info(get_full_name(), $sformatf MSG, 0, `uvm_file, `uvm_line); \
      end

  `define my_err(MSG)         \
      begin \
         if(uvm_report_enabled(UVM_NONE,UVM_ERROR,get_full_name())) \
            uvm_report_error(get_full_name(), $sformatf MSG, UVM_NONE, `uvm_file, `uvm_line); \
      end

   `define my_warn(MSG)        \
      begin \
         if(uvm_report_enabled(UVM_NONE,UVM_WARNING,get_full_name())) \
            uvm_report_warning(get_full_name(), $sformatf MSG, UVM_NONE, `uvm_file, `uvm_line); \
      end

   `define my_fatal(MSG)       \
      begin \
         if(uvm_report_enabled(UVM_NONE,UVM_FATAL,get_full_name())) \
            uvm_report_fatal(get_full_name(), $sformatf MSG, UVM_NONE, `uvm_file, `uvm_line); \
      end
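Note the extra set of parentheses around the message when invoking these macros: MSG expands to a complete $sformatf argument list. For example (the variables are hypothetical):

   // inside any uvm_component:
   `my_info(("sent packet %0d to device %0d", pkt_num, dev_id), UVM_MEDIUM)
   `my_warn(("unexpected response: %s", rsp.convert2string()))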

my_init.sv:

  initial begin
      int unsigned fwidth, hwidth;
      my_report_server_c report_server = new("my_report_server");

      if($value$plusargs("fname_width=%d", fwidth)) begin
         report_server.file_name_width = fwidth;
      end
      if($value$plusargs("hier_width=%d", hwidth)) begin
         report_server.hier_width = hwidth;
      end

      uvm_pkg::uvm_report_server::set_server(report_server);

      // all "%t" shall print out in ns format with 8 digit field width
      $timeformat(-9,0,"ns",8);
   end

my_report_server.sv:

class my_report_server_c extends uvm_report_server;
   `uvm_object_utils(my_report_server_c)

   string filename_cache[string];
   string hier_cache[string];

   int    unsigned file_name_width = 28;
   int    unsigned hier_width = 60;

   uvm_severity_type sev_type;
   string prefix, time_str, code_str, fill_char, file_str, hier_str;
   int    last_slash, flen, hier_len;

   function new(string name="my_report_server");
      super.new();
   endfunction : new

   virtual function string compose_message(uvm_severity severity, string name, string id, string message,
                                           string filename, int line);
      // format filename & line-number
      last_slash = filename.len() - 1;
      if(file_name_width > 0) begin
         if(filename_cache.exists(filename))
            file_str = filename_cache[filename];
         else begin
            while(filename[last_slash] != "/" && last_slash != 0)
               last_slash--;
            file_str = (filename[last_slash] == "/")?
                       filename.substr(last_slash+1, filename.len()-1) :
                       filename;

            flen = file_str.len();
            file_str = (flen > file_name_width)?
                       file_str.substr((flen - file_name_width), flen-1) :
                       {{(file_name_width-flen){" "}}, file_str};
            filename_cache[filename] = file_str;
         end
         $swrite(file_str, "(%s:%6d) ", file_str, line);
      end else
         file_str = "";

      // format hier
      hier_len = id.len();
      if(hier_width > 0) begin
         if(hier_cache.exists(id))
            hier_str = hier_cache[id];
         else begin
            if(hier_len > 13 && id.substr(0,12) == "uvm_test_top.") begin
               id = id.substr(13, hier_len-1);
               hier_len -= 13;
            end
            if(hier_len < hier_width)
               hier_str = {id, {(hier_width - hier_len){" "}}};
            else if(hier_len > hier_width)
               hier_str = id.substr(hier_len - hier_width, hier_len - 1);
            else
               hier_str = id;
            hier_str = {"[", hier_str, "]"};
            hier_cache[id] = hier_str;
         end
      end else
         hier_str = "";

      // format time
      $swrite(time_str, " {%t}", $time);

      // determine fill character
      sev_type = uvm_severity_type'(severity);
      case(sev_type)
         UVM_INFO:    begin code_str = "%I"; fill_char = " "; end
         UVM_ERROR:   begin code_str = "%E"; fill_char = "_"; end
         UVM_WARNING: begin code_str = "%W"; fill_char = "."; end
         UVM_FATAL:   begin code_str = "%F"; fill_char = "*"; end
         default:     begin code_str = "%?"; fill_char = "?"; end
      endcase

      // create line's prefix (everything up to time)
      $swrite(prefix, "%s-%s%s%s", code_str, file_str, hier_str, time_str);
      if(fill_char != " ") begin
         for(int x = 0; x < prefix.len(); x++)
            if(prefix[x] == " ")
               prefix.putc(x, fill_char);
      end

      // append message
      return {prefix, " ", message};
   endfunction : compose_message
endclass : my_report_server_c

Posted in Debug, Messaging, SystemVerilog, UVM, Verification Planning & Management | 1 Comment »


Leveraging UVM for verifying High-speed IOs with Verilog-AMS

Posted by Shankar Hemmady on April 17th, 2012

When the industry decided to standardize on UVM, we had high hopes that some day we would be able to use the standard to raise the level of abstraction and solve many open issues. Take the case of AMS verification of high-speed IOs, which has largely been the turf of hand-crafted custom designs verified mostly with directed tests.

Warren Anderson and his team at AMD here describe a simple, innovative approach they used with UVM and Verilog-AMS running VCS and CustomSim/XA… yes, UVM really works wonders when used prudently with the right flow.

http://blogs.synopsys.com/analoginsights/2012/04/17/uvm-based-random-verification-using-customsim-vcs-for-analog-mixed-signal-designs/


Posted in AMS, UVM | Comments Off


Using the VMM Datastream Scoreboard in a UVM environment

Posted by Amit Sharma on February 2nd, 2012

Implementing the response-checking mechanism in a self-checking environment remains one of the most time-consuming tasks. The VMM Data Stream Scoreboard package facilitates verifying the correct transformation, destination and ordering of ordered data streams. The package is intuitively applicable to packet-oriented designs, such as modems, routers and protocol interfaces. It can also be used to verify any design transforming and moving sequences of data items, such as DSP data paths and floating-point units. Out of the box, the VMM data stream scoreboard can be used to verify single-stream designs that do not modify the data flowing through them. For example, it can be used to verify FIFOs, Ethernet media access controllers (MACs) and bridges.

The VMM data scoreboard can also be used to verify multi-stream designs with user-defined data transformation and input-to-output stream routing. The transformation from input data items into expected data items is not limited to one-to-one: an input data item may be transformed into multiple expected data items (e.g. segmenters) or none (e.g. reassemblers). Compared to this, the functionality available through the UVM in-order comparator or the algorithmic comparator is significantly less. Thus, users might want access to the functionality of the VMM DS scoreboard in a UVM environment. Using the UBUS example available in $VCS_HOME/doc/examples/uvm/integrated/ubus as a demo vehicle, this article shows how simple adapters are used to integrate the VMM DS scoreboard in a UVM environment and thus gain access to more advanced scoreboarding functionality within the UVM environment.

The UBUS example uses an example scoreboard to verify that the slave agent is operating as a simple memory. It extends the uvm_scoreboard class and implements a memory_verify() function to make the appropriate calls and comparisons needed to verify a memory operation. An uvm_analysis_export is explicitly created and an implementation for 'write' is defined. In the top-level environment, the analysis export is connected to the analysis port of the slave monitor.

ubus0.slaves[0].monitor.item_collected_port.connect(scoreboard0.item_collected_export);

The simple scoreboard, with its explicit implementation of the comparison routines, suffices for verifying the basic operations, but it would need significant enhancement to provide the more detailed information a user might need. For example, let's take the 'test_2m_4s' test. Here the environment is configured to have 2 masters and 4 slaves. Depending on how the slave memory map is configured, different slaves respond to different transfers on the bus. Now, if we want information on how many transfers went into the scoreboard for a specific combination (e.g. Master 1 to Slave 3), how many were verified to be processed correctly, and so on, it is fair to conclude that the existing scoreboarding scheme will not suffice.

Hence, it was felt that the Data Stream Scoreboard, with its advanced functionality and support for data transformation, data reordering, data loss, and multi-stream data routing, should be available for verification environments not necessarily based on VMM. From VCS 2011.12-1, this integration has been made very simple. The VMM DS scoreboard implements a generic data stream scoreboard that accepts parameters for the input and output packet types. A single instance of this class is used to check the proper transformation, multiplexing and ordering of multiple data streams. The scoreboard class now leverages a policy-based design with parameterized specializations to accept any 'Packet' class, be it VMM, UVM or OVM.

The central element in policy-based design is a class template (called the host class, which in this case is the VMM DS scoreboard) taking several type parameters as input, which are specialized with types selected by the user (called policy classes), each implementing a particular implicit method (called a policy) and encapsulating some orthogonal (or mostly orthogonal) aspect of the behavior of the instantiated host class. In this case, the 'policies' implemented by the policy classes are the 'compare' and 'display' routines.

By supplying a host class combined with a set of different, canned implementations for each policy, the VMM DS scoreboard can support all the different behavior combinations, resolved at compile time and selected by mixing and matching the supplied policy classes in the instantiation of the host class template. Additionally, by writing a custom implementation of a given policy, a policy-based library can be used in situations requiring behaviors unforeseen by the library implementor.

So, let's go through a set of simple steps to see how you can use the VMM DS scoreboard in the UVM environment.

Step 1: Create the policy class for UVM and define its 'policies'

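The policy class itself appeared as a screenshot in the original post. As a rough sketch of the idea (the exact prototypes expected by vmm_sb_ds_typed come from the VMM DS scoreboard package, so treat these signatures as assumptions), the two policies wrap uvm_object's own compare and print facilities:

class uvm_object_policy;
    // 'compare' policy: delegate to the UVM object's own compare()
    static function bit compare(uvm_object actual, uvm_object expected);
        return actual.compare(expected);
    endfunction : compare

    // 'display' policy: render the object through its sprint() method
    static function string display(uvm_object obj);
        return obj.sprint();
    endfunction : display
endclass : uvm_object_policy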

Step 2: Replace the UVM scoreboard with a VMM one extended from "vmm_sb_ds_typed", specialized with the ubus_transfer type and the previously created uvm_object_policy.

class ubus_example_scoreboard extends vmm_sb_ds_typed #(ubus_transfer,ubus_transfer, uvm_object_policy);

`vmm_typename(ubus_example_scoreboard)

endclass: ubus_example_scoreboard

Once this is done, you can either declare a VMM TLM analysis export to connect to the bus monitor in the UBUS environment, or use the predefined one in the VMM DS scoreboard:

vmm_tlm_analysis_export #(ubus_example_scoreboard,ubus_transfer) analysis_exp;

Given that for any configuration one master and one slave would be active, define the appropriate streams in the constructor. (Though this is not required when there are only single streams, we define them explicitly so that this can scale up to multiple input and expect streams for different tests.)

this.define_stream(0, "Slave 0", EXPECT);
this.define_stream(0, "Master 0", INPUT);

Step 2.a: Create the 'write' implementation for the analysis export

Since we are verifying the operation of the slave as a simple memory, we add logic to insert a packet into the scoreboard on a 'WRITE', and to expect/check on a 'READ' from an address that has already been written to.

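This implementation, too, was a screenshot in the original post. A sketch of the intent, assuming the UBUS transfer's read_write field and the VMM DS scoreboard's insert()/expect_in_order() methods (check the package documentation for the exact signatures):

virtual function void write(int id = -1, ubus_transfer trans);
    if (trans.read_write == WRITE)
        this.insert(trans);                  // record expected data for this address
    else if (trans.read_write == READ)
        void'(this.expect_in_order(trans));  // check the observed read data
endfunction : write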

Step 2.b: Implement the stream_id() method

You can use this method to determine which stream a specific 'transfer' belongs to, based on the packet's content, such as a source or destination address. In this case, the bus monitor updates the 'slave' property of the collected transfer according to where the address falls in the slave memory map.


Step 3: Create the UVM Analysis to VMM Analysis Adapter

The uvm_analysis_to_vmm_analysis adapter is used to connect any UVM component with an analysis port to any VMM component via an analysis export. The adapter converts all incoming UVM transactions to VMM transactions and drives the converted transaction to the VMM component through the analysis port-export pair. If you are using the VMM-UVM interoperability library, you do not have to create the adapter, as it is available in the library.


Create the ‘write’ implementation for the analysis export in the adapter

The write method, called via the analysis export, simply posts the received UBUS transfer from the UVM analysis port to the VMM analysis port.


Step 4: Make the TLM connections

In the original example, the item_collected_port of the slave monitor was connected to the analysis export of the example scoreboard. Here, the DataStream scoreboard has an analysis export which expects a VMM transaction. Hence, we need the adapter created above to mediate between the analysis port of the UVM bus monitor and the analysis export of the VMM DS scoreboard.

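The connection code was shown as a screenshot; with assumed handle names (adapter0 for the adapter instance, scoreboard0 for the DS scoreboard) and the VMM-side binding call following the VMM TLM convention, the intent is:

// UVM side: slave monitor's analysis port drives the adapter
ubus0.slaves[0].monitor.item_collected_port.connect(adapter0.analysis_export);
// VMM side: adapter's analysis port binds to the scoreboard's export
adapter0.analysis_port.tlm_bind(scoreboard0.analysis_exp);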

Step 5: Define Additional streams if required for multi-master multi-slave configurations

This step is not required for a single master/slave configuration. However, you would need to create additional streams to verify correctness across all the different permutations in tests like "test_2m_4s".

In this case, the additional streams are defined in the connect_phase() of "test_2m_4s".


Step 6: Add appropriate options to your compile command and analyze your results

Change the Makefile by adding -ntb_opts rvm on the command line, and add +define+UVM_ON_TOP:

vcs -sverilog -timescale=1ns/1ns -ntb_opts uvm-1.1+rvm +incdir+../sv ubus_tb_top.sv -l comp.log +define+UVM_ON_TOP

And that is all: you are ready to go and validate your DUT with a more advanced scoreboard with loads of built-in functionality, as running the "test_2m_4s" test shows.

Thus, not only do you have stream-specific information now, but you also have access to much more functionality, as mentioned earlier. For example, you can model transformations, check for out-of-order matches, allow for dropped packets, and iterate over different streams to access specific transfers. Again, depending on your requirements, you can use the simple UVM comparator for your basic checks and switch over to the DS scoreboard for the more complex scenarios with the flip of a switch in the same setup. This is what we did for a UVM PCIe VIP we developed earlier (From the Magician's Hat: Developing a Multi-methodology PCIe Gen2 VIP) so that users have access to all the information they require. Hopefully this will keep you going until a more powerful UVM scoreboard arrives in a subsequent UVM version.


Posted in Communication, Interoperability, Reuse, Scoreboarding, UVM, VMM infrastructure | 2 Comments »


Why do we need an integrated coverage database for simulation and formal analysis?

Posted by Shankar Hemmady on January 23rd, 2012

Closing the coverage gap has been a long-standing challenge in simulation-based verification, resulting in unpredictable delays while achieving functional closure. Formal analysis is a big help here. However, most of the verification metrics that give confidence to a design team are still governed by directed and constrained random simulation. This article describes a methodology that embraces formal analysis along with dynamic verification approaches to automate functional convergence: http://soccentral.com/results.asp?CatID=488&EntryID=37389

I would love to learn what you do to attain functional closure.


Posted in Coverage, Metrics, Formal Analysis, Verification Planning & Management | Comments Off


How do I debug issues related to UVM objections?

Posted by Vidyashankar Ramaswamy on January 19th, 2012

Recently one of the engineers I work with in the networking industry was describing the challenges in debugging the UVM timeout error message. I was curious and looked into his testbench. After spending an hour or so, I was able to point out the master/slave driver issue where an objection was not dropped and the simulation thread hung waiting for the objections to drop. Then I started thinking: why not use the runtime option to track the status of the objections, +UVM_OBJECTION_TRACE? Well, this printed detailed messages about the objections, a lot more than what I was looking for! The problem now was to decipher the overwhelming messages spit out by the objection trace option. In a hierarchical testbench there can be hundreds of components, and you might be debugging some SoC-level testbench which you didn't write or aren't familiar with. Here is an excerpt of the message log using the built-in objection trace:

….
UVM_INFO @ 0: reset [OBJTN_TRC] Object uvm_test_top.env.master_agent.mast_drv dropped 1 objection(s): count=0 total=0
UVM_INFO @ 0: reset [OBJTN_TRC] Object uvm_test_top.env.master_agent.mast_mon raised 1 objection(s): count=1 total=1
UVM_INFO @ 0: reset [OBJTN_TRC] Object uvm_test_top.env.master_agent added 1 objection(s) to its total (raised from source object ): count=0 total=2
UVM_INFO @ 0: reset [OBJTN_TRC] Object uvm_test_top.env added 1 objection(s) to its total (raised from source object ): count=0 total=2
….
UVM_INFO @ 0: reset [OBJTN_TRC] Object uvm_test_top.env.master_agent.mast_mon dropped 1 objection(s): count=0 total=0
UVM_INFO @ 0: reset [OBJTN_TRC] Object uvm_test_top.env.slave_agent.drv raised 1 objection(s): count=1 total=1
….

As a verification engineer, you want to begin debugging the component or part of the test bench code which did not lower the objection as soon as possible. You want to minimize looking into the unfamiliar test bench code as much as possible or stepping through the code using a debugger.

The best way is to call display_objections() just before the timeout is reached. As there is no callback available in the timeout procedure, I wrote the following few lines of code, which can be forked off in any task-based phase. I recommend doing this in your base test, which can then be extended to create feature-specific tests. You can also guard this code with a runtime option to save some CPU cycles:

uvm_root top = uvm_root::get();
fork
    begin
        // Wait until just before the phase timeout, then dump who is
        // still holding objections.
        #(top.phase_timeout - 1);
        phase.phase_done.display_objections();
    end
join_none

Output of the log message using the above code is shown below:

----------------------------------------------------------------
The total objection count is 2
----------------------------------------------------------------
  Source   Total
   Count   Count   Object
----------------------------------------------------------------
       0       2   uvm_top
       0       2     uvm_test_top
       0       2       env
       0       1         master_agent
       1       1           mast_drv
       0       1         slave_agent
       1       1           slv_drv
----------------------------------------------------------------

From the above table it is clear that the master and slave drivers did not drop their objections. Now you can look into the master and slave driver components and debug further why they did not drop their objections. There are many different ways to achieve the same result; I welcome you to share your thoughts and ideas on this.


Posted in Debug, Messaging, UVM | 1 Comment »


Auto-Generation of Performance Charts with the VMM Performance Analyzer

Posted by Amit Sharma on September 30th, 2011

Amit Sharma, Synopsys

Quoting from one of Janick's earlier blogs on the VMM Performance Analyzer, Analyzing results of the Performance Analyzer with Excel:
"The VMM Performance Analyzer stores its performance-related data in a SQL database. SQL was chosen because it is an ANSI/ISO standard with a multiplicity of implementations, from powerful enterprise systems like Oracle, to open-source versions like MySQL, to simple file-based ones like SQLite. SQL offers the power of a rich query and analysis language to generate the reports that are relevant to your application."

And given that not everyone understands SQL, he goes on to show how one can get VMM Performance Analyzer data from a SQLite database into an Excel spreadsheet and then analyze the data by doing additional computation and creating the appropriate graphs. This involves a set of steps leveraging SQLite ODBC (Open Database Connectivity) and thus requires installing it.

This article presents a mechanism in which TCL scripts bring in the next level of automation, so that users can retrieve the required data from the SQL database and even automate results analysis by auto-generating the relevant performance charts for statistical analysis. Also, as users migrate to DVE as a single platform for design debug, coverage analysis and verification planning, it shows how these scripts can be integrated into DVE, so that chart generation is a matter of using the relevant menus and clicking the appropriate buttons.

For generating the SQL databases with the VMM Performance Analyzer, an SQLite installation is required, which can be obtained from www.sqlite.org. Once you have installed it, set the SQLITE3_HOME environment variable to the installation path. Once that is done, follow these steps to generate the appropriate graphs from the data produced by your batch regression runs.

First, you need to download the utility from the link provided in the article DVE TCL Utility for Auto-Generation of Performance Charts

Once it is extracted, you can try it on the tl_bus example that ships with the utility. Go to the directory vmm_perf_utility/tl_bus.

Use make to run the tl_bus which will generate the sql_data.db and sql_data.sql. Now, go to the ‘chart_utility’ directory

(cd vmm_perf_utility/chart_utility/)
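
Putting these steps together, a minimal sketch assuming the utility was extracted into the current directory:

cd vmm_perf_utility/tl_bus
make
cd ../chart_utility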

The Tcl scripts involved in automating the performance charts are in this directory.

The script vmm_perf_utility/chart_utility/chart.tcl can then be executed from inside DVE, as shown below.

dve_an
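
The invocation itself is shown in the screenshot above; as a hedged sketch, executing a Tcl script from the DVE command console is ordinarily just a matter of sourcing it (the path assumes the extraction directory used earlier):

source vmm_perf_utility/chart_utility/chart.tcl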

Once that is done, it adds a "Generate Chart" button to the View menu. Incidentally, adding such a button is fairly simple; for example:

gui_set_hotkey -menu "View->Generate Chart" -hot_key "G"

is how the button gets added.

Now, click on "Generate Chart" to select the SQL database.

dve_an2

This brings up the dialog box to select the SQL database.

dve_an3

Once the appropriate database is selected, you can choose which table to work with and then generate the appropriate charts. The options offered are based on the data that was dumped into the SQL database. From the combinations of charts shown, select the graph you want to generate, and the required graphs are produced for you. This is what you see when you use the SQL database generated for the tl_bus example:

dve_an4

dve_an5

Once you have made the selections, you will see the following chart generated:

dve_an6

Now, as a user you would obviously not just want the graphs to be generated; you would also want the underlying values to be available to you.

Thus, when you use this chart generation mechanism, the .csv files corresponding to the graphs you have generated are also dumped for you.

These are written into a perfReports directory, which is created as well, so you can do any additional custom computation in Excel or by running your own scripts. To generate the graphs for any other example, you just need to pick up the appropriate SQL database generated from your simulation runs and then generate the reports and graphs of interest.

So whether you use the Performance Analyzer in VMM (Performance and statistical analysis from HDL simulations using the VMM Performance Analyzer), in UVM (Using the VMM Performance Analyzer in a UVM Environment), or even while doing your own PA customizations (Performance appraisal time – Getting the analyzer to give more feedback), you can easily generate whatever charts you require to analyze the different performance aspects of the design you are verifying.

Share and Enjoy:
  • Print

Posted in Automation, Coverage, Metrics, Customization, Performance Analyzer, Tools & 3rd Party interfaces


Closed Loop Register Verification using IDesignSpec and the Register Abstraction Layer

Posted by Amit Sharma on September 26th, 2011

Nitin Ahuja, Agnisys Technology Pvt. Ltd

In the previous article, titled "Automatic generation of Register Model for VMM using IDesignSpecTM", we discussed how it is advantageous to use a register model generator such as IDesignSpecTM to automate the process of RALF model generation. Taking it forward, in this article we will discuss how to close the loop on register verification.

Various forms of coverage are used to ensure that registers are functioning properly. There are three coverage models in VMM. They are:

1. reg_bits coverage: this model is used to make sure that all the bits in a register are covered. It works by writing and reading both 1 and 0 on every register bit, hence the name. This is specified using "cover +b" in the RALF model.

2. field_vals coverage: this model is implemented at the register level and supports value coverage of all fields, plus cross coverage between fields and other cross coverage points within the same register. This is specified using "cover +f" in the RALF model. Users can specify the cross coverage depending on the functionality.

3. Address map coverage: this model is implemented at the block level and ensures that all registers and memories in the block have been read from and written to. This is specified using "cover +a" in the RALF model. A rough RALF sketch combining all three directives follows below.
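
The sketch below places "cover +b" and "cover +f" at the register and "cover +a" at the block. The block, register, and field names are purely illustrative, and the exact grammar (including whether options may be combined in one cover statement) should be checked against the RALF manual:

block stopwatch {
  bytes 4;
  register ctrl @'h00 {
    bytes 4;
    field mode {
      bits 2;
      access rw;
    }
    cover +b +f;
  }
  cover +a;
}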

We will discuss how coverage can be switched on/off and how the type of coverage can be controlled for each field directly from the register specification.

Once the RALF model is generated, the next step in verification is to generate the RTL and the SystemVerilog RAL model using 'ralgen'. The generated RAL model, along with the RTL, can be compiled and simulated in the VMM environment to generate the coverage database, which is then used for report generation and analysis.

Reports can be generated using IDesignSpecTM (IDS). IDS-generated reports have an advantage over other reports in that they present all the coverage information concisely, at a glance.

Turning Coverage ON or OFF

IDesignSpecTM enables users to turn all three types of coverage ON or OFF from within the MS Word specification itself.

Coverage can be specified and controlled using the "coverage" property in IDesignSpecTM, which has the following possible values:

image

The hierarchical "coverage" property enables users to control coverage at the block or chip level.

Here is a sample of how coverage can be specified in IDesignSpecTM:

image

image

This would be the corresponding RALF file:

agnisys_ralf

image

The coverage bins for each CoverPoint, along with the crosses between the various CoverPoints, can also be defined in the specification, as shown below:

image

This would translate to the following RALF:

image

Now, the next step after RALF generation is to generate the RAL model from the IDS-generated RALF.

RAL MODEL AND RTL GENERATION FROM RALF:

The IDS-generated RALF can be used with the Synopsys 'ralgen' tool to generate the RAL model (VMM or UVM) as well as the RTL.

The RAL model can be generated using the following command:

image
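
Since the command itself appears as a screenshot above, here is a hedged sketch of a typical ralgen invocation (the block name and RALF file name are illustrative; check the ralgen documentation of your release for the exact options):

ralgen -l sv -t stopwatch stopwatch.ralf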

If you specify -uvm in the ralgen invocation above, a UVM register model is generated instead.

COMPILATION AND REPORT GENERATION:

Once the RTL and the RAL model are generated using 'ralgen', the complete model can be compiled and simulated in the VMM environment using VCS.

To compile the model, use the following command on the command line:

vcs -R +plusarg_save -sverilog -o "simv1" -ntb_opts rvm+dtm +incdir+<directories to search `defines> <files to be compiled> +define+RAL_COVERAGE

Compilation and simulation generate the simulation database, which is used for the generation of the coverage reports.

Coverage reports can be generated in various forms, but the most concise form is graphical, showing all the coverage at a glance. For this, a Tcl script, "ivs_simif.tcl", takes the simulation database and generates a text-based report on execution of the following command:

% ivs_simif.tcl -in simv.vdb -svg

Before running the above command, set the environment variable "IDS_SIM_DIR"; the text reports are generated at that location, and it also tells IDS where to look for the simulation data file.
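
For example, in a csh-style shell (the directory is purely illustrative):

setenv IDS_SIM_DIR /project/ids_reports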

A detailed graphical view of the report can be generated from IDS with the help of this text report. To generate the graphical report in the form of Scalable Vector Graphics (SVG), select the "SVG" output in the IDS configuration and regenerate.

Another way of generating the SVG is to use the IDS-XML or the Doc/Docx specification of the model as input to IDS in batch mode, using the following command:

% idsbatch <IDS_generated_XML or doc/docx specification> -out "svg" -dir output_directory

Coverage Reports

IDesignSpec generates two types of reports from the input database.

They are:

1. Field_vals report

2. Reg_bits report

Field_vals report:

The Field_vals report gives a graphical view of the field_vals coverage and the address coverage of the various registers and their respective fields.

The amount of coverage for a field (CoverPoint) is depicted by the level of green color in the field, while that for the complete register (CoverGroup) is shown by the color of the register's name.

The address coverage for an individual register (CoverPoint) is shown by the color of the register's address (green if addressed; black if not), while that of the entire block (CoverGroup) is shown by the color of the block's name.

The coloring scheme for all the CoverGroups, i.e. the register name in the case of field_vals coverage and the block name in the case of address coverage, is:

1. If the overall coverage is greater than or equal to 80%, the name appears in GREEN.

2. If the coverage is greater than 70% but less than 80%, it appears in YELLOW.

3. For coverage less than 70%, the name appears in RED.

Figure 1 shows the field_vals and address coverage.

image

Figure:  Closed loop register verification using RALF and IDS

The above sample gives the following coverage information:

a. Out of a total of 9 registers, 2 (T and resetvalue) are not addressed. Thus the overall coverage of the block falls in the 70-80% range, which is depicted by the color of Stopwatch (the name of the block).

b. All the fields of the registers are filled with some amount of green, which shows the amount of coverage. For example, field T1 of register arr is covered 100% and is therefore completely filled, while FLD4 of register X is covered only about 10%. The exact coverage value can be obtained by hovering over the field; a tooltip shows the exact value.

c. The color of a register's name shows the overall coverage of the whole register; for example, X appears in red because its coverage is less than 70%.

Reg_bits report:

The Reg_bits report gives a detailed graphical view of the reg_bits coverage and the address coverage.

Address coverage for reg_bits is shown in the same way as the address coverage in field_vals. Reg_bits coverage has 4 components, namely:

1. Written as 1

2. Read as 1

3. Written as 0

4. Read as 0

Each of the 4 components is allocated a specific region inside a bit. If that component of the coverage is hit, the corresponding region is shown in green; otherwise it is left blank. The overall coverage of the entire register is shown by the color of the register's name, as in the case of field_vals.

image

The above sample report shows that there is no issue with "Read as 1" for the 'resetvalue' register, while the other read/write components have not been hit completely.

Thus, in this article we described the various coverage models for registers and how to generate the RALF coverage model of the registers automatically with minimum effort. An intuitive visualization of the register coverage data eases the effort involved in deciphering coverage reports from lengthy simulation log files. This type of closed-loop register verification ensures better coverage and high-quality results in less time. Hope you found this useful. Do share your feedback, and let me know if you would like any additional details to get the maximum benefit from this flow.

Share and Enjoy:
  • Print

Posted in Automation, Coverage, Metrics, Organization, Register Abstraction Model with RAL, Tools & 3rd Party interfaces, Verification Planning & Management


Functional Coverage Driven VMM Verification scalable to 40G/100G Technology

Posted by Shankar Hemmady on September 9th, 2011

Amit Baranwal, ASIC Design Verification Engineer, ViaSat

As data rates increase to 100 Gbps and beyond, optical links suffer severely from various impairments in the optical channel, such as chromatic dispersion and polarization-mode dispersion. Traditional optical compensation techniques are expensive and complex. ViaSat has developed DSP IP cores for coherent, differential, burst and continuous, high data rate networks. These cores can be customized according to system requirements.

One of the inherent problems in verifying communication applications is the large amount of information arranged over space and time, generally dealt with using Fourier transforms, equalization, and other DSP techniques. Thus, we needed to come up with interesting stimulus to match these complex equations and exercise the full design. With horizontal and vertical polarization (four I and Q streams running at 128 samples per cycle), there was a high level of parallelism to deal with. To address these challenges, we decided to go with a constrained-random, self-checking testbench environment using SystemVerilog and VMM. We extensively used reusability, the direct programming interface, and scalability features, along with various interesting coverage techniques, to minimize our effort and meet the aggressive deadlines of the project. Our system model, developed in C, was bit- and cycle-accurate. Class configurations were used to allow different behaviors, such as sampling the output at every cycle vs. valid cycles only. Parameterized VMM data classes were used for the control signals and the feedback path; these required parameterized generators, drivers, monitors, and scoreboards so that they could be scaled as required to match different filter designs and specifications.

Code and functional coverage were used as the benchmark to gauge the completeness of verification. We used many useful SystemVerilog constructs in our functional coverage model, such as 'ignore_bins' to remove unwanted sets (avoiding overhead effort), 'illegal_bins' to catch error conditions, the 'intersect' keyword, and so on. Here is an example:

data_valid_H_trans: coverpoint ifc_data_valid.data_valid_H {
  bins valid_1_285 = (0 => 1[*1:285] => 0);
  illegal_bins valid_286 = (0 => 1[*286]);
  bins one_invalid = (1 => 0 => 1);
  illegal_bins two_invalid = (1 => 0[*2:5] => 1);
}


The coverpoint "data_valid_H_trans" covers the signal 'data_valid_H', which should never be asserted for 286 or more consecutive cycles. Also, data_valid_H should never stay low for two (or more) consecutive data cycles. These are interesting scenarios found in many designs under test where two blocks depend on each other and a data input/output rate needs to be met to maintain data integrity between the blocks; otherwise, the data might overflow/underflow or induce other errors. In such situations, an illegal bin can be used to check the condition continuously throughout the simulation: it keeps an eye on the condition, and if the condition ever occurs, VCS flags a runtime error.

Another interesting feature we found was the capability to merge different coverage reports using flexible merging. As we move along a project, for various reasons, such as system specification changes or signal name changes, we may have to modify our covergroups. Ordinarily, if we have a saved database of vdb files from previous simulations and we run the urg command to create a coverage report directory, we will find multiple covergroups with the same name in the new coverage report. This can make it very confusing to identify which covergroups are of interest, corrupting our previous efforts and coverage reports and leading to more engineering effort and resource usage. To counter this problem, flexible merging can be used.

To enable flexible merging, the -group flex_merge_drop option is passed to the urg command:

urg -dir simv1.vdb -dir simv2.vdb -group flex_merge_drop

Note: urg assumes the first specified coverage database to be the reference for flexible merging.

This feature is available only for covergroup coverage and is very useful when the coverage model is still evolving and minor changes in the coverage model between test runs may be required. To merge two coverpoints, they need to be merge equivalent. The requirements for merge equivalence are as follows:

1. For user-defined coverpoints:

Coverpoint C1 is said to be merge equivalent to a coverpoint C2 only if the coverpoint names and widths are the same.

2. For autobin coverpoints:

Coverpoint C1 is said to be merge equivalent to a coverpoint C2 only if the names, auto_bin_max values, and widths are the same.

3. For cross coverpoints:

Cross C1 is said to be merge equivalent to a cross C2 only if the crosses have the same number of coverpoints.

If the coverpoints are merge equivalent, the merged coverpoint will contain a union of all the bins from the different tests. If the coverpoints are not merge equivalent, the merged coverpoint will contain only the bins from the most recent test run, and the older test run data is not considered.
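
As an illustration of rule 1 above, here is a minimal, self-contained sketch (the module, signal, and bin names are purely illustrative, not from our environment). The coverpoint name mode_cp and the 2-bit sampled width stay the same across revisions, so urg can flex-merge runs from both revisions, and a bin added in the later revision simply joins the union:

module tb;
  bit [1:0] mode;

  covergroup cfg_cg;
    mode_cp: coverpoint mode {
      bins low  = {0, 1};
      bins high = {2, 3};  // added in a later revision; flexible merge keeps the union
    }
  endgroup

  cfg_cg cg = new();

  initial begin
    mode = 2'b01; cg.sample();
    mode = 2'b10; cg.sample();
  end
endmodule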

SystemVerilog and VMM methodology features were very helpful in achieving our verification goals, giving us a robust verification environment that was productive and reusable over the course of the project. Moreover, it also gave us a head start on our next project's verification effort. For more details, please refer to the paper I presented at SNUG San Jose 2011, "Functional Coverage Driven VMM Verification scalable to 40G/100G Technology".

Share and Enjoy:
  • Print

Posted in Coverage, Metrics
