Verification Martial Arts: A Verification Methodology Blog

Archive for the 'Creating tests' Category

Avoiding Redundant Simulation Cycles with your UVM VIP based simulation with a Simple Save-Restore Strategy

Posted by paragg on 6th March 2014

In many verification environments, you reuse the same configuration cycles across different testcases. These cycles might involve writing to and reading from different configuration and status registers, loading program memories, and other similar tasks to set up a DUT for the targeted stimulus. In many of these environments, the time taken by these configuration cycles is very long. There is also a lot of redundancy, as verification engineers have to run the same set of verified configuration cycles for different testcases, leading to a loss in productivity. This is especially true for complex verification environments with multiple interfaces, which require different components to be configured.

Verilog provides a way of saving the state of the design and the testbench at a particular point in time, so that you can later restore the simulation to that same state and continue from there. This can be done by adding the appropriate built-in system tasks to the Verilog code. VCS provides the same capability from the Unified Command-line Interface (UCLI).

However, it is not enough to simply restore a simulation from the saved state. For different simulations, you may want to apply different random stimulus to the DUT. In the context of UVM, you would want to run a different sequence from the same saved state in each restored simulation: save once the common configuration is done, then restore that snapshot separately for each test and execute a different sequence.

Apart from the last step, which varies from test to test, the steps in this flow need to be established only once and then require no iteration.

Here we explain how to achieve the above strategy with the simple, existing UBUS example available in the standard UVM installation. Simple changes are made in the environment to show what needs to be done to bring in this additional capability. Within the existing set of tests, the two tests "test_read_modify_write" and "test_r8_w8_r4_w4" differ only with respect to the master sequence being executed, i.e. "read_modify_write_seq" and "r8_w8_r4_w4_seq" respectively.

Let's say we have a scenario where we want to save the simulation once the reset_phase is done and then start executing different sequences after the reset_phase in the restored simulations. To demonstrate a similar scenario with the UBUS tests, we introduced a delay in the reset_phase of the base test (in a real test, this may correspond to PLL lock, DDR initialization, or basic DUT configuration).
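A minimal sketch of such a delay, assuming the base test of the standard UBUS example (the delay value and the objection message are arbitrary):

// In the base test: stretch reset_phase to mimic a long initialization/configuration
// sequence (PLL lock, DDR initialization, basic DUT configuration, ...)
virtual task reset_phase(uvm_phase phase);
  super.reset_phase(phase);
  phase.raise_objection(this, "long init/config before the main stimulus");
  #100us;
  phase.drop_objection(this);
endtask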

The existing tests are modified as follows to bring in the capability of running different tests in different 'restored' simulations.

We made two major modifications:

  • Shifted the setting of the phase default_sequence from the build phase to the start of the main phase.
  • Got the name of the sequence as an argument from the command line and processed the string in the code to execute the chosen sequence on the relevant sequencer, as sketched below.
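A minimal sketch of what such a modified test might look like. The hierarchy and sequence names follow the standard UBUS example, while the test name and the SEQ_NAME plusarg are illustrative assumptions; here the chosen sequence is simply started on the master sequencer at the beginning of the main phase, which has the same effect as setting its default_sequence there.

class ubus_saverestore_test extends ubus_example_base_test;
  `uvm_component_utils(ubus_saverestore_test)

  function new(string name = "ubus_saverestore_test", uvm_component parent = null);
    super.new(name, parent);
  endfunction

  virtual task main_phase(uvm_phase phase);
    string seq_name;
    // Pick the sequence from the command line, e.g. +SEQ_NAME=r8_w8_r4_w4_seq
    if (!$value$plusargs("SEQ_NAME=%s", seq_name))
      seq_name = "read_modify_write_seq";

    phase.raise_objection(this);
    case (seq_name)
      "read_modify_write_seq": begin
        read_modify_write_seq seq = read_modify_write_seq::type_id::create("seq");
        seq.start(ubus_example_tb0.ubus0.masters[0].sequencer);
      end
      "r8_w8_r4_w4_seq": begin
        r8_w8_r4_w4_seq seq = r8_w8_r4_w4_seq::type_id::create("seq");
        seq.start(ubus_example_tb0.ubus0.masters[0].sequencer);
      end
      default: `uvm_fatal("SEQSEL", {"Unknown sequence name: ", seq_name})
    endcase
    phase.drop_objection(this);
  endtask
endclass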

As you can see, the changes are kept to a minimum, and with this the generic framework is ready to be simulated. In VCS, the save/restore flow is driven from the UCLI: the first run is saved once the configuration is complete, and each subsequent test run is restored from that saved state with its own command-line arguments.

This strategy thus helps in optimal utilization of compute resources with simple changes to your verification flow. Hope this was useful, and that you can easily make the changes in your verification environment to adopt this flow and avoid redundant simulation cycles.

Posted in Automation, Coding Style, Configuration, Creating tests, Customization, Optimization/Performance, Organization, Reuse, Stimulus Generation, SystemVerilog, Tutorial, UVM, Uncategorized, Verification Planning & Management | 1 Comment »

SNUG-2012 Verification Round Up – Language & Methodologies – I

Posted by paragg on 25th February 2013

As in the previous couple of years, last year's SNUG (Synopsys Users Group) showcased an amazing number of useful user papers leveraging the capabilities of the SystemVerilog language and the verification methodologies centered on it.

I am always excited when I see this plethora of useful papers, and I try to set aside some time to go through all these user experiences. Now, as we wait for SNUG Silicon Valley to kick-start this year's SNUG events, I want to look back at some of the very interesting and useful papers from the different SNUGs of 2012. Let me start with a few papers in the area of the SystemVerilog language and SV methodologies.

Papers leveraging the SystemVerilog language and constructs

Hillel Miller of Freescale, in the paper "Using covergroups and covergroup filters for effective functional coverage", uncovers the mechanisms available for carving out coverage goals. The P1800-2012 version of the SystemVerilog LRM provides new constructs just for doing this; the one focused on here is the "with" clause. It provides the ability to carve a sub-range of goals out of a multidimensional range of possibilities. This is very relevant in a "working" or under-development setup that requires frequent reprioritization to meet tape-out goals.
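A small illustration of the kind of bin filtering the paper focuses on (module and signal names are made up):

module cov_example (input logic clk, input logic [7:0] addr);
  covergroup addr_cg @(posedge clk);
    coverpoint addr {
      // Carve a sub-range of goals out of the full 0-255 space:
      // only even addresses in the lower half are goals for now
      bins low_even = {[0:127]} with (item % 2 == 0);
      // The upper half is excluded from the goals until it is reprioritized
      ignore_bins upper = {[128:255]};
    }
  endgroup
  addr_cg cg = new();
endmodule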

The paper "Taming Testbench Timing: Time's Up for Clocking Block Confusions" by Jonathan Bromley and Kevin Johnston of Verilab reviews the key features and purpose of clocking blocks and then examines why they continue to be a source of confusion and unexpected behavior for many verification engineers. Drawing from the authors' project and mentoring experience, it highlights typical usage errors and how to avoid them. They clarify the internal behavior of clocking blocks to help engineers understand the reasons behind common problems, and show techniques that allow clocking blocks to be used productively and with confidence. Finally, they consider some areas that may cause portability problems across simulators and indicate how to avoid them.
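A minimal clocking block of the kind the paper revisits (interface and signal names are made up):

interface bus_if (input logic clk);
  logic        req;
  logic        gnt;
  logic [31:0] wdata;

  clocking drv_cb @(posedge clk);
    default input #1step output #2ns;  // sample just before the clock edge, drive 2 ns after it
    output req, wdata;
    input  gnt;
  endclocking

  modport drv (clocking drv_cb);
endinterface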

Inference of latches and flops based on coding styles has always been a topic that creates multiple viewpoints. There are other such scenarios of synthesis and simulation mismatches that one typically comes across. To address all such ambiguity, language developers have provided different constructs that allow an explicit resolution based on the intent. To help us gain a deeper understanding of the topic, Don Mills of Microchip Technology Inc. presented the related concepts in the paper "Yet Another Latch and Gotchas Paper" @ SNUG Silicon Valley. This paper discusses and provides solutions to issues that designers using SystemVerilog for design come across, such as case expression issues for casez and casex, latches generated when using unique case or priority case, SRFF coding style problems with synthesis, and the SystemVerilog 2009 definition of logic.
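One of the classic gotchas the paper addresses, reconstructed as a small example:

module mux3 (input  logic [1:0] sel,
             input  logic a, b, c,
             output logic y);
  always_comb begin
    unique case (sel)
      2'b00: y = a;
      2'b01: y = b;
      2'b10: y = c;
      // No 2'b11 branch and no default: 'unique' only checks at run time that
      // exactly one branch matches; it does not make the logic complete, so y
      // holds its value for sel == 2'b11 and synthesis still infers a latch.
    endcase
  end
endmodule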

Gabi Glasser from Intel presented the paper "Utilizing SystemVerilog for Mixed-Signal Validation" @ SNUG Israel, where he proposed a mechanism for simplifying analysis and increasing coverage for mixed-signal simulations. The method takes advantage of SystemVerilog's ability to define a hash (associative) array of unlimited size. During the simulation, vectors are created for the required analog signals, allowing them to be analyzed within the testbench during or at the end of the simulation, without saving these signals to a file. The flow change makes it possible to launch a large-scale mixed-signal regression while allowing easier analysis of coverage data.
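The kind of SystemVerilog bookkeeping the paper proposes might look like this (names are made up): samples of an analog node are stored in an associative array keyed by sample time, so they can be analyzed in the testbench instead of being dumped to a file.

module ms_probe (input logic sample_trigger, input real vout_probe);
  real vout_samples [time];          // unbounded associative array keyed by time

  always @(posedge sample_trigger)
    vout_samples[$time] = vout_probe;

  // At the end of the test the testbench can walk the array, e.g.:
  //   foreach (vout_samples[t]) check_limits(t, vout_samples[t]);
endmodule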

A design pattern is a general, reusable solution to a commonly recurring problem within a given context. The benefit of using design patterns is clear: they give designers a common language when approaching a problem, and a widely used set of tools to solve issues as they come up. The paper "Design Patterns In Verification" by Guy Levenbroun of Qualcomm explores several common problems which might arise during the development of a testbench, and how design patterns can be used to solve them. The patterns covered in the paper fall mainly into three areas: creational (e.g. factory), structural (e.g. composite) and behavioral (e.g. template).

Arik Shmayovitsh, Avishay Tvila and Guy Lidor of Sigma Designs, in their paper "Truly reusable Testbench-to-RTL connection for SystemVerilog", present a novel approach to connecting the DUT and testbench using consistent semantics while reusing the testbench. This is achieved by abstracting the connection layer of each testbench using the SystemVerilog 'bind' construct. This ensures that the only thing required to reuse the testbench with a new DUT is to identify the instance of the corresponding DUT.
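A hypothetical example of the flavour of 'bind' usage the paper builds on: the connection layer is compiled into the DUT itself, so the testbench never needs DUT-specific hierarchical paths.

interface dut_conn_if (input logic clk, input logic valid, input logic [31:0] data);
  clocking mon_cb @(posedge clk);
    input valid, data;
  endclocking
endinterface

// One line per DUT type; the connection travels with every instance of my_dut
bind my_dut dut_conn_if conn_if_i (.clk(clk), .valid(out_valid), .data(out_data));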

In the paper "A Mechanism for Hierarchical Reuse of Interface Bindings", Thomas Zboril of Qualcomm (Canada) explores another method to instantiate SV interfaces, connect them to the DUT and wrap the virtual interfaces for use in the test environment. This method allows the reuse of all the code when the original block-level DUT becomes a lower-level instance in a larger subsystem or chip. The method involves three key mechanisms: hierarchical virtual interface wrappers, a novel approach to hierarchical instantiation of SV interfaces, and automatic management of hierarchical references via SV macros.

Thinh Ngo & Sakar Jain of Freescale Semiconductor, in their paper "100% Functional Coverage-Driven Verification Flow", propose a coverage-driven verification flow that can efficiently achieve 100% functional coverage during simulation. The flow targets varied functionality, focuses at the transaction level, measures coverage during simulation, and fails a test if 100% of the expected coverage is not achieved. This flow maps stimulus coverage to functional coverage, with every stimulus transaction being associated with an event in the coverage model and vice versa. This association is derived from the DUT specification and/or the DUT model. Expected events generated along with stimulus transactions are compared against actual events triggered in the DUT. The comparison results are used to pass or fail the test. 100% functional coverage is achieved via 100% stimulus coverage. The flow enables every test, with its targeted functionality, to meet 100% functional coverage provided that it passes.

Papers on Verification Methodology

In the paper "Top-down vs. bottom-up verification methodology for complex ASICs", Paul Lungu & Zygmunt Pasturczyk of Ciena (Canada) cover the simulation methodology used for two large ASICs requiring block-level simulations. A top-down verification methodology was used for one of the ASICs, while a larger version needed an expanded bottom-up approach using extended simulation capabilities. Some techniques and verification methods, such as chaining of sub-environments from block to top level, are highlighted along with challenges and solutions found by the verification team. The paper presents a useful technique of passing a RAL (Register Abstraction Layer) mirror to the C models which are used as scoreboards in the environment. The paper also presents a method of generating stable clocks inside the "program" block.

The paper "Integration of Legacy Verilog BFMs and VMM VIP in UVM using Abstract Classes" by Santosh Sarma of Wipro Technologies (India) presents an alternative approach where legacy BFMs written in Verilog, and not implemented using classes, are hooked up to higher-level class-based components to create a standard UVM VIP structure. The paper also discusses an approach where existing VMM transactors that are tied to such legacy BFMs can be reused inside the UVM VIP with the help of the VCS-provided UVM-VMM Interoperability Library. The implementation makes use of abstract classes to define functions that invoke the BFM APIs. The abstract class is then concretized using derived classes which give the actual implementation of the functions in the abstract class. The concrete class is then bound to the Verilog instance of the BFM using the SystemVerilog "bind" concept. The concrete class handle is then used by the UVM VIP and the VMM transactor to interact with the underlying Verilog BFM. Using this approach, the UVM VIP can be made truly reusable through run-time binding of the Verilog BFM instance to the VIP, instead of hard-coded macro names or procedural calls.
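A compressed sketch of this style of pattern, with hypothetical names throughout; it assumes an upward hierarchical reference (by module name) from the bound module into the enclosing legacy BFM instance to reach its tasks:

package bfm_api_pkg;
  virtual class host_bfm_api;                       // abstract view of the BFM services
    pure virtual task bfm_write (bit [31:0] addr, bit [31:0] data);
    pure virtual task bfm_read  (bit [31:0] addr, output bit [31:0] data);
  endclass
  host_bfm_api bfm_handles[string];                  // run-time registry, keyed by instance path
endpackage

module host_bfm_binder;                              // bound into each legacy BFM instance
  import bfm_api_pkg::*;

  class host_bfm_impl extends host_bfm_api;
    task bfm_write (bit [31:0] addr, bit [31:0] data);
      legacy_host_bfm.do_write(addr, data);          // reuse the legacy Verilog task as-is
    endtask
    task bfm_read (bit [31:0] addr, output bit [31:0] data);
      legacy_host_bfm.do_read(addr, data);
    endtask
  endclass

  host_bfm_impl impl = new();
  initial bfm_handles[$sformatf("%m")] = impl;       // publish the handle for the VIP/transactor
endmodule

// Hook the concrete class into every instance of the legacy BFM
bind legacy_host_bfm host_bfm_binder binder_i();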

The paper "A Unified Self-Check Infrastructure - A Standardized Approach for Creating the Self-Check Block of Any Verification Environment" by John Sotiropoulos, Matt Muresa and Massi Corba of Draper Laboratories (Cambridge, MA, USA) presents a structured approach for developing a centralized "Self-Check" block for a verification environment. The approach is flexible enough to work with various testbench architectures and is portable across different verification methodologies. Here, all of the design's responses are encapsulated under a common base class, providing a single "Self-Check" interface for any checking that needs to be performed. This abstraction, combined with a single centralized scoreboard and a standardized set of components, provides the consistency needed for faster development and easier code maintenance. It expands the concept of 'self-check' to incorporate white-box monitors (tracking internal DUT state changes, etc.) and temporal models (reacting to wire changes), along with traditional methodologies for enabling self-checking.

For VMM users looking at migrating to UVM, there is another paper, "Transitioning to UVM from VMM" by Courtney Schmitt of Analog Devices, Inc., that discusses the process of moving to a UVM-based environment from VMM. Differences and parallels between the two verification methodologies are presented to show that updating to UVM is mostly a matter of getting acquainted with a new set of base classes. Topics include UVM phases, agents, TLM ports, configuration, sequences, and register models. Best practices and reference resources are highlighted to make the transition from VMM to UVM as painless as possible.

Posted in Announcements, Coverage, Metrics, Creating tests, Customization, Modeling, Optimization/Performance, Reuse, SystemVerilog, UVM, Uncategorized, VMM, VMM infrastructure | 3 Comments »

Automating Coverage Closure

Posted by Janick Bergeron on 5th July 2010

“Coverage Closure” is the process used to reach 100% of your coverage goals. In a directed test methodology, it is simply the process of writing all of the testcases outlined in the verification plan. In a constrained-random methodology, it is the process of adding constraints, defining scenarios or writing directed tests to hit the uncovered areas in your functional and structural coverage model. In the latter case, it is a process that is time-consuming and challenging: you must reverse-engineer the design and verification environment to determine why specific stimulus must be generated to hit those uncovered areas.

Note that your first strategy should be to question the relevance of the uncovered coverage point. A coverage point describes an interesting and unique condition that must be verified. If that condition is already represented by another coverage point, or it is not that interesting (if no one on your team is curious about or looking forward to analyzing a particular coverage point, then it is probably not that interesting), then get rid of the coverage point rather than trying to cover it.

Something that is challenging and time-consuming is an ideal candidate for automation. In this case, the Holy Grail is the automation of the feedback loop between the coverage metrics and the constraint solver. The challenge in automating that loop is correlating those metrics with the constraints.

For input coverage, this correlation is obvious. For every random variable in a transaction description, there is usually a corresponding coverage point, then cross coverage points for various combinations of random variables. VCS includes automatic input coverage convergence: it automatically generates the coverage model based on the constraints in a transaction descriptor, then automatically tightens the constraints at run time to reach 100% coverage in very few runs.
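For instance, a transaction descriptor and its input coverage model typically line up one for one (a hypothetical sketch):

class bus_tr;
  rand bit [3:0]  kind;
  rand bit [31:0] addr;
  constraint c_kind { kind inside {[0:9]}; }

  covergroup cg;
    cp_kind     : coverpoint kind { bins valid[]   = {[0:9]}; }
    cp_addr     : coverpoint addr { bins region[4] = {[0:$]}; }
    x_kind_addr : cross cp_kind, cp_addr;
  endgroup

  function new();
    cg = new();
  endfunction
endclass

Closing the loop then amounts to biasing the constraints on kind and addr until every bin and cross of cg has been hit, which is exactly the feedback that gets automated for input coverage.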

For internal or output coverage, the correlation is a lot more obscure. How can a tool determine how to tweak the constraints to reach a specific line in the RTL code, trigger a specific expression or produce a specific set of values in an output transaction? That is where newly acquired technology from Nusym will help. Their secret sauce traces the effect of random input values on expressions inside the design. From there, it is possible to correlate the constraints and the coverage points. Once this correlation is known, it is a relatively straightforward process to modify the constraints to target uncovered coverage points.

A complementary challenge to coverage closure is identifying coverage points that are unreachable. Formal tools, such as Magellan, can help identify structural coverage points that cannot be reached and thus further trim your coverage space. For functional coverage, that same secret sauce from Nusym can also be used to help identify why existing constraints are preventing certain coverage points from being reached.

Keep in mind that filling the coverage model is not the goal of a verification process: the goal is to find bugs! Ultimately, a coverage model is no different from a set of directed testcases: it will only measure the conditions you have thought of and consider interesting. The value of constrained-random stimulus is not just in filling those coverage models automatically, but also in creating conditions you did not think of. In a constrained-random methodology, the journey is just as interesting, and valuable, as the destination.

Posted in Coverage, Metrics, Creating tests, Verification Planning & Management | 4 Comments »

Simplifying test writing with MSSG and the constraint switching feature of SystemVerilog

Posted by Tushar Mattu on 25th May 2010

Pratish Kumar KT  & Sachin Sohale, Texas Instruments
Tushar Mattu,  Synopsys

In an ideal world, one would want maximum automation and minimal effort. The same holds true when you would want to uncover bugs in your DUV. You would want to provide your team members with an environment whereby they can focus their efforts on debugging issues in the design rather than spend time writing hundreds of lines of testcase code. Here we want to share some simple techniques to make the test writer’s life easier by enabling him to achieve the required objectives with minimum lines of test code, and by providing more automation and reuse. Along with automation, it is important to ensure that the test logic remains easy to understand. In one recent project, where the DUV was an image processing block, the following were some of the requirements:

- The relevant configuration for each block had to be driven first on a configuration port before driving the frame data on another port

- The configuration had to be written to the registers in individual blocks and each block had its own unique configuration.

- Several such blocks together constituted the overall sub-system which required an integrated testbench

- Because all these blocks had to be verified in parallel, the requirement was to have a generic Register Model which could work not only with the blocks in parallel but also be reused at the sub-system level

- Another aspect was to provide as much reuse as possible in the testcases from block to system level

Given the requirements, we decided to proceed in the manner described below, using a clever mix of MSS scenarios, named constraint blocks and RAL:


Here is a brief description of the different components and the flow:

Custom RAL model

As the RAL register model forms the configuration stimulus space, we decided to put all test-specific constraints for register configuration in the top-level RAL block itself and have them switched off by default. As seen below, the custom RAL block extended from the generated model holds all the test-specific constraints, which are switched off.

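A minimal sketch of what such a customized RAL model might look like; the generated block name, register/field names and constraint names below are hypothetical, and the usual pattern of constraining the fields' rand value members is assumed:

class dut_ral_model extends ral_block_img_pipe;      // ral_block_img_pipe: RALF-generated model
  // Test-specific constraints, one named constraint block per test intent
  constraint cst_min_frame {
    frame_cfg.width.value  == 16;
    frame_cfg.height.value == 16;
  }
  constraint cst_max_burst {
    dma_cfg.burst_len.value == 15;
  }
endclass

// In the environment, all test-specific constraint blocks start out disabled:
//   ral.cst_min_frame.constraint_mode(0);
//   ral.cst_max_burst.constraint_mode(0);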

Basic Multistream Scenario

For each block, one basic scenario (“BASIC SCENARIO”) was always registered with the MSSG. This procedural scenario governs the test flow, which is:

- Randomize the RAL configuration with default constraints,

- Drive the configuration to the configuration port

- Put the desired number of frame data into the driver BFM input channel

The structure of the 'BASIC SCENARIO' is outlined below.

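A minimal sketch of such a scenario, assuming VMM multi-stream scenarios (vmm_ms_scenario), the usual RAL update() call to push the configuration to the DUT, and hypothetical handle/transaction names:

class img_basic_scenario extends vmm_ms_scenario;
  dut_ral_model ral;                        // custom RAL model shown earlier
  frame_channel frame_chan;                 // input channel of the frame driver BFM
  int unsigned  num_frames = 4;

  // Hook overloaded by the derived, test-specific scenarios
  virtual function void set_test_constraints();
  endfunction

  virtual task execute(ref int n);
    vmm_rw::status_e status;
    set_test_constraints();                 // enable only the constraints this test needs
    void'(ral.randomize());                 // 1. randomize the configuration
    ral.update(status);                     // 2. drive the configuration port through RAL
    repeat (num_frames) begin               // 3. queue frame data for the driver BFM
      frame_tr tr = new();
      void'(tr.randomize());
      frame_chan.put(tr);
      n++;
    end
  endtask
endclass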

Scenario Library

With the basic infrastructure in place, the strategy for creating a scenario library for the test-specific requirements was simple. Each extended scenario class was meant to change the configuration generation through the RAL model of the different blocks. Thus, for the scenario library for each block, each independent scenario extended the basic scenario and overloaded only the required virtual method for configuration change as shown below:

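For example, continuing the hypothetical names used above:

class img_min_frame_scenario extends img_basic_scenario;
  virtual function void set_test_constraints();
    ral.cst_min_frame.constraint_mode(1);   // switch on only this test's constraint block
  endfunction
endclass

class img_max_burst_scenario extends img_basic_scenario;
  virtual function void set_test_constraints();
    ral.cst_max_burst.constraint_mode(1);
  endfunction
endclass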

Test

Each test would then select the scenario corresponding to the test and register it with the MSSG, as sketched below.
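A sketch of how such a test might look, reusing the `vmm_test_begin/`vmm_test_end style; the environment member names and the register_ms_scenario() call follow the usual VMM multi-stream generator usage and are assumptions here:

`vmm_test_begin(img_min_frame_test, img_env, "Minimum frame configuration")
  env.build();
  begin
    img_min_frame_scenario sc = new();
    sc.ral        = env.ral_model;
    sc.frame_chan = env.frame_chan;
    env.mss_gen.register_ms_scenario("MIN_FRAME", sc);   // register the test's scenario with the MSSG
  end
  env.run();
`vmm_test_end(img_min_frame_test)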

At the block level, we were able to reuse this approach across all block level tests consistently and effectively.

Later, we were able to reuse all the block-level tests in the sub-system level testbench, as the tests were written in terms of constraints in the RAL model itself. At the sub-system level, we would then turn on specific constraints per block using the corresponding block-level scenarios. This ensures maximum reuse and a test flow that is consistent across levels.


At higher levels, the scenarios extend the "basic_system_scenario". The test flow managed by the system-level scenarios has a slightly different execution flow than the block-level tests, but the 'configuration generation' is reused consistently and efficiently from block to system. That means modifications of block-level test constraints do not require any modification of the sub-system level tests using that configuration.
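A sketch of such a sub-system level scenario, continuing the hypothetical names from the block-level examples (the block is assumed to appear as img_pipe inside the sub-system RAL model):

class sys_min_frame_scenario extends basic_system_scenario;
  virtual function void set_test_constraints();
    // Same constraint block as at block level, now reached through the sub-system RAL model
    ral.img_pipe.cst_min_frame.constraint_mode(1);
  endfunction
endclass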

And voila! The scenario and test generation steps were automated at block and sub-system level using simple Perl scripting. The test-specific constraints for each block were captured in a spreadsheet, which a script takes as input to generate the scenario and the test at block/sub-system level. The scenario execution flow is predetermined and defined in the block/sub-system basic scenario. Furthermore, within this automation the user is given the flexibility to create multiple such scenarios and to reuse existing test constraints.

Posted in Creating tests, Register Abstraction Model with RAL, Tutorial, VMM | 3 Comments »

Verification in the trenches: Chaining tests using VMM1.2

Posted by Ambar Sarkar on 26th April 2010

Dr. Ambar Sarkar, Chief Verification Technologist, Paradigm Works Inc.

Have you been frustrated by how long it takes to run the initial phases of a test before it gets to those that really matter? And it gets worse when you need to run a number of these tests. I often see projects where about 80% of the test execution time is used up in these initial phases.

While you do need to execute these initial phases, repeating them for each test does not contribute to any additional verification. Ideally, you would like to run the initial phases for a group of related tests just once, just to a point where you are ready to send real traffic or transactions. The phases that follow should be executed once per test, with the environment rolled back to an appropriate state before the next test starts executing.

Consider chaining two tests, TEST1 and TEST2, that use the same configuration: the common initial phases run once, TEST1 executes, the environment is rolled back to the end of the configuration phase, and then TEST2 executes from that point.


VMM1.2 supports this model of test execution for implicitly phased environments.

First the steps.

Step 1. Tag a test as concatenable and specify the rewind-to phase.

If you are using implicit phasing of your environment, this is done simply by using the macro that declares the test as concatenable, along with the phase to roll back to.

class TEST1 extends vmm_test;
//Macro to indicate which phase to rollback to
`vmm_test_concatenate(configure_test)

endclass

class TEST2 extends vmm_test;
//Macro to indicate which phase to rollback to
`vmm_test_concatenate(configure_test)

endclass

Step 2. Invoke it.

This is done by simply specifying the command line parameters:

//Command line arguments to run the example
./simv +vmm_test=TEST1+TEST2

You also have the option of specifying all tests in a file or just chain all the tests.

So what are the gotchas?

The underlying VMM implementation (and you can read the gory details of how this is done using the three timelines: pre, top, and post in the VMM user guide) takes care of synchronizing all the test phases so that the tests can be effectively chained. However, the user still has to implement any application specific cleanup in between tests to restore the environment. This seems fair enough, and the cleanup phase is the perfect phase to implement that.

However, there is one limitation when considering chained tests. You cannot use vmm_test::set_config() in concatenated tests, even though that is normally the place to perform class-factory overrides for the environment components. This is not a limitation of the implementation, however, but simply how class-factory overrides work. Note that you can still use the class factory to override your traffic scenarios, just not the environment components.

First, a test can only run after the environment is created. Second, a class factory override for the environment can work only if it is called before the environment is created. This means that after the first test is executed, you cannot override the environment setting using class factory override, unless you are willing to rebuild the environment from scratch in between. Clearly, this rebuild per test run will negate any benefits of chaining. Since vmm_test::set_config is explicitly used to reconfigure environments using the class factory method, it is not allowed when tests are chained.

This article is the 5th in the Verification in the trenches series. Hope you found this article useful. If you would like to hear about any other related topic, please comment or drop me a line at ambar.sarkar@paradigm-works.com. Also, if you are starting out fresh, please check out the free VMM1.2 environment generator.

Posted in Creating tests | Comments Off

Using vmm_test Base Class

Posted by Fabian Delguste on 24th April 2009

Fabian Delguste, Snr CAE Manager, Synopsys

VMM 1.1 comes with a new base class called vmm_test that can be used for all tests. The main motivation for developing this base class was to enable a single compile-elaborate-simulate step for all tests rather than one per test. It is recommended to implement tests in a program thread, as this provides a good way of encapsulating a testbench and reduces races between design and testbench code. All earlier test examples showed the tests implemented directly in program blocks. The drawback of that technique is that the user needs to recompile, elaborate and simulate each test individually. When dealing with large regressions consisting of thousands of tests, multiple elaborations can waste a significant amount of time, whereas tests using vmm_test require only one elaboration. A given test can be selected at run time using a command-line option. Switches like +ntb_random_seed can be used in conjunction with these tests.

To understand better how this base class works, let's look at the example that ships with the VMM 1.1 release in the sv/examples/std_lib/vmm_test directory. This example shows how to constrain some ALU transactions. These transactions, modelled by the alu_data class, are randomly generated by an instance of a vmm_atomic_gen and passed to an ALU driver using a vmm_channel.

Before digging into the gory details of vmm_test, let's see how tests are traditionally written:

1. class add_test_data extends alu_data;
2.   constraint cst_test {
3.      kind == ADD;
4.    }
5. endclass
6.
7. program alu_test();
8.    alu_env env;
9.    initial begin
10.     add_test_data tdata = new;
11.     env.build();
12.     env.gen.randomized_obj = tdata;
13.     env.run();
14.   end
15. endprogram

· In lines 1-5, the alu_data class is extended into the new class add_test_data, which contains test-specific constraints. In this test, only ADD operations are generated, as modelled by this class.

· In line 7, a program block is used to instantiate the environment alu_env based on vmm_env.

· In lines 10-13, the environment is built and the new transaction add_test_data is used as the factory instance for our vmm_atomic_gen transactor.

Of course, this test is very specific, and a user clearly needs to create a similar program block with other constraints to fulfill the corresponding test plan. For instance, this test can be duplicated many times to send {MUL, SUB, DIV} ALU operations to the ALU driver. In this case, multiple program blocks are required, and so are multiple elaborations and simulation binaries.

VMM 1.1 provides a way to include all test files in a single compilation. The previous test can now be written like this:

1. class add_test_data extends alu_data;
2.   constraint cst_test {
3.     kind == ADD;
4.   }
5. endclass
6.
7. `vmm_test_begin(test_add, alu_env, "Addition")
8. env.build();
9. begin
10.   add_test_data tdata = new;
11.   env.gen.randomized_obj = tdata;
12. end
13. env.run();
14. `vmm_test_end(test_add)

· In line 7, the vmm_test shorthand macro `vmm_test_begin is used to declare the test name (test_add in our example), the name of the vmm_env where all transactors reside (alu_env in our example) and a label that is used to tag this particular test.

· In lines 8-13, users can build the environment, insert the factory instance and kick off the test.

· In line 14, the vmm_test shorthand macro `vmm_test_end is used to terminate this test declaration.

Of course, other tests with variations in constraints can be written in the same way. Since the environment is exposed after the `vmm_test_begin shorthand macro, it is possible to register callbacks, replace generators or do any other operation that is traditionally done in the VMM program block.

An important aspect of these tests is that whenever they are included, they become statically declared and visible to the environment.

Now let’s see how to include these tests in VMM program block:

1. `include "test_add.sv"
2. `include "test_sub.sv"
3. `include "test_mul.sv"
4. `include "test_ls.sv"
5. `include "test_rs.sv"
6.
7. program alu_test();
8.    alu_env env;
9.    initial begin
10.    vmm_test_registry registry = new;
11.    env = new(alu_drv_port, alu_mon_port);
12.    registry.run(env);
13.  end
14. endprogram

· In lines 1-5, all tests are simply included.

· In line 10, registry, an instance of vmm_test_registry, is constructed. This object contains all tests implemented using `vmm_test_begin that have been previously included.

· In line 12, the registry is run, and a handle to the environment is passed as an argument. This is how all vmm_test instances get access to the environment.

Running a specific test is achieved by providing the test name on the command line, for example:

simv +vmm_test=test_add

simv +vmm_test=test_sub

Note that calling simv without the +vmm_test switch returns a FATAL error and lists all registered tests. This is a good way to document the available tests.

Posted in Creating tests, Optimization/Performance | 11 Comments »
