Verification Martial Arts: A Verification Methodology Blog

Archive for the 'Tools & 3rd Party interfaces' Category

SNUG-2012 Verification Round Up: VCS Technologies

Posted by paragg on 15th March 2013

Continuing from my earlier blog posts about SNUG papers on the SystemVerilog language and verification methodologies, I will now go through some of the interesting papers that highlight core technologies in VCS which users can deploy to improve their productivity. We will walk through various stages of the verification cycle, including simulation bring-up, RTL simulation, gate-level simulation, regression and simulation debug, each of which benefits from different features and technologies in VCS.

Beating the SoC challenges

One of the biggest challenges that today’s complex SoC architectures pose is the rigorous verification of SoC designs. Functional verification of the full system represented by these mammoth-scale (> 1 billion transistors per chip) designs calls for the verification environment to employ advanced methodologies, powerful tools and techniques. Constrained-random stimulus generation, coverage-driven completion criteria, assertion-based checking, faster triage and debug turnaround, C/C++/SystemC co-simulation, gate-level verification, etc. are just some of the methods and techniques that aid in tackling the challenge. Patrick Hamilton, Richard Yin, Bobjee Nibhanupudi and Amol Bhinge of Freescale, in their paper SoC Simulation Performance: Bottlenecks and Remedies, discuss the several simulation and debug bottlenecks experienced during the verification of a complex next-generation SoC; they describe how they identified these bottlenecks and overcame them using VCS diagnostic capabilities, profile reports, VCS arguments, testbench modifications, smarter utilities, fine tuning of computing resources, etc.

Another challenge of the simulation environment is the sheer number of tests that are being written and need to be managed. As more tests are added to the regressions, there is a quantifiable impact on several aspects of the project, including a dramatic and unsustainable increase in the overall regression time. As the regression time increases, the interval available for collecting and analyzing results between successive regression runs shrinks. Overlaps arising from having multiple regressions in flight can cause design bugs to go untracked for several snapshots, which can also make it impossible to ensure coverage tracking by design management tools. Given constantly shortening project timelines, this affects the time-to-market of core designs and their dependent SoC products. Simulation-based Productivity Enhancements using VCS Save/Restore by Scot Hildebrandt and Lloyd Cha, AMD, Inc. looks at using VCS’s Save/Restore feature to capture binary images of sections of simulation. These “sections” consist of aspects replicated in all tests, like the reset sequence, or allow skipping specific phases of a failing test which are ‘clean’. They further provide statistics on the reduction in regression time and the memory footprint that the saved images typically enable. They also talk about how dynamically re-seeding test cases restored from the stored images enabled them to leverage the full strength and capabilities of CRV methodologies.
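
As a minimal sketch of the idea (not taken from the paper): a testbench can checkpoint the simulation once the common reset sequence is done, so that later runs restore from that point instead of re-simulating it. The $save call and the signal name below are assumptions used only for illustration; the exact save/restore mechanism should be checked against the VCS documentation.

// Hypothetical sketch: checkpoint right after reset so subsequent tests can restore from here.
initial begin
  wait (tb_top.reset_done);     // reset_done: illustrative flag set by the common reset sequence
  $save("post_reset.chk");      // assumed VCS checkpoint call; file name is illustrative
end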

The paper SoC Compilation and Runtime Optimization using VCS by Santhosh K.R. and Sreenath Mandagani of Mindspeed Technologies (India) talks about the Partition Compile flow and the associated methodology to improve TAT (turnaround time) for SoC compilations. The flow leverages v2k configurations, parallel compilation and various performance optimization switches of VCS-MX. They further explain how an SoC can be partitioned into multiple functional blocks or clusters, and how each block can be selectively replaced with an empty shell if that particular functionality is not exercised by the desired tests. The paper also demonstrates how new tests can be added and run without recompiling the whole SoC. Thus, using the Partition Compile flow, only a subset of SoC or testbench blocks is recompiled, based on the dependencies across clusters. They share the productivity gains in compile TAT as well as the overall runtime gains for the current SoC and the savings in overall disk space. This is then shown to correlate with the reduction in license usage time and disk space, which leads to the desired savings.
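
The “empty shell” substitution mentioned above can be expressed with a standard Verilog-2005 configuration. The sketch below is only an illustration of the idea (library, module and instance names are made up), not the actual flow from the paper:

config soc_cfg;
  design work.soc_top;
  default liblist work;
  // Tests in this group never exercise the video cluster, so bind an
  // empty shell with the same ports in place of the full implementation.
  instance soc_top.u_video_cluster use work.video_cluster_shell;
endconfig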

By the way, there have now been further developments in the latest VCS release to help ensure isolation of switches between partitions in the SoC. This additional functionality helps reduce memory, decrease runtime, and reduce initial scratch compile time even further while maintaining the advantages of partition compile.

Addressing the X-optimism challenges – X-prop Technology

Gate-level simulations are onerous, and many of the risks normally mitigated by gate simulations can now be addressed by RTL lint tools, static timing analysis tools and logic equivalence checking. However, one risk that has persisted until now is the potential for optimism in the X semantics of RTL simulation. The semantics of the Verilog language can create mismatches between RTL and gate-level simulation due to X-optimism. Also, the semantics of X’s in gate simulations are pessimistic, resulting in simulation failures that don’t represent real bugs.
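
A tiny, generic example of the optimism in question (standard Verilog semantics, not code from any of the papers):

logic       sel;        // powers up as X, e.g. driven by a non-resettable flop
logic [7:0] y, a, b;

always_comb begin
  if (sel)              // with sel = X, classic RTL semantics fall through to the 'else' branch
    y = a;
  else
    y = b;              // a known value propagates even though sel is unknown
end

A gate-level mux would produce an X (or a pessimistic value) for the same stimulus, which is exactly the kind of RTL-versus-gate mismatch that the new X-prop semantics are designed to expose at the RTL stage.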

Improved X-Propagation using the xProp technology by Rafi Spigelman of Intel Corporation presents the motivation for having the relevant semantics for X-propagation. The process by which such semantics were validated and deployed on a major CPU design at Intel is also described. He delves into its merits and limitations, and comments on the effort required to enable such semantics in the RTL regressions.

Robert Booth of Freescale Inc., in the paper X-Optimism Elimination during RTL Verification, explains how chips suffer from X-optimism issues that often conceal design bugs. The deployment of low-power techniques such as power shutdown in today’s designs exacerbates these X-optimism issues. To address these problems, they show how they leverage the new simulation semantics in VCS that more accurately model non-deterministic values in logic simulation. The paper describes how X-optimism can be eliminated during RTL verification.

In the paper X-Propagation: An Alternative to Gate Level Simulation, Adrian Evans, Julius Yam and Craig Forward of Cisco explore X-Propagation technology, which attempts to model X behavior more accurately at the RTL level. In this paper, they review the sources of X’s in simulation and their handling in the Verilog language. They further describe their experience using this feature on design blocks from Cisco ASICs, including several simulation failures that did not represent RTL bugs. They conclude by suggesting how X-Propagation can be used to reduce and potentially eliminate gate-level simulations.

In the paper Improved x-propagation semantics: CPU server learning, Peeyush Purohit, Ashish Alexander and Anees Sutarwala of Intel stress the need to model and simulate silicon-like behavior in RTL simulations. They point out that traditionally gate-level simulations have been used to fill that void, but at the cost of time and resources. They then explain the limitations of regular 4-value Verilog/SystemVerilog RTL simulation and cover the specifications for enhanced simulator semantics to overcome those limitations. They explain how design issues on their next-generation CPU server project were found using the enhanced semantics; the potential flow implications and a sample methodology implementing the new semantics are also provided.

Power-on-Reset (POR) is a key functional sequence for all SoC designs, and any bug not detected in this logic can lead to dead silicon. Complexities in reset logic pose increasing challenges for verification engineers to catch any such design issue(s) during RTL/GL simulations. POR sequence simulations are often accompanied by ‘X’ propagation due to non-resettable flops and uninitialized logic. Generally, uninitialized and non-resettable logic is initialized to 0s, 1s or some random values using forces or deposits to bypass unwanted X propagation. Ideally, one would like stimulus that tries all possible combinations of initial values for such logic, but this is practically impossible given short design cycles and limited resources. This practical limitation leaves room for critical design bugs that may remain undetected during the design verification cycle. Deepak Jindal, Freescale, India, in the paper Gaps and Challenges with Reset Logic Verification, discusses these reset logic simulation challenges in detail and shares the experience of evaluating the new semantics in VCS technology, which can help catch most of the POR bugs/issues during the RTL stage itself.
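
To make the force/deposit workaround concrete, here is a generic sketch (module and hierarchical names are made up): in classic 4-state RTL simulation the counter below powers up as X, and X + 1 stays X, so the unknown spreads downstream unless something initializes it.

module free_counter (input logic clk, output logic [7:0] cnt);
  // No reset branch on purpose: cnt starts at X in RTL simulation.
  always_ff @(posedge clk)
    cnt <= cnt + 1;
endmodule

// Typical POR-testbench workaround: put a known value in at time 0,
// hiding the X rather than verifying the power-up behavior.
initial begin
  force tb_top.dut.u_free_counter.cnt = '0;
  #1 release tb_top.dut.u_free_counter.cnt;
end

Such forces make the simulation clean, but they exercise only one of the many possible power-up values, which is exactly the gap the paper discusses.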

SNUG allows users to discuss their current challenges and the emerging solutions they are using to address them. You can find all SNUG papers online via SolvNet (of course, a login is required!).

Posted in SystemVerilog, Tools & 3rd Party interfaces, Tutorial | 1 Comment »

VCS Built-in TLI connectivity for UVM to SystemC TLM 2.0

Posted by vikasg on 20th September 2012

Vikas Grover | Sr. Manager, Central Verification | AMD-India

One of the challenges faced in SoC verification is validating designs described in mixed languages and at mixed levels of abstraction. SystemC is a widely used language for defining system models at higher levels of abstraction. SystemC is an IEEE standard language for system-level modeling, and it is rich with constructs for describing models at various levels of abstraction, i.e. untimed, timed, transaction level, cycle accurate, and RTL. A transaction-level model simulates much faster than an RTL model; in addition, OSCI defined the TLM 2.0 interface standard for SystemC, which enables SystemC model interoperability and reuse at the transaction level.

On the other side, SystemVerilog is a unified language for design and verification. It is effective for designing advanced testbenches for both RTL and transaction-level models, since it has features like constrained randomization for stimulus generation, functional coverage, assertions, and object-oriented constructs (like classes, inheritance, etc.). The early availability of standard methodologies (providing a framework and testbench coding guidelines for reuse) like VMM, OVM and UVM enabled wide adoption of SystemVerilog in the industry. The UVM 1.0 Base Class Library, released in February 2011, includes the OSCI TLM 2.0 socket interface to enable interoperability of UVM with SystemC. Essentially, it allows a UVM testbench to include SystemC TLM 2.0 reference models. The UVM testbench can pass transactions to (or receive them from) SystemC models. The transaction passed across the SystemVerilog <-> SystemC boundary can be a TLM 2.0 generic payload or a uvm_sequence_item. The implementation of UVM to SC TLM 2.0 communication is vendor dependent.

Starting with the 2011.03 release, VCS provides a new TLI adaptor which enables UVM TLM 2.0 sockets to communicate with a SystemC TLM 2.0 based environment, passing transactions across language domains. You can also check out a couple of earlier posts from John Aynsley (VMM-to-SystemC Communication Using the TLI and Blocking and Non-blocking Communication Using the TLI) on SV-SystemC communication using the TLI. In this blog, I am going to describe the VCS TLI connectivity mechanism between UVM and SystemC. There are other advanced TLI features in VCS (like direct access of data, and invoking tasks/functions across the SV and SC languages), message unification across UVM-SC, transaction debug techniques, and extending the TLI adaptor for user-defined interfaces other than VMM/UVM/TLM2.0, which can be written about later.

With the support for TLM2.0 interfaces in both UVM and VMM, the importance of OSCI TLM2.0 across both SystemC and SystemVerilog is now apparent. UVM provides the following TLM2.0 socket interfaces (for both blocking and non-blocking communication):

  • uvm_tlm_b_initiator_socket
  • uvm_tlm_b_target_socket
  • uvm_tlm_nb_initiator_socket
  • uvm_tlm_nb_target_socket
  • uvm_analysis_port
  • uvm_subscriber

SystemC TLM2.0 consists of the following TLM 2.0 interfaces:

  • tlm_initiator_socket
  • tlm_target_socket
  • tlm_analysis_port
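
To make the UVM side concrete, here is a minimal sketch of a component exposing a blocking target socket for generic payload transactions (the class and instance names are made up; this is plain UVM code, not the adaptor itself):

class sc_ref_model_proxy extends uvm_component;
  `uvm_component_utils(sc_ref_model_proxy)

  // Blocking target socket carrying TLM 2.0 generic payload transactions
  uvm_tlm_b_target_socket #(sc_ref_model_proxy, uvm_tlm_generic_payload) b_tgt_skt;

  function new(string name, uvm_component parent);
    super.new(name, parent);
    b_tgt_skt = new("b_tgt_skt", this);
  endfunction

  // Called by whichever initiator is bound to this socket
  task b_transport(uvm_tlm_generic_payload t, uvm_tlm_time delay);
    // process the transaction here
  endtask
endclass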

The Built-in TLI adaptor solution for VCS is a general-purpose solution to simplify transaction passing between UVM and SystemC, as shown below. The transactions can be TLM 2.0 generic payload or uvm_sequence_item objects. UVM 1.0 provides the TLM 2.0 generic payload class as well.

The Built-in TLI adaptor is available as a pre-compiled library with VCS. The user would need to follow two simple steps to include the TLI adaptor in his/her verification environment.

  1. Include a header file in the SystemVerilog and SystemC code. The SystemVerilog header file provides a package which implements the bind function, parameterized on the uvm_sequence_item object.
  2. Invoke the bind function on the SystemVerilog and SystemC sides to connect each socket across languages. The bind function has a string argument which must be unique for each socket connection across SystemVerilog and SystemC.

As an example of these two steps, consider a UVM initiator “initiator_udf” on the SystemVerilog side driving a SystemC target “target_udf” through a TLM blocking socket.
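
Roughly, the SystemVerilog side of those two steps has the shape sketched below. The include file name and the bind function here are placeholders chosen only to show the structure, not the actual TLI API; please refer to the VCS TLI documentation for the real names.

// Step 1: include the TLI header on the SystemVerilog side
// ("tli_uvm_bindings.svh" is a placeholder name, not the actual file)
`include "tli_uvm_bindings.svh"

// Step 2: bind the UVM initiator socket to its SystemC peer using the unique
// string; tli_bind_b_socket is a placeholder for the adaptor's bind function.
initial begin
  tli_bind_b_socket(env.initiator_udf.b_init_skt, "str_udf_pkt");
end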

The TLI adaptor bind function uses the unique string “str_udf_pkt” to identify the socket connectivity across the SystemVerilog and SystemC domains. For multiple sockets, the user needs to invoke the TLI bind function once per socket. The TLI adaptor supports both blocking and non-blocking transport interfaces for sockets communicating across SystemVerilog and SystemC.

Thus, the Built-in UVM-SC TLI adaptor capability of VCS ensures that SystemC can be connected seamlessly in a UVM-based verification environment.

Posted in Communication, Interoperability, SystemC/C/C++, Tools & 3rd Party interfaces, Transaction Level Modeling (TLM) | 1 Comment »

Build your own code generator!

Posted by Amit Sharma on 25th June 2012

One of the critical problems developers cope with during the SoC development process (architecture planning, RTL design, verification, software development) is the constant need to synchronize different forms of the same data structure: the SoC register database. The SoC register database can be found with the SoC architecture team (who write the SoC register description document), design engineers (who implement the register structure in RTL code), verification engineers (who write the verification infrastructure, such as RAL code, and verification tests, such as exhaustive read/write tests of all registers), and software engineers (who use the register information to write embedded software).

Since the same information is used in different forms, it is only natural to have a single, central database which holds all the SoC register data. Ideally, you would like to generate all the required output files (documentation, UVM or VMM Register Abstraction Model, RTL, C headers, …) from this central database. Different vendors and CAD teams provide different automation solutions for doing this.

The RALF specification contains all of the necessary information to generate RTL and testbench code that implements the specified registers. There are many ways to code and implement RTL, so it is not possible to provide a general-purpose solution. As far as the testbench abstraction model is concerned, there are multiple ways of customizing your model post generation in both UVM and VMM. Callbacks, factories and configurable RAL model attributes are some of the ways through which the desired customization can be brought in. “The ‘user’ in RALF : get ralgen to generate ‘your’ code” highlights a very convenient way of bringing in SystemVerilog-compatible code which will be copied as-is into the RAL model in the desired scope. When it comes to generating the RTL and the ‘C’ headers, we cannot leave the customization to such a late stage. Also, different organizations and project groups have their own RTL and C-code coding styles, which means a generated output of a very generic nature might not be very helpful. For RTL generation, engineers would want the generated code to be power- and gate-count-efficient. Similarly, C register header generation often needs to follow coding styles and match the CPU firmware API. How do we bring all these customizations to the end user?

Using the RALF C++ API, you have full access to the parsed RALF data (through C++ routines), which you can use to implement a customized RTL code generator, or any other feature that needs RALF data. So, you can use it to generate your C header files or HTML documentation, translate the input RALF files to another register description format, or create custom covergroups and coverage convergence sequences (DAC 2012 User Track poster 6U.8 — Register Verification on a Fast Lane: Using Automation to Converge on UVM REG Coverage Models).

I have seen two instances of the need to generate a different register specification in the recent past and that is one of the reasons I decided to put this down in a blog.   Let me talk about the first instance..

One of the project groups was in the process of migrating from their own SV base classes to UVM. They had their own register description format from which they used to generate their Register Abstraction Model. This was a proven flow.

So, when they migrated to UVM, they wanted to have a flow which would validate the changes that they were making..

Given that they were moving to using RALF and ‘ralgen’, they didn’t want to create register specifications in the legacy format anymore. So, they wanted some automation for generating the specifications in the earlier format. So, how did they go about doing this?.. They took the RALF C++ APIs and, using them, they were able to create the necessary automation to generate the legacy format from RALF in no time.. (From what I remember, it was half a day’s work).. Everyone was involved in doing what they were best at, and that helped in the overall scheme of things.

The other customer had their own format from which they were generating RTL, firmware code and HTML. They had the necessary automation to create RALF for generating the UVM register model. They also had a mechanism in place to generate IPXACT from this format, as well as vice versa.. So, to complete the traceability matrix, they wanted a RALF to IPXACT conversion.. Again, the most logical approach was to take the RALF C++ APIs and use them to iterate through the parsed RALF data and generate IPXACT.. Though this effort is not complete, it took just a day or so to be able to generate a valid IPXACT 1.5 schema, and all that is required now is some additional work to smoothen the corners.

How do you start using these APIs to build your own code/HTML generators? You need to include ‘ralf.hpp’ (which is in $VCS_HOME/include) in your generator code. Then, to compile the code, you need to pick up the shared library libralf.so from the VCS installation:

$CPP_COMPILER $CFLAGS -I${VCS_HOME}/include -L${VCS_HOME}/lib -lralf your-file.cpp $LDFLAGS

#include <cstdio>
#include <cstdlib>
#include "ralf.hpp"

int main(int argc, char *argv[])
{
  // Check basic command line usage...
  if (argc < 3) {
    fprintf(stderr, "Error: Wrong Usage.\n");
    // Show correct usage ...
    exit(1);
  }

  /**
   * Parse command line arguments to get the essential
   * constructor arguments. See the documentation of class
   * ralf::vmm_ralf's constructor parameters.
   * (The argument positions used below are only illustrative.)
   */
  const char *ralf_file = argv[1];   // input RALF file
  const char *top       = argv[2];   // top-level block/system name
  const char *rtl_dir   = ".";       // output directory for generated RTL
  const char *inc_dir   = ".";       // include directory

  {
    /**
     * Create a ralf::vmm_ralf object by passing in proper
     * constructor arguments.
     */
    ralf::vmm_ralf ralf_data(ralf_file, top, rtl_dir, inc_dir);

    /**
     * Get the top level object storing the parsed RALF
     * block/system data and traverse that, top-down, to get
     * access to complete RALF data.
     */
    const ralf::vmm_ralf_blk_or_sys_typ *top_lvl_blk_or_sys
        = ralf_data.getTopLevelBlockOrSys();

#ifndef GEN_RTL_IN_SNPS_STYLE
    /*
     * TODO -- Traverse the parsed RALF data structure top-down,
     * starting from 'top_lvl_blk_or_sys', to get complete access
     * to the RALF data and then do whatever you want with it.
     * One typical usage of the parsed RALF data is to generate
     * RTL code in your own style.
     */
    //
    // TODO -- Add your RTL generator code here.
    //
    (void)top_lvl_blk_or_sys;   // silence unused-variable warnings until then
#else
    /*
     * As part of this library, Synopsys also provides a
     * default RTL generator, which can be invoked by calling
     * the 'generateRTL()' method of the 'ralf::vmm_ralf'
     * class, as demonstrated below.
     */
    ralf_data.generateRTL();
#endif
  }

  return 0;
}

Essentially, you have a handle to the parsed database, and with the available APIs you can do whatever you want with it :) .. The documentation of the APIs is shipped with the VCS installation.. Also, if you are like me and would rather hack away at existing code than start from scratch, you can just check with Synopsys support to get existing templates that dump out code in a specific format, and start modifying those for your requirements..

Posted in Automation, Coverage, Metrics, Customization, Interoperability, Register Abstraction Model with RAL, Tools & 3rd Party interfaces, UVM, VMM infrastructure | Comments Off

Auto-Generation of Performance Charts with the VMM Performance Analyzer

Posted by Amit Sharma on 30th September 2011

Amit Sharma, Synopsys

Quoting from one of Janick’s earlier blogs on the VMM Performance Analyzer, Analyzing results of the Performance Analyzer with Excel:
”The VMM Performance Analyzer stores its performance-related data in a SQL database. SQL was chosen because it is an ANSI/ISO standard with a multiplicity of implementations, from powerful enterprise systems like Oracle, to open source versions like MySQL, to simple file-based ones like SQLite. SQL offers the power of a rich query and analysis language to generate the reports that are relevant to your application.”

And given that not everyone understands SQL, he goes on to show how one can get VMM Performance Analyzer data from an SQLite database into an Excel spreadsheet and then subsequently analyze the data by doing any additional computation and creating the appropriate graphs. This involves a set of steps leveraging the SQLite ODBC (Open Database Connectivity) driver, and thus requires installing it.
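
Even without Excel, the same data can be inspected directly with the sqlite3 command-line shell (the table name below is hypothetical; use .tables to see what your database actually contains):

% sqlite3 sql_data.db
sqlite> .tables
sqlite> select * from run_statistics limit 10;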

This article presents a mechanism whereby Tcl scripts are used to bring in the next level of automation, so that users can retrieve the required data from the SQL DB and even automate the results analysis by auto-generating the relevant performance charts for statistical analysis.. Also, as users migrate to using DVE as a single platform for their design debug, coverage analysis and verification planning, it is shown how these scripts can be integrated into DVE, so that the generation process is a matter of using the relevant menus and clicking the appropriate buttons in DVE.

For generating the SQL databases with the VMM Performance Analyzer, an SQLite installation is required, which can be obtained from www.sqlite.org. Once you have installed it, you need to set the SQLITE3_HOME environment variable to the path where it is installed. Once that is done, these are the steps to follow to generate the appropriate graphs from the data produced by your batch regression runs..

First, you need to download the utility from the link provided in the article DVE TCL Utility for Auto-Generation of Performance Charts

Once it is extracted, you can try it on the tl_bus example that ships with the utility. You would need to go to the directory vmm_perf_utility/tl_bus.

Use make to run the tl_bus example, which will generate sql_data.db and sql_data.sql. Now, go to the ‘chart_utility’ directory:

(cd vmm_perf_utility/chart_utility/)

The TCL scripts which are involved in the automation of the performance charts are in this directory.

This script vmm_perf_utility/chart_utility/chart.tcl  can then be executed from inside DVE as shown below

dve_an

Once that is done, it will add a button “Generate Chart” in the View menu.. BTW, adding a button is fairly simple..

eg:    gui_set_hotkey -menu "View->Generate Chart" -hot_key "G"

is how the button gets added..

Now, click on “Generate Chart” to select the SQL database.

dve_an2

This will bring up the dialog box to select the SQL database..

dve_an3

Once the appropriate database is selected, the user can select which table to work with and then generate the appropriate charts.. The options offered to the user are based on the data that was dumped into the SQL database.. From the combinations of charts that are shown, select the graph that you want, and the required graphs will be generated for you. This is what you see when you use the SQL DB generated for the TL bus example:

dve_an4

dve_an5

Once you have made the selections, you would see the following chart generated..

dve_an6

Now, obviously, you as a user would not just want the graphs to be generated, but you would also want the underlying values to be available to you..

Thus, once you use this chart generation mechanism, the relevant .csv files corresponding to the graphs that you have generated are also dumped for you..

These will be generated in the perfReports directory, which is created as well.. So, you can do any additional custom computation in Excel or by running your own scripts.. To generate the graphs for any other example, you just need to pick up the appropriate SQL DB that was generated from your simulation runs and then generate the reports and graphs of your interest.

So whether you use the Performance Analyzer in VMM (Performance and statistical analysis from HDL simulations using the VMM Performance Analyzer) or in UVM (Using the VMM Performance Analyzer in a UVM Environment), and even while you are doing your own PA customizations (Performance appraisal time – Getting the analyzer to give more feedback), you can easily generate whatever charts you require, which will help you analyze all the different performance aspects of the design you are verifying..

Posted in Automation, Coverage, Metrics, Customization, Performance Analyzer, Tools & 3rd Party interfaces | Comments Off

Closed Loop Register Verification using IDesignSpec and the Register Abstraction Layer

Posted by Amit Sharma on 26th September 2011

Nitin Ahuja, Agnisys Technology Pvt. Ltd

In the previous article, titled “Automatic generation of Register Model for VMM using IDesignSpecTM”, we discussed how it is advantageous to use a register model generator such as IDesignSpecTM to automate the process of RALF model generation. Taking it forward, in this article we will discuss how to close the loop on register verification.

Various forms of coverage are used to ensure that registers are functioning properly. There are three coverage models in VMM. They are:

1. reg_bits coverage: this model is used to make sure that all the bits in the register are covered. This model works by writing and reading both 1 and 0 on every register bit, hence the name. This is specified using “cover +b” in the RALF model.

2. field_vals coverage: field value coverage model is implemented at the register level and supports value coverage of all fields and cross coverage between fields and other cross coverage points within the same register. This is specified using “cover +f” in the RALF model. User can specify the cross coverage depending on the functionality.

3. Address map: this coverage model is implemented at block level and ensures that all registers and the memories in the block have been read from and written to. This is specified using “cover +a” in the RALF model.

We will discuss how coverage can be switched on/off and how the type of coverage can be controlled for each field directly from the register specification.

Once the RALF model is generated, the next step in verification is to generate the RTL and the SystemVerilog RAL model using ‘ralgen’. The generated RAL model along with the RTL can be compiled and simulated in the VMM environment to generate the coverage database. This database is used for the report generation and analysis.

Reports can be generated using IDesignSpecTM (IDS). IDS-generated reports have an advantage over other reports in that they show all the coverage at a glance in a much more concise way.

Turning Coverage ON or OFF

IDesignSpecTM enables the users to turn ON/OFF all the three types of coverage from within the MS Word specification itself.

Coverage can be specified and controlled using the “coverage” property in IDesignSpecTM which has the following possible values:

image

The hierarchical “coverage” property enables users to control the coverage of the whole block or at the chip level.

Here is a sample of how coverage can be specified in IDesignSpecTM:

image

image

This would be the corresponding RALF file :

agnisys_ralf

image

The coverage bins for each CoverPoint along with the cross for the various CoverPoints can also be defined in the specification as shown below:

image

This would translate to the following RALF:

image

Now, the next step after RALF generation would be to generate the RAL Model from the IDS generated RALF.

RAL MODEL AND RTL GENERATION FROM RALF:

The IDS generated RALF can be used with the Synopsys ‘ralgen’ to generate the RAL  (VMM or UVM) model as well as the RTL.

RAL model can be generated by using the following command:

image

If you specify -uvm in the ralgen invocation above, a UVM register model is generated instead.
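
For reference, a typical invocation looks roughly like this; the switches shown are illustrative and should be double-checked against the ralgen documentation, and the block name is taken from the Stopwatch example used later in this article:

ralgen -l sv -t stopwatch stopwatch.ralf          # VMM RAL model
ralgen -uvm -l sv -t stopwatch stopwatch.ralf     # UVM register model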

COMPILATION AND REPORT GENERATION:

Once the RTL and the RAL model are generated using the ‘ralgen’, the complete model can be compiled and simulated in the VMM environment using VCS.

To compile the model use the following command on the command line:

vcs -R +plusarg_save -sverilog -o "simv1" -ntb_opts rvm+dtm +incdir+<directories to search `defines> <files to be compiled> +define+RAL_COVERAGE

The compilation and simulation generates the simulation database which is used for the generation of the coverage reports.

Coverage reports can be generated in various forms, but the most concise is a graphical view showing all the coverage at a glance. For this, a Tcl script, “ivs_simif.tcl”, takes the simulation database and generates a text-based report on execution of the following command:

% ivs_simif.tcl -in simv.vdb -svg

Before running the above command, set the environment variable “IDS_SIM_DIR”; the text reports are generated at that location. This also tells IDS where to look for the simulation data file.

A detailed graphical view of the report can be generated from IDS with the help of this text report. To generate the graphical report in the form of “scalable vector graphics” (SVG) select the “SVG” output from the IDS config and regenerate.

Another way of generating the SVG is to use the IDS-XML or the Doc/Docx specification of the model as the input to IDS in batch mode, generating the graphical report of the simulation with the following command:

% idsbatch <IDS_generated_XML or doc/docx specification> -out “svg” -dir output_directory

Coverage Reports

IDesignSpec generates two types of reports from the input database.

They are:

1. Field_vals report

2. Reg_bits report

Field_vals report:

Field_vals report gives the graphical view of the field_vals coverage and the address coverage of the various registers and their respective fields.

The amount of coverage for the field (CoverPoints) is depicted by the level of green color in the fields, while that for complete register (CoverGroup) is shown by the color of name of the register.

The address coverage for the individual register (CoverPoint) is shown by the color of the address of the register (green if addressed; black if not addressed), while that of the entire block (CoverGroup) is shown by the color of the name of the block.

The coloring scheme for all the CoverGroups i.e. register name in case of the field_vals coverage and block name in case of the address coverage is:

1. If the overall coverage is greater than or equal to 80% then the name appears in GREEN color

2. If the coverage is greater than 70% but less than 80% then it appears in YELLOW

3. For coverage less than 70% name appears in RED color

Figure1 shows the field_vals and address coverage.

image

Figure:  Closed loop register verification using RALF and IDS

The above sample gives the following coverage information:

a. 2 registers, T and resetvalue, are not addressed out of a total of 9 registers. Thus the overall coverage of the block falls in the range >70% and <80%, which is depicted by the color of Stopwatch (the name of the block).

b. All the fields of the registers are filled with some amount of green, which shows the amount of coverage. As an example, field T1 of register arr is covered 100%, so it is completely filled, while FLD4 of register X is covered only about 10%. The exact coverage value can be obtained by hovering over the field to get a tooltip showing it.

c. The color of the name of the register, for example X in red, shows the overall coverage of the whole register, which is less than 70% for X.

Reg_bits report:

Reg_bits report gives the detailed graphical view of the reg_bits coverage and address coverage.

Address coverage for reg_bits is shown in the same way as for the address coverage in field_vals. Reg_bits coverage has 4 components, that is,

1. Written as 1

2. Read as 1

3. Written as 0

4. Read as 0

Each of the 4 components is allocated a specific region inside a bit. If that component of the coverage is hit then the corresponding region is shown as green else it is blank. The overall coverage of the entire register is shown by the color of the name of the register as in the case of the field_vals.

image

The above sample report shows that there is no issue in “Read as 1” for the ‘resetvalue’ register, while the other types of reads/writes have not been hit completely.

Thus, in this article we described what the various coverage models for a register are and how to generate the RALF coverage model of the registers automatically with minimum effort. An intuitive visualization of the register coverage data eases the effort involved in deciphering coverage reports from lengthy simulation log files. This type of closed-loop register verification ensures better coverage and high-quality results in less time. Hope you found this useful.. Do share your feedback, and also let me know if you want any additional details to get the maximum benefit from this flow..

Posted in Automation, Coverage, Metrics, Organization, Register Abstraction Model with RAL, Tools & 3rd Party interfaces, Verification Planning & Management | 1 Comment »

Automatic generation of Register Model for VMM using IDesignSpec

Posted by Amit Sharma on 5th August 2011

Nitin Ahuja, Verification Engineer, Agnisys Technology Pvt Ltd

Generating a register model by hand can take up a lot of time in the design process and may result in serious bugs, which makes the code inefficient. On the other hand, generating the register model using a register model generator such as IDesignSpecTM reduces the coding effort and produces more robust code by avoiding the bugs in the first place, thus making the process more efficient and significantly reducing the time to market.

A register model generator proves efficient in the following ways:

1. Error-free code in the first place: being automatically generated, the register model code is free from human as well as logical errors.

2. In the case of a change in the register model specification, it is easy to modify the spec and regenerate the code in no time.

3. Generating all kinds of hardware, software and industry-standard specifications, as well as verification code, from a single source of specification.

IDesignSpecTM (IDS) is capable of generating all the RTL as well as verification code such as VMM (RALF) from a register specification defined in Word, Excel, OpenOffice or IDS-XML.

Getting Started

A simple register can be defined inside a block in IDesignSpecTM as shown below:

The above specification is translated into the following RALF code by IDS.

image

As a convention, ralgen generates backdoor access for all registers whose hdl_path is mentioned in the RALF file. Thus, special properties on the register such as hdl_path and coverage can be mentioned inside the IDS specification itself and will be appropriately translated into the RALF file.

The properties can be defined as below:

For Block:

image

As for the block, hdl_path, coverage or any other such property can be mentioned for other IDS elements, such as a register or field.

For register/field:

clip_image002
OR

clip_image004

Note: the coverage property can take the following three possible values:

1. ON/on: this enables all the coverage types, i.e. address coverage for blocks or memories, and REG_BITS and FIELD_VALS coverage for registers and fields.

2. OFF/off: by default, all coverage is off. This option is meaningful only when coverage has been turned ON at the top level of the hierarchy or at the parent; to turn off the coverage for some particular register, specify ‘coverage=off’ for that register or field. The coverage for that specific child will be the inverse of its parent’s setting.

3. abf: Any combination of these three characters can be used to turn ON the particular type of the coverage. These characters stand for:

· a : address coverage

· b : reg_bits

· f : field_vals

For example to turn on the reg_bits and field_vals coverage, it can be mentioned as:

“coverage=bf”.

In addition to these properties, there are few more properties that can be mentioned in a similar way as above. Some of them are:

1. bins: various bins for the coverpoints can be specified using this property.

Syntax: {bins=”bin_name = {bin} ; <bin_name>…”}

2. constraint : constraints can also be specified for the register or field or for any element.

Syntax : {constraint=”constraint_name {constraint};<constraint_name> …”}

3. [vmm]<code>[/vmm]: this tag gives users the ability to specify their own piece of SystemVerilog code in any element.

4. cross: cross for the coverpoints of the registers can be specified using cross property in the syntax:

Syntax: {cross = “coverpoint_1 <{label label_name}>;< coverpoint_2>…”}
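
As an illustration of the syntax above (field, bin and range values are made up, not from a real specification), a field could carry properties such as:

{bins="val_low = {[0:127]}; val_high = {[128:255]}"}

{cross="FLD1; FLD2"}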

Different types of registers in IDS :

1.Register Arrays:

Register arrays in RALF can be defined in IDS using register groups. To define a register array of size ‘n’, place a register inside a register group with the repeat count equal to the size of the array (n).

For example, a register array named “reg_array” with size equal to 10 can be defined in IDS as follows:

clip_image006

The above specification is translated into the following VMM code by IDS:

clip_image008

2.Memory :

Similar to register arrays, memories can also be defined in IDS using register groups. The only difference between the memory and register array definitions is that in the case of a memory, the ‘external’ property is set to “true”. The size of the memory is calculated as ((End_Address – Start_Address) * Repeat_Count).

As an example a memory of name “RAM” can be defined in IDS as follows:

clip_image010

The above memory specification will be translated into following VMM code:

clip_image012

3.Regfile

A regfile in RALF can be specified in IDS using a register group containing multiple registers (> 1).

One such regfile with 3 registers, repeated 16 times is shown below:

clip_image014

Following is the IDS generated VMM code for the above reg file:

clip_image016

RAL MODEL AND RTL GENERATION FROM RALF:

The IDS generated RALF can be used with the Synopsys Ralgen to generate the RAL model as well as the RTL.

To generate the RAL model use the following command:

clip_image018

And for the RTL generation use the following command:

clip_image020

SUMMARY:

It is beneficial to generate the RALF using the register model generator IDesignSpecTM, as it guarantees bug-free code, making it more robust, and also reduces the time and effort. In case of modifications to the register model specification, it enables users to regenerate the code in no time.

Note:

We will extend this automation further in the next article where we will cover details about how you can “close the loop” on register verification. The “Closed Loop Register Verification” article will be available on VMM Central soon. Meanwhile if you have any questions/comments you can reach me at nitin[at]agnisys[dot]com .

Posted in Automation, Organization, Register Abstraction Model with RAL, Tools & 3rd Party interfaces, VMM infrastructure | 1 Comment »

The One stop shop: get done with everything you need to do with your registers

Posted by Amit Sharma on 14th July 2011

Ballori Bannerjee, Design Engineer, LSI India

Processes are created, refined and improved upon. The productivity gain that starts with a big leap subsequently slows down, and as the complexity of tasks increases, the existing processes can no longer scale up. This drives the next paradigm shift towards new processes and automation. As in all realms of technology, this is true in the context of the register development and validation flow as well.. So, let’s look at how we changed our process to get the desired boost in productivity..

The following flowchart represents our legacy register design and validation process.. This was a closed process and served us well initially, when the number of registers, their properties etc. were limited.. However, with the complex chips that we are designing and validating today, does this scale up?

register_verif

As an example, in a module that we are implementing, there are four thousand registers. Translating into number of fields, for 4000 32-bit registers we have 128,000 fields, with different hardware and software properties!

Coding the RTL with address decoding for 4000 registers, with fields having different properties, is a week’s effort for a designer. Developing a reusable randomized verification environment with tests like reset value checks and read-writes is another 2 weeks, at the least. Closure on bugs requires several feedback loops from verification to update the design or the document. So overall, there is at least a month’s effort, plus maintenance overhead any time the address mapping is modified or a register is updated/added.

This flow is susceptible to errors where there could be a disconnect between the document, design, verification and software.

So, what do we do? We redefine the process! And this is what I will be talking about, our automated register design and verification (DV) flow which streamlines this process.

AUTOMATED REGISTER DESIGN AND VERIFICATION FLOW

The flow starts with the designer modeling the registers using a high-level register description language. In our case, we use SystemRDL, and then leverage third-party tools to generate the various downstream components from the RDL file:

· RTL in Verilog/VHDL

· C/C++ code for firmware

· Documentation ( different formats)

· High level verification environment code (HVL) in VMM

This is shown below. The RDL file serves as a one-stop point for any register update required following a requirement change.

register2

Automated Register DV Flow

Given that it is critical to create an efficient object-oriented abstraction layer to model registers and memories in a design under test, we exploit VMM RAL for the same. How do we generate the VMM RAL model? It is generated from RALF. Many 3rd-party tools are available to generate RALF from various input formats, and we use one of them to generate RALF from SystemRDL.

Thus, a complete VMM compliant randomized, coverage driven register verification environment can be created by extending the flow such that:

i. Using a 3rd-party tool, the verification component generated from SystemRDL is RALF, Synopsys’ Register Abstraction Layer File.

ii. The RALF is passed through ralgen, a Synopsys utility which converts the RALF information into a complete VMM-based register verification environment. This includes automatic generation of pre-defined tests, like reset value checks and bit bash tests of registers, and a complete functional coverage model, which would otherwise have taken considerable staff-days of effort to write.

The flowchart below elucidates the process.

register3

Adopting the automated flow, it took 2 days to write the RDL. The rest of the components were generated from this source. A small amount of manual effort may be required for items like back-door path definitions, but it is minimal and a one-time effort. The overall benefits are much more than the number of staff-days saved, and we see this as something which gives us perpetual returns.. I am sure a lot of you are already bringing some amount of automation into your register design and verification setup, and if you aren’t, it’s time you do it :)

While we are talking about abstraction and automation, let’s look at another aspect of register verification.

Multiple Interfaces/Views for a register

It is possible to have registers in today’s complex SOC designs which need to be connected to two or more different buses and accessed differently. The register address will be different for the different physical interfaces it is shared between. So, how do we model this..

This can be defined in SystemRDL by using a parent addressmap with bridge property, which contains sub addressmaps representing the different views.

For example:

addrmap dma_blk_bridge {
  bridge; // top level address map
  reg commoncontrol_reg {
    shared; // register will be shared by multiple address maps
    field {
      hw=rw;
      sw=rw;
      reset=32'h0;
    } f1[32];
  };

  addrmap { // Define the map for the AHB side of the bridge
    commoncontrol_reg cmn_ctl_ahb @0x0; // at address=0
  } ahb;

  addrmap { // Define the map for the AXI side of the bridge
    commoncontrol_reg cmn_ctl_axi @0x40; // at address=0x40
  } axi;
};

The equivalent of a multiple-view addressmap in RALF is a domain.

This allows one definition of the shared register while allowing access to it from each domain, where the register address associated with each domain may be different. The following code is the RALF with the domain implementation for the above RDL.

register commoncontrol_reg {
  shared;
  field f1 {
    bits 32;
    access rw;
    reset 'h0;
  }
}

block dma_blk_bridge {
  domain ahb {
    bytes 4;
    register commoncontrol_reg = cmn_ctl_ahb @'h00;
  }

  domain axi {
    bytes 4;
    register commoncontrol_reg = cmn_ctl_axi @'h40;
  }
}

Each physical interface is a domain in RALF. Only blocks and systems have domains; registers are in the block. For access to a register from one interface/domain, RAL provides read/write methods which can be called with the domain name as an argument. This is shown below..

ral_model.STATUS.write(status, data, "pci");

ral_model.STATUS.read(status, data, "ahb");

This considerably simplifies the verification environment code for shared register accesses. For more on the same, you can look at: Shared Register Access in RAL through multiple physical interfaces

However, unfortunately, in our case the tools we used did not support multiple interfaces, and the automated flow created a RALF having effectively two or more top-level systems re-defining the registers. This can blow up the RALF file size and also the verification environment code.

system dma_blk_bridge {
  bytes 4;
  block ahb (ahb) @0x0 {
    bytes 4;
    register cmn_ctl_ahb @0x0 {
      bytes 4;
      field cmn_ctl_ahb_f1 (cmn_ctl_ahb_f1) @0 {
        bits 32;
        access rw;
        reset 0x0;
      }
    }
  }

  block axi (axi) @0x0 {
    bytes 4;
    register cmn_ctl_axi @0x40 {
      bytes 4;
      field cmn_ctl_axi_f1 (cmn_ctl_axi_f1) @0 {
        bits 32;
        access rw;
        reset 0x0;
      }
    }
  }
}

Thus, as seen above, the tool generates two blocks, ‘ahb’ and ‘axi’, and re-defines the register in each block. For multiple shared registers, the resulting verification code will be much bigger than if domains had been used.

Also, without the domain-associated read/write methods shown above, accessing a shared register from a particular domain/interface takes at least a few extra lines of code per register. This makes writing the test scenarios complicated and wordy.

Using ‘domain’ in RALF and VMM RAL makes shared register implementation and access in the verification environment easy. We hope that we will soon be able to have our automated flow leverage this effectively..

If you are interested to go through more details about our automation setup and register verification experiences, you might want to look at: http://www10.edacafe.com/link/DVCon-2011-Automated-approach-Register-Design-Verification-complex-SOC/34568/view.html

Posted in Automation, Modeling, Register Abstraction Model with RAL, Tools & 3rd Party interfaces | 4 Comments »

Using VMM template Generator to ramp up your testbench development

Posted by Amit Sharma on 25th October 2010

Amit Sharma, Synopsys
‘vmmgen’, the template generator for creating robust, extensible VMM-compliant environments, has been available for a long time with VMM and was upgraded significantly with VMM 1.2. Though the primary functionality of ‘vmmgen’ is to help minimize the VIP and environment development cycle by providing detailed templates for developing VMM-compliant verification environments, a lot of folks also use it to quickly understand how different VMM base classes can be used in different contexts. This works because the templates use a rich set of the latest VMM features to ensure the appropriate base classes and their features are picked up optimally.

It has a wide user interaction mechanism which presents the available features and options, so the user can pick the modes which are most relevant to his or her requirements. It also provides the option to supply your own templates, thus providing a rich layer of customization. Based on the need, one can generate individual templates for different verification components, or generate a complete verification environment which comes with a ‘Makefile’ and an intuitive directory structure, thus propelling you on your way to catching the first set of bugs in your DUT. I am sure all of you know where to pick up ‘vmmgen’ from: it is available in the <VMM_HOME>/Shared/bin area or in $VCS_HOME/bin.

Some of the rich set of features available now includes:

• Template Options:

– Complete environment generation

– Individual templates generation

• Options to create Explicitly phased environments or Implicitly phased environment or to mix Implicitly phased components and Explicitly phased components

• Usage of VMM Shorthand macros

• Creating RAL based environment, and providing Multiplexed domain support if required

• Hooking up VMM Performance Analyzer at the appropriate interfaces

• Hooking up the DS Scoreboard at the relevant interfaces (with options to choose from a range of integration options, e.g.: through callbacks, through TLM2.0 analysis ports, or connecting directly to transactors, channels or notifications)

• Ability to hook up different generators (atomic, scenario, Multistream generators) at different interfaces

• Creating a scenario library and Multistream scenario creation

• Multi-driver generator support for different kinds of transactions in the same environment

• Factory support for transactions, scenarios and multi stream scenarios. Sample factory testcase which can explain the usage of transaction override from a testcase.

• ‘RTL config’ support for drivers and receivers.

• Various types of unidirectional and bi-directional TLM connections between generator and driver.

• Analysis ports/exports OR parameterized notify observers to broadcast the information from monitor to scoreboard and coverage collectors.

• Multi test concatenation support and management to run the tests

• Creating portable Interface wrapper object, and setting up interface connections to the testbench components using vmm_opts::set_object/get_object_obj

• Creating a Generic slave component

• Option to use default names or user provided names for different components

As you can see, the above list itself is quite comprehensive, and let me tell you that it is not exhaustive, as there are many more features in vmmgen.

With respect to usage as well, there are multiple flavors. In the default mode, the user is taken through multiple detailed choices/options as he or she creates and connects the different components of the verification environment. However, some folks might want to use ‘vmmgen’ within their own wrapper script/environment, and for them there are options to generate the environments by providing all required options on the command line or through a configuration file… Some of these switches include:

-SE [y/n] : Generates a complete environment with sub-environments

-RAL [y/n] : Create RAL based verification environments

-RTL [y/n] : Generates RTL configuration for the environment

-ENV <name>, -TR <name> : Provide the names for the environment class and transaction classes. Names for multiple transaction classes can be provided as well:

vmmgen -l sv -TR tr1+tr2

-cfg_file <file_name> : Option to provide a configuration file for the options

There is an option to generate an environment quickly by taking the user through the minimum number of questions (-q).

Additionally, the user can provide his or her own templates through the -L <template directory> option.
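
Putting a few of these together, a wrapper script might invoke something like the following (a hypothetical combination of the switches described above; check the vmmgen help for the exact syntax):

vmmgen -l sv -SE y -RAL y -ENV soc_env -TR pkt_tr+cfg_tr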

As far as individual template generation goes, you have the complete list. Here, I am outlining this down for reference:

image

I am sure a lot of you have already been using ‘vmmgen’. For those who haven’t, I encourage you to try out its different options. I am sure you will find it immensely useful: it will not only help you create verification components and environments quickly, but will also make sure they are optimal and appropriate for your requirements.

Posted in Automation, Coding Style, Customization, Modeling Transactions, Organization, Performance Analyzer, Scoreboarding, Tools & 3rd Party interfaces, VMM infrastructure | Comments Off

VMM Smart Log in DVT

Posted by Cristian Amitroaie on 1st April 2010

In this blog, I’d like to cover two cool features that are coming up in the DVT Integrated Development Environment (IDE). The first is the ability to color VMM log messages; the second is the ability to hyperlink these messages to the VMM source code.

Here is a snapshot showing VMM logs being colored and hyperlinked to source code:

vlogdt-vmm-smart-log

In addition to providing hyperlinks from the VMM log to source code, DVT also provides hyperlinks to VCS compilation/simulation errors and warnings. To enable these features, all you have to do is define the right parameters in the DVT Run Configuration window. For instance, you can specify the compilation/simulation commands and turn on VMM log filtering from the corresponding tab.

APB_Run_Configuration

This is just one among many other VMM features that are available in DVT.

For more details see www.dvteclipse.com.

Posted in Debug, Tools & 3rd Party interfaces, VMM | Comments Off
