Verification Martial Arts: A Verification Methodology Blog

Archive for the 'Performance Analyzer' Category

Auto-Generation of Performance Charts with the VMM Performance Analyzer

Posted by Amit Sharma on 30th September 2011

Amit Sharma, Synopsys

Quoting from one of Janick's earlier blogs on the VMM Performance Analyzer, Analyzing results of the Performance Analyzer with Excel:

"The VMM Performance Analyzer stores its performance-related data in a SQL database. SQL was chosen because it is an ANSI/ISO standard with a multiplicity of implementations, from powerful enterprise systems like Oracle, to open source versions like MySQL, to simple file-based ones like SQLite. SQL offers the power of a rich query and analysis language to generate the reports that are relevant to your application."

And given that not everyone understands SQL, he goes on to show how one can get VMM Performance Analyzer data from a SQLite database into an Excel spreadsheet and then analyze the data further by doing additional computations and creating the appropriate graphs. This involves a set of steps leveraging the SQLite ODBC (Open Database Connectivity) driver, and thus requires it to be installed.

This article presents a mechanism in which TCL scripts bring in the next level of automation, so that users can retrieve the required data from the SQL database and even automate the results analysis by auto-generating the relevant performance charts for statistical analysis. Also, as users migrate to using DVE as a single platform for design debug, coverage analysis and verification planning, it is shown how these scripts can be integrated into DVE, so that chart generation becomes a matter of using the relevant menus and clicking the appropriate buttons in DVE.

For generating the SQL databases with the VMM Performance Analyzer, an SQLite installation is required, which can be obtained from www.sqlite.org. Once you have installed it, you need to set the SQLITE3_HOME environment variable to the path where it is installed. Once that is done, these are the steps to follow to generate the appropriate graphs from the data produced by your batch regression runs.

First, you need to download the utility from the link provided in the article DVE TCL Utility for Auto-Generation of Performance Charts

Once it is extracted, you can try it on the tl_bus example that ships with the utility. You need to go to the directory vmm_perf_utility/tl_bus.

Use make to run the tl_bus example, which will generate sql_data.db and sql_data.sql. Now, go to the 'chart_utility' directory

(cd vmm_perf_utility/chart_utility/)

The TCL scripts which are involved in the automation of the performance charts are in this directory.

This script, vmm_perf_utility/chart_utility/chart.tcl, can then be executed from inside DVE as shown below:

dve_an

Once that is done, it will add a "Generate Chart" button to the View menu. Incidentally, adding a button is fairly simple; e.g.:

gui_set_hotkey -menu "View->Generate Chart" -hot_key "G"

is how the button gets added.

Now, click on "Generate Chart" to select the SQL database.

dve_an2

This will bring up the dialog box to select the SQL database.

dve_an3

Once the appropriate database is selected, the user can select which table to work with and then generate the appropriate charts. The options presented to the user are based on the data that was dumped into the SQL database. From the combinations of charts shown, select the graph that you want, and the required graphs will be generated for you. This is what you see when you use the SQL database generated for the TL bus example:

dve_an4

dve_an5

Once you have made the selections, you will see the following chart generated:

dve_an6

Now, obviously, as a user you would not just want the graphs to be generated; you would also want the underlying values to be available to you.

Thus, once you use this chart generation mechanism, the relevant .csv files corresponding to the graphs you have generated are also dumped for you.

These are generated in a perfReports directory that is created as well, so you can do any additional custom computation in Excel or by running your own scripts. To generate the graphs for any other example, you just need to pick up the appropriate SQL database generated from your simulation runs and then generate the reports and graphs of interest.

So whether you use the Performance Analyzer in VMM (Performance and statistical analysis from HDL simulations using the VMM Performance Analyzer) or in UVM (Using the VMM Performance Analyzer in a UVM Environment), or even while you are doing your own PA customizations (Performance appraisal time – Getting the analyzer to give more feedback), you can easily generate whatever charts you require, which will help you analyze all the different performance aspects of the design you are verifying.

Posted in Automation, Coverage, Metrics, Customization, Performance Analyzer, Tools & 3rd Party interfaces | Comments Off

Using the VMM Performance Analyzer in a UVM Environment

Posted by Amit Sharma on 23rd August 2011

As a generic VMM package, the Performance Analyzer (PAN) is not based on, nor does it require, specific shared resources, transactions or hardware structures. It can be used to collect statistical coverage metrics relating to the utilization of a specific shared resource, and it helps measure and analyze many different performance aspects of a design. UVM does not have a performance analyzer as part of its base class library as of now. Given that collecting, tracking and analyzing performance metrics of a design has become a key checkpoint in today's verification, there is a lot of value in integrating the VMM Performance Analyzer into a UVM testbench. To demonstrate this, we will use both VMM and UVM base classes in the same simulation.

Performance is analyzed based on user-defined atomic resource utilizations called 'tenures'. A tenure refers to any activity on a shared resource with a well-defined starting and ending point. A tenure is uniquely identified by an automatically-assigned identifier. We take the XBUS example in $VCS_HOME/doc/examples/uvm_1.0/simple/xbus as a demo vehicle for the UVM environment.

Step 1: Defining data collection

Data is collected for each resource in a separate instance of the “vmm_perf_analyzer” class. These instances should be allocated in the build phase of the top level environment.

For example, in xbus_demo_tb.sv:

image
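In outline, the allocation looks roughly like the sketch below. This is a simplified xbus_demo_tb; the member names (perf_db, xbus_perf) and the reduced build_phase are assumptions of this illustration, not the exact code shown in the image (vmm_perf.sv and the UVM library are assumed to be compiled alongside).

   class xbus_demo_tb extends uvm_env;
      `uvm_component_utils(xbus_demo_tb)

      vmm_sql_db_sqlite perf_db;   // SQLite database shared by all analyzers
      vmm_perf_analyzer xbus_perf; // one analyzer (one table) per shared resource

      function new(string name, uvm_component parent);
         super.new(name, parent);
      endfunction

      virtual function void build_phase(uvm_phase phase);
         super.build_phase(phase);
         perf_db   = new("xbus_perf_data");            // creates the SQLite database
         xbus_perf = new("XbusTransferPerf", perf_db); // one table in that database
      endfunction
   endclass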

Step 2: Defining the tenure and enabling data collection

There must be one instance of the "vmm_perf_tenure" class for each operation performed on the shared resource. Tenures are associated with the instance of the "vmm_perf_analyzer" class that corresponds to the resource being operated on. In the case of the XBUS example, let's say we want to measure transaction throughput performance (i.e., for the XBUS transfers). This is how we associate a tenure with the XBUS transaction. To denote the start and end of the tenure, we define two additional events in the XBUS master driver (started, ended). 'started' is triggered when the driver obtains a transaction from the sequencer, and 'ended' once the transaction has been driven on the bus and the driver is about to call seq_item_port.item_done(rsp). When 'started' is triggered, a callback is invoked to get the PAN to start collecting statistics. Here is the relevant code.

image
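A sketch of the driver-side hooks follows. The callback class name (xbus_master_driver_cbs, defined in the Step 2.b sketch) and the pre_tx()/post_tx() hook points are assumptions for illustration; the actual code in the example may place the events and callback invocations somewhat differently.

   class xbus_master_driver extends uvm_driver #(xbus_transfer);
      `uvm_component_utils(xbus_master_driver)
      `uvm_register_cb(xbus_master_driver, xbus_master_driver_cbs)

      event started, ended;   // tenure boundary markers

      function new(string name, uvm_component parent);
         super.new(name, parent);
      endfunction

      virtual task run_phase(uvm_phase phase);
         forever begin
            seq_item_port.get_next_item(req);
            -> started;     // driver has obtained a transaction: tenure begins
            `uvm_do_callbacks(xbus_master_driver, xbus_master_driver_cbs, pre_tx(this, req))
            // ... drive the transfer on the bus here ...
            -> ended;       // transfer driven: tenure ends
            `uvm_do_callbacks(xbus_master_driver, xbus_master_driver_cbs, post_tx(this, req))
            seq_item_port.item_done();
         end
      endtask
   endclass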

Now, the Performance Analyzer works on classes extended from vmm_data and uses the base class functionality for starting/stopping these tenures. Hence, the callback task that gets triggered at the appropriate points has to include the functionality for converting the UVM transactions into corresponding VMM ones. This is how it is done.

Step 2.a: Creating the VMM counterpart of the XBUS Transfer Class

image
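A possible VMM counterpart is sketched below. The fields addr and size are borrowed from the XBUS transfer, and the conversion helper convert_from_uvm() is an assumption of this illustration; one could also use the VMM shorthand macros to generate copy()/compare()/psdisplay() for the new class.

   class xbus_transfer_vmm extends vmm_data;
      bit [15:0]   addr;
      int unsigned size;   // number of bytes in the transfer

      // Copy the relevant fields from the UVM transaction (illustrative)
      function void convert_from_uvm(xbus_transfer tr);
         this.addr = tr.addr;
         this.size = tr.size;
      endfunction
   endclass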

Step 2.b: Using the UVM Callback for starting/stopping data collection and calling the UVM -> VMM conversion routines appropriately.

image
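A sketch of such a callback class is shown below. The base class xbus_master_driver_cbs and the pre_tx()/post_tx() hooks are the same assumed names used in the Step 2 sketch, and the initiator/target IDs passed to the tenure are placeholders.

   typedef class xbus_master_driver;   // forward declaration

   // Callback hook class assumed to be registered with the driver (see Step 2)
   virtual class xbus_master_driver_cbs extends uvm_callback;
      virtual task pre_tx (xbus_master_driver drv, xbus_transfer tr); endtask
      virtual task post_tx(xbus_master_driver drv, xbus_transfer tr); endtask
   endclass

   class xbus_perf_cbs extends xbus_master_driver_cbs;
      vmm_perf_analyzer perf_an;
      vmm_perf_tenure   perf_tenure;

      function new(vmm_perf_analyzer perf_an);
         this.perf_an = perf_an;
      endfunction

      // UVM -> VMM conversion, then start collecting statistics
      virtual task pre_tx(xbus_master_driver drv, xbus_transfer tr);
         xbus_transfer_vmm vmm_tr = new();
         vmm_tr.convert_from_uvm(tr);
         perf_tenure = new(0, 0, vmm_tr);   // initiator/target IDs are placeholders here
         perf_an.start_tenure(perf_tenure);
      endtask

      // Stop collecting once the transfer has been driven
      virtual task post_tx(xbus_master_driver drv, xbus_transfer tr);
         perf_an.end_tenure(perf_tenure);
      endtask
   endclass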

The callback class needs to be associated with the driver as follows in the top testbench (xbus_demo_tb):

image
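For instance (a sketch: the hierarchical path xbus0.masters[0].driver follows the XBUS example's structure, and end_of_elaboration_phase is just one reasonable place to do this):

   // Inside xbus_demo_tb
   virtual function void end_of_elaboration_phase(uvm_phase phase);
      xbus_perf_cbs perf_cb = new(xbus_perf);
      uvm_callbacks#(xbus_master_driver, xbus_master_driver_cbs)::add(
         xbus0.masters[0].driver, perf_cb);
   endfunction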

Step 3: Generating the Reports

In the report_ph of xbus_demo_tb, save and write out the appropriate databases:

image
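A sketch of that hook: save_db() flushes any buffered data to the SQLite file and report() prints the built-in text report (the report_phase signature shown assumes UVM 1.0 phasing).

   // Inside xbus_demo_tb
   virtual function void report_phase(uvm_phase phase);
      xbus_perf.save_db();   // write any buffered performance data to disk
      xbus_perf.report();    // pre-defined text report, shown in the table below
   endfunction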

Step 4: Run the simulation and analyze the reports for possible inefficiencies, etc.

Use -ntb_opts uvm-1.0+rvm +define+UVM_ON_TOP with VCS

Include vmm_perf.sv along with the new files in the included file list.  The following table shows the text report at the end of the simulation.

image

You can generate the SQL databases as well, and typically you would be doing this across multiple simulations. Once you have done that, you can create your custom queries to get the desired information out of the SQL database across your regression runs. You can also analyze the results and generate the required graphs in Excel. Please see the following post: Analyzing results of the Performance Analyzer with Excel.

So there you go: the VMM Performance Analyzer can fit into any verification environment you have. So make sure that you leverage this package to make the RTL-level performance measurements that are needed to validate micro-architectural and architectural assumptions, as well as to tune the RTL for optimal performance.

Posted in Coverage, Metrics, Interoperability, Optimization/Performance, Performance Analyzer, VMM infrastructure, Verification Planning & Management | 6 Comments »

Performance appraisal time – Getting the analyzer to give more feedback

Posted by Amit Sharma on 28th January 2011

S. Prashanth, Verification & Design Engineer, LSI Logic


We wanted to use the VMM Performance Analyzer to analyze the performance of the bus matrix we are verifying. To begin with, we wanted the following information while accessing a shared resource (slave memory):

· Throughput/Effective Bandwidth for each master in terms of Mbytes/sec

· Worst case latency for each master

· Initiator and Target information associated with every transaction

By default, the performance analyzer records the initiator ID, target ID, start time and end time of each tenure (associated with a corresponding transaction) in the SQL database. In addition to this useful information, we needed the number of bytes transferred by each transaction to be dumped into the SQL database. This was required for calculating throughput, which in our case was the number of bytes transferred from the start time of the first tenure until the end time of the last tenure of a master. Given that we had a complex interconnect with 17 initiators, it was difficult for us to correlate an initiator ID with its name, so we wanted to add initiator names to the SQL database as well. Let's see how this information can be added from the environment.

An earlier blog on the performance analyzer, "Performance and Statistical analysis from HDL simulations using the VMM Performance Analyzer", provides useful information on how to use the VMM Performance Analyzer in a verification environment. Starting from that, let me outline the additional steps we took to get the statistical analysis we desired.

Step 1: Define the fields and their data types to be added to the database in a string (user_fields), i.e., "MasterName VARCHAR(255)" for the initiator name and "NumBytes SMALLINT" for the number of bytes. Provide this string to the performance analyzer instance during initialization.

class tb_env extends vmm_env;
   vmm_sql_db_sqlite db;        // SQLite database
   vmm_perf_analyzer bus_perf;
   string user_fields;

   virtual function void build();
      super.build();
      db = new("perf_data");    // Initializing the database
      user_fields = "MasterName VARCHAR(255), NumBytes SMALLINT";
      bus_perf = new("BusPerfAnalyzer", db, , , , user_fields);
   endfunction
endclass

Step 2: When each transaction ends, get the initiator name and the number of bytes transferred into a string variable (user_values). Then provide the variable to the performance analyzer through the end_tenure() method.

fork begin
   vmm_perf_tenure perf_tenure = new(initiator_id, target_id, txn);
   string user_values;

   bus_perf.start_tenure(perf_tenure);
   txn.notify.wait_for(vmm_data::ENDED);
   user_values = $psprintf("%s, %0d", initiator.get_object_name(), txn.get_num_bytes());
   bus_perf.end_tenure(perf_tenure, user_values);
end
join_none




With this, the performance analyzer dumps the additional user information into the SQL database. The blog "Analyzing results of the Performance Analyzer with Excel" explains how to extract information from the generated SQL database. Using the spreadsheet, we could create our own plots and ensure that management has all the analysis it needs to provide the perfect appraisal.

Posted in Optimization/Performance, Performance Analyzer, Verification Planning & Management | 1 Comment »

Using VMM template Generator to ramp up your testbench development

Posted by Amit Sharma on 25th October 2010

Amit Sharma, Synopsys
'vmmgen', the template generator for creating robust, extensible VMM-compliant environments, has been available for a long time with VMM, and it was upgraded significantly with VMM 1.2. Though the primary purpose of 'vmmgen' is to help minimize the VIP and environment development cycle by providing detailed templates for developing VMM-compliant verification environments, a lot of folks also use it to quickly understand how different VMM base classes can be used in different contexts. This works because the templates use a rich set of the latest VMM features to ensure the appropriate base classes and their features are picked up optimally.

Given that it has a wide user interaction mechanism which presents the available features and options, the user can pick the modes that are most relevant to his or her requirements. It also provides the option to supply your own templates, thus offering a rich layer of customization. Based on the need, one can generate individual templates for different verification components, or a complete verification environment which comes with a 'Makefile' and an intuitive directory structure, thus propelling you on your way to catching the first set of bugs in your DUT. I am sure all of you know where to pick up 'vmmgen' from: it is available in the <VMM_HOME>/Shared/bin area or in $VCS_HOME/bin.

Some of the rich set of features available now includes:

• Template Options:

– Complete environment generation

– Individual templates generation

• Options to create Explicitly phased environments or Implicitly phased environment or to mix Implicitly phased components and Explicitly phased components

• Usage of VMM Shorthand macros

• Creating RAL based environment, and providing Multiplexed domain support if required

• Hooking up VMM Performance Analyzer at the appropriate interfaces

• Hooking up the DS Scoreboard at the relevant interfaces (with options to choose from a range of integration mechanisms, e.g. through callbacks, through TLM 2.0 analysis ports, or connecting directly to transactors, channels or notifications)

• Ability to hook up different generators (atomic, scenario, Multistream generators) at different interfaces

• Creating a scenario library and Multistream scenario creation

• Multi-driver generator support for different kinds of transactions in the same environment

• Factory support for transactions, scenarios and multi-stream scenarios, with a sample factory testcase which explains the usage of transaction overrides from a testcase.

• ‘RTL config’ support for drivers and receivers.

• Various types of unidirectional and bi-directional TLM connections between generator and driver.

• Analysis ports/exports OR parameterized notify observers to broadcast the information from monitor to scoreboard and coverage collectors.

• Multi test concatenation support and management to run the tests

• Creating portable Interface wrapper object, and setting up interface connections to the testbench components using vmm_opts::set_object/get_object_obj

• Creating a Generic slave component

• Option to use default names or user provided names for different components

As you can see, the above list is itself quite comprehensive, and let me tell you that it is not exhaustive, as there are many more features in vmmgen.

With respect to usage as well, there are multiple flavors. In the default mode, the user is taken through multiple detailed choices/options while creating and connecting the different components of the verification environment. However, some folks might want to use 'vmmgen' within their own wrapper script/environment, and for them there are options to generate the environments by providing all the required options on the command line or through a configuration file. Some of these switches include:

-SE [y/n] : Generates a complete environment with sub-environments

-RAL [y/n] : Create RAL based verification environments

-RTL [y/n] : Generates RTL configuration for the environment

-ENV <name>, -TR <name> : Provide the names for the environment class and transaction classes. Names for multiple transaction classes can be provided as well:

vmmgen -l sv -TR tr1+tr2

-cfg_file <file_name> : Option to provide a configuration file for the options

There is an option to generate an environment quickly by taking the user through the minimum number of questions (-q).

Additionally, the user can provide his or her own templates through the -L <template directory> option.

As far as individual template generation goes, you have the complete list. Here, I am outlining it for reference:

image

I am sure a lot of you have already been using 'vmmgen'. For those who haven't, I encourage you to try out its different options. I am sure you will find it immensely useful: it will not only help you create verification components and environments quickly, but will also make sure they are optimal and appropriate for your requirements.

Posted in Automation, Coding Style, Customization, Modeling Transactions, Organization, Performance Analyzer, Scoreboarding, Tools & 3rd Party interfaces, VMM infrastructure | Comments Off

Performance verification of a complex bus arbiter using the VMM Performance Analyzer

Posted by Shankar Hemmady on 20th September 2010

Performance verification of system bus fabrics is an increasingly complex problem. An article in EE Times by Kelly Larson, John Dickol and Kari O’Brien of MediaTek Wireless describes how they used the VMM Performance Analyzer to complete performance validation for an AXI bus arbiter:

http://www.eetimes.com/design/eda-design/4207764/Performance-verification-of-a-complex-bus-arbiter-using-the-VMM-Performance-Analyzer

Posted in Performance Analyzer, VMM infrastructure | Comments Off

Performance and statistical analysis from HDL simulations using the VMM Performance Analyzer

Posted by Badri Gopalan on 30th April 2009

Badri Gopalan, Principal Engineer, Synopsys

There are several situations where RTL-level performance measurements are needed to validate micro-architectural and architectural assumptions, as well as to tune the RTL for optimal performance.

A few examples of these low-level measurement requirements are:

· Throughput, latency and effects of configuration parameters of a memory or DMA controller under different traffic scenarios

· The statistical distribution from a prediction scheme for various workloads

· Latency of a complex multi-level arbiter under different conditions

· End-to-end latency, throughput, QoS of a network switch for various types of data, control traffic

The VMM Performance Analyzer application provides a flexible and powerful framework to collect, analyze and visualize such performance statistics from design and verification environments. It consists of a few base classes which allow you to define the performance metrics to be tracked and to collect run-time information for these metrics into tables in an SQL database. The data can be collected over multiple simulation runs. You then interact with the database using your favorite database analysis tool or spreadsheet. The SQL language itself offers simple yet powerful data query capabilities, which can be run either interactively or scripted for batch mode. Alternatively, you can load the data into a spreadsheet and perform your analysis and visualization there.

At a conceptual level, you first identify the different atomic performance samples to be collected for analysis. These are referred to as "tenures". For example, a memory transaction on a bus (from a specific master to a slave) is a tenure. The VMM-PA does the work of assigning an ID and of collecting and tracking attributes such as the start time, end time, initiator and target IDs, and other associated information (suspended states, aborts, completions, etc.) as rows in a table. Each table corresponds to an instance of a vmm_perf_analyzer object. You can (and probably will) have multiple tables (and thus multiple instances of the vmm_perf_analyzer object) in your simulation, dumping performance data into the database.

Here is a code snippet which illustrates the process of creating a vmm_perf_tenure tenure (a row in a table), a vmm_perf_analyzer table, and a vmm_sql_db_ascii (or vmm_sql_db_sqlite) database (a collection of tables), with some explanations following the code:

1.  class my_env extends vmm_env;
2.     vmm_sql_db_sqlite db;              // the database itself
3.     vmm_perf_analyzer mem_perf_an;     // one table in the database
4.     virtual function void build;
5.        super.build();
6.        this.db = new("perf_data.db");  // "perf_data.db" created on disk
7.        this.mem_perf_an = new("Mem_performance", this.db);
8.     endfunction: build
9.  endclass: my_env
10.
11. // Now, start a thread which will dump performance data to the
12. // Mem_performance table in the database. Any event can be used to
13. // start or terminate tenures: it is left to the user's control.
14. initial begin
15.    vmm_perf_tenure mem_perf_tenure = new();
16.    forever begin: mem_perf_thread
17.       this.mem_mon.notify.wait_for(mem_monitor::STARTED);
18.       this.mem_perf_an.start_tenure(mem_perf_tenure);
19.       this.mem_mon.notify.wait_for(mem_monitor::ENDED);
20.       this.mem_perf_an.end_tenure(mem_perf_tenure);
21.    end: mem_perf_thread
22. end
23. virtual task my_env::report;       // report is part of the environment class; only the PA-relevant code is shown
24.    this.mem_perf_an.save_db();     // write any buffered data to disk
25.    this.mem_perf_an.report();      // simple pre-defined performance report
26. endtask: report

· In lines 2 and 6, the SQLite database is created using the vmm_sql_db_sqlite base class. You could create a different flavor of database, for instance a plain-text database, in which case a list of SQL commands is created which can then be replayed on your SQL engine of choice (see the Reference Guide for more details). Typically you have one SQL database per test; however, you certainly can open multiple databases in the same test.

· In lines 3 and 7, a table in the database is created using the vmm_perf_analyzer base class, which will help track statistics related to a resource, in this case a memory interface. Typically you will have multiple tables in a test, corresponding to the tracking of statistics for multiple resources in the DUT or environment. This corresponds to multiple instances of the vmm_perf_analyzer base class.

· In lines 15, 18 and 20, one transaction item ("tenure") is created and stored in the table. The transaction item is created by the vmm_perf_tenure base class. The tenure management methods such as start_tenure(), end_tenure(), suspend_tenure(), resume_tenure() and abort_tenure() allow you to express the state of the monitored tenure and reflect it in the performance tables. You can of course control when to execute these methods from your test, through timing control statements, events, callbacks, or whatever you prefer. Callbacks registered with the various vmm_xactor classes in your environment are the most scalable way to hook these into your environment (see the sketch after these notes), but it is your choice.

· In lines 24 and 25, data is flushed to the database at the end of simulation (in the vmm_env::report phase, to be more precise), and a basic sample report is generated. It is important to note that you will in all likelihood be generating custom reports from the SQL database itself. That is explored further below.
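As a small illustration of the callback-based hookup mentioned in the third bullet, here is a minimal sketch. It assumes a vmm_xactor-based monitor (mem_mon) whose callback class (mem_monitor_cbs) provides pre_trans()/post_trans() hooks; those class and method names are assumptions of this sketch, not part of the snippet above.

   class mem_perf_cbs extends mem_monitor_cbs;   // mem_monitor_cbs: the monitor's vmm_xactor callback class
      vmm_perf_analyzer perf_an;
      vmm_perf_tenure   tenure = new();

      function new(vmm_perf_analyzer perf_an);
         this.perf_an = perf_an;
      endfunction

      virtual task pre_trans(mem_monitor xactor, mem_trans tr);
         perf_an.start_tenure(tenure);   // tenure starts with the transaction
      endtask

      virtual task post_trans(mem_monitor xactor, mem_trans tr);
         perf_an.end_tenure(tenure);     // tenure ends when the transaction completes
      endtask
   endclass

   // Registration, typically in the environment's build():
   //    mem_perf_cbs cbs = new(this.mem_perf_an);
   //    this.mem_mon.append_callback(cbs);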

Now that you have a code snippet showing you the process of monitoring statistics on shared resources in the design, you want to be able to write your custom queries, reports, and charts off the database. One could do this in a few ways:

1. Connect a spreadsheet to the database, and use the spreadsheet capabilities to generate statistics, charts etc. There was an earlier blog post on how to accomplish this: refer to “Analyzing results of the Performance Analyzer with Excel” (http://www.vmmcentral.org/vmartialarts/?p=23)

2. Use an SQL engine such as SQLite, MySQL, PostgreSQL or any other to read in the SQL commands and write custom query scripts which can then be used in batch mode. SQLite (http://www.sqlite.org), for example, has various language bindings, such as Perl, TCL and C/C++, so you can write scripts or queries in your favorite language. There are several publicly available and commercial front ends you can use to read in the SQL data and perform your analyses (I've used SQLiteSpy, http://www.yunqa.de/delphi/doku.php/products/sqlitespy/index, in the past). There are several quick-start tutorials for the SQL syntax available on the internet which should get you up and running with SQL in short order. To generate plots, one could use applications such as gnuplot, R or Octave; it is probably more convenient, though, to use spreadsheets to create graphs of various kinds.

In the next blog item related to the VMM Performance Analyzer, I will discuss some other aspects of the Performance Analyzer application (all of which is available by reading the User Guide: http://vmmcentral.com/resources.html#docs). I will also provide some examples of SQL code which demonstrate the analyses you can perform.

Posted in Debug, Optimization/Performance, Performance Analyzer | 10 Comments »

Analyzing results of the Performance Analyzer with Excel

Posted by Janick Bergeron on 13th April 2009

jb_blog

Janick Bergeron, Synopsys Fellow

The VMM Performance Analyzer stores its performance-related data in a SQL database. SQL was chosen because it is an ANSI/ISO standard with a multiplicity of implementations, from powerful enterprise systems like Oracle, to open source versions like MySQL, to simple file-based ones like SQLite. SQL offers the power of a rich query and analysis language to generate the reports that are relevant to your application.

But not everyone knows SQL. You need an SQL-aware application to do fancier stuff. And guess what! Excel can import SQL data! And everyone knows Excel!

In this post, I will show how you can get VMM Performance Analyzer data from a SQLite database into an Excel spreadsheet. A similar mechanism can be used if you are using MySQL or any other SQL implementation that offers an ODBC (Open Database Connectivity) driver.

First, download the SQLite ODBC driver from http://www.ch-werner.de/sqliteodbc/sqliteodbc.exe and install it on your PC.

Next, create a new ODBC data source by opening the ODBC Data Source Administrator: navigate to Start -> Settings -> Control Panel -> Administrative Tools -> Data Sources (ODBC).

sqlite1

Click the "Add…" button to add a new data source. Scroll down to select the "SQLite3 ODBC Driver" entry.

sqlite2

Click the "Finish" button. This will open the ODBC data source configuration window. Specify a name for the data source and browse to the SQLite database file. A SQLite ODBC data source connects to a single database file; it is therefore a good idea to name this new data source according to the performance data it contains. Leave all other options at their default values.

sqlite3

Click “OK” to complete the creation of the new data source on your computer.

sqlite4

Repeat these steps for each SQLite database that will need to be analyzed using Microsoft Excel on your computer. Click “OK” when all data connections have been defined.

Start Excel with a blank workbook, then Select Data -> From Other Sources -> From Microsoft Query.

sqlite11
This will bring up a dialog box to select a data source. Select the data source corresponding to the desired SQLite database.

sqlite6

Click “OK”. The following error pops up. Don’t worry! The next step is to correct that error.

sqlite7

Click “OK” to close the error pop-up. This will bring up the Microsoft Query Wizard.

sqlite8

Click on “Options”. Modify any of the check boxes but make sure the “Tables” option is (or remains) checked. You must make a modification to at least one check box.

sqlite92

Click OK.  The Microsoft Query Wizard should now be populated with the list of tables found in the SQLite database file. From this point on, which table and which columns you select depend on the analysis you wish to perform.

For example, to perform an arbiter fairness analysis, you would select the “InitiatorID” and “active” columns of the “Arb” table.

sqlite10

Click “Next >” three times then return to Microsoft Excel by clicking “Finish”. The Import Data dialog box will appear.

sqlite13

Click "OK" to insert the data as a table and voila! You now have a table populated from the data dynamically extracted from the SQLite database.

sqlite111

From there, a PivotChart can be used to display the average and maximum arbiter response time for each initiator. As you can see, the arbiter appears to be fair, given the small number of samples collected.

sqlite12

I hope you’ll find this step-by-step guide useful.

What cool thing have you done with the Performance Analyzer?

Posted in Debug, Optimization/Performance, Performance Analyzer | 5 Comments »
