Verification Martial Arts: A Verification Methodology Blog

Performance and statistical analysis from HDL simulations using the VMM Performance Analyzer

Posted by Badri Gopalan on April 30th, 2009

Badri Gopalan, Principal Engineer, Synopsys

There are several situations where RTL-level performance measurements are needed to validate micro-architectural and architectural assumptions, as well as to tune the RTL for optimal performance.

A few examples of these low-level measurement requirements are:

· Throughput, latency and effects of configuration parameters of a memory or DMA controller under different traffic scenarios

· The statistical distribution from a prediction scheme for various workloads

· Latency of a complex multi-level arbiter under different conditions

· End-to-end latency, throughput, and QoS of a network switch for various types of data and control traffic

The VMM Performance Analyzer application provides a flexible and powerful framework to collect, analyze and visualize such performance statistics from your design and verification environments. It consists of a few base classes which allow you to define the performance metrics to be tracked, and to collect run-time information for these metrics into tables in an SQL database. The data can be collected over multiple simulation runs. You then interact with the database using your favorite database analysis tool or spreadsheet. The SQL language itself offers simple yet powerful data query capabilities, which can be used either interactively or scripted for batch mode. Alternatively, you can load the data into a spreadsheet and perform your analysis and visualization there.

At a conceptual level, you first identify the different atomic performance samples to be collected for analysis. These are referred to as “tenures”. For example, a memory transaction on a bus (from a specific master to a slave) is a tenure. The VMM-PA does the work of assigning an ID and collecting and tracking attributes such as the start time, end time, initiator and target IDs, and other associated information (suspended states, aborts, completions, etc.) as rows in a table. Each table corresponds to an instance of a vmm_perf_analyzer object. You can (and probably will) have multiple tables (and thus multiple instances of vmm_perf_analyzer) in your simulation, dumping performance data into the database.

Here is a code snippet which illustrates the process of creating a vmm_perf_tenure tenure (a row in a table), a vmm_perf_analyzer table, and a vmm_sql_db_sqlite (or vmm_sql_db_ascii) database (a collection of tables), with some explanations following the code:

1.  class my_env extends vmm_env;
2.     vmm_sql_db_sqlite db;          // the database itself
3.     vmm_perf_analyzer mem_perf_an; // one table in the database
4.     virtual function void build();
5.        super.build();
6.        this.db = new("perf_data.db"); // "perf_data.db" created on disk
7.        this.mem_perf_an = new("Mem_performance", this.db);
8.     endfunction: build
9.  endclass: my_env
10.
11. // Now, start a thread which dumps performance data to the
12. // Mem_performance table in the database. Any event can be used
13. // to start or terminate tenures: it is left to user control.
14. initial begin // assumes "env", a my_env instance whose "mem_mon" member is the bus monitor
15.    vmm_perf_tenure mem_perf_tenure = new();
16.    forever begin: mem_perf_thread
17.       env.mem_mon.notify.wait_for(mem_monitor::STARTED);
18.       env.mem_perf_an.start_tenure(mem_perf_tenure);
19.       env.mem_mon.notify.wait_for(mem_monitor::ENDED);
20.       env.mem_perf_an.end_tenure(mem_perf_tenure);
21.    end: mem_perf_thread
22. end
23. task my_env::report(); // declared extern virtual in my_env; only the PA-relevant code is shown
24.    this.mem_perf_an.save_db(); // write any buffered data to disk
25.    this.mem_perf_an.report();  // simple pre-defined performance report
26. endtask: report

· In lines 2 and 6, the SQLite database is declared and then created using the vmm_sql_db_sqlite base class. You could create a different flavor of database, for instance a plain-text database (vmm_sql_db_ascii), in which case a list of SQL commands is written out that can later be replayed on your SQL engine of choice (see the Reference Guide for more details). Typically you have one SQL database per test, but you can certainly open multiple databases in the same test.

· In lines 3 and 7, a table in the database is declared and created using the vmm_perf_analyzer base class; it tracks statistics for one resource, in this case a memory interface. Typically you will have multiple tables in a test, corresponding to the statistics of multiple resources in the DUT or environment, and therefore multiple instances of the vmm_perf_analyzer base class.

· In lines 15, 18 and 20, one transaction item (a “tenure”) is created and recorded in the table. The transaction item is created from the vmm_perf_tenure base class. The tenure management methods start_tenure(), end_tenure(), suspend_tenure(), resume_tenure() and abort_tenure() let you express the state of the monitored tenure and reflect it in the performance tables. You control when to call these methods from your test: via timing control statements, events, callbacks, or any other mechanism. Callbacks registered with the vmm_xactor instances in your environment are the most scalable way to hook this in (see the sketch after this list), but the choice is yours.

· In lines 24 and 25, data is flushed to the database at the end of simulation (in the vmm_env::report phase, to be precise), and a basic sample report is generated. Note that you will in all likelihood generate custom reports from the SQL database itself; that is explored further below.
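To make the callback-based hookup concrete, here is a minimal sketch. The mem_monitor_callbacks facade with its pre_trans()/post_trans() hooks, and the mem_trans transaction class, are hypothetical names for illustration; substitute the callback class and methods your own transactor actually provides. Only the start_tenure()/end_tenure() calls mirror the snippet above.

   // Hedged sketch: "mem_monitor_callbacks", "pre_trans"/"post_trans" and
   // "mem_trans" are assumed names; use your transactor's real callback facade.
   class mem_perf_cb extends mem_monitor_callbacks;
      vmm_perf_analyzer perf_an;
      vmm_perf_tenure   tenures[mem_trans]; // one tenure per in-flight transaction

      function new(vmm_perf_analyzer perf_an);
         this.perf_an = perf_an;
      endfunction

      // Invoked by the monitor when a transaction starts
      virtual task pre_trans(mem_monitor mon, mem_trans tr);
         vmm_perf_tenure tenure = new();
         this.tenures[tr] = tenure;
         this.perf_an.start_tenure(tenure);
      endtask

      // Invoked by the monitor when the transaction completes
      virtual task post_trans(mem_monitor mon, mem_trans tr);
         if (this.tenures.exists(tr)) begin
            this.perf_an.end_tenure(this.tenures[tr]);
            this.tenures.delete(tr);
         end
      endtask
   endclass

   // Registration, e.g. in my_env::build() after mem_perf_an is created:
   //    mem_perf_cb cb = new(this.mem_perf_an);
   //    this.mem_mon.append_callback(cb);

Because the callback travels with the transactor, this pattern scales to any number of monitors and tenures without extra threads in the testbench.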

Now that you have a code snippet showing the process of monitoring statistics on shared resources in the design, you will want to write custom queries, reports and charts off the database. You can do this in a few ways:

1. Connect a spreadsheet to the database, and use the spreadsheet's capabilities to generate statistics, charts, etc. An earlier blog post describes how to accomplish this: see “Analyzing results of the Performance Analyzer with Excel” (http://www.vmmcentral.org/vmartialarts/?p=23).

2. Use an SQL engine such as SQLite, MySQL or PostgreSQL to read in the SQL commands, and write custom query scripts which can then be used in batch mode. SQLite (http://www.sqlite.org), for example, has bindings for Perl, Tcl, C/C++ and other languages, so you can write scripts or queries in your favorite language. There are several publicly available and commercial front ends you can use to read in the SQL data and perform your analyses (I've used SQLiteSpy, http://www.yunqa.de/delphi/doku.php/products/sqlitespy/index, in the past). Several quick-start tutorials on SQL syntax are available on the internet and should get you up and running in short order. To generate plots, you can use applications such as gnuplot, R or Octave, though it is probably more convenient to create graphs in a spreadsheet. A sample query is sketched below.
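To give a taste of such a query, here is a hedged sketch that pulls latency statistics from the Mem_performance table created earlier. The startTime and endTime column names are assumptions for illustration; inspect the schema the analyzer actually generates (for instance with the .schema command in the sqlite3 shell) before relying on them.

   -- Tenure count and latency statistics from the Mem_performance table.
   -- Column names are assumed; check the generated schema first.
   SELECT COUNT(*)                 AS num_tenures,
          AVG(endTime - startTime) AS avg_latency,
          MIN(endTime - startTime) AS min_latency,
          MAX(endTime - startTime) AS max_latency
     FROM Mem_performance;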

In the next blog item related to the VMM Performance Analyzer, I will discuss some other aspects of the application (all of which are also covered in the User Guide: http://vmmcentral.com/resources.html#docs), and I will provide some examples of SQL code which demonstrate the analyses you can perform.


10 Responses to “Performance and statistical analysis from HDL simulations using the VMM Performance Analyzer”

  1. Adiel Says:

    Be sure to have both 32-bit and 64-bit sqlite3 builds for the 32-bit/64-bit VCS modes; see:
    http://cblfs.cross-lfs.org/index.php/SQLite3
    (do a make clean between the 32-bit and 64-bit “./configure” runs)

    Otherwise you might see errors even when you have set up LD_LIBRARY_PATH and the file does exist, e.g.:
    *FATAL*[FAILURE] on SQLdb(sql_data.db) at 0:
    *VMM_SQL_ERROR* during dlopen(): /usr/lib/libsqlite3.so: cannot open shared object file: No such file or directory

    sqlite3 is not a big package, and you can install it in your home area using --prefix before asking I.T. to install it for everyone.

    -Adiel

  2. Rahul Says:

    Badri,

    Using an SQL database for performance measurement sounds really good; I think it will help a lot with performance measurement and analysis.

    For the database used in the system, will the table details be openly available? If yes, is it possible to tweak the database tables as needed?

    Rahul

  3. Badri Gopalan Says:

    Hi Rahul,

    Thanks for the comment!

    Yes, the table schema is visible and extensible, so you can add your own custom entries to the database tables.
    I will put more details in my next blog item. It's all in the user guide as well.
    Feel free to ask other questions or provide other feedback; I will try to address them in the next blog post.

    Badri

  4. Rahul Says:

    Badri,

    A visible table schema where we can add our custom fields will really be very helpful. Can the functional and code coverage databases be queried along with the performance database?

    The challenge we are facing is collecting the required data in a central place and analyzing it. This includes performance, functional coverage, bugs, passing and failing test counts, and other information. Currently we have written scripts which act as glue logic to integrate the various parameters we need to check the status of verification progress. I have described these challenges as verification management in an article: http://www.nxtbook.com/nxtbooks/emedia/eetindia_20081016/#/0

    I am looking forward to your next blog, but I think the step toward getting the information into SQL, with so much flexibility in accessing the internal tables, will be a great help moving forward.

    Regards,
    Rahul

  5. On Verification: A Software-to-Silicon Verification » Blog Archive » Verification Methodologies: Standards will come; in the mean time get that chip verified! Says:

    [...] advantage of the productivity benefits of a vendor-supported methodology like the VMM, including advanced methodology applications, extensions into new domains, focused R&D investment and a worldwide support [...]

  6. Badri Gopalan Says:

    Hi Rahul,

    Functional and code coverage databases have their own (typically vendor-specific) APIs. For instance, VCS has C and Tcl APIs into these databases. The databases themselves are optimized for storage and performance, and may not be implemented as SQL databases. There are also efforts afoot to standardize API access to coverage databases.

    I’ve seen many users using the API to access these databases, and populate an SQL database with interesting (to them) stuff derived from these databases. I see you have described the challenges in your linked presentation.

    Thanks for the comment.
    Badri

  7. Rushabh Says:

    Hi Adiel,
    I faced the libsqlite3.so “cannot open shared object file” issue.
    One simple way to solve this is to use vcs -full64 if the sqlite libraries are 64-bit.

  8. Verification Martial Arts » Blog Archive » Performance appraisal time – Getting the analyzer to give more feedback Says:

    [...] [...]

  9. Num Says:

    I wonder just how long it will take the rest of us to realize it. Fantastic post, thanks.

  10. Ben Says:

    Rushabh, thanks for the vcs -full64 solution you mentioned; it really helped me.
