Posted by Srinivasan Venkataramanan on 25th March 2010
Srinivasan Venkataramanan, CVC Pvt. Ltd.
Abhishek Muchandikar, CAE, Verification Group, Synopsys
Raja Mahadevan, Sr. CAE, Verification Group, Synopsys
It is a well-understood and accepted fact that assertions play a critical role in detecting a class of design errors (bugs) during functional verification. Just like any other component in a robust, reusable verification environment, assertions need to be both controllable and observable from various levels: tests, regressions, the command line, etc. However, an ad-hoc scattering of assertions across design and verification code rarely considers this requirement upfront.
The recently standardized SystemVerilog 2009 construct checker..endchecker is definitely a good step towards encapsulating these widely spread-out assertions. In our recent book on SystemVerilog Assertions (2nd edition of the SVA handbook, www.systemverilog.us/sva_info.html) we cover this construct in depth. We also presented a case study of Cache-controller verification using these new constructs at DVCon 2010 (paper & code available from www.cvcblr.com on request).
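To give a flavor of the construct, here is a minimal sketch of how checker..endchecker encapsulates a clock, a reset-based disable condition, and the assertions that use them. The checker name and signal names below are illustrative (loosely modeled on AHB), not taken from the case study:

```systemverilog
// Sketch only: a SystemVerilog-2009 checker encapsulating related assertions.
// Port names (hclk, hresetn, single, hgrant, nseq, idle) are hypothetical.
checker ahb_trans_rules (logic hclk, hresetn, single, hgrant, nseq, idle);
  default clocking @(posedge hclk); endclocking
  default disable iff (!hresetn);

  // Every assertion below inherits the default clock and disable condition
  a_single_nseq : assert property (single && hgrant |-> nseq || idle);
endchecker : ahb_trans_rules
```

A checker can then be bound or instantiated at multiple points in the design without restating the clocking and reset plumbing each time.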
The role of a methodology goes far beyond the constructs themselves: it uses them, but also provides the controllability and observability an end user needs to make sense of all these features and achieve the final goal of getting verification done.
In this series of blog entries we try to cover some of the key aspects of methodology's role in adopting ABV (Assertion-Based Verification). We welcome reader comments to add more innovative thoughts and ideas to this puzzle. The goal of this series is not to cover every aspect of ABV methodology, as that would span a wide range of topics, many of them already well covered in the VMM book (www.vmm-sv.org); rather, we look at the application aspects of ABV methodology.
To start with, let’s partition the role of methodology into two major buckets: observability & controllability.
Under observability we will explore the following:
· Make assertions observable in native form within the methodology framework
· Tie the assertion results to the verification plan via VMM Planner hooks
Under controllability we will explore the following:
· Control the severity & verbosity of assertions from the external world: command line, testcases, etc.
· Control assertion execution during reset, exceptions, low power etc.
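As a taste of that last bullet, standard SystemVerilog already provides system tasks to suspend and resume assertion evaluation. A minimal sketch of gating assertions across an active-low reset window might look like the following (the hierarchy path tb_top.dut is hypothetical):

```systemverilog
// Sketch: suspend assertion checking while reset is active (active-low HRESETn).
// tb_top.dut is an illustrative hierarchy path, not from the original example.
initial forever begin
  @(negedge HRESETn);         // reset asserted
  $assertoff(0, tb_top.dut);  // stop assertion checking below tb_top.dut
  @(posedge HRESETn);         // reset de-asserted
  $asserton(0, tb_top.dut);   // resume checking
end
```

Later entries in this series will look at reset, exception and low-power handling in more depth.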
Making assertions natively observable in VMM
In simulation-based verification, observability is primarily enabled by the kind of messages that get emitted during the run. The messaging service plays an important part in a verification environment, indicating the progress of the simulation or providing additional information to debug a design malfunction. To ensure a consistent look and better organization of messages issued by the various verification layers, whether transactors, scoreboards, or assertions, a standard messaging service is required. VMM provides a time-proven utility, vmm_log, to meet this key requirement. While the use of vmm_log in a typical VMM environment is well understood and widely deployed, its integration with assertions is not as widely discussed in the literature.
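For readers less familiar with vmm_log, a minimal sketch of the shared-messaging idea follows. The logger name/instance strings and the messages are purely illustrative:

```systemverilog
// Sketch: transactor-side messages flowing through a vmm_log instance,
// giving one consistent report format. "AHB_MASTER"/"m0" are illustrative.
vmm_log log = new("AHB_MASTER", "m0");

initial begin
  `vmm_note(log, "Starting SINGLE transfer");
  // ... drive the transfer ...
  `vmm_error(log, "Unexpected response from slave");
end
```

The point made in the rest of this entry is that assertion action blocks can, and should, use the very same service.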
Assertion reporting tends to be the loner in the verification environment's messaging family. This is because assertion reporting has typically been handled in an ad-hoc manner, many times unattended (i.e. no action blocks at all). The result is simulator-specific reports for assertion firings (pass/fail), whereas the rest of the testbench environment uses a consistent vmm_log style.
The drawback of such a use model is twofold:
· Absence of a single tightly integrated messaging service across the verification board
Assertion failures do not interact with the testbench environment, so there is no way to effectively and correctly qualify the simulation. In a regression setup, such a test would never be qualified as a failure unless some post-processing step is duly put in place.
· Assertion results have no control over the simulation cycles
Quite often, tests run through their entire simulation cycles even in the presence of assertion failures that may warrant immediate termination of the simulation.
Efficient incorporation of assertions in a verification environment calls for synchronization between assertion results and the verification environment, and a common messaging service is the key to such synchronization. The VMM messaging service vmm_log is a fine example of a standard messaging class that can be seamlessly integrated into assertion checkers/properties, ensuring consistency across the complete verification environment.
On a failure of a particular checker instance, the user can force the simulation to quit via a `vmm_fatal macro, or let the simulation proceed. Assertion error messages issued through vmm_log are recognized by the VMM environment, leading to a correct overall simulation result. Integrating the VMM messaging service thus gives the user the flexibility to control the simulation based on the severity attached to each checker instance.
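For a checker whose failure should be counted but not stop the run, the same action-block pattern can use `vmm_error instead of `vmm_fatal. A sketch, with a hypothetical property name:

```systemverilog
// Sketch: non-fatal variant of the action block. The simulation continues,
// but VMM counts the error and the test is still qualified as a failure.
// my_prop and the message text are illustrative.
my_check : assert property (my_prop)
  else `vmm_error(sva_vmm_log, "Protocol rule violated; continuing");
```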
Steps to integrate vmm_log with assertion reporting
· Declaration of a new package with a static VMM log object

package vmm_sva;
  vmm_log sva_vmm_log = new("SVA_CHECKER", $psprintf("%m"));
endpackage : vmm_sva
· Inclusion of the package in the assertion file

import vmm_sva::*;

// AHB master property check
property HburstSingleHtransNseq;
  @(posedge HCLK) disable iff (!HRESETn)
    (SINGLE && HGRANT) |-> (NSEQ || IDLE);
endproperty

HburstSingleHtransNseq_check : assert property (HburstSingleHtransNseq)
  else `vmm_fatal(sva_vmm_log, "AMBA Compliance Protocol Rules : ERRMSINGLE: Master has issued a SEQ/BUSY type SINGLE transfer");
Based on the severity of the assertion, you can terminate the simulation; the testbench environment also recognizes this failure and qualifies the simulation as a failure, as depicted below:
*FATAL*[FAILURE] on SVA_CHECKER(vmm_sva) at 195:
[AMBA Compliance Protocol Rules : ERRMSINGLE] Master has issued a SEQ/BUSY type SINGLE transfer
Simulation *FAILED* on /./ (/./) at 195: 1 errors, 0 warnings
$finish called from file “/tools/eda/snps/vcs-mx/etc/rvm/vmm.sv”, line 36499.
$finish at simulation time 195
V C S S i m u l a t i o n R e p o r t
Users can choose from the variety of vmm_log macros, such as `vmm_error, `vmm_warning, etc., to suit the relevant message being flagged by an assertion. With this subtle enhancement to the SVA action block, one can leverage VMM's simulation-control features such as error counting and error handling (stop, debug, continue, etc.). One can also promote or demote messages, for instance errors to warnings.
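As an example of demotion, a particular checker's error messages can be turned into warnings centrally through vmm_log's message-modification facility. The sketch below assumes the VMM 1.x vmm_log::modify() method and the ERROR_SEV/WARNING_SEV severity constants; the exact argument list may vary between VMM releases, so verify against your installation before use:

```systemverilog
// Sketch (unverified signature): demote SVA_CHECKER errors to warnings.
// Check vmm_log::modify() in your VMM release's reference documentation.
initial begin
  vmm_log cfg = new("cfg", "cfg");
  void'(cfg.modify(.name("SVA_CHECKER"),
                   .severity(vmm_log::ERROR_SEV),
                   .new_severity(vmm_log::WARNING_SEV)));
end
```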
A final note on the logger instance shown in this example: while the code above works, typical usage would classify messages originating from different portions of the design/verification environment into individual logger instances.
In our next entry in this series, we will address the second aspect of observability – i.e. tie the results to Verification plan, so stay tuned!