Verification Martial Arts: A Verification Methodology Blog

Functional Coverage Driven VMM Verification scalable to 40G/100G Technology

Posted by Shankar Hemmady on September 9th, 2011

Amit Baranwal, ASIC Design Verification Engineer, ViaSat

As data rates increase to 100 Gbps and beyond, optical links suffer severely from various impairments in the optical channel, such as chromatic dispersion and polarization-mode dispersion. Traditional optical compensation techniques are expensive and complex. ViaSat has developed DSP IP cores for coherent, differential, burst, and continuous high-data-rate networks. These cores can be customized according to system requirements.

One of the inherent problems with verifying communication applications is that a large amount of information is arranged over space and time. This information is generally processed with Fourier transforms, equalization, and other DSP techniques, so we needed to devise stimulus interesting enough to match these complex equations and exercise the full design. With horizontal and vertical polarization (four I and Q streams running at 128 samples per cycle), there was a high degree of parallelism to deal with.

To address these challenges, we built a constrained-random, self-checking testbench environment using SystemVerilog and VMM. We made extensive use of reusability, the Direct Programming Interface (DPI), and scalability features, together with various coverage techniques, to minimize our effort and meet the aggressive deadlines of the project. Our reference model, developed in C, was bit- and cycle-accurate. Class configurations were used to allow different behaviors, such as sampling the output on every cycle versus only on valid cycles. Parameterized VMM data classes were used for the control signals and the feedback path; these required parameterized generators, drivers, monitors, and scoreboards so that they could be scaled to match different filter designs and specifications.
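As a rough sketch of the parameterization described above (the class, parameter, and field names here are illustrative, not taken from the actual ViaSat environment), a VMM data class can be parameterized by sample width and samples per cycle:

```systemverilog
// Hypothetical sketch only -- names and widths are illustrative.
// A transaction carrying one cycle's worth of I/Q samples for one
// polarization, parameterized so the same class scales to different
// filter widths and sample rates.
class iq_sample_txn #(int WIDTH = 16, int SAMPLES = 128) extends vmm_data;

  static vmm_log log = new("iq_sample_txn", "class");

  rand bit [WIDTH-1:0] i_samples [SAMPLES];
  rand bit [WIDTH-1:0] q_samples [SAMPLES];

  function new();
    super.new(this.log);
  endfunction

endclass
```

Generators, drivers, monitors, and scoreboards parameterized with the same WIDTH/SAMPLES values can then be re-instantiated per filter configuration rather than rewritten.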

Code and functional coverage were used as the benchmark to gauge the completeness of verification. We used many useful SystemVerilog constructs in our functional coverage model, such as ignore_bins to remove unwanted value sets (avoiding overhead effort), illegal_bins to catch error conditions, and the intersect keyword. Here is an example:

data_valid_H_trans: coverpoint ifc_data_valid.data_valid_H {
  bins         valid_1_285 = (0 => 1[*1:285] => 0);
  illegal_bins valid_286   = (0 => 1[*286]);
  bins         one_invalid = (1 => 0 => 1);
  illegal_bins two_invalid = (1 => 0[*2:5] => 1);
}


The coverpoint data_valid_H_trans covers the signal data_valid_H, which should never be asserted for 286 or more consecutive cycles; likewise, data_valid_H should never be low for two consecutive data cycles. These are interesting scenarios and can be found in many designs under test where two blocks depend on each other and a data input/output rate must be met to maintain data integrity between them; otherwise the data might overflow or underflow, or other errors may be induced. In such situations, an illegal bin can be used to check the condition continuously throughout the simulation: it keeps an eye on the condition, and if the condition ever occurs, VCS flags a runtime error.

Another interesting feature we found was the ability to merge different coverage reports using flexible merging. As a project moves along, covergroups may have to be modified for various reasons, such as system specification changes or signal renames. Ordinarily, if we have a saved database of vdb files from previous simulations and run the urg command to create a coverage report directory, the new report will contain multiple covergroups with the same name, making it confusing to identify which covergroups are of interest. This corrupts the previous effort and the coverage report, and leads to more engineering effort and resource usage. Flexible merging counters this problem.

To enable flexible merging, pass the -group flex_merge_drop option to the urg command:

urg -dir simv1.vdb -dir simv2.vdb -group flex_merge_drop

Note: URG treats the first specified coverage database as the reference for flexible merging.

This feature is available only for covergroup coverage and is very useful when the coverage model is still evolving and minor changes between test runs are expected. To merge two coverpoints, they must be merge equivalent. The requirements for merge equivalence are as follows:

1. User-defined coverpoints: coverpoint C1 is merge equivalent to coverpoint C2 only if their names and widths are the same.

2. Autobin coverpoints: coverpoint C1 is merge equivalent to coverpoint C2 only if their names, auto_bin_max settings, and widths are the same.

3. Cross coverpoints: a cross X1 is merge equivalent to a cross X2 only if the two crosses contain the same number of coverpoints.

If the coverpoints are merge equivalent, the merged coverpoint contains the union of the bins from all the tests. If they are not merge equivalent, the merged coverpoint contains only the bins from the most recent test run, and data from older test runs is discarded.
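As an illustration (the signal and covergroup names here are hypothetical, not from the ViaSat design), suppose a coverpoint evolves between two test runs but keeps the same name and sampled-signal width; the two versions are merge equivalent, and flexible merging produces the union of their bins:

```systemverilog
// Hypothetical example -- names are illustrative.
// Version compiled for the first test run ('mode' is the same
// 2-bit signal in both runs):
covergroup mode_cg;
  mode_cp: coverpoint mode {
    bins idle   = {0};
    bins active = {1};
  }
endgroup

// Evolved version compiled for a later run: same coverpoint name and
// width, so it is merge equivalent, and urg -group flex_merge_drop
// reports the union of bins {idle, active, flush}. Had the name or
// width changed, only the most recent run's bins would survive.
covergroup mode_cg;
  mode_cp: coverpoint mode {
    bins idle   = {0};
    bins active = {1};
    bins flush  = {2};
  }
endgroup
```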

SystemVerilog and the VMM methodology were very helpful in achieving our verification goals, giving us a robust verification environment that was productive and reusable over the course of the project. Moreover, it also gave us a head start on our next project's verification effort. For more details, please refer to the paper I presented at SNUG San Jose 2011, "Functional Coverage Driven VMM Verification scalable to 40G/100G Technology".
