Posted by paragg on 28th March 2013
Author - Bhushan Safi (E-Infochips)
Functional coverage has been the most widely accepted way by which we track the completeness of any constrained random testbench. However, does achieving 100% functional coverage mean that the DUV is bug free? Certainly not, but it boosts the confidence of the verification engineer and of the management team.
Based on my experience of defining functional covergroups for different projects, I have realized that the coverage constructs and options in the SystemVerilog language have their own nuances that one needs to keep an eye out for. These “gotchas” have to be understood so that coverage can be used optimally and the results stay aligned with the intent of the verification plan. Let me talk about some of these finer aspects of coverage so that you can use the constructs more productively.
Usage of ignore_bins
The ‘ignore_bins’ construct is meant to exclude a collection of bins from coverage. While using this particular construct, you might run into issues with multiple ‘shapes’ (by ‘shapes’ I mean the “Guard_OFF” and “Guard_ON” entries that appear in the report whenever ‘ignore_bins’ is used). Let’s look at a simple usage of ‘ignore_bins’, as shown in figure 1.
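Since the original figure is not reproduced here, a minimal sketch of the figure-1 style of covergroup might look like the following. The names my_var, sample_event and the disable flag are illustrative assumptions (note that disable itself is a SystemVerilog keyword, so a stand-in name is used for the configuration field):

```systemverilog
// Hedged sketch of the figure-1 pattern; all names are assumptions.
module tb;
  bit [1:0] my_var;
  event     sample_event;
  bit       cfg_disable;   // stands in for "cfg.disable" in the text

  covergroup cg @(sample_event);
    cp_var : coverpoint my_var {
      bins val0 = {0};
      bins val1 = {1};
      // Intent: drop the '1' bin when cfg_disable is set. But the iff
      // guard is only evaluated when sample_event fires, so in runs
      // where the covergroup is never sampled, both bins still appear
      // in the report.
      ignore_bins ign_val1 = {1} iff (cfg_disable);
    }
  endgroup

  cg cg_inst = new();
endmodule
```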
Looking at the code in figure 1, we would assume that since we have set “cfg.disable = 1”, the bin with value 1 would be excluded from the generated coverage report. Here we use the ‘iff’ condition to try to express our intent of not creating a bin for the variable under the said condition. However, in simulations where sample_event is not triggered, we see that we end up with an instance of our covergroup which still expects both bins to be hit (see the generated report in figure 1). Why does this happen? If you dig into the semantics, you will see that the ‘iff’ condition comes into action only when the event sample_event is triggered. So if we are writing ‘ignore_bins’ for a covergroup which may or may not be sampled on each run, then we need to look for an alternative. Indeed there is a way to address this requirement, and that is through the ternary operator. Look at the code in figure 2 to see how the ternary operator is used to model the same intent.
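A sketch of the figure-2 style rework, under the same illustrative naming assumptions: the guard moves out of the sampling-time ‘iff’ clause and into a ternary expression that is evaluated when the covergroup instance is constructed.

```systemverilog
// Figure-2 style rework (names are illustrative assumptions).
bit [1:0] my_var;
event     sample_event;

covergroup cg (bit disable_val1) @(sample_event);
  cp_var : coverpoint my_var {
    bins val0 = {0};
    bins val1 = {1};
    // Evaluated at construction time, not at sampling time: if the
    // bin is to be disabled, ignore value 1; otherwise ignore 2'b11,
    // an otherwise-unused value, so no valid bin is lost.
    ignore_bins ign_val1 = {disable_val1 ? 1 : 2'b11};
  }
endgroup
```

The guard condition is passed in as a covergroup argument so the bin expression is fixed at the point the instance is constructed.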
Now the report is as you expect!
Using the above-mentioned coding style, we make sure that the bin which is not desired under specific conditions is ignored irrespective of whether or not the covergroup is being sampled. Also, we use the value “2’b11” to make sure that we don’t end up ignoring a valid value of the variable concerned.
Usage of detect_overlap
The coverage option called “detect_overlap” issues a warning if there is an overlap between the range lists (or transition lists) of two bins of a coverpoint. Whenever we have plenty of ranges to be covered, and there is a possibility of overlap, it is important to use this option.
Why is it important and how can you be impacted if you don’t use it? You might actually end up with incorrect and unwanted coverage results!
Let’s look at an example. Suppose a coverpoint defines four bins over ranges that share their endpoint values. If a value of 25 is generated, the coverage score reported would be 50% when the desired outcome would ideally have been 25%. This is because the value ‘25’ contributes to two bins out of four when that was probably not wanted. The usage of ‘detect_overlap’ would have warned you about this, and you could have fixed the bins to make sure that such a scenario doesn’t occur.
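As an illustration of the overlap scenario, the bin ranges below are assumptions chosen to be consistent with the 25-hits-two-bins arithmetic above:

```systemverilog
// Assumed quartile-style bins sharing endpoint values; a sampled
// value of 25 lands in both q1 and q2, inflating coverage to 50%.
int value;
bit clk;

covergroup cg_range @(posedge clk);
  cp_val : coverpoint value {
    option.detect_overlap = 1; // tool warns about the shared endpoints
    bins q1 = {[0:25]};
    bins q2 = {[25:50]};  // 25 also falls in q1 -> overlap warning
    bins q3 = {[50:75]};
    bins q4 = {[75:100]};
  }
endgroup
```

Changing the lower bounds to 26, 51 and 76 would make the ranges disjoint and silence the warning.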
Coverage coding for crosses and assigning weight
What does the LRM (Table 19-1—Instance-specific coverage options) say about the ’weight’ attribute?
“If set at the covergroup syntactic level, it specifies the weight of this covergroup instance for computing the overall instance coverage of the simulation. If set at the coverpoint (or cross) syntactic level, it specifies the weight of a coverpoint (or cross) for computing the instance coverage of the enclosing covergroup. The specified weight shall be a non-negative integral value.”
What kinds of surprises can a combination of cross and option.weight create?
The SystemVerilog LRM shows a very simple way of writing a cross. Let’s look at the code below.
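The original code is not shown here, but a reconstruction consistent with the names quoted in the text might look like this (check_4_a, check_4_b, CHECK_A and CHECK_B are taken from the article; the surrounding declarations are assumptions):

```systemverilog
// Reconstruction of the problematic cross; declarations are assumed.
bit   check_4_a, check_4_b;
event sample_event;

covergroup cg_cross @(sample_event);
  CHECK_A : coverpoint check_4_a { option.weight = 0; }
  CHECK_B : coverpoint check_4_b { option.weight = 0; }
  // Crossing the variable names (not the labels) makes the tool
  // create two implicit coverpoints, each with the default weight 1.
  AxB : cross check_4_a, check_4_b;
endgroup
```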
The expectation here is that for a single simulation (expecting one of the cross bins to be hit), we will end up with 25% coverage, as we have set the weight of the individual coverpoints to zero. However, what essentially happens is the following: two internal coverpoints for check_4_a and check_4_b are generated, and these are used to compute the coverage score of the ‘crossed’ coverpoint. So you end up with a total of four coverpoints, two with option.weight set to 0 (i.e. CHECK_A and CHECK_B) and two implicit coverpoints with option.weight at its default of 1 (i.e. check_4_a and check_4_b). Thus, for a single simulation, you will not get the 25% coverage desired.
Now with this report we see the following issues:
- We see four coverpoints while the expectation was only two.
- The weights of the labeled coverpoints (CHECK_A and CHECK_B) are zero as expected, but the implicitly generated coverpoints still carry the default weight of 1.
- The overall coverage numbers are not what we desired.
In order to avoid the above undesired results, we need to take care of the following aspects:
- Use type_option.weight = 0 instead of option.weight = 0.
- Use the coverpoint labels instead of the variable names to specify the cross.
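Applying both fixes, a sketch of the corrected covergroup (names taken from the text; the surrounding declarations are assumptions):

```systemverilog
// Corrected sketch: type_option.weight plus a label-based cross.
bit   check_4_a, check_4_b;
event sample_event;

covergroup cg_cross @(sample_event);
  CHECK_A : coverpoint check_4_a { type_option.weight = 0; }
  CHECK_B : coverpoint check_4_b { type_option.weight = 0; }
  // Crossing the labels reuses the existing coverpoints, so no
  // implicit weight-1 coverpoints are generated.
  AxB : cross CHECK_A, CHECK_B;
endgroup
```

Note that type_option values must be constant expressions, since they apply to the covergroup type rather than to a particular instance.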
Hope my findings will be useful for you and you will use these options/attributes appropriately to get the best value out of your coverage metrics (without losing any sleep or debug cycles to figure out why they didn’t behave as you expected them to)!