Verification Martial Arts: A Verification Methodology Blog

Proxy Coverage

Posted by Andrew Piziali on January 22nd, 2010


Andrew Piziali, Independent Consultant

I was recently talking with a friend about a subject near and dear to my heart: functional coverage. Our topic was block level or subsystem coverage models that are not ultimately satisfied until paired with corresponding system level context. The question at hand was “Is it possible to use conditional block level coverage as a proxy for system level coverage by recording coverage points at the block level, conditioned upon subsequent observation of system level context?”

For example, let’s say I have an SoC DUV (design under verification) that contains an MPEG-4 AVC (H.264) video encoder with a variety of features enabled by configuration registers. I need to observe the encoder operating in a subset of modes—referred to as “profiles” in H.264 parlance—defined by permutations of the features. In addition, each profile is only stressed when the encoder is used by an application for which the profile was designed. For example, the “high profile” is intended for high definition television while the “scalable baseline profile” was designed for mobile and surveillance use. The profiles may be simulated at the block level but the applications are only simulated at the system level. How do I design, implement and fill this H.264 coverage model so that the model is populated using primarily block level simulations, with their inherent performance advantage, while depending upon system level application context? May I even use the block level coverage results as a proxy for the corresponding system level coverage?

I think this nut is tougher to crack than the black walnuts that recently fell from my tree. The last time I tried to crack those nuts I resorted to vise grips from my tool chest, leading to a number of unintended, but spectacular, results, but I digress. My friend and I were able to define a partial solution but it wasn’t at all clear whether or not a viable solution exists that leverages the speed of block level simulations. After you finish reading this post, I’d like to hear your thoughts on the matter. This is how far we got.

As with the top level design of any coverage model, we started with its semantic description (sometimes called a “story line”):

Record the encoder operating in each of its profiles while in use by the corresponding applications.

Next, we identified the attributes required by the model and their values:

Attribute / Values
Profile: BP, CBP, Hi10P, Hi422P, Hi444PP, HiP, HiSP, MP, XP
Application: broadcast, disc_storage, interlaced_video, low_cost, mobile, stereo_video, streaming_video, video_conferencing
Feature: 8×8_v_4×4_transform_adaptivity, Arbitrary_slice_ordering, B_slices, CABAC_entropy_coding, Chroma_formats, Data_partitioning, Flexible_macroblock_ordering, Interlaced_coding, Largest_sample_depth, Monochrome, Predictive_lossless_coding, Quantization_scaling_matrices, Redundant_slices, Separate_Cb_and_Cr_QP_control, Separate_color_plane_coding, SI_and_SP_slices

Then, we finished the top level design using a second order model (simplified somewhat for clarity):

[Table: top level design of the H.264 coverage model, with rows labeled Attribute, Value, Sampling Time and Correlation Time]
The left column of the table serves as row headings: Attribute, Value, Sampling Time and Correlation Time. The remaining cells of each row contain values for the corresponding heading. For example, the “Attribute” names are “Profile,” “Application” and “Feature.” The values of attribute “Profile” are BP, CBP, Hi10P, etc. The time at which each attribute is to be sampled is recorded in the “Sampling Time” row. The time that the most recently sampled attribute values are to be recorded as a set is when the “application [is] started,” the correlation time. Finally, the magenta cells define the value tuples that compose the coverage points of the model.
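As a rough illustration of the sampling/correlation structure just described, the model could be sketched as follows. The attribute names and values come from the table above; the class and method names are my own invention, not an actual implementation (the feature list is abbreviated for brevity):

```python
# Illustrative sketch of the second-order coverage model described above.
# Attribute names and values are from the table; the CoverageModel class
# and its sample()/correlate() API are invented for illustration only.

PROFILES = {"BP", "CBP", "Hi10P", "Hi422P", "Hi444PP", "HiP", "HiSP", "MP", "XP"}
APPLICATIONS = {"broadcast", "disc_storage", "interlaced_video", "low_cost",
                "mobile", "stereo_video", "streaming_video", "video_conferencing"}
FEATURES = {"ASO", "CABAC", "B_slices"}  # abbreviated; the full table lists 16

class CoverageModel:
    def __init__(self):
        self.points = set()   # recorded (profile, application, feature) tuples
        self._sampled = {}    # most recently sampled value of each attribute

    def sample(self, attribute, value):
        """Sampling time: latch the current value of one attribute."""
        self._sampled[attribute] = value

    def correlate(self):
        """Correlation time ("application started"): record the latched
        attribute values together as one coverage point tuple."""
        point = (self._sampled["Profile"],
                 self._sampled["Application"],
                 self._sampled["Feature"])
        self.points.add(point)
        return point

model = CoverageModel()
model.sample("Profile", "BP")
model.sample("Feature", "ASO")
model.sample("Application", "low_cost")
model.correlate()   # records the tuple ("BP", "low_cost", "ASO")
```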

The model is implemented in SystemVerilog, along with the rest of the verification environment, and we begin running block level simulations. Since no actual H.264 application is run at the block level, we need to invent an expected application value whenever we record a coverage point whose attribute value set lacks only an actual application. Why not substitute a proxy application value, to be replaced by an actual application value when a similar system level simulation is run? For example, if in a block level simulation we observe profile BP and feature ASO ({BP, *, ASO}), we could substitute the proxy value “need_low_cost” for the anticipated application value “low_cost,” recording the tuple {BP, need_low_cost, ASO}. This becomes a tentative proxy coverage point. Later on, when we are running a system level simulation with an H.264 application, whenever we observe a {BP, low_cost, ASO} event, we would substitute the much larger set of {BP, need_low_cost, ASO} block level events for this single system level event, replacing “need_low_cost” with “low_cost.” This would allow us to “observe” the larger set of system level {BP, low_cost, ASO} events from the higher performance block level simulations. How can we justify this substitution?
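The substitution scheme just described amounts to simple bookkeeping over tuples. Here is a toy sketch of it; the set and function names are mine, not from any real flow:

```python
# Toy sketch of the proxy-substitution scheme described above: block-level
# simulations record tentative points carrying a "need_<application>" proxy
# value, and a later system-level observation of the real application value
# promotes every matching proxy point. All names here are illustrative.

tentative = set()   # proxy coverage points from block-level simulations
confirmed = set()   # points credited once system-level context is observed

def record_block_level(profile, feature, expected_application):
    """Block level: no real application runs, so substitute a proxy value."""
    tentative.add((profile, "need_" + expected_application, feature))

def observe_system_level(profile, application, feature):
    """System level: one real observation promotes the matching proxy point,
    replacing "need_<application>" with the actual application value."""
    proxy = (profile, "need_" + application, feature)
    tentative.discard(proxy)                        # retire the proxy, if present
    confirmed.add((profile, application, feature))  # credit the real point

# Block-level runs observe {BP, *, ASO} and record the proxy point ...
record_block_level("BP", "ASO", "low_cost")
# ... later, one system-level run with a real low-cost application promotes it.
observe_system_level("BP", "low_cost", "ASO")
```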

We could argue that a particular system level coverage point represents a set of block level coverage points because (1) each block level point shares the same subset of attribute values with the system level point and (2) the block level simulation is, in a sense, a superset of the system level simulation because it implicitly abstracts away the additional detail available in the system level simulation. There is little argument that the profile value BP and feature value ASO mean the same thing in the two environments; the second reason, however, is clearly open to discussion.

This brings us back to the opening question, can we use conditional block level coverage as a proxy for system level coverage by recording coverage points at the block level, conditioned upon subsequent observation of system level context? If so, is this design approach feasible and reasonable? If not, why not? Have at it!


3 Responses to “Proxy Coverage”

  1. Janick Bergeron Says:

    Why do you define a “need_low_cost” point? What would be the meaning of an unfilled “need_low_cost” coverage point? Doesn’t the mere existence of an unfilled “low_cost” coverage point imply that you “need” it??

    Plus, I don’t think you can justify that substitution.

    All you can be justified in covering is that you have covered {BP, low_cost, ASO} at the system level. You cannot infer that you have also covered all of the other block-level {BP, *, ASO} coverage points because there is nothing to say that the system-level simulation has hit the same functional conditions within the block.

    I don’t think you need this conditional coverage either.

    You say that “each profile is only stressed when the encoder is used by an application for which the profile was designed“. What is the definition of “stressed”? Those should be the coverage points. And whether those points are hit at the block or system level should not matter. And shouldn’t it be simpler to “stress” a block at the block level? In my experience, it is difficult to stress a block at the system level without running a LOT of simulation/cases. But if you truly need to run the applications to stress the profiles, why bother running the block-level simulations since you’ll have to run those (lengthy!) system simulations anyway?

    Your system-level coverage should be the combination of applications and applicable profiles. Your block-level coverage should be the profiles and the various “stress” points.

  2. Andrew Piziali Says:

    Janick, you asked:

    Why do you define a "need_low_cost" point? What would be the meaning of an unfilled "need_low_cost" coverage point? Doesn’t the mere existence of an unfilled "low_cost" coverage point imply that you "need" it??

    The “need_low_cost” application attribute value—not “point” or “coverage point”—is required because in order to record an H.264 block level coverage point, three attribute values are required: profile, application and feature. This value is invented as a proxy for an anticipated “low_cost” value to be observed in a subsequent system level simulation. Hence, an “unfilled `need_low_cost’ coverage point” has no meaning. Regarding the existence of an unfilled “low_cost” coverage point,

    Plus, I don’t think you can justify that substitution.

    All you can be justified in covering is that you have covered {BP, low_cost, ASO} at the system level. You cannot infer that you have also covered all of the other block-level {BP, *, ASO} coverage points because there is nothing to say that the system-level simulation has hit the same functional conditions within the block.

    Let me be sure I understand what you mean by “All you can be justified in covering is that you have covered {BP, low_cost, ASO} at the system level.” I interpret this to mean you can only record {BP, low_cost, ASO} coverage points when these conditions are observed at the system level. You go on to claim we cannot record the larger set of coverage points observed at the block level when corresponding system level context is seen. This is, in fact, the crux of the question but remains unanswered. Why must “the same functional conditions” be observed at the system level in order to populate this system level coverage model when most, if not all, of the points have been tentatively observed at the block level?

    I don’t think you need this conditional coverage either.

    You say that "each profile is only stressed when the encoder is used by an application for which the profile was designed". What is the definition of "stressed"? …

    By this I mean that design logic used in a particular profile is only active—i.e. controlling a data path or other logic—when a corresponding type of application is run.

    … Those should be the coverage points. …

    Absolutely! The question is whether or not these coverage points can be incrementally observed as the composite of two abstraction levels.

    … And whether those points are hit at the block or system level should not matter. …

    Except that, as I originally stated, “the block level simulation is, in a sense, a superset of the system level simulation because it implicitly abstracts away the additional detail available in the system level simulation.” In addition, because of dramatically faster block level simulations, coverage points may be observed and recorded at a far higher rate at the block level.

    … And shouldn’t it be simpler to "stress" a block at the block level? …

    Certainly.

    … In my experience, it is difficult to stress a block at the system level without running a LOT of simulation/cases. But if you truly need to run the applications to stress the profiles, why bother running the block-level simulations since you’ll have to run those (lengthy!) system simulations anyway?

    We run block level simulations for their performance advantage, observing all but one attribute value (application). That attribute is annotated to the corresponding block level coverage points when observed at the system level.

    Your system-level coverage should be the combination of applications and applicable profiles. Your block-level coverage should be the profiles and the various "stress" points.

    If so, then what ties together the observed DUV behaviors at the block and system levels? The proposed proxy coverage is aimed at observing DUV behavior at multiple abstraction levels to record composite coverage points. I am not yet convinced this avenue for measuring verification progress is not available.

  3. Adiel Says:

    Proxy Coverage is certainly an interesting topic. The way I see it is two-fold:
    -firstly, any information that gains you more confidence in your design being ready before system integration occurs is useful.
    -secondly, as you are using quite an abstract algorithm to actually determine proxy coverage (i.e. through cross binsof/intersect), this can easily lead to false optimism. In a more complex system-level DUV, not everything may occur as idealistically as your proxy coverage definitions assume. Therefore, what you thought would be expected behaviour may turn out to be something that, without the system level as a reference model, cannot be fully justified at the block level.

    Given the second point above, if you were to implement proxy coverage (as guess-work about what you think will occur at the system level), then when you actually do run the system level there needs to be a cross-coverage check to ensure that all proxy coverage is valid against the real system-level coverage.
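    Concretely, such a validation pass might look like the following sketch, which flags any proxy point whose predicted application value was never borne out at the system level (the data structures are assumptions, not from any real flow):

```python
# Sketch of the cross-check suggested above: any block-level proxy point
# that was never confirmed by a real system-level observation is flagged
# as unvalidated. The set contents here are assumed, purely for illustration.

def unvalidated_proxies(tentative, confirmed):
    """Return proxy points whose predicted application value was never
    observed at the system level."""
    stale = set()
    for (profile, proxy_app, feature) in tentative:
        real_app = proxy_app.removeprefix("need_")   # Python 3.9+
        if (profile, real_app, feature) not in confirmed:
            stale.add((profile, proxy_app, feature))
    return stale

tentative = {("BP", "need_low_cost", "ASO"), ("HiP", "need_broadcast", "CABAC")}
confirmed = {("BP", "low_cost", "ASO")}   # only one prediction was borne out
stale = unvalidated_proxies(tentative, confirmed)
```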
