Verification Martial Arts: A Verification Methodology Blog

Archive for the 'Verification Planning & Management' Category


Avoiding Redundant Simulation Cycles with your UVM VIP based simulation with a Simple Save-Restore Strategy

Posted by paragg on 6th March 2014

In many verification environments, you reuse the same configuration cycles across different testcases. These cycles might involve writing to and reading from different configuration and status registers, loading program memories, and similar tasks to set up a DUT for the targeted stimulus. In many of these environments, the time taken by these configuration cycles is very long. There is also a lot of redundancy, as verification engineers have to run the same set of verified configuration cycles for different testcases, leading to a loss in productivity. This is especially true for complex verification environments with multiple interfaces, which require different components to be configured.

The Verilog language provides an option to save the state of the design and the testbench at a particular point in time. You can restore the simulation to the same state and continue from there. This can be done by adding the appropriate built-in system tasks to the Verilog code. VCS provides the same options from the Unified Command line Interpreter (UCLI).

However, it is not enough for you to restore simulation from the saved state. For different simulations, you may want to apply different random stimulus to the DUT. In the context of UVM, you would want to run different sequences from a saved state, as shown below.

In the above example, apart from the last step, which varies to a large extent, the rest of the steps, once established, need no iteration.

Here we explain how to achieve the above strategy with the simple existing UBUS example available in the standard UVM installation. Simple changes are made in the environment to show what needs to be done to bring in this additional capability. Within the existing set of tests, two of them, namely “test_read_modify_write” and “test_r8_w8_r4_w4”, differ only with respect to the master sequence being executed – i.e. “read_modify_write_seq” and “r8_w8_r4_w4_seq” respectively.

Let’s say that we have a scenario where we want to save a simulation once the reset_phase is done and then start executing different sequences after the reset_phase in the restored simulations. To demonstrate a similar scenario through the UBUS tests, we introduced a delay in the reset_phase of the base test (in a real test, this may correspond to the PLL lock, DDR initialization or basic DUT configuration).

The following snippet shows how the existing tests are modified to bring in the capability of running different tests in different ‘restored’ simulations.

As evident in the code, we made two major modifications (a sketch follows the list below):

  • Shifted the setting of the phase default_sequence from the build phase to the start of the main phase.
  • Retrieved the name of the sequence as an argument from the command line and processed the string in the code to execute the sequence on the relevant sequencer.
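A minimal sketch of such a modified test follows. It is based on the standard UBUS example, but the exact class and hierarchy names (ubus_example_base_test, ubus_example_tb0.ubus0.masters[0].sequencer) should be checked against your UVM installation, and the +SEQ= plusarg name is purely illustrative:

   class ubus_restore_test extends ubus_example_base_test;
      `uvm_component_utils(ubus_restore_test)

      function new(string name = "ubus_restore_test", uvm_component parent = null);
         super.new(name, parent);
      endfunction

      // The default sequence is no longer chosen in the build phase; instead the
      // sequence name is taken from the command line at the start of the main
      // phase, so each restored simulation can be steered to a different sequence.
      virtual task main_phase(uvm_phase phase);
         string seq_name;
         uvm_object_wrapper seq_type;
         uvm_sequence_base seq;

         if (!$value$plusargs("SEQ=%s", seq_name))
            seq_name = "read_modify_write_seq";

         // Map the string onto one of the existing UBUS master sequences.
         case (seq_name)
            "read_modify_write_seq": seq_type = read_modify_write_seq::get_type();
            "r8_w8_r4_w4_seq":       seq_type = r8_w8_r4_w4_seq::get_type();
            default: `uvm_fatal("SEQ", {"Unknown sequence name: ", seq_name})
         endcase

         if (!$cast(seq, seq_type.create_object("seq")))
            `uvm_fatal("SEQ", "Sequence creation failed")

         phase.raise_objection(this);
         seq.start(ubus_example_tb0.ubus0.masters[0].sequencer);
         phase.drop_objection(this);
      endtask
   endclass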

As you can see, the changes are kept to a minimum. With this, the above generic framework is ready to be simulated. In VCS, the save/restore flow can be enabled in several ways; one of them is sketched below.
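The UCLI save and restore commands used here are the ones referred to earlier; the script contents, run time and checkpoint name are purely illustrative:

   # save.do: advance past the one-time configuration/reset activity, then checkpoint
   run 100 ns
   save ubus_checkpoint
   quit

   % simv -ucli -i save.do

   # restore.do: resume from the checkpoint and run to completion
   restore ubus_checkpoint
   run

   % simv -ucli -i restore.do +SEQ=read_modify_write_seq
   % simv -ucli -i restore.do +SEQ=r8_w8_r4_w4_seq

Each restored run starts from the post-reset state, and only the chosen sequence differs, which is exactly where the redundant cycles are saved.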

Thus, the above strategy helps in optimal utilization of compute resources with simple changes in your verification flow. Hope this was useful and that you can easily make the changes in your verification environment to adopt this flow and avoid redundant simulation cycles.

Posted in Automation, Coding Style, Configuration, Creating tests, Customization, Optimization/Performance, Organization, Reuse, Stimulus Generation, SystemVerilog, Tutorial, UVM, Uncategorized, Verification Planning & Management | 1 Comment »

DVE and UVM on wheels

Posted by Yaron Ilani on 22nd May 2012

Sometimes driving to work can be a little bit boring, so a few days ago I decided to take advantage of this time slot to introduce myself and tell you a little bit about the behind-the-scenes of my video blog. Hope you’ll like it!

Hey if you’ve missed any of my short DVE videos – here they are:

Posted in Debug, UVM, Verification Planning & Management | Comments Off

Customizing UVM Messages Without Getting a Sunburn

Posted by Brian Hunter on 24th April 2012

The code snippets presented are available below the video.

https://www.youtube.com/watch?v=K7V505WKxuU&feature=youtu.be

my_macros.sv:

   `define my_info(MSG, VERBOSITY) \
      begin \
         if(uvm_report_enabled(VERBOSITY,UVM_INFO,get_full_name())) \
            uvm_report_info(get_full_name(), $sformatf MSG, 0, `uvm_file, `uvm_line); \
      end

  `define my_err(MSG)         \
      begin \
         if(uvm_report_enabled(UVM_NONE,UVM_ERROR,get_full_name())) \
            uvm_report_error(get_full_name(), $sformatf MSG, UVM_NONE, `uvm_file, `uvm_line); \
      end

   `define my_warn(MSG)        \
      begin \
         if(uvm_report_enabled(UVM_NONE,UVM_WARNING,get_full_name())) \
            uvm_report_warning(get_full_name(), $sformatf MSG, UVM_NONE, `uvm_file, `uvm_line); \
      end

   `define my_fatal(MSG)       \
      begin \
         if(uvm_report_enabled(UVM_NONE,UVM_FATAL,get_full_name())) \
            uvm_report_fatal(get_full_name(), $sformatf MSG, UVM_NONE, `uvm_file, `uvm_line); \
      end
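Because each macro passes MSG straight to $sformatf, the message and its arguments must arrive as a single parenthesized list, which gives call sites the double-parentheses form. A hypothetical call site (the component and variable names are made up) would look like this:

   class my_component extends uvm_component;
      `uvm_component_utils(my_component)

      function new(string name, uvm_component parent);
         super.new(name, parent);
      endfunction

      task check_result(int exp, int act);
         `my_info(("expected %0d, got %0d", exp, act), UVM_MEDIUM)
         if (act !== exp)
            `my_err(("mismatch: expected %0d, got %0d", exp, act))
      endtask
   endclass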

my_init.sv:

  initial begin
      int unsigned fwidth, hwidth;   // column widths parsed from the plusargs below
      my_report_server_c report_server = new("my_report_server");

      if($value$plusargs("fname_width=%d", fwidth)) begin
         report_server.file_name_width = fwidth;
      end
      if($value$plusargs("hier_width=%d", hwidth)) begin
         report_server.hier_width = hwidth;
      end

      uvm_pkg::uvm_report_server::set_server(report_server);

      // all "%t" shall print out in ns format with 8 digit field width
      $timeformat(-9,0,"ns",8);
   end
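At run time, the two plusargs parsed above can then be used to tune the column widths, for example (the executable name and values are just placeholders):

   % simv +UVM_TESTNAME=my_test +fname_width=20 +hier_width=40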

my_report_server.sv:

class my_report_server_c extends uvm_report_server;
   `uvm_object_utils(my_report_server_c)

   string filename_cache[string];
   string hier_cache[string];

   int    unsigned file_name_width = 28;
   int    unsigned hier_width = 60;

   uvm_severity_type sev_type;
   string prefix, time_str, code_str, fill_char, file_str, hier_str;
   int    last_slash, flen, hier_len;

   function new(string name="my_report_server");
      super.new();
   endfunction : new

   virtual function string compose_message(uvm_severity severity, string name, string id, string message,
                                           string filename, int line);
      // format filename & line-number
      last_slash = filename.len() - 1;
      if(file_name_width > 0) begin
         if(filename_cache.exists(filename))
            file_str = filename_cache[filename];
         else begin
            while(filename[last_slash] != "/" && last_slash != 0)
               last_slash--;
            file_str = (filename[last_slash] == "/")?
                       filename.substr(last_slash+1, filename.len()-1) :
                       filename;

            flen = file_str.len();
            file_str = (flen > file_name_width)?
                       file_str.substr((flen - file_name_width), flen-1) :
                       {{(file_name_width-flen){" "}}, file_str};
            filename_cache[filename] = file_str;
         end
         $swrite(file_str, "(%s:%6d) ", file_str, line);
      end else
         file_str = "";

      // format hier
      hier_len = id.len();
      if(hier_width > 0) begin
         if(hier_cache.exists(id))
            hier_str = hier_cache[id];
         else begin
            if(hier_len > 13 && id.substr(0,12) == "uvm_test_top.") begin
               id = id.substr(13, hier_len-1);
               hier_len -= 13;
            end
            if(hier_len < hier_width)
               hier_str = {id, {(hier_width - hier_len){" "}}};
            else if(hier_len > hier_width)
               hier_str = id.substr(hier_len - hier_width, hier_len - 1);
            else
               hier_str = id;
            hier_str = {"[", hier_str, "]"};
            hier_cache[id] = hier_str;
         end
      end else
         hier_str = "";

      // format time
      $swrite(time_str, " {%t}", $time);

      // determine fill character
      sev_type = uvm_severity_type'(severity);
      case(sev_type)
         UVM_INFO:    begin code_str = "%I"; fill_char = " "; end
         UVM_ERROR:   begin code_str = "%E"; fill_char = "_"; end
         UVM_WARNING: begin code_str = "%W"; fill_char = "."; end
         UVM_FATAL:   begin code_str = "%F"; fill_char = "*"; end
         default:     begin code_str = "%?"; fill_char = "?"; end
      endcase

      // create line's prefix (everything up to time)
      $swrite(prefix, "%s-%s%s%s", code_str, file_str, hier_str, time_str);
      if(fill_char != " ") begin
         for(int x = 0; x < prefix.len(); x++)
            if(prefix[x] == " ")
               prefix.putc(x, fill_char);
      end

      // append message
      return {prefix, " ", message};
   endfunction : compose_message
endclass : my_report_server_c

Posted in Debug, Messaging, SystemVerilog, UVM, Verification Planning & Management | Comments Off

Why do we need an integrated coverage database for simulation and formal analysis?

Posted by Shankar Hemmady on 23rd January 2012

Closing the coverage gap has been a long-standing challenge in simulation-based verification, resulting in unpredictable delays while achieving functional closure. Formal analysis is a big help here. However, most of the verification metrics that give confidence to a design team are still governed by directed and constrained random simulation. This article describes a methodology that embraces formal analysis along with dynamic verification approaches to automate functional convergence: http://soccentral.com/results.asp?CatID=488&EntryID=37389

I would love to learn what you do to attain functional closure.

Posted in Coverage, Metrics, Formal Analysis, Verification Planning & Management | Comments Off

Closed Loop Register Verification using IDesignSpec and the Register Abstraction Layer

Posted by Amit Sharma on 26th September 2011

Nitin Ahuja, Agnisys Technology Pvt. Ltd

In the previous article titled “Automatic generation of Register Model for VMM using IDesignSpecTM ” we discussed how it is advantageous to use a register model generator such as IDesignSpecTM, to automate the process of RALF model generation. Taking it forward, in this article we will discuss how to close the loop on register verification.

Various forms of coverage are used to ensure that registers are functioning properly. There are three coverage models in VMM. They are:

1. reg_bits coverage: this model is used to make sure that all the bits in the register are covered. This model works by writing and reading both 1 and 0 on every register bit, hence the name. This is specified using “cover +b” in the RALF model.

2. field_vals coverage: the field value coverage model is implemented at the register level and supports value coverage of all fields and cross coverage between fields and other cross coverage points within the same register. This is specified using “cover +f” in the RALF model. The user can specify the cross coverage depending on the functionality.

3. Address map: this coverage model is implemented at block level and ensures that all registers and the memories in the block have been read from and written to. This is specified using “cover +a” in the RALF model.

We will discuss how coverage can be switched on/off and how the type of coverage can be controlled for each field directly from the register specification.

Once the RALF model is generated, the next step in verification is to generate the RTL and the SystemVerilog RAL model using ‘ralgen’. The generated RAL model along with the RTL can be compiled and simulated in the VMM environment to generate the coverage database. This database is used for the report generation and analysis.

Reports can be generated using IDesignSpecTM (IDS). IDS-generated reports have an advantage over other reports in that they present all the coverage at one glance in a much more concise way.

Turning Coverage ON or OFF

IDesignSpecTM enables the users to turn ON/OFF all the three types of coverage from within the MS Word specification itself.

Coverage can be specified and controlled using the “coverage” property in IDesignSpecTM which has the following possible values:

[Table: possible values of the “coverage” property]

The hierarchical “coverage” property enables users to control the coverage of the whole block or at the chip level.

Here is a sample of how coverage can be specified in IDesignSpecTM:

[Figure: sample coverage specification in the IDesignSpec MS Word input]

This would be the corresponding RALF file:

[Figure: the RALF file generated by IDS]

The coverage bins for each CoverPoint along with the cross for the various CoverPoints can also be defined in the specification as shown below:

[Figure: coverage bins and crosses for the CoverPoints, as entered in the specification]

This would translate to the following RALF:

[Figure: the corresponding RALF with covergroup bins and crosses]

Now, the next step after RALF generation would be to generate the RAL Model from the IDS generated RALF.

RAL MODEL AND RTL GENERATION FROM RALF:

The IDS generated RALF can be used with the Synopsys ‘ralgen’ to generate the RAL  (VMM or UVM) model as well as the RTL.

RAL model can be generated by using the following command:

[Figure: ralgen invocations used to generate the RAL model and the RTL]

If you specify -uvm in the first ralgen invocation above, a UVM register model is generated.

COMPILATION AND REPORT GENERATION:

Once the RTL and the RAL model are generated using the ‘ralgen’, the complete model can be compiled and simulated in the VMM environment using VCS.

To compile the model use the following command on the command line:

vcs -R +plusarg_save -sverilog -o "simv1" -ntb_opts rvm+dtm +incdir+<directories to search `defines> <files to be compiled> +define+RAL_COVERAGE

The compilation and simulation generates the simulation database which is used for the generation of the coverage reports.

Coverage reports can be generated in various forms, but the most concise form is graphics showing all the coverage at a glance. For this, a Tcl script “ivs_simif.tcl” takes the simulation database and generates a text-based report on execution of the following command:

% ivs_simif.tcl -in simv.vdb -svg

Before running the above command, set the environment variable “IDS_SIM_DIR”; the text reports are generated at this location. This also tells IDS where to look for the simulation data file.

A detailed graphical view of the report can be generated from IDS with the help of this text report. To generate the graphical report in the form of “scalable vector graphics” (SVG) select the “SVG” output from the IDS config and regenerate.

Another way of generating the SVG is to use the IDS-XML or the Doc/Docx specification of the model as the input to IDS in batch mode, using the following command:

% idsbatch <IDS_generated_XML or doc/docx specification> -out “svg” -dir output_directory

Coverage Reports

IDesignSpec generates two types of reports from the input database.

They are:

1. Field_vals report

2. Reg_bits report

Field_vals report:

Field_vals report gives the graphical view of the field_vals coverage and the address coverage of the various registers and their respective fields.

The amount of coverage for each field (CoverPoint) is depicted by the level of green color in the field, while that for the complete register (CoverGroup) is shown by the color of the name of the register.

The address coverage for the individual register (CoverPoint) is shown by the color of the address of the register (green if addressed; black if not addressed), while that of the entire block (CoverGroup) is shown by the color of the name of the block.

The coloring scheme for all the CoverGroups (i.e., the register name in the case of field_vals coverage and the block name in the case of address coverage) is:

1. If the overall coverage is greater than or equal to 80%, the name appears in GREEN

2. If the coverage is greater than 70% but less than 80%, it appears in YELLOW

3. For coverage less than 70%, the name appears in RED

Figure 1 shows the field_vals and address coverage.

Figure:  Closed loop register verification using RALF and IDS

The above sample gives the following coverage information:

a. Two registers, T and resetvalue, out of a total of 9 registers are not addressed. Thus the overall coverage of the block falls in the 70-80% range, which is depicted by the color of Stopwatch (the name of the block).

b. All the fields of the registers are filled with some amount of green, which shows the amount of coverage. For example, field T1 of register arr is covered 100%, so it is completely filled, while FLD4 of register X is covered only about 10%. The exact coverage value can be obtained by hovering over the field to get a tooltip showing it.

c. The color of the name of a register, for example X in red, shows the overall coverage of the whole register, which is less than 70% for X.

Reg_bits report:

Reg_bits report gives the detailed graphical view of the reg_bits coverage and address coverage.

Address coverage for reg_bits is shown in the same way as for the address coverage in field_vals. Reg_bits coverage has 4 components, that is,

1. Written as 1

2. Read as 1

3. Written as 0

4. Read as 0

Each of the 4 components is allocated a specific region inside a bit. If that component of the coverage is hit, the corresponding region is shown in green; otherwise it is blank. The overall coverage of the entire register is shown by the color of the name of the register, as in the case of field_vals.

[Figure: sample Reg_bits coverage report]

The above sample report shows that there is no issue with “Read as 1” for the ‘resetvalue’ register, while the other types of reads/writes have not been hit completely.

Thus, in this article we described what the various coverage models for a register are and how to generate the RALF coverage model of the registers automatically with minimum effort. An intuitive visualization of the register coverage data eases the effort involved in deciphering the coverage reports from lengthy simulation log files. This type of closed loop register verification ensures better coverage and high quality results in less time. Hope you found this useful. Do share your feedback with me, and also let me know if you want any additional details to get the maximum benefits from this flow.

Posted in Automation, Coverage, Metrics, Organization, Register Abstraction Model with RAL, Tools & 3rd Party interfaces, Verification Planning & Management | 1 Comment »

Using the VMM Performance Analyzer in a UVM Environment

Posted by Amit Sharma on 23rd August 2011

As a generic VMM package, the Performance Analyzer (PAN) is not based on, nor does it require, specific shared resources, transactions or hardware structures. It can be used to collect statistical coverage metrics relating to the utilization of a specific shared resource. This package helps to measure and analyze many different performance aspects of a design. UVM does not have a performance analyzer as part of the base class library as of now. Given that the collection, tracking and analysis of performance metrics of a design has become a key checkpoint in today’s verification, there is a lot of value in integrating the VMM Performance Analyzer in a UVM testbench. To demonstrate this, we will use both VMM and UVM base classes in the same simulation.

Performance is analyzed based on user-defined atomic resource utilization called ‘tenures’. A tenure refers to any activity on a shared resource with a well-defined starting and ending point. A tenure is uniquely identified by an automatically-assigned identifier. We take the XBUS example in  $VCS_HOME/doc/examples/uvm_1.0/simple/xbus as a demo vehicle for the UVM environment.

Step 1: Defining data collection

Data is collected for each resource in a separate instance of the “vmm_perf_analyzer” class. These instances should be allocated in the build phase of the top level environment.

For example, in xbus_demo_tb.sv:

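A minimal sketch of what that allocation might look like is shown below; the instance names, database name and the use of build_phase are illustrative assumptions rather than the exact code of the original post:

   class xbus_demo_tb extends uvm_env;
      // ... existing xbus_demo_tb members (xbus0 env, scoreboard, etc.) ...
      vmm_sql_db_sqlite perf_db;    // SQL database that will hold the statistics
      vmm_perf_analyzer xbus_perf;  // one analyzer instance per shared resource

      function new(string name, uvm_component parent);
         super.new(name, parent);
      endfunction

      virtual function void build_phase(uvm_phase phase);
         super.build_phase(phase);
         perf_db   = new("xbus_perf_data");            // database name
         xbus_perf = new("XbusPerfAnalyzer", perf_db); // analyzer bound to that database
      endfunction
   endclass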

Step 2: Defining the tenure and enabling data collection

There must be one instance of the “vmm_perf_tenure” class for each operation that is performed on the shared resource. Tenures are associated with the instance of the “vmm_perf_analyzer” class that corresponds to the resource operated on. In the case of the XBUS example, let’s say we want to measure transaction throughput performance (i.e., for the XBUS transfers). This is how we will associate a tenure with the XBUS transaction. To denote the starting and ending of the tenure, we define two additional events in the XBUS Master Driver (started, ended). ‘started’ is triggered when the Driver obtains a transaction from the Sequencer, and ‘ended’ once the transaction is driven on the bus and the driver is about to indicate seq_item_port.item_done(rsp). At the same time as ‘started’ is triggered, a callback is invoked to get the PAN to start collecting statistics. Here is the relevant code.

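A sketch of the driver-side hooks this describes is shown below; the callback type name (xbus_perf_cb), its task names and the details of the driving loop are assumptions, not the exact code of the XBUS example:

   typedef class xbus_perf_cb;

   class xbus_master_driver extends uvm_driver #(xbus_transfer);
      `uvm_register_cb(xbus_master_driver, xbus_perf_cb)

      event started, ended;   // tenure boundaries, as described above

      function new(string name, uvm_component parent);
         super.new(name, parent);
      endfunction

      // ... existing driver code ...

      virtual protected task get_and_drive();
         forever begin
            seq_item_port.get_next_item(req);
            -> started;   // transaction obtained from the sequencer
            `uvm_do_callbacks(xbus_master_driver, xbus_perf_cb, tenure_started(this, req))
            drive_transfer(req);   // existing XBUS driving code
            -> ended;     // about to report item_done back to the sequencer
            `uvm_do_callbacks(xbus_master_driver, xbus_perf_cb, tenure_ended(this, req))
            seq_item_port.item_done();
         end
      endtask
   endclass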

Now, the Performance Analyzer works on classes extended from vmm_data and uses the base class functionality for starting/stopping these tenures. Hence, the callback tasks that get triggered at the appropriate points have to convert the UVM transactions into corresponding VMM ones. This is how it is done.

Step 2.a: Creating the VMM counterpart of the XBUS Transfer Class

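A sketch of that VMM counterpart and a simple conversion routine follows; the field list is an assumption modeled loosely on the xbus_transfer fields:

   class xbus_transfer_vmm extends vmm_data;
      static vmm_log log = new("xbus_transfer_vmm", "class");

      rand bit [15:0]   addr;
      rand bit [7:0]    data[];
      rand int unsigned size;

      function new();
         super.new(this.log);
      endfunction
   endclass

   // Field-by-field conversion from the UVM transaction to its VMM counterpart.
   function automatic xbus_transfer_vmm uvm2vmm(xbus_transfer tr);
      uvm2vmm = new();
      uvm2vmm.addr = tr.addr;
      uvm2vmm.size = tr.size;
      uvm2vmm.data = tr.data;
   endfunction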

Step 2.b: Using the UVM Callback for starting/stopping data collection and calling the UVM -> VMM conversion routines appropriately.

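A sketch of such a callback class is shown below; the class and task names, and the unused initiator/target ids, are assumptions:

   class xbus_perf_cb extends uvm_callback;
      vmm_perf_analyzer perf_an;                  // analyzer created in Step 1
      vmm_perf_tenure   tenures[xbus_transfer];   // one tenure per in-flight transfer

      function new(string name = "xbus_perf_cb", vmm_perf_analyzer perf_an = null);
         super.new(name);
         this.perf_an = perf_an;
      endfunction

      // Invoked by the driver when 'started' fires: convert the UVM transaction
      // (Step 2.a) and open a tenure on the analyzer.
      virtual task tenure_started(xbus_master_driver drv, xbus_transfer tr);
         xbus_transfer_vmm vtr = uvm2vmm(tr);
         tenures[tr] = new(0, 0, vtr);   // initiator/target ids not used in this sketch
         perf_an.start_tenure(tenures[tr]);
      endtask

      // Invoked when 'ended' fires: close the tenure.
      virtual task tenure_ended(xbus_master_driver drv, xbus_transfer tr);
         perf_an.end_tenure(tenures[tr]);
         tenures.delete(tr);
      endtask
   endclass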

The callback class needs to be associated with the driver as follows in the top testbench (xbus_demo_tb).

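One way to do this, assuming the analyzer and driver handles sketched above (the xbus0.masters[0].driver path is also an assumption), is through the standard uvm_callbacks registration once the environment has been built:

   // inside xbus_demo_tb
   virtual function void end_of_elaboration_phase(uvm_phase phase);
      xbus_perf_cb perf_cb = new("perf_cb", xbus_perf);
      uvm_callbacks#(xbus_master_driver, xbus_perf_cb)::add(xbus0.masters[0].driver, perf_cb);
   endfunction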

Step 3: Generating the reports

In the report_ph of xbus_demo_tb, save and write out the appropriate databases.

[Figure: report-phase code that saves and writes out the performance databases]

Step 4: Run the simulation and analyze the reports for possible inefficiencies.

Use -ntb_opts uvm-1.0+rvm +define+UVM_ON_TOP with VCS
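Putting those options together, a representative compile command might look like the following, where the file list name is just a placeholder:

   % vcs -sverilog -ntb_opts uvm-1.0+rvm +define+UVM_ON_TOP -f xbus_files.f vmm_perf.sv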

Include vmm_perf.sv along with the new files in the included file list.  The following table shows the text report at the end of the simulation.

[Table: text report produced at the end of the simulation]

You can generate the SQL databases as well, and typically you would be doing this across multiple simulations. Once you have done that, you can create your custom queries to get the desired information out of the SQL database across your regression runs. You can also analyze the results and generate the required graphs in Excel. Please see the following post: Analyzing results of the Performance Analyzer with Excel

So there you go, the VMM Performance Analyzer can fit into any verification environment you have. So make sure that you leverage this package to make the RTL-level performance measurements that are needed to validate micro-architectural and architectural assumptions, as well as to tune the RTL for optimal performance.

Posted in Coverage, Metrics, Interoperability, Optimization/Performance, Performance Analyzer, VMM infrastructure, Verification Planning & Management | 6 Comments »

Performance appraisal time – Getting the analyzer to give more feedback

Posted by Amit Sharma on 28th January 2011

S. Prashanth, Verification & Design Engineer, LSI Logic

We wanted to use the VMM Performance Analyzer to analyze the performance of the bus matrix we are verifying. To begin with, we wanted the following information while accessing a shared resource (slave memory):

· Throughput/Effective Bandwidth for each master in terms of Mbytes/sec

· Worst case latency for each master

· Initiator and Target information associated with every transaction

By default, the performance analyzer records the initiator id, target id, start time and end time of each tenure (associated with a corresponding transaction) in the SQL database. In addition to the useful information provided by the Performance Analyzer, we needed the number of bytes transferred for each transaction to be dumped in the SQL database. This was required for calculating throughput, which in our case was the number of bytes transferred from the start time of the first tenure until the end time of the last tenure of a master. Given that we had a complex interconnect with 17 initiators, it was difficult for us to correlate an initiator id with its name. So we wanted to add initiator names as well in the SQL database. Let’s see how this information can be added from the environment.

An earlier blog on the performance analyzer, “Performance and Statistical analysis from HDL simulations using the VMM Performance Analyzer”, provides useful information on how to use the VMM performance analyzer in a verification environment. Starting with that, let me outline the additional steps we took to get the statistical analysis we desired.

Step 1: Define the fields and their data types required to be added to the data base in a string (user_fields). i.e., “MasterName VARCHAR(255)” for initiator name and “NumBytes SMALLINT” for number of bytes. Provide this string to the performance analyzer instance during initialization.

class tb_env extends vmm_env;
   vmm_sql_db_sqlite db;        // SQLite database
   vmm_perf_analyzer bus_perf;
   string user_fields;

   virtual function void build();
      super.build();
      db = new("perf_data");    // Initializing the database
      user_fields = "MasterName VARCHAR(255), NumBytes SMALLINT";
      bus_perf = new("BusPerfAnalyzer", db, , , , user_fields);
   endfunction
endclass

Step 2: When each transaction ends, get information about the initiator name and the number of bytes transferred in a string variable (user_values). Then provide the variable to the performance analyzer through the end_tenure() method.

fork
   begin
      vmm_perf_tenure perf_tenure = new(initiator_id, target_id, txn);
      string user_values;

      bus_perf.start_tenure(perf_tenure);
      txn.notify.wait_for(vmm_data::ENDED);
      user_values = $psprintf("%s, %0d", initiator.get_object_name(), txn.get_num_bytes());
      bus_perf.end_tenure(perf_tenure, user_values);
   end
join_none




With this, the performance analyzer dumps the additional user information in an SQL data base. The blog “Analyzing results of Performance Analyzer with Excel”  explains how to extract information from the SQL database generated. Using the spreadsheet, we could create our own plots and ensure that  management has all the analysis it needs to provide the perfect appraisal.

Posted in Optimization/Performance, Performance Analyzer, Verification Planning & Management | 1 Comment »

Planning for Functional Coverage

Posted by JL Gray on 23rd November 2010

JL Gray, Vice President, Verilab, Austin, Texas, and Author of Cool Verification

Functional coverage cures cancer and the common cold. It helps kittens down from trees and old ladies up the stairs. Functional coverage is the greatest thing since sliced bread, right? So all we need to do is put some in our verification environments and our projects will be a success.

Well… perhaps not.

Many teams I’ve worked with have struggled with various aspects of functional coverage. The area I’d like to address today is the part most engineers spend the least amount of time on – planning (as opposed to implementing) your functional coverage model. One of the critically important uses for functional coverage is to help us (engineers and project managers alike) know when we are done with verification. We all know it’s impossible to 100% verify a complex chip. But it is possible to define what “done” means for our individual teams and projects. The functional coverage model gives us a way to correlate what we’ve accomplished in the testbench with some definition of “done” in our verification plan.

What that means is that the verification plan itself is the key to understanding where we are relative to where we need to be to finish a project. It is the document around which managers, designers, verification engineers and others can all interact and agree on what constitutes “good enough.” Verification plans often contain lists of design features and statements about what types of tests should be run to verify these features. Unfortunately, that is almost never sufficient to give stakeholders insight into what should be covered.

Verification plans need to contain a sufficient level of detail to meet the following goals:

  1. Allow key stakeholders who may not know anything about SystemVerilog to see what will be covered and then provide a platform for them to give feedback to the verification engineer.
  2. Allow this discussion to take place before any SystemVerilog code has been written.
  3. Allow an engineer other than the author of the document to understand and implement the coverage model.
  4. Allow the project manager to understand what coverage goals should be reached for the module to be considered verified.

Point 3 is important in ways some engineers may not realize. During my verification planning seminars I often ask students if they’ve ever heard of a project’s “truck factor”. Basically, a project’s truck factor is the number of people on the project who could get hit by a truck before the project is in trouble. A verification plan should specify coverage goals in enough detail that someone besides you (this could include the you 6 months from now who’s forgotten all relevant details about the block in question) can understand what the original intent of the document was. For example, some plans I’ve seen make comments like “test all valid packet lengths”. What are the valid lengths? They should be specified in the plan. Are there certain corner cases you think are important to cover? Make sure you call out this information with sufficient detail so that the project manager can tell if you’ve actually accomplished your goals!
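To make the point concrete, here is what the difference looks like once the plan is specific. The protocol, lengths and names below are purely hypothetical; the point is that the bins come straight from the plan rather than from the implementer's guess:

   // "Test all valid packet lengths" becomes, after the plan spells them out:
   //   valid lengths are 64 to 1518 bytes; 64 and 1518 are corner cases; anything
   //   shorter is a runt and must never occur.
   covergroup packet_len_cg with function sample(int unsigned len);
      coverpoint len {
         bins min_len      = {64};
         bins typical[4]   = {[65:1517]};
         bins max_len      = {1518};
         illegal_bins runt = {[0:63]};
      }
   endgroup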

Adding this extra level of detail to your verification plan takes time. But without the detail, how can you easily share with your colleagues what you plan to do? And how can you know which features your testbench needs to support in order to meet your coverage goals?

As always, questions and comments welcome.

Posted in Coverage, Metrics, Verification Planning & Management | 2 Comments »

Reusing Your Block Level Testbench

Posted by JL Gray on 12th November 2010

JL Gray, Vice President, Verilab, Austin, Texas, and Author of Cool Verification

Building reusable testbench components is a desirable, but not always achievable goal on most verification projects. Engineers love the idea of not having to rewrite the same code over and over again, and managers love the idea that they can get more work out of their existing team by simply reusing code within a project and/or between projects. It is interesting, then, that I frequently encounter code written by engineers that cannot work in a reusable way.

Consider the following scenario. You’ve just coded up a new block level testbench to verify an OCP to AHB bridge. As any good engineer knows, your environment must be self-checking, so you create a VMM scenario that does the following:

class my_dma_scenario extends vmm_scenario;

  rand ocp_trans ocp;
  ahb_trans ahb;

  // ... 

  virtual task execute(ref int n);
    vmm_channel ocp_chan = this.get_channel("OCP");
    vmm_channel ahb_chan = this.get_channel("AHB");
    this.ocp.randomize();
    ocp_chan.put(this.ocp.copy());

    // Wait for the transaction to complete...

    ahb_chan.get(ahb);

    // Compare actual and expected values
    // in the stimulus instead of the scoreboard.
    my_compare_function(ocp, ahb);

  endtask

endclass

Or, perhaps you decide to get fancy and instead add the expected transaction from the scenario directly to a scoreboard, which compares the results with transactions on the AHB interface as observed by a monitor. Either way, you go merrily about your business verifying the bridge. You get excellent coverage numbers, and eventually, with the help of the block’s designer, you feel you’ve fully verified the module at the block level. However, things are not going well in the full chip environment. Full chip tests are failing in unexpected ways, and other engineers debugging the issue feel it must be a bug in the bridge! They start assigning bugs to you and the block designer. If only you had some checkers operating at the full chip level you could prove your module was operating correctly…

Unfortunately, you have a problem. In your block level testbench, your checkers only work in the presence of stimulus, and in the full chip environment the stimulus components cannot be used. And at this stage of the project, there is no time to go back and create the additional infrastructure (monitor(s) and possibly a new scoreboard) required to run your self-checking module-level testbench at the full chip level.

Fortunately, there is another way to build a module level testbench so that it will be guaranteed to work at the full chip. Follow these simple rules:

  1. Always create a monitor, driver, and scenario generator for every testbench component.
  2. Never populate a scoreboard from a driver or scenario. Always pull in this information from a passive monitor (see the sketch after this list).
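A minimal sketch of rule 2 in a VMM environment is shown below; all of the class names and channel hookups here are hypothetical. The point is simply that the scoreboard is fed only by passive monitors and therefore keeps working when the block-level stimulus is absent:

   class bridge_env extends vmm_env;
      ocp_monitor       ocp_mon;  // passive, watches the OCP interface
      ahb_monitor       ahb_mon;  // passive, watches the AHB interface
      bridge_scoreboard sb;

      virtual function void build();
         super.build();
         sb      = new();
         ocp_mon = new("ocp_mon");
         ahb_mon = new("ahb_mon");
         // The monitors publish everything they observe into channels that only
         // the scoreboard consumes; drivers and scenarios never touch it.
         ocp_mon.out_chan = sb.exp_chan;
         ahb_mon.out_chan = sb.act_chan;
      endfunction
   endclass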

In general, when writing your testbench components always assume they will need to run in the absence of testbench stimulus. Yes – this means additional up-front work. Writing checkers that work with passive monitors instead of the information you have available to you in the driver can be time consuming. Often, though, the amount of additional work required is much less than the time you will spend debugging issues without checkers at the full chip level. That being said, sometimes it is too difficult to implement purely passive checkers. You will have a good idea of whether or not this is the case based on the upfront work you did writing a comprehensive testplan. You do have a testplan, right?

Posted in Reuse, VMM, Verification Planning & Management | 3 Comments »

The Blind Leading the Deaf

Posted by Andrew Piziali on 28th September 2010

The Wall of Separation Between Design and Verification

by Andrew Piziali, Independent Consultant

I remember a time in my zealous past, leading a large microprocessor verification team, when one of my junior engineers related how they had forcefully resisted examining the floating point unit RTL, explaining to the designer that they did not want to become tainted by the design! The engineer insisted on a dialog with the designer rather than reviewing the RTL. Their position was mine: we must maintain some semblance of separation between the design and verification engineers.

There has been an age old debate between whether or not there ought to be a “wall” between the design and verification teams. “Wall” in this context refers to an information barrier between the teams that minimizes the verification engineer’s familiarity with the details of the implementation (but not the specification!) of the design component being verified. Similarly, the wall minimizes the designer’s familiarity with the verification environment for their implementation.

Re-Convergence Model

The intent of the wall is to allow two pairs of eyes (and ears!)—those of the designer and those of the verification engineer—to independently interpret the specification for a common component and then compare their results. The hypothesis is that if they reach the same conclusion, they are likely to have correctly interpreted the specification. If they do not, one or both are in error. This process is an example of the re-convergence model[1], where a design transformation is verified by performing a second parallel transformation and then comparing the two results. What are the pros and cons of the wall?

The argument in favor of the wall depends upon what we might call original impressions, the fresh insight provided by a person unfamiliar with a concept upon initial exposure. In this context, the verification engineer reading the specification will acquire an understanding of the design intent, independent of the designer, but only if study of its implementation is postponed. Why? Because nearly any implementation will be a plausible interpretation of the specification. The objective is to acquire two independent interpretations for comparison. Hence, influencing a second understanding with an initial implementation would defeat the purpose. What is the opposite position?

The argument against the wall is that a verification engineer and designer, working closely together, are more likely to gain a more precise understanding of the specification than either one working alone. The interactive exploration of possible specification interpretations, each implementing their understanding—the designer the RTL and software, the verification engineer the properties, stimulus, checking and coverage aspects of their verification environment—is argued to lead to convergence more quickly than each party working alone. Well, what should it be? Should verification engineers and designers scrupulously avoid one another, should they collaborate or should they find some intermediate interaction?

Pondering the answer brings to mind the metaphor of the blind leading the deaf, where each of two parties is crippled in a different way such that neither is able to grasp the whole picture. Nevertheless, working together they are able to progress further than working alone. Are the verification engineer and designer the blind leading the deaf? Before I weigh in with my opinion, I’d like to read yours. Type in the “Leave a Reply” box below to respond. Thanks!

—————————
[1] Writing Testbenches Using SystemVerilog, Janick Bergeron, 2006, Springer Science+Business Media, Inc.

Posted in Coverage, Metrics, Verification Planning & Management | 4 Comments »

Vaporware, Slideware or Software?

Posted by Andrew Piziali on 27th August 2010

The Role of the Technical Marketing Engineer in Verification

by Andrew Piziali, Independent Consultant

In our previous blog posts on the subject of verification for designers we addressed the role of the architect, software engineer and system level designer. We now turn our attention to perhaps the least understood—and oftentimes most vilified—member of the design team, the technical marketing engineer. But, before we explain why, what is the role of the technical marketing engineer? After all, not all companies have such a position.

The technical marketing engineer is responsible for determining customer product requirements and ensuring that these requirements are satisfied in the delivered product. They typically build product prototypes—initially slideware and later rudimentary functional code—that are evaluated by the customer while refining requirements. With each iteration the customer and engineer come closer to understanding precisely what the product must do and the constraints under which it must operate. This role differs from other traditional marketing functions such as inbound and outbound communication. Given that such a position exists, how does the technical marketing engineer contribute to the functional verification of a design?

Functional Verification Process

Although there are many definitions of functional verification, my favorite is one I recorded at Convex Computer Corporation some twenty years ago: “Demonstrate that the intent of a design is preserved in its implementation.” It is short, colloquial and simple:

  • Demonstrate — Illustrate but not necessarily prove
  • Intent — What behaviors are desired?
  • Design — The various representations of the product before it is ultimately realized and shipped
  • Preserve — Prevent corruption, scrambling and omission
  • Implementation — Final realization of the product for the customer

The technical marketing engineer plays a key role in this process as demonstrated by one personal experience of mine in this role.

One company I worked for offered a product that assisted in producing clean, efficient, bug-free code through pre-compilation analysis. In this environment you could navigate through your code within its module (file) structure, as well as within its object structure. However, the programming language supported not only objects—language elements that are defined, inherited and instantiated—but also aspects—language elements that group object extensions into common concerns. I proposed adding an aspect browser to the product as a natural extension to its existing navigation facilities. The challenge was inferring the aspect structure of a program from its files and object structure because an aspect is not explicitly identified as such in the program.

Slide Presentation Model Use

I put together a slide presentation that illustrated the two existing navigation paradigms, as well as the proposed aspect navigation. The feature looked great from a user interface perspective, but could it be implemented? Since heuristics could be employed in identifying each aspect, I also illustrated each heuristic and its application by way of sample code and its structure. This animated slide presentation served as the first prototype demonstration of the aspect browser for the product. When reviewed with existing users, they were able to provide valuable feedback about the new feature and its utility and limitations. When subsequently referenced by the programmer implementing the feature, it served as a rough executable specification.

Returning to the vilified technical marketing engineer, why are some poor souls subject to this criticism? More often than not, the marketing engineer promised more capability, performance or features than could be delivered by the developers. It is easy to “Powerpoint” a feature that cannot be implemented so the marketing engineer must walk far enough down the implementation path to understand what is feasible. If they do, they will likely avoid this charge and remain a perceived asset by the design team. Moreover, their employer will retain a reputation for delivering quality “product-ware,” not vaporware or slideware!

Posted in Organization, Reuse, Verification Planning & Management | Comments Off

Fantasy Cache: The Role of the System Level Designer in Verification

Posted by Andrew Piziali on 12th July 2010

Andrew Piziali, Independent Consultant

As is usually the case, staying ahead of the appetite of a high performance processor with the memory technology of the day was a major challenge. This processor consumed instructions and data far faster than current PC memory systems could supply. Fortunately, spatial and temporal locality–the tendency for memory accesses to cluster near common addresses and around the same time–were on our side. These could be exploited by a cache that would present a sufficiently fast memory interface to the processor while dealing with the sluggish PC memory system behind the scenes. However, this cache would require a three level, on-chip memory hierarchy that had never been seen before in a microprocessor. Could it be done?


The system level designer responsible for the cache design–let’s call him “Ambrose”–managed to meet the performance requirements, yet with an exceedingly complex cache design. It performed flawlessly as a C model running application address traces, rarely stalling the processor on memory accesses. Yet, when its RTL incarnation was unleashed to the verification engineers, it stumbled … and stumbled badly. Each time a bug was found and fixed, performance took a hit while another bug was soon exposed. Before long we finally had a working cache, but it unfortunately starved the processor. Coupled with the processor not making its clock frequency target and schedule slips, this product never got beyond the prototype stage, after burning through $35 million and 200 man-years of labor. Ouch! What can we learn about the role of the system level designer from this experience?

The system level designer faces the challenge of evaluating all of the product requirements, choosing implementation trade-offs that necessarily arise. The architectural requirements of a processor cache include block size, hit time, miss penalty, access time, transfer time, miss rate and cache size. It shares physical requirements with other blocks such as area, power and cycle time. However, of particular interest to us are its verification requirements, such as simplicity, limited state space and determinism.

The cache must be as simple as possible while meeting all of its other requirements. Simplicity translates into a shorter verification cycle because the specification is less likely to be misinterpreted (fewer bugs), there are fewer boundary conditions to be explored (smaller search space and smaller coverage models), fewer simulation cycles are required for coverage closure and fewer properties need to be proven. A limited state space also leads to a smaller search space and coverage model. Determinism means that from the same initial conditions and given the same cycle-by-cycle input, the response of the cache is always identical from one simulation run to the next. Needless to say, this makes it far easier to isolate a bug than an ephemeral glitch that cannot be reproduced on demand. These all add up to a cost savings in functional verification.

Ambrose, while skilled in processor cache design, was wholly unfamiliar with the design-for-verification requirements we just discussed. The net result was a groundbreaking, novel, high-performance three level cache that could not be implemented.

Posted in Coverage, Metrics, Organization, Verification Planning & Management | 2 Comments »

Automating Coverage Closure

Posted by Janick Bergeron on 5th July 2010

“Coverage Closure” is the process used to reach 100% of your coverage goals. In a directed test methodology, it is simply the process of writing all of the testcases outlined in the verification plan. In a constrained-random methodology, it is the process of adding constraints, defining scenarios or writing directed tests to hit the uncovered areas in your functional and structural coverage model. In the latter case, it is a process that is time-consuming and challenging: you must reverse-engineer the design and verification environment to determine why specific stimulus must be generated to hit those uncovered areas.

Note that your first strategy should be to question the relevance of the uncovered coverage point. A coverage point describes an interesting and unique condition that must be verified. If that condition is already represented by another coverage point, or it is not that interesting (if no one on your team is curious about nor looking forward to analyzing a particular coverage point, then it is probably not that interesting), then get rid of the coverage point rather than trying to cover it.

Something that is challenging and time-consuming is an ideal candidate for automation. In this case, the Holy Grail is the automation of the feedback loop between the coverage metrics and the constraint solver. The challenge in automating that loop is correlating those metrics with the constraints.

For input coverage, this correlation is obvious. For every random variable in a transaction description, there is usually a corresponding coverage point, then cross coverage points for various combinations of random variables. VCS includes automatic input coverage convergence. It automatically generates the coverage model based on the constraints in a transaction descriptor, then will automatically tighten the constraints at run-time to reach 100% coverage in very few runs.

For internal or output coverage, the correlation is a lot more obscure. How can a tool determine how to tweak the constraints to reach a specific line in the RTL code, trigger a specific expression or produce a specific set of values in an output transaction? That is where newly acquired technology from Nusym will help. Their secret sauce traces the effect of random input values on expressions inside the design. From there, it is possible to correlate the constraints and the coverage points. Once this correlation is known, it is a relatively straightforward process to modify the constraints to target uncovered coverage points.

A complementary challenge to coverage closure is identifying coverage points that are unreachable. Formal tools, such as Magellan, can help identify structural coverage points that cannot be reached and thus further trim your coverage space. For functional coverage, that same secret sauce from Nusym can also be used to help identify why existing constraints are preventing certain coverage points from being reached.

Keep in mind that filling the coverage model is not the goal of a verification process: it is to find bugs! Ultimately, a coverage model is no different than a set of directed testcases: it will only measure the conditions you have thought of and consider interesting. The value of constrained-random stimulus is not just in filling those coverage models automatically, but also in creating conditions you did not think of. In a constrained-random methodology, the journey is just as interesting—and valuable—as the destination.

Posted in Coverage, Metrics, Creating tests, Verification Planning & Management | 4 Comments »

Bruce Kenner: Programmer

Posted by Andrew Piziali on 8th April 2010

by Andrew Piziali and Gary Stringham

Once upon a time in a cubicle far, far away there was a brilliant software engineer named Bruce Kenner. He had designed an elegant compiler for a processor with an exposed pipeline, requiring compiler-scheduled instructions for a new instruction set architecture. No, this was not the pedestrian VLIW machine you may be familiar with but a machine having a programmer-specified number of delay slots following each branch instruction. In order to manage resource hazards, Bruce had furnished the compiler with an oracle that had a full pipeline model of the processor. The oracle advised the scheduler about resources and their availability.

Orca Compiler Flow

Despite Bruce’s obvious talent, his business card simply read “Bruce Kenner: Programmer.” Since Bruce was a software engineer who designed and implemented exceedingly complex machinery, we might ask what the role of the software engineer is in the verification process. How did Bruce verify the oracle, scheduler and compiler he was designing? What role in general does the software engineer play in the verification of a modern system? To address these questions and others, I asked embedded software developer Gary Stringham to join me in this discussion. Gary is the founder and president of Gary Stringham & Associates, LLC, specializing in embedded systems development.

The typical SoC today contains a dozen or more processors of various flavors: general purpose, DSP, graphics, audio, encryption, etc. These processors are distinguished from their digital hardware brethren in that they are generally pre-verified cores that faithfully perform whatever tasks the programmer specifies. Hence, the DUV (design under verification) becomes the code written by the programmer rather than the hardware on which it executes. We must demonstrate this code implementation preserves the original intent of the architect. To do so requires the contribution of the software engineer to the verification process. We illustrate the cost of bringing the software engineer into the design cycle too late with this story from Gary’s experience. He writes:

I was having trouble getting my device driver to work with a block in an SoC for a printer so I went to the hardware engineer for help. We studied what my device driver was doing and it seemed fine. Then, we compared it to what the corresponding verification test was doing and both were writing the same values to the same registers. However, when we examined the device driver more closely, we discovered that it was programming the registers in a different order than the test case so that it exposed a problem.

I had my reasons for writing those registers in the way that I did, having to do with the overall architecture of the software. This was based upon the order that the device driver received information during the printing process. The hardware engineer had no rhyme nor reason to write to the registers in the order he did. He did not—nor was he expected to—know the nuances of the software architecture that required the driver to do it the way it did. But, with the order he happened to write those registers in his verification test, he obscured a defect. At this point in the design cycle the SoC was already in silicon so I figured out a work-around in my device driver.

Now, in this particular case, there was no reason to believe there would be order sensitivity of register writes; the driver was setting up some configuration registers before launching the task. But, there was a sensitivity. If I, the software engineer, had been involved months earlier with the design of the verification test suite, I might have suggested to write the registers in the order I needed the device driver to do it and it would have exposed the order-dependent error.

With this in mind, let’s examine the software engineering roles we recommend. The initial role of the software engineer should be collaborating with the system architect to ensure that required hardware resources are available for the algorithms delegated to the software.[1] Parametric specifications and version dependencies should also be examined and verified. Often the software engineer is the only one intimately familiar with the software limitations of legacy blocks reused in the design. Hence, they need to examine these in light of the requirements of the current design. What space/time/performance/power trade-offs become apparent as various hardware components are considered?

Next, the software engineer should review early specifications with an eye toward implementation challenges. What requirements and features lead to code complexities that might jeopardize the implementation or verification schedule? Are obscure use cases addressed? What ambiguities exist in the specification that could lead to different interpretations by the design, verification and software engineers? What hardware resources do software engineers need to debug problems, such as debug hooks and ports?

At the same time, the software engineer should be contributing to the verification planning process[2] during either top-down or bottom-up specification analysis. During the more common top-down analysis, where each designer explains their understanding of the features and their interactions, the software engineer will be asking questions that aid the extraction of design features requiring software implementation. Likewise, the software engineer will be answering questions posed by verification engineers and designers about their understanding of the specification. As the verification plan comes together, the software engineer should be a periodic reviewer.

Finally, during hardware/software integration, the software engineer plays an integral role, having a first-hand understanding (usually!) of their code and its intended behavior. Both the designer and software engineer will reference the specification and verification plan to disambiguate results. Each will learn from the other as they observe their respective components contribute to system features.

Summarizing, the software engineer must be involved early and throughout the design cycle to ensure design intent is preserved, yet properly partitioned between hardware and software, in the final implementation. Make sure software engineering is represented during the specification and verification planning processes, all the way through final system integration to maximize your potential for success. Or, as one of my earliest finger(1) .plan files (you remember those, right?) used to read: “Fully functional first pass silicon.”

——————-
[1] Hardware/Firmware Interface Design: Best Practices for Improving Embedded Systems Development, Stringham, Elsevier, 2010

[2] ESL Design and Verification, Bailey, Martin and Piziali, Elsevier, 2007

Posted in Interoperability, Reuse, Verification Planning & Management | Comments Off

Verification For the Rest of Us

Posted by Andrew Piziali on 29th March 2010

Andrew Piziali, independent consultant
Jim Bondi, DMTS, Texas Instruments

Functional verification engineers—also known as DV engineers—often think quite highly of themselves. Having mastered both hardware and software design, and knowing each new design from top to bottom with an understanding exceeding all but the architects’, they can easily end up with an inflated ego. Yet, responsibility for verification of the design is not theirs alone and sometimes not theirs at all!

In this next series of blog posts I am going to direct your attention to the role various members of a design team play in the verification process. Each post will be co-authored by someone who contributes to a design in the role under discussion. It is not uncommon these days for a small design team to lack any dedicated verification engineers. Hence, the designers become responsible for the functional verification process embedded in, yet operating in parallel to, the design process. What does that overall process look like?[1]

  1. Specification and Modeling
  2. Hardware/Software Partitioning
  3. Pre-Partitioning Analysis
  4. Partitioning
  5. Post-Partitioning Analysis and Debug
  6. Post-Partitioning Verification
  7. Hardware and Software Implementation
  8. Implementation Verification

Specification and modeling is responsible for exploring nascent design spaces and capturing original intent. The difficult choices of how to partition the design implementation between hardware and software components come next, followed by analysis of each partitioning choice and debugging of these high-level models. Our first opportunity for functional verification follows post-partitioning analysis and debug, where abstract algorithm errors are discovered and eliminated. Hardware and software implementation is self-explanatory, lastly leading to implementation verification, answering the question “Has the design intent been preserved in the implementation?”

This kick-off post in this series addresses the role of the architect in verification. My co-author, Jim Bondi, has been a key architect on numerous design projects at Texas Instruments ranging from embedded military systems to Pentium-class x86 processors to ultra low power DSP platforms for medical applications. The architect, whether a single individual or several, is responsible for specifying a solution to customer product requirements that captures the initial design intent of the solution. The resultant specification is iteratively refined during the first three stages of design.

In addition to authoring the original design intent, the second role of the architect in the verification process is preserving that intent and contributing to its precise conveyance throughout the remainder of the design process.[2] This begins during verification planning, where the scope of the verification problem is quantified and its solution specified. Verification planning itself begins with specification analysis, where the features of the design are identified and quantified. The complexity of most designs requires a top-down analysis of the specification—first, because of its size (>20 pages) and second, because behavioral requirements must be distilled. This analysis is performed in a series of brainstorming meetings to which each of the stakeholders of the design contributes: architect, system engineer, verification engineer, software engineer, hardware designer and project manager.

A brainstorming session is guided by someone familiar with the planning process. The architect describes each design feature and—through Q&A—its attributes are illuminated. These attributes and their associated values—registers, data paths, control logic, opcodes—are initially recorded in an Ishikawa diagram (also known as a “fish bone diagram”) for organizational purposes and then transferred to a coverage model design table as they are refined. Ultimately, each coverage model is implemented using a high level verification language (HVL), as part of the verification environment, and used to measure verification progress.
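
To make that last step concrete, here is a minimal SystemVerilog sketch of how two attributes captured in such a planning table might become a covergroup; the attribute names and values (opcode class, addressing mode) are invented for illustration and are not tied to any particular design.

module decode_cov_sketch;
  // Hypothetical attributes extracted during planning: opcode class and
  // addressing mode (names and values invented for this example).
  typedef enum {ALU, LOAD, STORE, BRANCH} opcode_class_e;
  typedef enum {REG, IMM, INDEXED}        addr_mode_e;

  // Each attribute becomes a coverpoint; the cross enumerates the value
  // tuples that serve as the coverage points of the model.
  covergroup decode_feature_cov with function sample (opcode_class_e op,
                                                      addr_mode_e   mode);
    option.per_instance = 1;
    cp_op     : coverpoint op;
    cp_mode   : coverpoint mode;
    op_x_mode : cross cp_op, cp_mode;
  endgroup

  decode_feature_cov dec_cov = new();

  // A monitor would call dec_cov.sample(op, mode) at the correlation time
  // agreed upon in the planning session (e.g., when an instruction decodes).
endmodule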

The seasoned architect knows that, even though modeling is mentioned only in the first design step above, it is most effective not only when started early but also when continued iteratively throughout most of the design process. It is quite true that system modeling should be started early—as soon as possible and ideally before any RTL is written—when modeling can have its biggest impact on the design and offer its biggest return on model investment. In this early stage, modeling can best help tune the nascent architecture to the application, with the biggest possible resulting improvements in system performance and power. When used right, models are developed first and then actually drive the development of RTL in later design steps. This is contrary to the all-too-common tendency to jump prematurely to RTL representations of the design, and then perhaps use modeling mostly thereafter in attempts to help check and improve the RTL. Used in this fashion, the ability of modeling to improve the design is limited. More experienced architects have learned that modeling is best applied “up front” because it is here, before the design is cast in RTL, that up to 75% of the overall possible improvements in system performance and power can be realized. The architect knows that a design process that jumps prematurely to RTL leaves much of this potential performance and power improvement on the table.

The seasoned architect also knows that, even though started early, modeling should be continued iteratively throughout most of the remainder of the design process. They know that, in fact, a set of models is needed to best support the design process. The first is typically an untimed functional model that becomes the design’s “golden” reference model, effectively an executable specification. As the design process continues, other models are derived from it, with, for example, timing added to derive performance models and power estimates added to derive power-aware models. In later stages, after modeling has been used “up front” to tune the architecture, optimal RTL can actually be derived from the models. Wherever verification is applied in the design process, whether before or after RTL appears, the models, as a natural form of executable golden reference, can support, or even drive, the verification process. Thus, in design flows that use modeling best, system modeling begins up front and is continued iteratively throughout most of the overall design process.

Indeed, the architect plays a crucial role in the overall design process and in the functional verification of the design derived from that process. They are heavily involved in all design phases affecting and involving verification, from authoring the initial design intent to ensuring its preservation throughout the rest of the design process.  The seasoned architect leverages a special set of system models to help perform this crucial role. Despite the verification engineer’s well-deserved reputation as a jack-of-all-trades, they cannot verify the design alone and may not even be represented in a small design team.  The architect is the “intent glue” that holds the design together until it is complete!

——————-
[1] ESL Design and Verification, Bailey, Martin and Piziali, Elsevier, 2007
[2] Functional Verification Coverage Measurement and Analysis, Piziali, Springer, 2004

Posted in Modeling, Organization, Verification Planning & Management | Comments Off

Moving from Block Level Tests to Chip Level in a Reusable World…

Posted by Srivatsa Vasudevan on 29th January 2010

In modern projects, one observes a fairly high degree of reuse in both designs and verification environments. Cores and blocks are regularly reused from one design to another, and increasing demands are being placed on getting a functional chip out of the door quickly to meet a customer need.

In such a world, schedule and functional chips are key to success.

At the block level, the design and verification engineers use a white box testing methodology to write tests that thoroughly exercise the block under test. In these environments, the other blocks that interact with the design under test are usually stubbed out or replaced with transactors, as the case may be.

In an ideal world, the tests written once for block level verification would be reused in their entirety at the chip level, giving 100% reuse. In practice, that rarely happens.

Let’s look at the illustration below, where a verification engineer who is busy verifying cores at the block level comes in to talk to the verification lead of the chip he is delivering to.

[Illustration: a conversation between the block level verification engineer and the chip level verification lead]

It is obvious that the requirements and goals for tests at the chip level are quite different. Most chip leads would first like to ensure that the core in question is properly integrated at the chip level and does indeed talk to the other cores. The core is ideally a black box that performs the functionality expected of it. The perspective on design verification maintained by the chip level verification engineer is usually vastly different from the block level perspective.

Interconnect, clocking, reset, power-up/down, programming sequences, power domains, timing and the like are some examples of things that come to mind when writing tests at the chip level. Once these major goals are met, one looks at other optimizations while attempting to verify the device.

That said, how does one now take the tests that were written for the block and reuse them at the top level, especially if the tests are intended for different purposes? Is it even possible to minimize the effort in doing so? Don’t we need a mechanism to ensure that things that weren’t completely exercised at the block/subsystem level are tested at the chip level? What do we need for the handoff? How do the various verification class libraries play a part in this picture?

This series explores how this is all done. A proper top-down methodology coupled with a bottom-up methodology can yield excellent results with minimal overlap.

Stay Tuned….

Posted in Reuse, Verification Planning & Management | 1 Comment »

Proxy Coverage

Posted by Andrew Piziali on 22nd January 2010

Andrew Piziali, Independent Consultant

I was recently talking with a friend about a subject near and dear to my heart: functional coverage. Our topic was block level or subsystem coverage models that are not ultimately satisfied until paired with corresponding system level context. The question at hand was “Is it possible to use conditional block level coverage as a proxy for system level coverage by recording coverage points at the block level, conditioned upon subsequent observation of system level context?”

For example, let’s say I have an SoC DUV (design under verification) that contains an MPEG-4 AVC (H.264) video encoder with a variety of features enabled by configuration registers. I need to observe the encoder operating in a subset of modes—referred to as “profiles” in H.264 parlance—defined by permutations of the features. In addition, each profile is only stressed when the encoder is used by an application for which the profile was designed. For example, the “high profile” is intended for high definition television while the “scalable baseline profile” was designed for mobile and surveillance use. The profiles may be simulated at the block level but the applications are only simulated at the system level. How do I design, implement and fill this H.264 coverage model so that the model is populated using primarily block level simulations, with their inherent performance advantage, while depending upon system level application context? May I even use the block level coverage results as a proxy for the corresponding system level coverage?

I think this nut is tougher to crack than the black walnuts that recently fell from my tree. The last time I tried to crack those nuts I resorted to vise grips from my tool chest, leading to a number of unintended, but spectacular, results, but I digress. My friend and I were able to define a partial solution but it wasn’t at all clear whether or not a viable solution exists that leverages the speed of block level simulations. After you finish reading this post, I’d like to hear your thoughts on the matter. This is how far we got.

As with the top level design of any coverage model, we started with its semantic description (sometimes called a “story line”):

Record the encoder operating in each of its profiles while in use by the corresponding applications.

Next, we identified the attributes required by the model and their values:

Profile: BP, CBP, Hi10P, Hi422P, Hi444PP, HiP, HiSP, MP, XP
Application: broadcast, disc_storage, interlaced_video, low_cost, mobile, stereo_video, streaming_video, video_conferencing
Feature: 8×8_v_4×4_transform_adaptivity, Arbitrary_slice_ordering, B_slices, CABAC_entropy_coding, Chroma_formats, Data_partitioning, Flexible_macroblock_ordering, Interlaced_coding, Largest_sample_depth, Monochrome, Predictive_lossless_coding, Quantization_scaling_matrices, Redundant_slices, Separate_Cb_and_Cr_QP_control, Separate_color_plane_coding, SI_and_SP_slices

Then, we finished the top level design using a second order model (simplified somewhat for clarity):

[Coverage model design table: rows labeled Attribute, Value, Sampling Time and Correlation Time; the highlighted cells define the coverage point tuples]
The left column of the table serves as row headings: Attribute, Value, Sampling Time and Correlation Time. The remaining cells of each row contain values for the corresponding heading. For example, the “Attribute” names are “Profile,” “Application” and “Feature.” The values of attribute “Profile” are BP, CBP, Hi10P, etc. The time at which each attribute is to be sampled is recorded in the “Sampling Time” row. The time that the most recently sampled attribute values are to be recorded as a set is when the “application [is] started,” the correlation time. Finally, the magenta cells define the value tuples that compose the coverage points of the model.

The model is implemented in SystemVerilog, along with the rest of the verification environment, and we begin running block level simulations. Since no actual H.264 application is run at the block level, we need to invent an expected application value whenever we record a coverage point defined by an attribute value set lacking only an actual application. Why not substitute a proxy application value, to be replaced by an actual application value when a similar system level simulation is run? For example, if in a block level simulation we observe profile BP and feature ASO ({BP, *, ASO}), we could substitute the proxy value “need_low_cost” for the application value “low_cost”, recording the tuple {BP, need_low_cost, ASO}. This becomes a tentative proxy coverage point. Later on, when we are running a system level simulation with an H.264 application, whenever we observe a {BP, low_cost, ASO} event, we would substitute the much larger set of {BP, need_low_cost, ASO} block level events for this single system level event, replacing “need_low_cost” with “low_cost.” This would allow us to “observe” the larger set of system level {BP, low_cost, ASO} events from the higher performance block level simulations. How can we justify this substitution?
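
To make the mechanics concrete, here is a minimal SystemVerilog sketch of how such proxy recording might be coded. This is not the environment under discussion: the enumerations abbreviate the table above, and the need_* proxy values and the sample calls are illustrative assumptions.

module h264_proxy_cov_sketch;
  // Enumerations mirror the attribute table above; the feature list is
  // abbreviated and the need_* proxy values are an illustrative assumption.
  typedef enum {BP, CBP, Hi10P, Hi422P, Hi444PP, HiP, HiSP, MP, XP} profile_e;
  typedef enum {broadcast, disc_storage, interlaced_video, low_cost, mobile,
                stereo_video, streaming_video, video_conferencing,
                need_low_cost, need_mobile  // proxy values used at block level
               } application_e;
  typedef enum {ASO, CABAC_entropy_coding, Interlaced_coding,
                Redundant_slices} feature_e;

  covergroup h264_profile_cov with function sample (profile_e p,
                                                    application_e a,
                                                    feature_e f);
    cp_profile : coverpoint p;
    cp_app     : coverpoint a;
    cp_feature : coverpoint f;
    profile_x_app_x_feature : cross cp_profile, cp_app, cp_feature;
  endgroup

  h264_profile_cov h264_cov = new();

  initial begin
    // Block level: no real application runs, so record a proxy tuple.
    h264_cov.sample(BP, need_low_cost, ASO);
    // System level: an observed {BP, low_cost, ASO} event would later be
    // used to promote the block level {BP, need_low_cost, ASO} points.
  end
endmodule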

We could argue that a particular system level coverage point is an element of a set of block level coverage points because (1) it shares the same subset of attribute values with the system level coverage point and (2) the block level simulation is, in a sense, a superset of the system level simulation because it implicitly abstracts away the additional detail available in the system level simulation. There is little argument with the first reason: the profile value BP and the feature value ASO mean the same thing in the two environments. The second reason, however, is clearly open to discussion.

This brings us back to the opening question, can we use conditional block level coverage as a proxy for system level coverage by recording coverage points at the block level, conditioned upon subsequent observation of system level context? If so, is this design approach feasible and reasonable? If not, why not? Have at it!

Posted in Coverage, Metrics, Verification Planning & Management | 3 Comments »

Paved With Good Intentions: Examining Lost Design Intent

Posted by Adiel Khan on 7th December 2009

Adiel Khan, Synopsys CAE

Andrew Piziali, Independent Consultant

Remember the kick-off of your last new project, when the road to tape-out was paved with good intentions? Architects were brainstorming novel solutions to customer requirements? Designers were kicking around better implementation solutions? Software engineers were building fresh new SCM repositories? And you, the verification engineer, were excitedly studying the new design and planning its verification? Throughout all of this early excitement, all sorts of good intentions were revealed. Addressing the life story of each intention would make a good short story, or even a novel! Since we don’t have room for that, let’s just focus on the design intentions.

Design intent, how the architect intended the DUV (design under verification) to behave, originates in the mind’s eye of the architect. It is the planned behavior of the final implementation of the DUV. Between original intent and implementation the DUV progresses through a number of representations, typically referred to as models, wherein intent is unfortunately lost. However, intent’s first physical representation, following conception, is its natural language specification.

We may represent the space of design behaviors as the following Venn diagram:1

Each circle—Design Intent (AEGH), Specification (BEFH) and Implementation (CFGH)—represents a set of behaviors. AEGH represents the set of design requirements, as conveyed by the customer. BEFH represents the intent captured in the specification(s). CFGH represents intent implemented in the final design. The region outside the three sets (D) represents unintended, unspecified and unimplemented behavior. The design team’s objective is to bring the three circular sets into coincidence, leaving just two regions: H (intended, specified and implemented) and D. By following a single design intention from set Design Intent to Specification to Implementation, we learn a great deal about how design intent is lost.

An idea originally conceived appears in set AEGH (Design Intent) and, if successfully captured in the specification, is recorded in set EH. However, if the intent is miscommunicated or never recorded, it is lost and remains in set A. There is the possibility that a designer learns of this intent, even though it is not recorded in the specification, and recaptures it in the design. In that case we find it in set G: intended, implemented, but unspecified.

Specified intent is recorded in set BEFH. Once the intent is captured in the specification it must be read, comprehended and implemented by the designer. If successful, it makes it to the center of our diagram, set H: intended, specified and implemented. Success! Unfortunately some specified requirements are missed or misinterpreted and remain unimplemented, absent from the design as represented by set E: intended, specified but unimplemented. Sometimes intent is introduced into the specification that was never desired by the customer and (fortunately) never implemented, such as set B: unintended, unimplemented, yet specified. Unfortunately, there is also unintended behavior that is specified and implemented as in set F. This is often the result of gratuitous embellishment or feature creep.

Finally, implemented intent is represented by set CFGH, all behaviors exhibited by the design. Those in set G arrived as originally intended but were never specified. Those in set H arrived as intended and specified. Those in set F were introduced into the specification, although unintended, and implemented. Behaviors in set C were implemented, although never intended nor specified! In order to illustrate the utility of this diagram, let’s consider a specific example of lost design intent.

We can think of each part of the development process as building a model. Teams write documentation as a specification model, such as a Microsoft Word document. System architects build an abstract algorithmic system model that captures the specification model requirements, using SystemC or C++. Designers build a synthesizable RTL model in Verilog or VHDL. Verification engineers build an abstract problem space functional model in SystemVerilog, SVA and/or e.

If any member of the team fails to implement an element of the upstream, more abstract model correctly (or at all), design intent is lost. The verification engineer can recover this lost design intent by working with all members of the team and giving the team observability into all models.

Consider an example where the system model (ex. C++) uses a finite state machine (FSM) to control the data path of the CPU whereas the specification model (ex. MS Word) implies how the data path should be controlled. This could be a specification ambiguity that the designer ignores, implementing the data path controller in an alternate manner, which he considers quite efficient.

Some time later the system architect may tell the software engineers that they do not need to implement exclusive locking because the data path FSM will handle concurrent writes to same address (WRITE0, WRITE1). However, the designer’s implementation is not based on the system model FSM but rather the specification model. Therefore, exclusive locking is required to prevent data corruption during concurrent writes. We need to ask: How can the verification engineer recover this lost design intent by observing all models?

Synopsys illustrates a complete solution to the problem in a free verification planning seminar that dives deep into this topic. However, for the purposes of this blog we offer a simplified example, using the design and implementation of a coverage model:

  1. Analyze the specification model along with the system model
  2. Identify the particular feature (ex. mutex FSM) and write its semantic description
  3. Determine what attributes contribute to the feature behavior
  4. Identify the attribute values required for the feature
  5. Determine when the feature is active and when the attributes need to be sampled and correlated

This top-level design leads to:

Feature: CPU_datapath_Ctrl
Description: Record the state transitions of the CPU data path controller
Attribute: Data path controller state variable
Attribute Values: IDLE, START, WRITE0, WRITE1
Sample: Whenever the state variable is written

The verification engineer can now implement a very simple coverage model to explicitly observe the system model, ensuring entry to all states:

typedef enum logic [1:0] {IDLE, START, WRITE0, WRITE1} st;

covergroup cntlr_cov (string m_name) with function sample (st m_state);
  option.per_instance = 1;
  option.name = m_name;

  // Observe the data path controller state transitions; any transition not
  // listed below falls into the bad_trans bin and flags unintended behavior.
  model_state: coverpoint m_state {
    bins t0 = (IDLE   => IDLE);
    bins t1 = (IDLE   => START);
    bins t2 = (START  => IDLE);
    bins t3 = (START  => WRITE0);
    bins t4 = (WRITE0 => WRITE1);
    bins t5 = (WRITE1 => IDLE);
    bins bad_trans = default sequence;
  }
endgroup

[Verification plan view: the CPU_datapath_Ctrl feature linked to the cntlr_cov covergroup]

The verification engineer can link the feature “CPU_datapath_Ctrl” in his verification plan to the cntlr_cov covergroup. Running the system model with the verification environment and RTL implementation will reveal that bin “t4” is never visited, hence the state transition WRITE0 to WRITE1 is never observed. The team can review the verification plan to determine whether the FSM controller implemented in the design should be improved to conform to the full design intent.

Although there are many other subsets of the design intent diagram we could examine, it is clear that a design intention may be lost through many recording and translation processes. By understanding this diagram and its application, we become aware of where intent may be lost or corrupted and ensure that our good intentions are ultimately realized.


1The design intent diagram is more fully examined in the context of a coverage-driven verification flow in chapter two of the book Functional Verification Coverage Measurement and Analysis (Piziali, 2004, Springer, ISBN 978-0-387-73992-2).

Posted in Coverage, Metrics, Organization, Verification Planning & Management | Comments Off

Say What? Another Look At Specification Analysis

Posted by Shankar Hemmady on 26th October 2009

Andrew Piziali, Independent Consultant

Have you ever been reviewing a specification and asked yourself “Say what?!” Then you’re not alone! One of the most challenging tasks we face as verification engineers is understanding design specifications. What does the architect mean when she writes “The conflabulator remains inoperative until triggered by a neural vortex?” Answering that question is part of specification analysis, the first step in planning the verification of a design, the subsequent steps being coverage model design, verification environment implementation, and verification process execution.

The specifications for a design—DUV, or “design-under-verification” for our purposes—typically include a functional specification and a design specification. The functional specification captures top level, opaque box, and implementation-independent requirements. Conversely, the design specification captures internal, clear box, implementation-dependent behaviors. Each is responsible for conveying the architect’s design intent at a particular abstraction level to the design and verification teams. Our job is to ultimately comprehend these specifications in order to understand and quantify the scope of the verification problem and specify its solution. This comprehension comes through analyzing the specifications.

In order to understand the scope of the verification problem, the features of the DUV and their relationships must be identified. Hence, specification analysis is sometimes referred to as feature extraction. The features are described in the specifications, ready to be mined through our analysis efforts. Once extracted and organized in the verification plan, we are able to proceed to quantifying the scope and complexity of each by designing its associated coverage model. How do we tackle the analysis of specifications ranging from tens to hundreds of pages? The answer depends upon the size of the specification and availability of machine-guided analysis tools. For relatively small specifications, less than a hundred pages or so, bottom-up analysis ought to be employed. Specifications ranging from a hundred pages and beyond require top-down analysis.

Bottom-up analysis is the process of walking through each page of a specification: section-by-section, paragraph-by-paragraph, and sentence-by-sentence. As we examine the text, tables and figures, we ask ourselves what particular function of the DUV is addressed? What behavioral requirements are imposed? What verification requirements are implied? Is this feature amenable to formal verification, constrained random, or a hybrid technology? If formal is applicable, how might I formulate a declarative statement of the required behavior? What input, output and I/O coverage is needed? If this feature is more amenable to constrained random simulation, what are the stimulus, checking and coverage requirements?

Each behavioral requirement is a feature to be placed in the verification plan, in either the functional or design requirements sections, as illustrated below:

1 Introduction … what does this document contain?
2 Functional Requirements … opaque box design behaviors
  2.1 Functional Interfaces … external interface behaviors
  2.2 Core Features … external design-independent behaviors
3 Design Requirements … clear box design behaviors
  3.1 Design Interfaces … internal interface behaviors
  3.2 Design Cores … internal block requirements
4 Verification Views … time-based or functional feature groups
5 Verification Environment Design … functional specification of the verification environment
  5.1 Coverage … coverage aspect functional specification
  5.2 Checkers … checking aspect functional specification
  5.3 Stimuli … stimulus aspect functional specification
  5.4 Monitors … data monitor functional specifications
  5.5 Properties … property functional specifications

Bottom-up analysis is amenable to machine-guided analysis, wherein an application presents the specification to the user. For each section of the spec, perhaps even for each sentence, the tool asks whether it describes a feature and what its property, stimulus, checking and coverage requirements are, and records this information so that it may be linked to the corresponding section of the verification plan. This facilitates keeping the specifications and the verification plan synchronized. The verification plan is incrementally constructed within a verification plan integrated development environment (IDE).

The alternative to bottom-up analysis is analyzing a specification from the top down, required for large specifications. Your objective here is to bridge the intent abstraction gap between the detail of the specification and the more abstract, incrementally written verification plan. Behavioral requirements are distilled into concise feature descriptions, quantified in their associated coverage models. Top-down analysis is conducted in brainstorming sessions wherein representatives from all stakeholders in the DUV contribute. These include the systems engineer, verification manager, verification engineer, hardware designer and software engineer. After the verification planning methodology is explained to all participants, each engineer contributing design intent explains their part of the design. The design is explored through a question-and-answer process, using a whiteboard for illustration. In order to facilitate a fresh examination of the design component, no pre-written materials should be used.

Whether bottom-up or top-down analysis is used, each design feature should be a design behavioral requirement, stating the intended behavior of the DUV. Both the data and temporal behaviors of each feature should be recorded. In addition to recording the name of each feature, its behavior should be summarized in a semantic description of a sentence or two. Optionally, design and verification responsibilities, technical references, schedule information and verification labor estimates may be recorded. If the verification plan is written in Microsoft Word, Excel or in HVP1 plain text, it may drive the downstream verification flow, serving as a design-specific verification user interface.

The next time you ask “Say what?!,” make sure you are methodically analyzing the specification using one of the above approaches, and don’t hesitate to contact the specification’s author directly. Many bugs discovered during these exchanges are the least expensive of all!

1Hierarchical Verification Planning language

Posted in Coverage, Metrics, Organization, Verification Planning & Management | Comments Off

Make Your Coverage Count!

Posted by Shankar Hemmady on 31st August 2009

Andrew Piziali, Independent Consultant

You are using coverage, along with other metrics, to measure verification progress as part of your verification methodology.1 2 Yet, lurking in the flow are the seeds of a bug escape that will blindside you. How so?

Imagine you are responsible for verifying an in-order, three-way superscalar x86 processor in the last millennium, before the advent of constrained random generation. Since your management wouldn’t spring for an instruction set architecture (ISA) test generator, you hired a team of new college grads to write thousands of assembly language tests. Within the allocated development time, the tests were written, they were functionally graded and achieved 100% coverage, and they all finally passed. Yeah! But, not so fast …

When first silicon was returned and Windows was booted on the processor, it crashed. The diagnosis revealed a variant of one of the branch instructions was misbehaving. (This sounds better than “It had a bug escape.”) How could this be? We reviewed our branch instruction coverage models and confirmed they were complete. Since all of the branch tests passed, how could this bug slip through?

Further analysis revealed this branch instruction was absent from the set of branch tests yet used in one of the floating point tests. Since the floating point test was aimed at verifying the floating point operation of the processor, we were not surprised to find it was insensitive to a failure of this branch instruction. In other words, as long as the floating point operations verified by the test behaved properly, the test passed, independent of the behavior of the branch instruction. From a coverage aspect, the complete ISA test suite was functionally graded rather than each sub-suite graded according to its functional requirements. Hence, we recorded full coverage.

The problem was now clear: the checking and coverage aspects of each test were not coupled; coverage recording was not conditioned on the checked behavior passing. If we had either (1) functionally graded each test suite only for the functionality it was verifying or (2) conditionally recorded each coverage point based upon a corresponding check passing, this bug would not have slipped through. Using either approach, we would have discovered that this particular branch variant was absent from the branch test suite. In the first case, that coverage point would have remained empty for the branch test suite. Likewise, in the second case we would not have recorded the coverage point because no branch instruction check would have been activated and passed.

Returning to the 21st century, the lesson we can take away from this experience is that coverage—functional, code and assertion—is suspect unless, during analysis, you confirm that for each coverage point a corresponding checker was active and passed. From the perspective of implementing your constrained random verification environment, each checker should emit an event (or some other notification), synchronous with the coverage recording operation, indicating it was active and the functional behavior was correct. The coverage code should condition recording each coverage point on that event. If you are using a tool like VMM Planner to analyze coverage, you may use its “-feature” switch to restrict the annotation of feature-specific parts of your verification plan to the coverage database(s) of that feature’s test suite.
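
A minimal sketch of this coupling follows; the names used (the branch_check_passed event, the check_branch task, the branch opcode enum) are invented for illustration and not part of any library. The covergroup samples only when the checker fires its pass event, so an unchecked or failing branch can never contribute coverage.

module check_then_cover_sketch;
  typedef enum {JE, JNE, JG, JL, JMP} branch_op_e;

  event       branch_check_passed;  // fired by the checker on a correct result
  branch_op_e observed_op;

  // The covergroup samples only on the pass event, so coverage can never be
  // recorded for a branch that was not checked or that failed its check.
  covergroup branch_cov @(branch_check_passed);
    coverpoint observed_op;
  endgroup
  branch_cov bcov = new();

  // Checker: compare the DUT outcome against a reference model outcome and
  // fire the pass event only when they agree.
  task automatic check_branch(branch_op_e op, bit dut_taken, bit ref_taken);
    observed_op = op;
    if (dut_taken !== ref_taken)
      $error("branch %s misbehaved", op.name());
    else
      ->branch_check_passed;
  endtask
endmodule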

You might ask if functional qualification3 would address this problem. Functional qualification answers the question “Will my verification environment detect, propagate and report each functional bug?” As such, it provides insight into how well your environment detects bugs but says nothing about the quality of the coverage aspect of the environment. I will address this topic in a future post if there is sufficient interest.

Remember, make your coverage count by coupling checking with coverage!

1Metric Driven Verification, 2008, Hamilton Carter and Shankar Hemmady, Springer

2Functional Verification Coverage Measurement and Analysis, 2008, Andrew Piziali, Springer

3“Functional Qualification,” “EDA Design Line,” June 2007, Mark Hampton

Posted in Coverage, Metrics, Organization, Verification Planning & Management | Comments Off

Give Me Some Space, Man!

Posted by Shankar Hemmady on 11th August 2009

Andrew Piziali, Independent Consultant

A question I am often asked is “When and where should I use functional coverage and code coverage?” Since the purpose of coverage is to quantify verification progress, the answer lies in understanding the coverage spaces implemented by these two kinds of coverage.

A coverage space represents a subset of the behavior of your DUV (design under verification), usually of a particular feature. It is defined by a set of metrics, each a parameter or attribute of the feature quantified by the space. For example, the coverage space for the ADD instruction of a processor may be defined by the cross of ranges of the operands’ absolute values (remember “addends”?) and their respective signs. In order to understand the four kinds of coverage metrics, we need to discuss the coverage spaces from which they are constructed.
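
As an illustration, a minimal SystemVerilog sketch of such an ADD coverage space might cross each operand’s sign with a few magnitude ranges; the bin boundaries are invented for the example.

module add_space_sketch;
  // Each operand contributes its sign and a magnitude range; the cross of
  // these four coverpoints defines the ADD coverage space.
  covergroup add_cov with function sample (int a, int b);
    cp_a_sign : coverpoint (a < 0) { bins non_neg = {0}; bins neg = {1}; }
    cp_b_sign : coverpoint (b < 0) { bins non_neg = {0}; bins neg = {1}; }
    cp_a_mag  : coverpoint a {
      bins small  = {[-255:255]};
      bins medium = {[-65535:-256], [256:65535]};
      bins large  = {[$:-65536], [65536:$]};
    }
    cp_b_mag  : coverpoint b {
      bins small  = {[-255:255]};
      bins medium = {[-65535:-256], [256:65535]};
      bins large  = {[$:-65536], [65536:$]};
    }
    add_space : cross cp_a_sign, cp_b_sign, cp_a_mag, cp_b_mag;
  endgroup
  add_cov acov = new();
endmodule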

A coverage metric is determined by its source—implementation or specification—and its author—explicit or implicit. An implementation metric is derived from the implementation of the DUV or verification environment. Hence, the width of a data bus is an implementation metric, as is the module defining a class. Conversely, a specification metric is derived from the DUV functional or design specification. A good example is the registers and their characteristics defined in a specification.

The complementary coverage metric classification is determined by whether the metric is explicitly chosen by an engineer or implicit in the metric source. Hence, an explicit metric is chosen or invented by the verification engineer in order to quantify some aspect of a DUV feature. For example, processor execution mode might be chosen for a coverage metric. Alternatively, an implicit metric is inherent in the source from which the metric value is recorded. This means things like module name, line number and Boolean expression term are implicit metrics from a DUV or verification environment implementation. Likewise, chapter, paragraph, line, table and figure are implicit metrics from a natural language document, such as a specification.

Combining the two metric kinds—source and author—leads to four kinds of coverage metrics, each defining a corresponding kind of coverage space:

  1. Implicit implementation metric
  2. Implicit specification metric
  3. Explicit implementation metric
  4. Explicit specification metric

An example of an implicit implementation metric is a VHDL statement number. The register types and numbers defined by a functional specification are an implicit specification metric. Instruction decode interval is an explicit implementation metric. Finally, key pressed-to-character displayed latency is an example of an explicit specification coverage metric.
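
Taking the last of those, here is a hedged sketch of how the key pressed-to-character displayed latency metric could be captured as a functional coverage model; the millisecond buckets are invented for illustration.

module latency_cov_sketch;
  // The explicit specification metric "key pressed-to-character displayed
  // latency" captured as functional coverage; bucket boundaries are assumed.
  covergroup key_latency_cov with function sample (int unsigned latency_ms);
    cp_latency : coverpoint latency_ms {
      bins fast     = {[0:10]};
      bins typical  = {[11:50]};
      bins slow     = {[51:200]};
      bins too_slow = {[201:$]};
    }
  endgroup
  key_latency_cov lat_cov = new();
endmodule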

Each metric kind may be used to define an associated kind of coverage space. The astute reader may also wonder about coverage spaces defined by a mix of the above metric kinds. If such a hybrid space more precisely quantifies the verification progress of a particular feature, use it! To the best of my knowledge, you’d have to design and implement this space in much the same way as any functional coverage space because no commercial tool I am aware of offers this kind of coverage.

With an understanding of the kinds of coverage spaces, we can now classify functional and code coverage and figure out where each ought to be used. Functional coverage, making use of explicit coverage metrics—independent of their source—defines either an explicit implementation space or an explicit specification space. Code coverage tools provide a plethora of built-in implicit metric choices; hence, code coverage defines implicit implementation spaces. Where you want to measure verification progress relative to the DUV functional specification, where features are categorized and defined, functional coverage is the appropriate tool. Where you want to make sure all implemented features of the DUV have been exercised, you should use code coverage. Lastly, when your code coverage tool does not provide sufficient insight, resolution or fidelity into the behavior of the DUV implementation, functional coverage is required to complement the implicit spaces it does offer.

Functional coverage can tell you the DUV is incomplete, missing logic required to implement a feature or a particular corner case, whereas code coverage cannot. On the other hand, code coverage can easily identify unexercised RTL, while functional coverage cannot. Functional coverage requires a substantial up-front investment for specification analysis, design and implementation yet relieves the engineer of much back-end analysis. Code coverage, on the other hand, may be enabled at the flip of a switch but usually requires a lot of back-end analysis to sift the false positives from the meaningful coverage holes. Both are required—and complementary—but their deployment must be aligned with the stage of the project and DUV stability.

Some smart alec will point out that you can’t measure verification progress using coverage alone, and you’re right! Throughout this discussion I assume each feature, with its associated metrics, has corresponding checkers that pipe up when the DUV behavior differs from the specified behavior. (I’ll leave the topic of concurrent behavior recording and checking for another day.)

If you’d like to learn much more about designing, implementing, using and analyzing coverage, the following books delve much more deeply into verification planning, management and coverage model design:

Posted in Coverage, Metrics, Organization, Verification Planning & Management | Comments Off
