Verification Martial Arts: A Verification Methodology Blog

Archive for the 'Messaging' Category

Customizing UVM Messages Without Getting a Sunburn

Posted by Brian Hunter on 24th April 2012

The code snippets presented are available below the video.

https://www.youtube.com/watch?v=K7V505WKxuU&feature=youtu.be

my_macros.sv:

   `define my_info(MSG, VERBOSITY) \
      begin \
         if(uvm_report_enabled(VERBOSITY,UVM_INFO,get_full_name())) \
            uvm_report_info(get_full_name(), $sformatf MSG, 0, `uvm_file, `uvm_line); \
      end

   `define my_err(MSG)        \
      begin \
         if(uvm_report_enabled(UVM_NONE,UVM_ERROR,get_full_name())) \
            uvm_report_error(get_full_name(), $sformatf MSG, UVM_NONE, `uvm_file, `uvm_line); \
      end

   `define my_warn(MSG)        \
      begin \
         if(uvm_report_enabled(UVM_NONE,UVM_WARNING,get_full_name())) \
            uvm_report_warning(get_full_name(), $sformatf MSG, UVM_NONE, `uvm_file, `uvm_line); \
      end

   `define my_fatal(MSG)       \
      begin \
         if(uvm_report_enabled(UVM_NONE,UVM_FATAL,get_full_name())) \
            uvm_report_fatal(get_full_name(), $sformatf MSG, UVM_NONE, `uvm_file, `uvm_line); \
      end
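Note that MSG must be a complete, parenthesized $sformatf argument list; the extra parentheses are what let `$sformatf MSG` expand into an ordinary call. Hypothetical calls from inside a component (the format strings and variables here are mine) would look like:

   `my_info(("Sent frame %0d of %0d", idx, total), UVM_MEDIUM)
   `my_err(("Bad response: exp='h%0h act='h%0h", exp_data, act_data))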

my_init.sv:

  initial begin
      int unsigned fwidth, hwidth;  // targets for the $value$plusargs calls below
      my_report_server_c report_server = new("my_report_server");

      if($value$plusargs("fname_width=%d", fwidth)) begin
         report_server.file_name_width = fwidth;
      end
      if($value$plusargs("hier_width=%d", hwidth)) begin
         report_server.hier_width = hwidth;
      end

      uvm_pkg::uvm_report_server::set_server(report_server);

      // all "%t" shall print out in ns format with 8 digit field width
      $timeformat(-9,0,"ns",8);
   end
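With these hooks in place, the field widths can be tuned per run with no recompile; for example (assuming VCS's default simv binary name):

   ./simv +UVM_TESTNAME=my_test +fname_width=20 +hier_width=40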

my_report_server.sv:

class my_report_server_c extends uvm_report_server;
   `uvm_object_utils(my_report_server_c)

   string filename_cache[string];
   string hier_cache[string];

   int    unsigned file_name_width = 28;
   int    unsigned hier_width = 60;

   uvm_severity_type sev_type;
   string prefix, time_str, code_str, fill_char, file_str, hier_str;
   int    last_slash, flen, hier_len;

   function new(string name="my_report_server");
      super.new();
   endfunction : new

   virtual function string compose_message(uvm_severity severity, string name, string id, string message,
                                           string filename, int line);
      // format filename & line-number
      last_slash = filename.len() - 1;
      if(file_name_width > 0) begin
         if(filename_cache.exists(filename))
            file_str = filename_cache[filename];
         else begin
            while(filename[last_slash] != "/" && last_slash != 0)
               last_slash--;
            file_str = (filename[last_slash] == "/")?
                       filename.substr(last_slash+1, filename.len()-1) :
                       filename;

            flen = file_str.len();
            file_str = (flen > file_name_width)?
                       file_str.substr((flen - file_name_width), flen-1) :
                       {{(file_name_width-flen){" "}}, file_str};
            filename_cache[filename] = file_str;
         end
         $swrite(file_str, "(%s:%6d) ", file_str, line);
      end else
         file_str = "";

      // format hier
      hier_len = id.len();
      if(hier_width > 0) begin
         if(hier_cache.exists(id))
            hier_str = hier_cache[id];
         else begin
            string orig_id = id;  // remember the key; id may be trimmed below
            if(hier_len > 13 && id.substr(0,12) == "uvm_test_top.") begin
               id = id.substr(13, hier_len-1);
               hier_len -= 13;
            end
            if(hier_len < hier_width)
               hier_str = {id, {(hier_width - hier_len){" "}}};
            else if(hier_len > hier_width)
               hier_str = id.substr(hier_len - hier_width, hier_len - 1);
            else
               hier_str = id;
            hier_str = {"[", hier_str, "]"};
            hier_cache[orig_id] = hier_str;  // cache under the original id so later lookups hit
         end
      end else
         hier_str = "";

      // format time
      $swrite(time_str, " {%t}", $time);

      // determine fill character
      sev_type = uvm_severity_type'(severity);
      case(sev_type)
         UVM_INFO:    begin code_str = "%I"; fill_char = " "; end
         UVM_ERROR:   begin code_str = "%E"; fill_char = "_"; end
         UVM_WARNING: begin code_str = "%W"; fill_char = "."; end
         UVM_FATAL:   begin code_str = "%F"; fill_char = "*"; end
         default:     begin code_str = "%?"; fill_char = "?"; end
      endcase

      // create line's prefix (everything up to time)
      $swrite(prefix, "%s-%s%s%s", code_str, file_str, hier_str, time_str);
      if(fill_char != " ") begin
         for(int x = 0; x < prefix.len(); x++)
            if(prefix[x] == " ")
               prefix.putc(x, fill_char);
      end

      // append message
      return {prefix, " ", message};
   endfunction : compose_message
endclass : my_report_server_c
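With the defaults above, an info message comes out roughly as follows (an illustrative line, not captured from a real run): the filename is right-justified in 28 columns, and the hierarchy, with any leading "uvm_test_top." stripped, is left-justified in 60. For errors, warnings, and fatals, the spaces in the prefix are filled with _, . and * respectively:

%I-(                  my_test.sv:    42) [env.master_agent.drv                                        ] {   100ns} Sent frame 3 of 16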

Posted in Debug, Messaging, SystemVerilog, UVM, Verification Planning & Management | Comments Off

How do I debug issues related to UVM objections?

Posted by Vidyashankar Ramaswamy on 19th January 2012

Recently, one of the engineers I work with in the networking industry described the challenges of debugging a UVM timeout error. I was curious and looked into his test bench. After an hour or so, I was able to point out a master/slave driver issue: an objection was never dropped, so the simulation hung waiting for all objections to drop. That got me thinking: why not use the run-time option that tracks objection status, +UVM_OBJECTION_TRACE? Well, it printed detailed messages about the objections, far more than I was looking for! The problem then became deciphering the overwhelming output of the objection trace option. A hierarchical test bench can contain hundreds of components, and you might be debugging an SoC-level test bench that you didn't write and aren't familiar with. Here is an excerpt of the message log using the built-in objection trace:

….
UVM_INFO @ 0: reset [OBJTN_TRC] Object uvm_test_top.env.master_agent.mast_drv dropped 1 objection(s): count=0 total=0
UVM_INFO @ 0: reset [OBJTN_TRC] Object uvm_test_top.env.master_agent.mast_mon raised 1 objection(s): count=1 total=1
UVM_INFO @ 0: reset [OBJTN_TRC] Object uvm_test_top.env.master_agent added 1 objection(s) to its total (raised from source object ): count=0 total=2
UVM_INFO @ 0: reset [OBJTN_TRC] Object uvm_test_top.env added 1 objection(s) to its total (raised from source object ): count=0 total=2
….
UVM_INFO @ 0: reset [OBJTN_TRC] Object uvm_test_top.env.master_agent.mast_mon dropped 1 objection(s): count=0 total=0
UVM_INFO @ 0: reset [OBJTN_TRC] Object uvm_test_top.env.slave_agent.drv raised 1 objection(s): count=1 total=1
….

As a verification engineer, you want to zero in as quickly as possible on the component, or the part of the test bench code, that did not lower its objection. You want to minimize both digging through unfamiliar test bench code and stepping through it with a debugger.

The best way is to call display_objections() just before the timeout is reached. As there is no callback available in the timeout procedure, I wrote the following few lines of code, which can be forked off in any task-based phase. I recommend doing this in your base test, which can then be extended to create feature-specific tests. You can also save some CPU cycles by putting this behind a run-time option:

uvm_root top = uvm_root::get();
fork
   begin
      #(top.phase_timeout - 1);
      phase.phase_done.display_objections();
   end
join_none
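To put this behind a run-time option, as suggested above, guard the fork with a plusarg; a minimal sketch (the +show_objections name is my own invention):

if ($test$plusargs("show_objections"))
   fork
      begin
         #(top.phase_timeout - 1);
         phase.phase_done.display_objections();
      end
   join_none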

The log output produced by the above code is shown below:

------------------------------------------------------------
The total objection count is 2
------------------------------------------------------------
Source    Total
Count     Count    Object
------------------------------------------------------------
0         2        uvm_top
0         2          uvm_test_top
0         2            env
0         1              master_agent
1         1                mast_drv
0         1              slave_agent
1         1                slv_drv
------------------------------------------------------------

From the above table, it is clear that the master and slave drivers did not drop their objections. Now you can look into those two components and debug why they did not drop them. There are many different ways to achieve the same result; I welcome you to share your thoughts and ideas on this.

Posted in Debug, Messaging, UVM | 1 Comment »

Role of methodology in Assertion Based Verification (ABV)

Posted by Srinivasan Venkataramanan on 25th March 2010

Srinivasan Venkataramanan, CVC Pvt. Ltd.

Abhishek Muchandikar, CAE, Verification Group, Synopsys

Raja Mahadevan, Sr. CAE, Verification Group, Synopsys

It is a well understood and accepted fact that assertions play a critical role in detecting a class of design errors (bugs) in functional verification. Just like any other verification component in a robust, reusable environment, assertions need to be both controllable and observable from various levels, including tests, regressions, and the command line. However, an ad hoc scattering of assertions across the design and verification code rarely considers this requirement up front.

The recently standardized SystemVerilog-2009 construct checker...endchecker is definitely a good step towards encapsulating these widely spread-out assertions. In our recent book on SystemVerilog Assertions (2nd edition of the SVA handbook, www.systemverilog.us/sva_info.html) we cover this construct in depth. We also presented a case study of cache controller verification using these new constructs at DVCon 2010 (paper & code available from www.cvcblr.com on request).

The role of a methodology goes far beyond the constructs: it utilizes them, but also provides more controllability and observability for the end user, who is trying to make sense of all these features to achieve the final goal of getting verification done.

In this series of blog entries we try to cover some of the key aspects of the methodology's role in adopting ABV (Assertion Based Verification). We welcome reader comments adding more innovative thoughts and ideas to this puzzle. Our goal in this series is not to cover all possible aspects of ABV methodology, as that would span a wide range of topics, many of them already well covered in the VMM book (www.vmm-sv.org); rather, we look at the application aspects of ABV methodology.

To start with, let’s partition the role of methodology into two major buckets: observability & controllability.

Under observability we will explore the following:

· Make assertions observable in native form within the methodology framework

· Tie the assertion results to the verification plan via VMM Planner hooks

Under controllability we will explore the following:

· Control the severity & verbosity of assertions from external world – command line, testcases etc.

· Control assertion execution during reset, exceptions, low power etc.

Making assertions natively observable in VMM

In simulation-based verification, observability is primarily enabled by the kind of messages that get emitted during the run. The messaging service plays an important part in a verification environment, indicating the progress of the simulation or providing additional information to debug a design malfunction. To ensure a consistent look and better organization of messages issued by the various verification layers, be they transactors, scoreboards, or assertions, a standard messaging service is required. VMM provides a time-proven utility, vmm_log, for this key requirement. While the use of vmm_log in a typical VMM environment is well understood and widely deployed, its integration with assertions is not widely discussed in the literature.

Assertion reporting tends to be a loner in the verification environment's messaging family. This is because it has typically been handled in an ad hoc manner, often left unattended (i.e., no action blocks at all). The result is simulator-specific reports for assertion firings (pass/fail), whereas the rest of the test bench uses a consistent vmm_log style.

The drawback of such a use model is twofold:

· Absence of a single tightly integrated messaging service across the verification board

Assertion failures do not interact with the test bench environment, and hence there is no way to effectively and correctly qualify the simulation. In a regression setup, this means a test would never be qualified as a failure unless some other post-processing is put in place.

· Assertion results bear zero control over the verification simulation cycles

Quite often, tests run their full simulation cycles even in the presence of assertion failures that may warrant an immediate termination of the simulation.

Efficient incorporation of assertions in a verification environment calls for synchronization between assertion results and the rest of the environment. A common messaging service is the key to such synchronization. The VMM messaging service vmm_log is a fine example of a standard messaging class that can be seamlessly integrated into assertion checkers/properties, ensuring consistency across the complete verification environment.

The user can force the simulation to quit via the `vmm_fatal macro, or let it proceed for a particular checker instance failure. Assertion error messages issued through vmm_log are recognized by the VMM environment, leading to a correct overall simulation result. Integrating the VMM messaging service gives the user the flexibility to control the simulation based on the severity assigned to each checker instance.

Steps to integrate vmm_log with assertion reporting

· Declaration of a new package with a static VMM log object

package vmm_sva;

   `include "vmm.sv"

   vmm_log sva_vmm_log = new("SVA_CHECKER", $psprintf("%m"));

endpackage : vmm_sva

· Inclusion of the package in the assertion file

module sva_file(….);

import vmm_sva::*;

// AHB master property check
property HburstSingleHtransNseq;
   @(posedge HCLK) disable iff (!HRESETn)
      (SINGLE && HGRANT) |-> (NSEQ || IDLE);
endproperty

HburstSingleHtransNseq_check : assert property (HburstSingleHtransNseq)
   else `vmm_fatal(sva_vmm_log, "AMBA Compliance Protocol Rules : ERRMSINGLE: Master has issued a SEQ/BUSY type SINGLE transfer");

endmodule

Simulation Result:

Based on the severity of the assertion, you can terminate the simulation; the test bench environment also recognizes this failure and qualifies the simulation as a failure, as depicted below:

*FATAL*[FAILURE] on SVA_CHECKER(vmm_sva) at                  195:

[AMBA Compliance Protocol Rules : ERRMSINGLE]   Master has issued a SEQ/BUSY type SINGLE transfer

Simulation *FAILED* on /./ (/./) at                  195: 1 errors, 0 warnings

$finish called from file “/tools/eda/snps/vcs-mx/etc/rvm/vmm.sv”, line 36499.

$finish at simulation time                  195

V C S   S i m u l a t i o n   R e p o r t

Users can choose from the variety of vmm_log macros, such as `vmm_error, `vmm_warning, etc., to suit the message being flagged by each assertion. With this subtle enhancement to the SVA action block, one can leverage VMM's simulation controllability features such as error counting and simulation handling of errors (stop, debug, continue, etc.). One can also promote or demote errors to warnings, for instance.
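For instance, a check that should be counted but need not kill the run could use `vmm_error in its action block instead, following the same pattern as the example above:

HburstSingleHtransNseq_check : assert property (HburstSingleHtransNseq)
   else `vmm_error(sva_vmm_log, "ERRMSINGLE: Master has issued a SEQ/BUSY type SINGLE transfer");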

A final note on the logger instance shown in this example: while the code above works, typical usage would classify messages originating from different portions of the design/verification into individual logger instances.

In our next entry in this series, we will address the second aspect of observability, tying the results to the verification plan, so stay tuned!

Posted in Assertion Based Verification, Debug, Messaging | 5 Comments »

Watch out for “totally vacuous” assertion attempts in your verification

Posted by Srinivasan Venkataramanan on 10th March 2010

Srinivasan Venkataramanan, CVC Pvt. Ltd.

Abhishek Muchandikar, Sr RnD Engineer, Verification Group, Synopsys

Sadanand Gulwadi, Sr. Staff CAE, Verification Group, Synopsys Inc., Mt. View, CA, USA

Assertions for protocol checking have been popular for quite some time now. With several off-the-shelf assertion/monitor IPs available from EDA vendors and providers such as CVC (www.cvcblr.com), end users do not have to spend much time thinking about what assertions to add to their designs, how to code them, etc. All users need to do is create suitable bind files and off they go!
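For reference, a bind file simply instantiates a checker module alongside a DUT instance without touching the DUT source. A minimal sketch, in which the module, instance, and port names are all illustrative:

bind ahb_master ahb_protocol_checks u_ahb_sva (
   .HCLK    (HCLK),
   .HRESETn (HRESETn),
   .HTRANS  (HTRANS),
   .HREADY  (HREADY),
   .HRESP   (HRESP)
);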

While using assertions sounds simple and straightforward, there are scenarios users need to watch out for. The methodology of using assertions and leveraging assertion indications from simulation is vital for the ROI of Assertion Based Verification. Several assertion coding guidelines are available in the VMM book; in fact, a whole chapter is dedicated to writing effective assertions (Chapter 3, see: http://www.springer.com/engineering/circuits+%26+systems/book/978-0-387-25538-5).

VMM also provides guidelines on how to integrate assertion errors via a standard messaging route such as `vmm_error. An additional methodology note that we developed while working with a large DSP customer is to quickly identify "totally vacuous" assertions in a simulation run.

For example, if assertions did fire, then you have likely found a bug. However, if assertions remained silent throughout a simulation, you cannot afford to be too happy, for the assertions may have been "totally vacuous". This is a term that my ex-colleague and I coined during our work at a large DSP customer. They were adding assertions and looking for improved productivity early on, rather than having to go through detailed assertion coverage metrics, various dump files, and so forth at a later stage. While standard garden-variety vacuity is quite well understood and explained in our regular SVA trainings (SVA Trainings), the concept of "totally vacuous" is a step beyond that. Totally vacuous assertions are those that were *never* successful during the entire simulation run. This can be due to a few reasons:

1. The assertions were never attempted at all (perhaps the CLOCK was undriven, or was part of a gated clock of a low power domain etc.)

2. The antecedent of a property was never satisfied etc.

Further, observation of the assertion behaviour stated above might indicate the following potential issues in your design or verification plan:

1. Incorrect clock network connections, clock enables, and so forth.

2. Test-case is weak and does not address a key portion of your design (a coverage hole).

For instance, consider the following assertion (taken from our SVA handbook, www.systemverilog.us/sva_info.html)

/* Behavior: if the transfer type is BUSY then on the corresponding

data transfer , the slave must provide a zero wait state OKAY response

(default clock) */

property pResponseToBusyMustBeZeroWaitOKAY (hResp,hReady,hTrans);

@(posedge hclk) (((hTrans == BUSY) && (hReady == 1)) |->

##1 ((hResp == OKAY) && (hReady == 1)) );

endproperty: pResponseToBusyMustBeZeroWaitOKAY

Consider the case where the stimulus never drove hTrans to BUSY throughout the simulation. This is strange and means that the stimulus is weak. But when (and how) does a user find this out? With the usual wisdom of "no news is good news", it is very easy to ignore this important coverage hole and move on to other work. The customer, however, wanted such strange assertion behaviour flagged as early as possible: at the end of every simulation run.

VCS does have the ability to catch such “totally vacuous” assertions and accordingly reports the following at the end of a simulation run:

**** Following assertions did not fire at all during simulation. *****

“/proj/cvc_mips/ahb_mip/master.sva “, 48:

a_pResponseToBusyMustBeZeroWaitOKAY: Antecedent was never satisfied

“/proj/cvc_mips/ahb_mip/slave.sva “, 32:

a_pLowPowerGclk: No attempt started

The above output is the default in VCS, without the need for any additional user options, and frees the user from the extra steps of enabling assertion coverage, debug, etc. Assertion coverage and debug are indeed powerful features for analyzing problems, but they should be turned on only after ensuring the assertions are not "totally vacuous", a glaring weakness that deserves to be flagged early on by default. It is all about productivity at the end of the day; if a tool can help improve productivity, it is always welcome :-)

So the next time VCS prints such a message, you had better watch out before calling it a day!

Posted in Assertion Based Verification, Debug, Messaging | Comments Off

Managing VMM Log verbosity in a smart way

Posted by Srinivasan Venkataramanan on 1st March 2010

Srinivasan Venkataramanan, CVC Pvt. Ltd.

Vishal Namshiker, Brocade Communications

Any complex system requires debugging at some point or another. To ease the debug process, a good, proven coding practice is to add enough messages to aid the end user in debug. However, as systems mature, the messages tend to become too many, and users quickly feel the need to control them. VMM provides a comprehensive logging scheme that gives users the flexibility to control what, how, and when to see certain messages (see: http://www.vmmcentral.org/vmartialarts/?p=259).

As we know, using the `vmm_verbose/`vmm_debug macros requires the +vmm_log_default=VERBOSE run-time argument. However, when using this, there are tons of messages coming from the VMM base classes too, as they also fall under the VERBOSE/DEBUG severity. Users at Brocade preferred not to see these messages when debugging problems in user code. Parsing through them and staying focused on the problem at hand was tedious unless post-processing of the log file was implemented. Sure, the messages from the VMM base classes are useful for one class of problems, but if the current problem is in user code, the user would like to be able to exclude them easily. An interesting problem of contradictory requirements, perhaps? Not really: the VMM base class is well architected to handle this situation.

In VMM, there are two dimensions controlling which messages the user sees. The verbosity level specifies the minimum severity to display: you'll see every message with a severity greater than or equal to it. The other dimension is classification by TYPE. There are several values for the TYPE, such as NOTE_TYP, DEBUG_TYP, etc. Most relevant here is INTERNAL_TYP, a special type intended to be used exclusively by VMM base class code. All debug-related VMM library messages are classified under INTERNAL_TYP, so you can filter them out with the vmm_log::disable_types() method.

A quick example to do this inside the user_env is below:

virtual function void my_env::build();
   super.build();
   this.log.disable_types(.typs(vmm_log::INTERNAL_TYP),
                          .name("/./"), .inst("/./"));
endfunction : build

This is typical usage if everyone on the team agrees to such a change. However, if a localized change is needed for a few runs alone, one can use the power of VCS's Aspect-Oriented Extensions (AOE) to SystemVerilog. In this case, the user supplies a separate file as shown below:

///////////  disable_vmm_msg.sv
extends disable_log(vmm_log);
   after function new(string name = "/./",
                      string instance = "/./",
                      vmm_log under = null);
      this.disable_types(.typs(vmm_log::INTERNAL_TYP));
   endfunction : new
endextends

Add this file to the compile list and voila! By the way, during the recent SystemVerilog extensions discussion at DVCon 2010, more users requested that AOP extensions be added to the LRM standard. With due process, a version of AOP is likely to be added to the LRM in the future (let's hope the "near future" :) ).

Posted in Debug, Messaging, SystemVerilog, VMM | Comments Off

Viewing VMM log details in waveforms

Posted by Avinash Agrawal on 17th December 2009


Avinash Agrawal, Corporate Applications, Synopsys

Often engineers need a combination of log file output and waveforms to examine their simulations. And they wonder if it is possible to look at waveforms and see how many simulation errors have occurred up to any particular point in simulation time.

The good news is that the VMM log service helps a user track the source of different messages to different verification components.

When VMM macros such as `vmm_error, `vmm_note, etc. are used at different places in the verification environment, the user can view the corresponding information in the simulation output. This information includes the time at which the message was logged, and the verification component and instance from which it was issued. However, it is very useful if the timing of the errors or warnings in the simulation output can be correlated with waveforms. For example, if a protocol violation message is issued from one of the test bench monitors, the user can map the time when the message was issued to the actual signals in the waveform. That way, the user can quickly uncover the relevant problem in the DUT.

The VMM message service vmm_log uses a vmm_log_format object to control the format of the messages. The vmm_log_format object also receives the type/severity of each message. The vmm_log class for each component uses the default implementation of its methods, but the user can easily extend the vmm_log_format class and add modifications: for example, to trigger an assertion or increment a variable that can be dumped into the waveform window. This way, the engineer can correlate the errors with the change in the variable, or with the assertion, in the waveform window.

The following code shows how this can be done:

module top();
  int error_count;

initial begin
 $vcdpluson();
end

endmodule

program P;

class env_format extends vmm_log_format;  //extending vmm_log_format
                                          // for adding in user modifications
      virtual function string format_msg(string name,
                                         string inst,
                                         string msg_typ,
                                         string severity,
                                         ref string lines[$]);
         if (msg_typ == "FAILURE" && severity == "!ERROR!") begin
            top.error_count++;  // increment error count on error messages
         end
//or trigger an assertion which can also be seen in the waveform
        assert (~(msg_typ == "FAILURE" && severity == "!ERROR!"));
          format_msg = super.format_msg(name, inst, msg_typ, severity, lines);
      endfunction
endclass

class xactor extends vmm_xactor;

  int id;

  function new(int id, string instance);
    super.new("xactor", instance);
    this.id = id;
  endfunction

  virtual task main();
    super.main();
    `vmm_note(log, "This is a note message");
    #5;
    `vmm_error(log, "This is an error message");
    if (id == 0)
      #10 `vmm_error(log, "This is an error message");
    else
      #30 `vmm_error(log, "This is an error message");
  endtask
endclass

class env extends vmm_env;
  xactor x1;
  xactor x2;
  vmm_log log;

  function new();
    env_format fmt;
    log = new("env", "class");
    fmt = new();
    log.set_format(fmt);
  endfunction

  virtual function void build();
    super.build();
    x1 = new(0, "x1");
    x2 = new(1, "x2");
  endfunction

  virtual task start();
    super.start();
    x1.start_xactor();
    x2.start_xactor();
  endtask

  virtual task wait_for_end();
    super.wait_for_end();
    #1000;
  endtask

  virtual task stop();
    super.stop();
    x1.stop_xactor();
    x2.stop_xactor();
  endtask

endclass

initial begin
  env e = new;
  e.run();
end

endprogram

Posted in Debug, Messaging | Comments Off

Navigate through sea of log messages in a SoC env – smart application of vmm_log

Posted by Shankar Hemmady on 13th August 2009

Srinivasan Venkataramanan & Bagath Singh, CVC Pvt. Ltd.; Jaya Chelani, Quartics Technologies

Consider an SoC with several interfaces being verified. It is quite common to have each interface report its activity via display messages. Now let's take up a case wherein a specific user is debugging, say, the AXI interface for a failure, performance analysis, etc. While the full log file provides the overall picture, it is quite easy to get lost in the sea of messages.

Wouldn't it be nice to have AXI report all its activity to its own log file? This would greatly reduce the analysis/debug time for a given task. However, changing the test bench itself, via `ifdef, config, etc., is not good practice. Also, the change may be wanted for only a few tests/runs, and not for the entire regression.

Huh, such a common scenario, you wonder. Yes, and that's why vmm_log has this capability built in. By default, loggers direct messages to STDOUT, but they can easily be asked to direct them to a specific file. The function vmm_log::log_start(file_pointer) does just this and can be used at will during run time.

1. program automatic sep_log_files;

2.   `include "vmm.sv"

3.   vmm_log axi_logger, cvc_prop_if_logger;

4.   int axi_fp;

5.   initial begin : b1

6.     axi_fp = $fopen ("axi.log", "w");

7.     axi_logger = new ("AXI Log", "0");

8.     cvc_prop_if_logger = new ("CVC Log", "0"); // Defaults to STDOUT

9.     axi_logger.log_start(axi_fp); // AXI alone is being sent to a separate LOG file

10.    `vmm_note (axi_logger, "Message from AXI Interface");

11.    `vmm_note (cvc_prop_if_logger, "Messages from CVC Proprietary Interface");

12.   end : b1

13.  endprogram : sep_log_files

Line 6 opens a file named "axi.log" for writing. Line 9 ensures that all messages from axi_logger go to the new log file pointer (created in line 6). Note that one can reach any logger inside the env via its hierarchical path, so this can be done at the testcase level, if desired.

A few additional notes

1. As these methods work on a specific log instance, they must be used after the log instance is constructed, typically after the vmm_env::build() phase.

2. There is also a counterpart, vmm_log::log_stop(file_pointer), to stop logging to the separate file (messages continue to go to STDOUT).

3. By design, the vmm_log::log_start() sends a copy of the message to the specified file in addition to STDOUT. This is done so that the complete log (sent to STDOUT) is intact from all loggers in the environment.

4. If it is desired by the user to avoid the duplication of log messages, the user can use an explicit call to vmm_log::log_stop(STDOUT).

5. These two methods can also be useful for performance measurement during a specific time window in a simulation run. One can set up notifiers to indicate the start and stop of the time window, and run a parallel thread that calls log_start() and log_stop() accordingly; a sketch follows below. More on this in another blog post.
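As a teaser, here is a minimal sketch of that idea, using plain SystemVerilog events in place of VMM notifiers (the event names and file name are mine; axi_logger is the logger from the example above):

int win_fp;
event win_start, win_end;

initial begin
   win_fp = $fopen("axi_window.log", "w");
   fork
      begin
         @(win_start);                 // window opens: mirror AXI messages to the side file
         axi_logger.log_start(win_fp);
         @(win_end);                   // window closes: stop mirroring to the side file
         axi_logger.log_stop(win_fp);
      end
   join_none
end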

Posted in Debug, Messaging, SystemVerilog, Tutorial | 2 Comments »

Message, message on the wall!

Posted by Janick Bergeron on 19th June 2008

Why does the VMM message interface (the vmm_log class) have a start_msg() and a text() method that must be used in this convoluted way:

if (log.start_msg(vmm_log::DEBUG_TYP, vmm_log::TRACE_SEV)) begin
   log.text("This is a trace message");
   log.end_msg();
end

Why not the much simpler one-liner:

log.message(vmm_log::DEBUG_TYP, vmm_log::TRACE_SEV,
            "This is a trace message");

which would then eliminate the need for the macro:

`vmm_trace(log, "This is a trace message");

Given the examples above, there is absolutely no reason. However, this example illustrates why:

`vmm_trace(log, $psprintf("Read 'h%h from 'h%h with status %s",
                           data, addr, status.name()));

which expands to:

if (log.start_msg(vmm_log::DEBUG_TYP, vmm_log::TRACE_SEV)) begin
   log.text($psprintf("Read 'h%h from 'h%h with status %s",
                      data, addr, status.name()));
   log.end_msg();
end

The $psprintf() (and all other formatting system tasks $sformat(), $format(), $write(), $display(), etc…) may be simple to use but they are very run-time expensive. And if you are not going to display a message, why incur the cost of composing its image?

When using a single procedure call, the value of all of its arguments must be determined before it is called. Thus, using this approach:

log.trace($psprintf("Read 'h%h from 'h%h with status %s",
                    data, addr, status.name()));

incurs the cost of creating the message image every single time. And most of the time, this debug message will simply be filtered out (think about the thousands and thousands of regression runs where debug is not enabled!).

On the other hand, checking first if messages of a certain type or severity are going to be filtered out or not and only then composing the image of the message improves your run-time performance.

By how much? Of course, it depends on the number of messages that will eventually get filtered out. But just to give you an idea, I ran this experiment using VCS:

program p;

initial
begin
   int i;
   string msg;
   i = $urandom() % 2; // See footnote1
   if (i == 1) i--;
   repeat (100000) begin
`ifdef GUARD
      if (i)
`endif
      msg = $psprintf("%s, line %0d: Message #%0d at %t",
                      `__FILE__, `__LINE__, 0, $time());
   end
end

endprogram

With `GUARD defined, which causes the $psprintf() call to be skipped, I get run-times of approximately 0.025 seconds. With `GUARD undefined, which causes the $psprintf() call to be executed, I get run-times of approximately 0.230 seconds, or roughly 10x slower simulation performance.

Personally, I think the performance gain is worth the little extra bit of code to write. Remember to always optimize the right thing: you’ll write that code once but you’ll run it thousands and thousands of times. So saving a few lines of code is not always the right decision.

1 I use a convoluted way to set i to 0 to prevent an optimizing compiler from optimizing the entire if statement away.

Posted in Debug, Messaging, Optimization/Performance | 2 Comments »
