Verification Martial Arts: A Verification Methodology Blog

Pipelined RAL Access

Posted by Amit Sharma on May 12th, 2011

Ashok Chandran, Analog Devices

We often come across scenarios where a register can be accessed from multiple physical interfaces in a system. An example would be a homogeneous multi-core system, where each core may access registers within the design through its own interface. In such scenarios, defining a “domain” (a testbench abstraction for a physical interface) for each interface may be an overhead:

· From a system verification point of view, it makes no difference which core accesses the registers, since they are identical. The flexibility to randomly select an interface can provide additional value.

· Defining a ‘domain’ for each interface in such a scenario requires duplicating the registers or their instantiation.

· Also, using multiple “domains” for a homogeneous multi-core system would prevent us from seamlessly reusing our code from block level to system level: the domain definition would have to be incorporated into the testbench RAL access code at system level, even though it was not needed in the block-level register abstraction code.

Another related scenario is where we need to support multiple outstanding transactions at a time. Different threads could initiate distinct transactions which can return data out of order (as in the AXI protocol). The default implementation of RAL allows only one transaction at a time per domain.

VMM pipelined RAL comes to our rescue in such cases. This mechanism allows multiple RAL accesses to be processed simultaneously by the RAL access layer. The feature is enabled with `define VMM_RAL_PIPELINED_ACCESS, which adds a new state to vmm_rw::status_e: vmm_rw::PENDING. When vmm_rw::PENDING is returned as the status from execute_single()/execute_burst(), the transaction-initiating thread is kept blocked until the vmm_data::ENDED notification is indicated on the vmm_rw_access object. New transactions can then be initiated from other testbench threads, and pending transactions are completed in parallel as responses are received from the system.
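As a quick reference, the define simply needs to be visible when the VMM/RAL sources are compiled. The snippet below is a minimal sketch: the include file names are the usual ones from the VMM distribution, so adjust them to your setup, and the same effect can be achieved by passing +define+VMM_RAL_PIPELINED_ACCESS on the compile command line.

// Enable pipelined RAL accesses before the VMM/RAL code is compiled.
// Equivalent to passing +define+VMM_RAL_PIPELINED_ACCESS to the compiler.
`define VMM_RAL_PIPELINED_ACCESS
`include "vmm.sv"
`include "vmm_ral.sv"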

[Figure: pipelined RAL accesses, with transactions from threads A and B processed concurrently]

As shown in the figure above, transactions initiated by thread A (A0 and A1) can be processed or queued even while transactions from thread B (B0 and B1) are in progress. Here, A can be processed by one interface and B by the other. Alternatively, A and B can be driven from the same interface if the protocol supports multiple outstanding accesses.
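For instance, with pipelined access enabled, two testbench threads can issue register operations through the RAL model at the same time. The sketch below is only an illustration; ral_model, reg_a and reg_b are hypothetical names standing in for your generated RAL model and its registers.

// Hypothetical usage sketch: two threads issuing RAL accesses in parallel.
// ral_model, reg_a and reg_b are assumed names from a generated RAL model.
vmm_rw::status_e status_a, status_b;
bit [63:0] rdata_a;

fork
   begin // Thread A
      ral_model.reg_a.write(status_a, 64'h1234);
      ral_model.reg_a.read(status_a, rdata_a);
   end
   begin // Thread B: proceeds without waiting for thread A to complete
      ral_model.reg_b.write(status_b, 64'h5678);
   end
join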

The code below shows how to implement execute_single() to use pipelined RAL for a simple protocol like APB. For protocols like AXI, which allow multiple outstanding transactions from the same interface, the physical-layer transactor can further control the ordering using the vmm_data::ENDED notification of the physical-layer transaction.

virtual task execute_single(vmm_rw_access tr);

   apb_trans apb = new; // Physical layer transaction

   apb.randomize() with {
      addr == tr.addr;
      if (tr.kind == vmm_rw::READ) {
         dir == READ;
      } else {
         dir == WRITE;
      }
      resp == OKAY;
      // interface_id in the physical layer transaction maps to the
      // different physical interface instances
      interface_id inside {0, 1};
   };

   if (tr.kind == vmm_rw::WRITE) apb.data = tr.data;

   // Fork out the access in parallel
   fork begin
      // Get copies for this thread
      automatic apb_trans pend = apb;
      automatic vmm_rw_access rw = tr;

      // Push into the physical layer BFM
      this.my_intf[pend.interface_id].in_chan.sneak(pend);

      // Wait for transaction completion from the physical layer BFM
      pend.notify.wait_for(vmm_data::ENDED);

      // Get the response and read data
      if (pend.resp == apb_trans::OKAY) begin
         rw.status = vmm_rw::IS_OK;
      end else begin
         rw.status = vmm_rw::ERROR;
      end

      if (rw.kind == vmm_rw::READ) begin
         rw.data = pend.data;
      end

      // End of this transaction - indicate completion to RAL
      rw.notify.indicate(vmm_data::ENDED);
   end join_none

   // Return pending status to the RAL access layer
   tr.status = vmm_rw::PENDING;

endtask: execute_single
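For completeness, such an execute_single() would typically sit in a user extension of vmm_rw_xactor that holds handles to the per-interface physical-layer BFMs referenced as my_intf above. The skeleton below is only an illustration; apb_rw_xactor and apb_master_xactor are assumed names.

// Illustrative skeleton only - apb_rw_xactor and apb_master_xactor are assumed names.
class apb_rw_xactor extends vmm_rw_xactor;
   apb_master_xactor my_intf[2]; // One physical-layer BFM per interface; each exposes an in_chan input channel

   // Constructor and BFM hookup omitted for brevity.

   virtual task execute_single(vmm_rw_access tr);
      // ... body as shown above ...
   endtask: execute_single
endclass: apb_rw_xactor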

For more details on creating “Pipelined Accesses”, you might want to go through the section “Concurrently Executing Generic Transactions” in the VMM RAL User Guide.

3 Responses to “Pipelined RAL Access”

  1. ashokcj (Ashok Chandran) Says:

    http://www.vmmcentral.org/vmartialarts/2011/05/pipelined-ral-access/

  2. Bharat Says:

    Hi Ashok,

    I tried using your example to implement pipelined RAL access.
    But it gives an error for the last line in the above code “tr.status = vmm_rw::PENDING”.
    The error says PENDING is not defined in the vmm_rw class.
    Also, I checked in the vmm1.2 area and did not find any usage of PENDING in vmm_rw. I have also defined VMM_RAL_PIPELINED_ACCESS.

    Actually, I have to implement overlapping address and data phases, e.g. when the data phase of the current transaction and the address phase of the next transaction can come on the same clock. For that I tried to use your example.

    I am using vmm1.2 and RAL v1.15

    Regards,
    Bharat.

  3. amit Says:

    It’s available in VCS 2010.06 and the VMM 1.2.1 version available on VMMcentral (June 2010).
