Verification Martial Arts: A Verification Methodology Blog

Archive for the 'Modeling' Category

SNUG-2012 Verification Round Up – Language & Methodologies – I

Posted by paragg on 25th February 2013

As in the previous couple of years, last year’s SNUG – Synopsys User Group – showcased an amazing number of useful user papers leveraging the capabilities of the SystemVerilog language and the verification methodologies centered on it.

I am always excited when I see this plethora of useful papers, and I try to ensure that I set aside some time to go through all these user experiences.  Now, as we wait for SNUG Silicon Valley to kick-start the SNUG events for this year, I want to look back at some of the very interesting and useful papers from the different SNUGs of 2012.  Let me start by talking about a few papers in the area of the SystemVerilog language and SV methodologies.

Papers leveraging the SystemVerilog language and constructs

Hillel Miller of Freescale, in the paper “Using covergroups and covergroup filters for effective functional coverage”, uncovers the mechanisms available for carving out coverage goals. The P1800-2012 SystemVerilog LRM provides new constructs just for doing this, and the paper focuses on the “with” construct, which provides the ability to carve a sub-range of goals out of a multidimensional range of possibilities. This is very relevant in a “working” or under-development setup that requires frequent reprioritization to meet tape-out goals.
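As a flavor of what the 1800-2012 “with” filter looks like, here is a minimal sketch (not taken from the paper; the class and bin names are hypothetical):

class op_coverage;
  bit [7:0] opcode;   // value sampled from the stimulus (hypothetical)

  covergroup op_cg;
    cp_op : coverpoint opcode {
      // The 2012 "with" clause carves a sub-range of goals out of the
      // full 0..255 opcode space: here, only even values below 64.
      bins low_even[] = {[0:63]} with (item % 2 == 0);
    }
  endgroup

  function new();
    op_cg = new();
  endfunction

  function void sample(bit [7:0] op);
    opcode = op;
    op_cg.sample();
  endfunction
endclass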

The paper “Taming Testbench Timing: Time’s Up for Clocking Block Confusions” by Jonathan Bromley and Kevin Johnston of Verilab reviews the key features and purpose of clocking blocks and then examines why they continue to be a source of confusion and unexpected behavior for many verification engineers. Drawing from the authors’ project and mentoring experience, it highlights typical usage errors and how to avoid them. They clarify the internal behavior of clocking blocks to help engineers understand the reasons behind common problems, and show techniques that allow clocking blocks to be used productively and with confidence. Finally, they consider some areas that may cause portability problems across simulators and indicate how to avoid them.

Inference of latches and flops based on coding styles has always been a topic that creates multiple viewpoints. There are other such scenarios of synthesis and simulation mismatches that one typically comes across. To address such ambiguity, language developers have provided different constructs that allow an explicit resolution based on the intent. To help us gain a deeper understanding of the topic, Don Mills of Microchip Technology Inc. presented the related concepts in the paper “Yet Another Latch and Gotchas Paper” @ SNUG Silicon Valley. This paper discusses and provides solutions to issues that designers using SystemVerilog for design come across, such as: case expression issues for casez and casex, latches generated when using unique case or priority case, SRFF coding style problems with synthesis, and the SystemVerilog 2009 definition of logic.
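For readers who have not hit these gotchas yet, here is a small sketch (my own, not from the paper) of the most common one: an incomplete case in combinational code infers a latch, and “unique” by itself does not make the latch go away, while a default assignment does:

module sel_mux (input  logic [1:0] sel,
                input  logic a, b, c,
                output logic y);
  always_comb begin
    y = 1'b0;               // default assignment: without it, a latch is
                            // inferred because y is unassigned for 2'b11
    unique case (sel)       // 'unique' only asserts sel never hits 2'b11;
      2'b00: y = a;         // it does not by itself prevent the latch
      2'b01: y = b;
      2'b10: y = c;
    endcase
  end
endmodule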

Gabi Glasser from Intel presented the paper “Utilizing SystemVerilog for Mixed-Signal Validation@ SNUG Israel, where he proposed a mechanism for simplifying analysis and increasing coverage for mixed signal simulations.  The method proposed here was to take advantage of SystemVerilog capabilities, which enables defining a hash (associative) array with unlimited size. During the simulation, vectors are created for required analog signals, allowing them to be analyzed within the testbench along or at the end of the simulation, without requiring saving these signals into a file. The flow change enables the ability to launch a large scale mixed signal regression while allowing an easier analysis of coverage data.
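A minimal sketch of the idea (hypothetical names, not the author’s code): samples of an analog node are pushed into an unbounded associative array keyed by time and post-processed in the testbench rather than dumped to a file:

class analog_probe;
  real samples[longint];                 // unbounded hash: time (ps) -> value

  function void record(longint t_ps, real v);
    samples[t_ps] = v;
  endfunction

  // example end-of-simulation analysis: report the peak value observed
  function real peak();
    real max_v = -1.0e308;
    foreach (samples[t])
      if (samples[t] > max_v) max_v = samples[t];
    return max_v;
  endfunction
endclass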

A design pattern is a general, reusable solution to a commonly recurring problem within a given context. The benefit of using design patterns is clear: they give designers a common language when approaching a problem, and a set of widely used tools to solve issues as they come up.  The paper “Design Patterns In Verification” by Guy Levenbroun of Qualcomm explores several common problems that might arise during the development of a testbench and how design patterns can be used to solve them. The patterns covered in the paper fall mainly into the following areas: creational (e.g. factory), structural (e.g. composite) and behavioral (e.g. template).

Arik Shmayovitsh, Avishay Tvila and Guy Lidor of Sigma Designs, in their paper “Truly Reusable Testbench-to-RTL Connection for SystemVerilog”, present a novel approach to connecting the DUT and testbench using consistent semantics while reusing the testbench. This is achieved by abstracting the connection layer of each testbench using the SystemVerilog ‘bind’ construct. This ensures that the only thing required to reuse the testbench for a new DUT is to identify the instance of the corresponding DUT.
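The essence of the approach, as I read it, is a one-line bind per DUT; a hedged sketch (the module and signal names below are placeholders, not from the paper):

interface tb_conn_if (input logic        clk,
                      input logic        valid,
                      input logic [31:0] data);
  // testbench-side sampling lives here, so the testbench never names DUT signals directly
  clocking cb @(posedge clk);
    input valid, data;
  endclocking
endinterface

// Re-targeting the testbench to a new DUT only means updating this bind.
bind my_dut tb_conn_if conn (.clk(clk), .valid(out_valid), .data(out_data));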

In the paper “A Mechanism for Hierarchical Reuse of Interface Bindings”, Thomas Zboril of Qualcomm (Canada) explores another method to instantiate SV interfaces, connect them to the DUT and wrap the virtual interfaces for use in the test environment. This method allows reuse of all the code when the original block-level DUT becomes a lower-level instance in a larger subsystem or chip. The method involves three key mechanisms: hierarchical virtual interface wrappers, hierarchical instantiation of SV interfaces, and automatic management of hierarchical references via SV macros.

Thinh Ngo & Sakar Jain of Freescale Semiconductor, in their paper “100% Functional Coverage-Driven Verification Flow”, propose a coverage-driven verification flow that can efficiently achieve 100% functional coverage during simulation. The flow targets varied functionality, focuses on the transaction level, measures coverage during simulation, and fails a test if 100% of the expected coverage is not achieved. This flow maps stimulus coverage to functional coverage, with every stimulus transaction being associated with an event in the coverage model and vice versa. This association is derived from the DUT specification and/or the DUT model. Expected events generated along with stimulus transactions are compared against actual events triggered in the DUT, and the comparison results are used to pass or fail the test. 100% functional coverage is achieved via 100% stimulus coverage; the flow enables every test, for its targeted functionality, to meet 100% functional coverage provided that it passes.
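A compact sketch of how such a stimulus-to-event association could be checked (my own illustration, not the authors’ code):

class cov_event_checker;
  int expected[string];   // event name -> count predicted from stimulus
  int observed[string];   // event name -> count actually triggered in the DUT

  function void predict(string ev); expected[ev]++; endfunction
  function void observe(string ev); observed[ev]++; endfunction

  // Pass only if every expected event was observed the expected number of times.
  function bit passed();
    foreach (expected[ev])
      if (!observed.exists(ev) || observed[ev] != expected[ev]) return 0;
    return 1;
  endfunction
endclass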

Papers on Verification Methodology

In the paper “Top-down vs. bottom-up verification methodology for complex ASICs”, Paul Lungu & Zygmunt Pasturczyk of Ciena (Canada) cover the simulation methodology used for two large ASICs requiring block-level simulations. A top-down verification methodology was used for one of the ASICs, while a larger version needed an expanded bottom-up approach using extended simulation capabilities. Some techniques and verification methods, such as chaining of sub-environments from block to top level, are highlighted along with challenges and solutions found by the verification team. The paper presents a useful technique of passing a RAL (Register Abstraction Layer) mirror to the C models which are used as scoreboards in the environment. The paper also presents a method of generating stable clocks inside the “program” block.

The paper “Integration of Legacy Verilog BFMs and VMM VIP in UVM using Abstract Classes” by Santosh Sarma of Wipro Technologies (India) presents an alternative approach where legacy BFMs written in Verilog, and not implemented using classes, are hooked up to higher-level class-based components to create a standard UVM VIP structure. The paper also discusses an approach where existing VMM transactors that are tied to such legacy BFMs can be reused inside the UVM VIP with the help of the VCS-provided UVM-VMM Interoperability Library. The implementation makes use of abstract classes to define functions that invoke the BFM APIs. The abstract class is then concretized using derived classes which give the actual implementation of the functions in the abstract class. The concrete class is then bound to the Verilog instance of the BFM using the SystemVerilog “bind” concept. The concrete class handle is then used by the UVM VIP and the VMM transactor to interact with the underlying Verilog BFM. Using this approach, the UVM VIP can be made truly reusable through run-time binding of the Verilog BFM instance to the VIP instead of using hardcoded macro names or procedural calls.
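The abstract-class idea can be sketched in a few lines (all names below are hypothetical placeholders, not the paper’s code):

virtual class bfm_api_base;
  // instance path -> API handle, so the class-based VIP can look up "its" BFM
  static bfm_api_base handles[string];
  pure virtual task send_word(bit [31:0] data);
endclass

// Bound into the legacy BFM; the concrete class reaches the BFM's tasks
// through an upward hierarchical reference.
module bfm_api_binder;
  class bfm_api_impl extends bfm_api_base;
    virtual task send_word(bit [31:0] data);
      legacy_bfm.send(data);   // placeholder for the legacy Verilog BFM task
    endtask
  endclass
  initial begin
    bfm_api_impl impl = new();
    bfm_api_base::handles[$sformatf("%m")] = impl;
  end
endmodule

bind legacy_bfm bfm_api_binder binder_inst();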

“A Unified Self-Check Infrastructure - A Standardized Approach for Creating the Self-Check Block of Any Verification Environment” by John Sotiropoulos, Matt Muresa and Massi Corba of Draper Laboratories (Cambridge, MA, USA) presents a structured approach for developing a centralized “Self-Check” block for a verification environment. The approach is flexible enough to work with various testbench architectures and is portable across different verification methodologies. Here, all of the design’s responses are encapsulated under a common base class, providing a single “Self-Check” interface for any checking that needs to be performed. This abstraction, combined with a single centralized scoreboard and a standardized set of components, provides the consistency needed for faster development and easier code maintenance. It expands the concept of “self-check” to incorporate white-box monitors (tracking internal DUT state changes, etc.) and temporal models (reacting to wire changes) along with traditional methodologies for enabling self-checking.
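A rough sketch of what such a common base class and centralized scoreboard might look like (my own illustration; the paper’s actual class names and API will differ):

virtual class self_check_item;
  pure virtual function bit    compare(self_check_item exp);
  pure virtual function string describe();
endclass

class central_scoreboard;
  self_check_item expected_q[$];

  function void push_expected(self_check_item exp);
    expected_q.push_back(exp);
  endfunction

  // Any monitor (white-box, temporal or transaction-level) funnels its
  // observations through this single self-check entry point.
  function void check_actual(self_check_item act);
    self_check_item exp = expected_q.pop_front();
    if (!act.compare(exp))
      $error("Self-check mismatch: %s", act.describe());
  endfunction
endclass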

For VMM users looking at migrating to UVM, another paper, “Transitioning to UVM from VMM” by Courtney Schmitt of Analog Devices, Inc., discusses the process of transitioning to a UVM-based environment from VMM. Differences and parallels between the two verification methodologies are presented to show that updating to UVM is mostly a matter of getting acquainted with a new set of base classes. Topics include UVM phases, agents, TLM ports, configuration, sequences, and register models. Best practices and reference resources are highlighted to make the transition from VMM to UVM as painless as possible.

Posted in Announcements, Coverage, Metrics, Creating tests, Customization, Modeling, Optimization/Performance, Reuse, SystemVerilog, UVM, Uncategorized, VMM, VMM infrastructure | 3 Comments »

The One stop shop: get done with everything you need to do with your registers

Posted by Amit Sharma on 14th July 2011

Ballori Bannerjee, Design Engineer, LSI India

Processes are created, refined and improved upon. The gain in productivity starts with a big leap and subsequently slows down, and at the same time, as the complexity of tasks increases, the existing processes can no longer scale up. This drives the next paradigm shift towards new processes and automation. As in all realms of technology, this is true in the context of the register development and validation flow as well. So, let’s look at how we changed our process to get the desired boost in productivity.

The following flowchart represents our legacy register design and validation process. This was a closed process and served us well initially, when the number of registers, their properties, etc. were limited. However, with the complex chips that we are designing and validating today, does this scale up?

[Flowchart: legacy register design and validation process]

As an example, in a module that we are implementing, there are four thousand registers. Translating into the number of fields, for 4000 32-bit registers we have 128,000 fields, each with different hardware and software properties!

Coding the RTL with address decoding for 4000 registers, with fields having different properties, is a week’s effort by a designer. Developing a reusable randomized verification environment with tests like reset value check and read-write is another 2 weeks at the least. Closure on bugs requires several feedback cycles from verification to update the design or the document. So overall, there is at least a month’s effort, plus maintenance overhead any time the address mapping is modified or a register is updated/added.

This flow is susceptible to errors, as there could be a disconnect between the document, design, verification and software.

So, what do we do? We redefine the process! And this is what I will be talking about, our automated register design and verification (DV) flow which streamlines this process.

AUTOMATED REGISTER DESIGN AND VERIFICATION FLOW

The flow starts with the designer modeling the registers using a high-level register description language. In our case, we use SystemRDL, and then leverage available third-party tools to generate the various downstream components from the RDL file:

· RTL in Verilog/VHDL

· C/C++ code for firmware

· Documentation (different formats)

· High level verification environment code (HVL) in VMM

This is shown below. The RDL file serves as a one-stop point for any register update required following a requirement change.

[Figure: Automated Register DV Flow]

Given that it’s critical to create an efficient object-oriented abstraction layer to model registers and memories in a design under test, we exploit VMM RAL for this purpose. How do we generate the VMM RAL model? It is generated from RALF. Many third-party tools are available to generate RALF from various input formats, and we use one of them to generate RALF from SystemRDL.

Thus, a complete VMM-compliant, randomized, coverage-driven register verification environment can be created by extending the flow such that:

i. Using a third-party tool, the verification component generated from SystemRDL is RALF, Synopsys’ Register Abstraction Layer File.

ii. RALF is passed through RALGEN, a Synopsys utility which converts the RALF information into a complete VMM-based register verification environment. This includes automatic generation of pre-defined tests like reset value checks and bit bash tests for the registers, and a complete functional coverage model, which would otherwise have taken considerable staff-days of effort to write.

The flowchart below elucidates the process.

[Flowchart: automated register verification flow, from SystemRDL through RALF to the VMM RAL environment]

Adopting the automated flow, it took 2 days to write the RDL. The rest of the components were generated from this source. A small amount of manual effort may be required for items like back-door path definition, but it is minimal and a one-time effort. The overall benefits are much more than the number of staff-days saved, and we see this as something which gives us perpetual returns. I am sure a lot of you are already bringing in some amount of automation in your register design and verification setup, and if you aren’t, it’s time you do it!

While we are talking about abstraction and automation, let’s look at another aspect of register verification.

Multiple Interfaces/Views for a register

It is possible to have registers in today’s complex SoC designs which need to be connected to two or more different buses and accessed differently. The register address will be different for the different physical interfaces it is shared between. So, how do we model this?

This can be defined in SystemRDL by using a parent addrmap with the bridge property, which contains sub-addrmaps representing the different views.

For example:

addrmap dma_blk_bridge {
    bridge;                              // top level address map

    reg commoncontrol_reg {
        shared;                          // register will be shared by multiple address maps
        field {
            hw = rw;
            sw = rw;
            reset = 32'h0;
        } f1[32];
    };

    addrmap {                            // Define the map for the AHB side of the bridge
        commoncontrol_reg cmn_ctl_ahb @0x0;    // at address = 0
    } ahb;

    addrmap {                            // Define the map for the AXI side of the bridge
        commoncontrol_reg cmn_ctl_axi @0x40;   // at address = 0x40
    } axi;
};

The RALF equivalent of a multiple-view addrmap is the domain.

This allows a single definition of the shared register while allowing access to it from each domain, where the register address associated with each domain may be different. The following code is the RALF, using domains, for the above RDL.

register commoncontrol_reg {
    shared;
    field f1 {
        bits 32;
        access rw;
        reset 'h0;
    }
}

block dma_blk_bridge {
    domain ahb {
        bytes 4;
        register commoncontrol_reg = cmn_ctl_ahb @'h00;
    }

    domain axi {
        bytes 4;
        register commoncontrol_reg = cmn_ctl_axi @'h40;
    }
}

Each physical interface is a domain in RALF. Only blocks and systems have domains; the registers live in the block. For access to a register from one interface/domain, RAL provides read/write methods which can be called with the domain name as an argument, as shown below.

ral_model.STATUS.write(status, data, "pci");

ral_model.STATUS.read(status, data, "ahb");

This considerably simplifies the verification environment code for the shared register accesses. For more on the same, you can look at: Shared Register Access in RAL through multiple physical interfaces

Unfortunately, in our case the tools we used did not support multiple interfaces, and the automated flow created a RALF with effectively two or more top-level systems re-defining the registers. This can blow up the RALF file size and also the verification environment code.

system dma_blk_bridge {
    bytes 4;

    block ahb (ahb) @0x0 {
        bytes 4;
        register cmn_ctl_ahb @0x0 {
            bytes 4;
            field cmn_ctl_ahb_f1 (cmn_ctl_ahb_f1) @0 {
                bits 32;
                access rw;
                reset 0x0;
            }
        }
    }

    block axi (axi) @0x0 {
        bytes 4;
        register cmn_ctl_axi @0x40 {
            bytes 4;
            field cmn_ctl_axi_f1 (cmn_ctl_axi_f1) @0 {
                bits 32;
                access rw;
                reset 0x0;
            }
        }
    }
}

Thus, as seen above, the tool generates two blocks, ‘ahb’ and ‘axi’, and re-defines the register in each block. For multiple shared registers, the resulting verification code will be much bigger than if domains had been used.

Also, without the domain-associated read/write methods shown above, accessing a shared register from a given domain/interface takes at least a few extra lines of code per register. This makes writing the test scenarios complicated and wordy.

Using ‘domain’ in RALF and VMM RAL makes shared register implementation and access in the verification environment easy. We hope to soon have our automated flow leverage this effectively.

If you are interested in more details about our automation setup and register verification experiences, you might want to look at: http://www10.edacafe.com/link/DVCon-2011-Automated-approach-Register-Design-Verification-complex-SOC/34568/view.html

Posted in Automation, Modeling, Register Abstraction Model with RAL, Tools & 3rd Party interfaces | 4 Comments »

Transactor generator with VMM technology for efficient usage of CPU resources.

Posted by Oded Edelstein on 5th April 2010


Oded Edelstein – Founder and CEO of SolidVer

Background:

Many network designs require an efficient transactor generator to cover DUT functionality.

In a random test we would like to cover all scenarios, but also to spend the CPU mostly on cases which push the design to its limits.

In this VMM example, I will demonstrate three cases, and solutions for better usage of CPU resources.

Cases:

Case A – In network designs, packet size can vary between 40 bytes for small packets and 10 KBytes for large MTU packets. A test which is based on the number of packets (transactions) might be very short or very long depending on the total size of the packets. The long test scenarios can be covered in a separate random test.

Case B – Some network designs forward packets to different channels (queues) with different levels of bandwidth support. Random generation of the channel number does not cover many cases (e.g. filling a certain queue with packets), since the probability that the same channel will be chosen repeatedly, in a system with many channels, is very low.

Case C – In some projects, the transactor generates many packets for all queues while some queues are randomly configured to a low bandwidth. This causes the test to be very long, until all packets have been forwarded. At the beginning of the test, the DUT is very busy: it receives data almost every cycle. But after the high-bandwidth queues have received all their packets, the low-bandwidth queues continue slowly to receive packets until all packets have been forwarded. By then, most of the DUT queues are empty and the DUT is using only a small portion of its performance capacity, which adds no value for coverage.

Solutions:

The following code example shows a simple solution to the above cases. The solution is based on the following techniques:

1. Test length is defined based on the sum of packet sizes, instead of the number of packets (transactions).

2. Add to the random test a basic case where a number of packets are sent to the same channel one after the other. Low-probability cases need to be identified and added as cases inside the random test (cases inside directed tests alone are not good enough for good coverage).

3. No packets are generated in advance for all queues. Packets are randomly generated and driven, on the fly, only to available queues. From my experience, random tests can generate all parameters, and good coverage can be achieved. At the same time, much better coverage can be achieved if idle periods which consume CPU during the test are identified and handled correctly.

Code Example:

//-----------------------------------------------------------------------------
//
//  packet.sv
//
//-----------------------------------------------------------------------------
class packet extends vmm_data;

rand byte payload[];
rand int packet_size;
rand int channel_num;

static int cnt;

constraint c_payload_size { payload.size == packet_size; }

constraint c_pkt_size_dist { packet_size dist { 40          := 20,
                                                [41:200]    := 50,
                                                [201:2000]  := 5,
                                                [2001:9999] := 1,
                                                10000       := 1 }; }

`vmm_data_member_begin(packet)
`vmm_data_member_scalar_array(payload, DO_ALL)
`vmm_data_member_scalar(packet_size, DO_ALL)
`vmm_data_member_scalar(channel_num, DO_ALL)
`vmm_data_member_end(packet)

endclass : packet

//-----------------------------------------------------------------------------
// VMM Macros - Channel and Atomic Generator
//-----------------------------------------------------------------------------
`vmm_channel(packet)
`vmm_atomic_gen(packet, "Packet atomic generator")

//-----------------------------------------------------------------------------
//  End file packet.sv
//-----------------------------------------------------------------------------

//-----------------------------------------------------------------------------
//
//  bfm_master.sv
//
//-----------------------------------------------------------------------------

//
// SUM_PACKETS_SIZE_IN_TEST : The sum of packet sizes, in bytes, that the BFM will drive into the DUT.
// We used the sum of packet sizes to define test length, instead of the number of packets, since packet
// size distribution could randomly vary between 40 bytes (small packets) and 10 KBytes (large MTU packets).
// This could cause some seeds to run very long, with no significant added value for coverage.
// These long scenarios were tested in a separate random test.
//
`define SUM_PACKETS_SIZE_IN_TEST 10000000 // 10MB

//
// In this example the DUT gets a packet with a channel number.
// The DUT holds a separate FIFO for every channel.
//
`define MAX_NUMBER_OF_CHANNELS   16
class bfm_master extends vmm_xactor;

vmm_log log;

// Packet Transaction channels
//
packet_channel    packet_chan;

//
// The DUT sends the BFM a back pressure signal, separately for every channel,
// when that channel's FIFO inside the DUT is full.
// avail_channel_list - Holds a bit for every channel. The BFM can drive packets only on channels
//                      which are not back pressured.
//
bit [(`MAX_NUMBER_OF_CHANNELS-1):0] avail_channel_list;
int done = 0;

extern function new (string instance,
integer stream_id,
packet_channel packet_chan);
extern function int generate_stream_size();
extern function int generate_avail_channel();

extern virtual task main();
extern virtual task drive_packet(packet packet_trans);

endclass: bfm_master

function bfm_master::new(string instance,
integer stream_id,
packet_channel packet_chan);

super.new("BFM MASTER", instance, stream_id);
log = new("BFM MASTER", "BFM MASTER");
if (packet_chan == null) packet_chan = new("BFM MASTER INPUT CHANNEL", instance);
this.packet_chan = packet_chan;

endfunction: new

//-----------------------------------------------------------------------------
// main() - Main task for driving packets.
//-----------------------------------------------------------------------------

task bfm_master::drive_packet(packet packet_trans);
// drive the packet …
endtask: drive_packet

function int bfm_master::generate_avail_channel();
  // ...
endfunction

function int bfm_master::generate_stream_size();
  // ...
endfunction

task bfm_master::main();

//
// The sum of packet sizes that the BFM has driven into the DUT. When this value is
// above SUM_PACKETS_SIZE_IN_TEST the BFM will stop driving packets.
//
int sum_packet_data_sent = 0;

//
// The channel on which the BFM will drive packets. This channel should not be back pressured
// while driving packets.
//
int channel_num;

//
// packet_stream_size
// The number of packets that will be sent one after the other to the same channel.
// The idea behind this variable is to get good coverage for cases where a number
// of packets are sent to the same channel one after the other, to quickly fill the FIFO.
// Otherwise the BFM will statistically generate a different channel every time.
// The probability that the same channel will be generated repeatedly is very low:
// e.g. with 16 channels, the probability that the same channel is generated 5, 10,
// or 20 times in a row is 1 in 16^5, 1 in 16^10 or 1 in 16^20.
//

int packet_stream_size;

// Counter for the number of packets that were driven in the test.
//
int packet_cnt = 0;
int i;
packet packet_trans;
super.main();
while(sum_packet_data_sent < `SUM_PACKETS_SIZE_IN_TEST) begin
// gen random channel from avail_channel_list;
channel_num = generate_avail_channel();
packet_stream_size =  generate_stream_size();

for(i = 0; i < packet_stream_size; i++ ) begin
this.wait_if_stopped_or_empty(this.packet_chan);
if(avail_channel_list[channel_num] == 1) begin
packet_chan.get(packet_trans);
packet_trans.channel_num = channel_num;
drive_packet(packet_trans);
sum_packet_data_sent = sum_packet_data_sent + packet_trans.packet_size;
packet_cnt++;
`vmm_note(log, $psprintf("drive packet = %0d  size = %0d  channel = %0d  stream index = %0d  sum = %0d",
          packet_cnt, packet_trans.packet_size, channel_num, i, sum_packet_data_sent));
end
else begin
break;
end
end // for loop
end// end while loop
done = 1;

endtask: main

//-----------------------------------------------------------------------------
//
//  End file bfm_master.sv
//
//-----------------------------------------------------------------------------

Posted in Automation, Modeling, Optimization/Performance | 1 Comment »

Verification For the Rest of Us

Posted by Andrew Piziali on 29th March 2010

Andrew Piziali, independent consultant
Jim Bondi, DMTS, Texas Instruments

Functional verification engineers—also known as DV engineers—often think quite highly of themselves. Having mastered both hardware and software design, and knowing each new design from top to bottom with an understanding exceeded only by the architects, it is easy to see why they might end up with an inflated ego. Yet responsibility for verification of the design is not theirs alone, and sometimes not theirs at all!

In this next series of blog posts I am going to direct your attention to the role various members of a design team play in the verification process. Each will be co-authored by someone contributing to their design in the role under discussion. It is not uncommon these days for a small design team to lack any dedicated verification engineers.  Hence, the designers become responsible for the functional verification process embedded in, yet operating in parallel to, the design process.  What does that overall process look like?[1]

  1. Specification and Modeling
  2. Hardware/Software Partitioning
  3. Pre-Partitioning Analysis
  4. Partitioning
  5. Post-Partitioning Analysis and Debug
  6. Post-Partitioning Verification
  7. Hardware and Software Implementation
  8. Implementation Verification

Specification and modeling is responsible for exploring nascent design spaces and capturing original intent. The difficult choices of how to partition the design implementation between hardware and software components come next. Then come analysis of each partitioning choice and debugging of these high-level models. Our first opportunity for functional verification follows post-partitioning analysis and debug, where abstract algorithm errors are discovered and eliminated. Hardware and software implementation is self-explanatory, lastly leading to implementation verification, answering the question “Has the design intent been preserved in the implementation?”

This kick-off post in this series addresses the role of the architect in verification. My co-author, Jim Bondi, has been a key architect on numerous design projects at Texas Instruments ranging from embedded military systems to Pentium-class x86 processors to ultra low power DSP platforms for medical applications. The architect, whether a single individual or several, is responsible for specifying a solution to customer product requirements that captures the initial design intent of the solution. The resultant specification is iteratively refined during the first three stages of design.

In addition to authoring the original design intent, the second role of the architect in the verification process is preserving that intent and contributing to its precise conveyance throughout the remainder of the design process.[2] This begins during verification planning, where the scope of the verification problem is quantified and its solution specified. Verification planning itself begins with specification analysis, where the features of the design are identified and quantified. The complexity of most designs requires a top down analysis of the specification—first, because of its size (>20 pages) and second, because behavioral requirements must be distilled. This analysis is performed in a series of brainstorming meetings wherein each of the stakeholders of the design contribute: architect, system engineer, verification engineer, software engineer, hardware designer and project manager.

A brainstorming session is guided by someone familiar with the planning process. The architect describes each design feature and—through Q&A—its attributes are illuminated. These attributes and their associated values—registers, data paths, control logic, opcodes—are initially recorded in an Ishikawa diagram (also known as a “fish bone diagram”) for organizational purposes and then transferred to a coverage model design table as they are refined. Ultimately, each coverage model is implemented using a high level verification language (HVL), as part of the verification environment, and used to measure verification progress.
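For example, the attributes captured for one feature typically end up as coverpoints and crosses in a SystemVerilog covergroup; a minimal sketch (the attribute names are hypothetical):

class decode_feature_cov;
  bit [5:0] opcode;      // attributes identified during brainstorming
  bit [1:0] op_size;
  bit       indirect;

  covergroup decode_cg;
    cp_opcode : coverpoint opcode;
    cp_size   : coverpoint op_size;
    cp_mode   : coverpoint indirect;
    x_op_size : cross cp_opcode, cp_size;   // interesting attribute combinations
  endgroup

  function new();
    decode_cg = new();
  endfunction
endclass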

The seasoned architect knows that, even though modeling is mentioned only in the first design step above, it is most effective not only when started early but also when continued iteratively throughout most of the design process. It is quite true that system modeling should be started early—as soon as possible and ideally before any RTL is written—when modeling can have its biggest impact on the design and offer its biggest return on model investment. In this early stage, modeling can best help tune the nascent architecture to the application, with the biggest possible resultant improvements in system performance and power. When used right, models are developed first and then actually drive the development of RTL in later design steps. This is contrary to the all-too-common tendency to jump prematurely to RTL representations of the design, and then perhaps use modeling mostly thereafter in attempts to help check and improve the RTL. Used in this fashion, the ability of modeling to improve the design is limited. More experienced architects have learned that modeling is best applied “up front” because it is here, before the design is cast in RTL, that up to 75% of the overall possible improvements in system performance and power can be realized. The architect knows that a design process that jumps prematurely to RTL leaves much of this potential performance and power improvement on the table.

The seasoned architect also knows that, even though started early, modeling should be continued iteratively throughout most of the remainder of the design process. They know that, in fact, a set of models is needed to best support the design process. The first is typically an untimed functional model that becomes the design’s “golden” reference model, effectively an executable specification. As the design process continues, other models are derived from it, with, for example, timing added to derive performance models and power estimates added to derive power-aware models. In later stages, after modeling has been used “up front” to tune the architecture, optimal RTL can actually be derived from the models. Wherever verification is applied in the design process, whether before or after RTL appears, the models, as a natural form of executable golden reference, can support, or even drive, the verification process. Thus, in design flows that use modeling best, system modeling begins up front and is continued iteratively throughout most of the overall design process.

Indeed, the architect plays a crucial role in the overall design process and in the functional verification of the design derived from that process. They are heavily involved in all design phases affecting and involving verification, from authoring the initial design intent to ensuring its preservation throughout the rest of the design process.  The seasoned architect leverages a special set of system models to help perform this crucial role. Despite the verification engineer’s well-deserved reputation as a jack-of-all-trades, they cannot verify the design alone and may not even be represented in a small design team.  The architect is the “intent glue” that holds the design together until it is complete!

——————-
[1] ESL Design and Verification, Bailey, Martin and Piziali, Elsevier, 2007
[2] Functional Verification Coverage Measurement and Analysis, Piziali, Springer, 2004

Posted in Modeling, Organization, Verification Planning & Management | Comments Off

Verification in the trenches: Implementing Complex Synchronization Between Components Using VMM1.2

Posted by Ambar Sarkar on 5th February 2010

Dr. Ambar Sarkar, Chief Verification Technologist, Paradigm Works Inc.

Why is it tricky to get transactors and other verification components to work in sync with each other, especially if they come from different projects? It is likely that they worked well within their source projects, but their phases (build, configure, reset, start, shutdown, etc.) were implemented quite differently from those of other components. These differences are usually driven by the inherent protocol requirements or team preferences. For example, consider the verification of an SoC with an AXI host interface and a PCIe root complex. You will likely get your host interface transactor out of reset and execute a configuration sequence before you let your PCIe endpoint transactor send in requests. So you would not want to run the phases of these two transactors in lock step.

While there are countless ways to implement the phases and their sequencing, one can broadly classify a component  as being either explicitly or implicitly driven, depending on how its phases are invoked.

Implicit phasing: In my earlier post, we discussed how one can often easily coordinate the execution of various verification components. Simply put, as long as one is able to distribute the execution of the component between predetermined methods (called phases), the components can execute in lock-step with one another without requiring any additional coding by the verification engineer. This is called implicit phasing. Implicit phasing may suffice in many cases, but the challenge is to agree on the same set of phases and their sequencing. You basically will need a way to define additional  phases and potentially even rearrange their implicit calling sequence.

Explicit phasing: In contrast, explicit phasing requires the environment writer to explicitly call and synchronize the phases of the components. Typically, it takes some work to get such components to play well with one another.  This happens more often for legacy or externally developed components. In such cases, the  developers may not have known about the predetermined phases so they could not have broken down the implementation quite the way the target environment expects. Explicit phasing is often unavoidable in environments with components from multiple sources, since you may need to carefully control and coordinate the phases by hand to accommodate their differing implementation assumptions.

So the challenge we are discussing today is really about making these explicit and implicit phased components get their phases to match and cooperate during their phase transitions.

This is where vmm_timeline helps. Simply put, a vmm_timeline object encapsulates your implicitly phased object and allows it to be called as an explicitly phased object. It lets you define your own phases and the sequence in which you want to execute them. The ability to customize phases is critical, as you may need to define additional phases to fit in with the way the explicitly phased target environment expects its phases to execute.

Here is an example that shows how an implicitly-phased component (my_implicit_comp) is executed within an explicitly-phased my_env. Notice how my_tl (derived from vmm_timeline) is used.

Step a. Create a vmm_timeline object and instantiate the components

// Implicitly phased comp
class my_implicit_comp extends vmm_group;
  `vmm_typename(my_implicit_comp)

  function new(string name = "",
               vmm_object parent = null);
    super.new("my_implicit_comp", name, null);
    super.set_parent_object(parent);
  endfunction

  virtual function void build_ph();
    super.build_ph();
  endfunction

endclass

// Create a vmm_timeline class to wrap this implicitly phased component

class my_tl extends vmm_timeline;
  `vmm_typename(my_tl)
  my_implicit_comp comp1;

  function new(string name = "",
               vmm_object parent = null);
    super.new("my_tl", name, parent);
  endfunction

  virtual function void build_ph();
    super.build_ph();

    // Create an instance
    this.comp1 = my_implicit_comp::create_instance(this, "comp1");
  endfunction

endclass

Step b. Instantiate it in the top-level vmm_env and call the implicit phases from the explicit methods

// Instantiate the vmm_timeline object in the top environment and call its phases explicitly.
class my_env extends vmm_env;
  `vmm_typename(my_env)
  my_tl tl;

  function new();
    super.new("env");
  endfunction

  virtual function void build();
    super.build();
    this.tl = new("tl", this);
  endfunction

  virtual task start();
    super.start();
    tl.run_phase("start");
    `vmm_note(log, "Started...");
  endtask

  virtual task wait_for_end();
    super.wait_for_end();
    fork
      // run_test phase corresponds best here
      tl.run_phase("run_test");
      begin
        `vmm_note(log, "Running...");
        #100;
      end
    join
  endtask

  virtual task stop();
    super.stop();

    // shutdown phase corresponds best here
    tl.run_phase("shutdown");
    `vmm_note(log, "Stopped...");
  endtask

endclass

Note that the converse is also true. Explicitly phased components can be incorporated into implicitly driven environments. You need to encapsulate them in a parent class derived from the vmm_subenv class and define how each implicit phase of the parent class can be mapped to the proper explicit phase(s) of the original component. Then you can simply instantiate this parent class in the target environment. For further details, search the string “Mixed Phasing” in the VMM 1.2 User Guide.

In summary, vmm_timeline helps you manage different phasing and sequencing needs of verification components by making it easier for explicitly and implicitly phased components to interact. No wonder that under the hood of VMM1.2, vmm_timeline is used to implement advanced features such as multi-test concatenation.

This article is the 4th in the Verification in the trenches series. Hope you found this article useful. If you would like to hear about any other related topic, please comment or drop me a line at ambar.sarkar@paradigm-works.com. Also, if you are starting out fresh, please check out the free VMM1.2 environment generator.

Posted in Communication, Modeling, Phasing, Reuse | Comments Off

Using Explicitly-Phased Components in an Implicitly-Phased Testbench

Posted by JL Gray on 11th December 2009

In my last post, I described the new VMM 1.2 implicit phasing capabilities.  I also recommended developing any new code based off of implicit phasing.  Obviously, though, companies that have been using the VMM for quite some time will have developed all of their existing testbench components using explicit phasing.  It is relatively straightforward (and in some sense almost trivial) to use an explicitly phased component in an implicitly phased testbench.

Remember that the whole point of explicit phasing is that users cycle components through the desired phases by manually calling functions and tasks within the component itself. vmm_env contains the following methods:

  • gen_cfg
  • build
  • reset
  • config_dut
  • start
  • wait_for_end
  • stop
  • cleanup
  • report

vmm_subenv contains the following relevant methods:

  • new
  • configure
  • start
  • stop
  • cleanup
  • report

In an explicitly-phased environment, subenv methods are called manually by integrators, usually from the equivalent method in vmm_env. There are two approaches for instantiating a vmm_subenv-based component in an implicitly-phased testbench. The default approach is to simply allow the implicit phasing mechanism to call these explicit phases for you. Explicitly phased components are identified by the implicit phasing mechanism, and methods are called using a standard (and not entirely unexpected) mapping:

Implicit Phase    Explicit Phase Called
build_ph          vmm_subenv::new [1]
configure_ph      vmm_subenv::configure
start_ph          vmm_subenv::start
stop_ph           vmm_subenv::stop
cleanup_ph        vmm_subenv::cleanup
report_ph         vmm_subenv::report

[1] Users must call vmm_subenv::new manually.
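Relying on the default mapping therefore needs nothing beyond instantiation; a minimal sketch (the same hypothetical testbench_top and bus_master_subenv used in the full example below, with the overrides omitted):

class testbench_top extends vmm_group;
  bus_master_subenv bus_master;

  function void build_ph();
    // No explicit configure/start/stop calls needed here: the implicit
    // phaser applies the mapping above to this vmm_subenv-based component.
    bus_master = bus_master_subenv::create_instance(this, "bus_master");
  endfunction: build_ph
endclass: testbench_top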

Now, you might want to phase your vmm_subenv in a non-standard way. If that’s the case, the first thing you’ll need to do is disable the automatic phasing. Here’s how. First, instantiate a null phase:

vmm_null_phase_def null_ph = new();

Next, override the phases you don’t want to start automatically. For example:

my_group.override_phase("start", null_ph);
my_group.override_phase("stop", null_ph);

Finally, call the explicit phases from the parent object’s implicit phases.  A complete example is shown below.

class testbench_top extends vmm_group;
  bus_master_subenv bus_master;
  vmm_null_phase_def null_ph = new();

  function void build_ph();
    bus_master = bus_master_subenv::create_instance(this, "bus_master");
    bus_master.override_phase("start", null_ph);
    bus_master.override_phase("stop", null_ph);
  endfunction: build_ph

  task reset_ph();
    bus_master.start();
    // wait 1000 clocks...
    bus_master.stop();
  endtask: reset_ph

endclass: testbench_top

Posted in Communication, Modeling, Phasing, Reuse, VMM | Comments Off

Implicit and Explicit Phasing

Posted by JL Gray on 20th November 2009

JL Gray, Verilab, Inc.

In my last post I discussed the difference between phases and threads in the VMM. Phases and threads are conventions used to simplify testbench development. The classic VMM approach has been to use the vmm_env and vmm_subenv classes to manage phases of testbench components, leaving vmm_xactor to deal with thread management. Phases were managed “explicitly”, that is to say, users had complete control over when to step the environment and its subcomponents through individual phases via function and task calls.

VMM 1.2 introduces the concept of “implicit” phasing. Both implicit and explicit phasing can be used to accomplish many of the same goals, albeit in different ways. In an implicitly-phased testbench, functions and tasks representing phases are called automatically at the appropriate times. A global controller (vmm_simulation) works in conjunction with specially configured schedulers (vmm_timeline) to walk all testbench components through the relevant phases. vmm_simulation acts like a conductor, keeping all of the various testbench components in sync during the pre-test, test, and post-test portions of a typical simulation.

A natural question to ask is, “Are there any benefits to implicit phasing over and above the explicit phasing techniques I’m using today?” When testbench components are walked through phases automatically, there are a few interesting possibilities that arise. For starters, it becomes possible to add and remove phases. Let’s imagine a simulation that has the following phases [1]:

  • reset_ph
  • training_ph
  • config_dut_ph
  • start_ph
  • shutdown_ph

What happens if I need to use a piece of verification IP that has implemented a training phase that is not applicable in my test environment? In an explicitly phased environment, I’d need to have control of the code that called the sub-component’s training phase in order to ensure the task in question was not called. In an implicitly-phased environment, I can simply delete the phase from being executed on that component:

my_enet_component.override_phase("training", vmm_null_phase_def);

In addition to adding and removing phases, you can also alias one phase to another so that the two phases overlap. For example, let’s say I need the reset phase of one portion of my design to overlap with the training phase of another. I could reconfigure the phasing of the components so they occurred in parallel instead of serially. Here are the steps:

  • Disable the reset phase for group 1

    group1.override_phase("reset", vmm_null_phase_def);

  • Create a new user-defined phase for group 1 called “reset_during_training”

    class reset_during_training_phase_def extends
    vmm_fork_task_phase_def #(group1);
    `vmm_typename(reset_during_training_phase_def)

    virtual task do_task_phase(group1 obj);
    if(obj.is_unit_enabled())
    obj.reset_ph();
    endtask:do_task_phase

    endclass:reset_during_training_phase_def

  • Alias the new user-defined phase to the “training” phase

    vmm_timeline tl = vmm_simulation::get_top_timeline();
    tl.add_phase("training", reset_during_training_phase_def);

Another benefit of implicit phasing is that vmm_xactor, which normally manages threads, is also phased. Because of this, your transactors are now aware of which stage of the simulation we are in at any given moment, and they can change their behavior based on this knowledge. A VIP developer could, for example, configure a transactor to automatically start during the reset phase, stop during training, and continue during the start phase. Users could easily reconfigure the transactor to start and stop at other times as needed.

One thing to note is that implicitly phased components can be phased explicitly if desired by the user. This means that implicit phasing provides a superset of the explicit functionality provided by vmm_env and vmm_subenv.

The addition of implicit phasing means developers now have a choice to make when building a verification environment. Should you build it based on implicitly or explicitly-phased components? Luckily, it is possible to use implicitly-phased components in an explicitly-phased environment, and vice versa. My recommendation is for users to create all new testbench code using implicit phasing. If you are used to explicit phasing, the new development style can seem perplexing. However, as I mentioned, implicit phasing is effectively a superset of explicit phasing in its capabilities. Adopting the new methodology across your entire team will ensure the additional capabilities discussed above are available in your testbench in the future without users having to make changes down the road.

[1] The listed phases are a subset of the actual pre-defined implicit phases available.

Posted in Modeling, Phasing | Comments Off

Phases vs. Threads

Posted by JL Gray on 4th November 2009

JL Gray, Consultant, Verilab, Austin, Texas, and Author of Cool Verification

Building a testbench for personal use is easy. Building a testbench that can be used by others is much more difficult. To make it easier for verification IP written by different people to interoperate, modern verification methodologies support the concept of standardized “phases” during a simulation run. Phases are a way to help verification engineers communicate in a standard language about what is meant to be taking place at any given time during a simulation.  For example, an explicitly phased VMM testbench built using vmm_env contains the following phases of execution:

· gen_cfg

· build

· reset

· config_dut

· start

· wait_for_end

· stop

· cleanup

· report

Ideally, each of these phases serves a clear purpose. If I want to reset the DUT, a good way to do it is to instrument the reset phase with the appropriate reset logic.  Similarly, the bulk of the simulation activity will likely occur during the wait_for_end phase.  The VMM now has support for implicit phasing. In an implicitly-phased system, components in the verification environment are stepped through each phase automatically by a global controller called vmm_simulation (and its associated “timelines”).  I will discuss timelines in a separate post.  In both the explicit and implicitly phased cases, the phases serve as guides through the simulation. However, most of the real work of the testbench will be accomplished by threads spawned off from these phases.

It is easy to spawn threads using a simple fork/join, but the VMM provides tools to make managing threads easier. In the VMM, the vmm_xactor base class provides support for managing threads and is at the same time phase-aware.  How does it do this?  For starters, vmm_xactor is now implicitly phased by the top-level vmm_simulation controller.  However, users maintain full control over the ability to start and stop the transactor, just as they did in earlier versions of the VMM.  That means that a user could start a transactor during any VMM phase, and stop the transactor during the same or a later phase. The transactor could then query the current phase and change its behavior depending on the state of the simulation. Users can also modify a transactor’s behavior via callbacks.
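As a quick sketch of what this looks like in practice (hypothetical names: pkt and pkt_channel are assumed to be a vmm_data transaction and its `vmm_channel-generated channel, as in the earlier packet example), a transactor’s main() thread typically just blocks whenever the transactor is stopped:

class pkt_driver extends vmm_xactor;
  pkt_channel in_chan;

  function new(string inst, pkt_channel in_chan);
    super.new("pkt_driver", inst);
    this.in_chan = in_chan;
  endfunction

  virtual task main();
    pkt tr;
    super.main();
    forever begin
      // makes no forward progress while the transactor is stopped
      // or the input channel is empty
      this.wait_if_stopped_or_empty(this.in_chan);
      this.in_chan.get(tr);
      // ... drive tr onto the interface, query the current phase if needed ...
    end
  endtask
endclass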

Here is a diagram demonstrating the interaction between threads and an example subset of the new VMM implicit phases.

[Diagram: threads starting and stopping across a subset of the VMM implicit phases]

The diagram demonstrates activities that take place during specific phases of the testbench. It also shows that threads may start in one phase (such as the host generator starting in the reset phase) and stop in another (in this case, the shutdown phase).  The astute reader will note that I didn’t really need standardized phases at all to handle this. I could have done all of the activities described above in the “run” phase. In fact, that’s what many people do, even today when other phases are available.  The issue, as I stated at the beginning of the article, is that by standardizing when we do specific types of activities, our verification IP will be easier to reuse in other compatible environments.

Posted in Communication, Modeling, Phasing | 1 Comment »

VMM data macros are cool, but how do I customize the constructor?

Posted by Shankar Hemmady on 27th August 2009

Srinivasan Venkataramanan, CVC

Pawan Bellamkonda, Brocade

During our recent VMM training at CVC, we learned about the VMM data member macros, and our engineers liked them. Some of our teams at Brocade have started adopting them in their projects right away! We see that we can avoid much of the lengthy code and increase readability with these new macros. We will surely avoid making silly mistakes which might be hard to debug later.

However, as with any built-in automation, there are always scenarios wherein user-level customization of some or all of the methods is required. VMM provides this flexibility for overriding the default behavior of the virtual methods of the vmm_data class. In one of our blocks we needed to tweak the constructor of the transaction. One question that perplexed us was:

“I have built a transaction class extending from vmm_data. We have used the short hand macro `vmm_data_member…..  to get all the functions automatically. But while creating an object of this transaction, we want to pass a configuration class object as argument in the new function. How should we override the new() function alone when we use the short hand macros? When we tried using do_new() (like overriding other functions), it did not work.”

As we explored a bit, we found another macro specifically meant for this:

`vmm_data_new(<class_name>)

This macro should be used before the beginning of data-member macros. This lets the succeeding macros do all the work except the “new” function implementation.

class s2p_xactn extends vmm_data;
  rand bit [7:0] pkt_len, pkt_pld;

  `vmm_data_new(s2p_xactn)
  function new(int my_own_arg = 2);
    `vmm_note(log, $psprintf("my val is %0d", my_own_arg));
  endfunction : new

  `vmm_data_member_begin(s2p_xactn)
    `vmm_data_member_scalar(pkt_len, DO_ALL)
    `vmm_data_member_scalar(pkt_pld, DO_ALL)
  `vmm_data_member_end(s2p_xactn)
endclass : s2p_xactn

As with traditional martial arts, functional verification too has some slightly different styles/requirements that makes each project interesting and unique. To its credit, we feel that VMM is as proven as traditional martial arts: it can be tailored to different requirements while providing a standardized means of combat.

Posted in Automation, Coding Style, Customization, Modeling | 2 Comments »

class factory

Posted by Wei-Hua Han on 26th August 2009

Weihua Han, CAE, Synopsys

As a well-known object-oriented technique, the class factory pattern has actually been applied in VMM since its inception. For instance, in the VMM atomic and scenario generators, by assigning different blueprints to the randomized_obj and scenario_set[] properties, these generators can generate transactions with user-specified patterns. Using the class factory pattern, users create an instance with a pre-defined method (such as allocate() or copy()) instead of the constructor. This pre-defined method creates an instance from the factory, not simply an instance of the declared type of the variable being assigned.

VMM 1.2 now simplifies the application of the class factory pattern within the whole verification environment so that users can easily replace any kind of object, transaction, scenario or transactor with a similar object. Users can easily follow the steps below to apply the class factory pattern within the verification environment.

1. Define the "new", "allocate" and "copy" methods for a class and create the factory for the class.

class vehicle_c extends vmm_object;

  // defines the new function; each argument should have default values
  function new(string name = "", vmm_object parent = null);
    super.new(parent, name);
  endfunction

  // defines allocate and copy methods
  virtual function vehicle_c allocate();
    vehicle_c it;
    it = new(this.get_object_name(), get_parent_object());
    allocate = it;
  endfunction

  virtual function vehicle_c copy();
    vehicle_c it;
    it = new this;
    copy = it;
  endfunction

  // these two macros will define the necessary methods for the class factory
  // and create the factory for the class
  `vmm_typename(vehicle_c)
  `vmm_class_factory(vehicle_c)

endclass

`vmm_typename, `vmm_class_factory will implement the necessary methods to support the class factory pattern, like get_typename(), create_instance(), override_with_new(), override_with_copy(), etc.

Users can also use `vmm_data_member_begin and `vmm_data_member_end to implement the “new”, “copy”, “allocate” methods conveniently.

2. Create an instance using the pre-defined create_instance() method

To use the class factory, the class instance should be created with the pre-defined create_instance() method instead of the constructor. For example:

class driver_c extends vmm_object;

  vehicle_c myvehicle;

  function new(string name = "", vmm_object parent = null);
    super.new(parent, name);
  endfunction

  task drive();
    // create an instance from the create_instance method
    myvehicle = vehicle_c::create_instance(this, "myvehicle");
    $display("%s is driving %s(%s)", this.get_object_name(),
                                     myvehicle.get_object_name(),
                                     myvehicle.get_typename());
  endtask

endclass

program p1;

  driver_c Tom = new("Tom", null);

  initial begin
    Tom.drive();
  end

endprogram

For this example, the output is:

Tom is driving myvehicle(class $unit::vehicle_c)

3. Define a new class

Let’s now define the following new class which is derived from the original class vehicle_c:

class sedan_c extends vehicle_c;
  `vmm_typename(sedan_c);

  function new(string name="", vmm_object parent=null);
    super.new(name,parent);
  endfunction

  virtual function vehicle_c allocate();
    sedan_c it;
    it = new(this.get_object_name(), get_parent_object());
    allocate = it;
  endfunction

  virtual function vehicle_c copy();
    sedan_c it;
    it = new this;
    copy = it;
  endfunction

  `vmm_class_factory(sedan_c);
endclass

And we would like to create the myvehicle instance from this new class without modifying the driver_c class.

4. override the original instance or type with the new class

VMM1.2 provides two methods for users to override the original instances or type.

  • override_with_new(string name, new_class factory, vmm_log log, string fname="", int lineno=0)

With this method, when create_instance() is called, a new instance of new_class will be created through factory.allocate() and returned.

  • override_with_copy(string name, new_class factory, vmm_log log, string fname="", int lineno=0)

With this method, when create_instance() is called, a new instance of new_class will be created through factory.copy() and returned.

For both methods, the first argument is the instance name, as specified in the create_instance() method, that users want to override with the type of new_class. Users can use the powerful name-matching mechanism defined in VMM to specify whether the override applies to one dedicated instance or to all instances of a class in the whole verification environment.

The code below will override all vehicle_c instances with sedan_c type in the environment:

program p1;
  driver_c Tom = new("Tom", null);
  vmm_log log;

  initial begin
    // override all vehicle_c instances with type of sedan_c
    vehicle_c::override_with_new("@%*", sedan_c::this_type, log);
    Tom.drive();
  end
endprogram

And the output of the above code is:

Tom is driving myvehicle(class $unit::sedan_c)

If users only want to override one dedicated instance with a copy of another instance, they can call override_with_copy as follows:

vehicle_c::override_with_copy("@%Tom:myvehicle", another_sedan_c_instance, log);

As the above example shows, with the class-factory short-hand macros provided in VMM 1.2, users can easily apply the class factory pattern to replace transactors, transactions and other verification components without modifying the testbench code. I find this very useful for increasing the reusability of verification components.

Posted in Configuration, Modeling, SystemVerilog, Tutorial, VMM, VMM infrastructure | 1 Comment »

Protocol Layering Using Transactors

Posted by Janick Bergeron on 9th June 2009

Janick Bergeron
Synopsys Fellow

Bus protocols, such as AHB, are ubiquitous and often used in examples because they are simple to use: some control algorithm decides which address to read or write and what value to expect or to write. Pretty simple.

But data protocols can be a lot more complex because they can often be layered arbitrarily. For example, an ethernet frame may contain a segment of an IP frame that contains a TCP packet which carries an FTP frame. Some ethernet frames in that same stream may contain HDLC-encapsulated ATM cells carrying encrypted PPP packets.

How would one generate stimulus for these protocol layers?

One way would be to generate a hierarchy of protocol descriptors representing the layering of the protocol. For example, for an ethernet frame carrying an IP frame, you could do:

class eth_frame extends vmm_data;
rand bit [47:0] da;
rand bit [47:0] sa;
rand bit [15:0] len_typ;
rand ip_frame payload;
rand bit [31:0] fcs;

endclass

class ip_frame extends vmm_data;
eth_frame transport;
rand bit [3:0] version;
rand bit [3:0] IHL;

rand bit [7:0] data;
endclass

That works if you have exactly one IP frame per ethernet frame. But what if your IP frame does not fit into the ethernet frame and it needs to be segmented? This approach works when you have a one-to-one layering granularity, but not when you have to deal with one-to-many (i.e. segmentation), many-to-one (i.e. reassembly, concatenation) or plesio-synchronous (e.g. justification) payloads.

This approach also limits the reusability of the protocol transactions: the ethernet frame above can only carry an IP frame. How could it carry other protocols? or random bytes? How could the IP frame above be transported by another protocol?

And let’s not even start to think about error injection…

One solution is to use transactors to perform the encapsulation. The encapsulator would have an input channel for the higher layer protocol and an output channel for the lower layer protocol.

class ip_on_ethernet extends vmm_xactor;
ip_frame_channel in_chan;
eth_frame_channel out_chan;

endclass

The protocol transactions are generic and may contain generic references to their payload or transport layers.

class eth_frame extends vmm_data;
vmm_data transport[$];
vmm_data payload[$];

rand bit [47:0] da;
rand bit [47:0] sa;
rand bit [15:0] len_typ;
rand bit [  7:0] data[];
rand bit [31:0] fcs;

endclass

class ip_frame extends vmm_data;

vmm_data transport[$];
vmm_data payload[$];

rand bit [3:0] version;
rand bit [3:0] IHL;

rand bit [7:0] data;
endclass

The transactor's main() task simply waits for higher-layer protocol transactions, packs them into a byte stream, then places the byte stream into the payload portion of new instances of the lower-layer protocol.

virtual task main();
super.main();

forever begin
bit [7:0] bytes[];
ip_frame ip;
eth_frame eth;

this.wait_if_stopped_or_empty(this.in_chan);
this.in_chan.activate(ip);

// Pre-encapsulation callbacks (for delay & error injection)…

this.in_chan.start();
ip.byte_pack(bytes, 0);
if (bytes.size() > 1500) begin

`vmm_error(log, "IP packet is too large for Ethernet frame");
continue;
end

eth = new(); // Should really use a factory here

eth.da = …;
eth.sa = …;
eth.len_typ = 'h0800;  // Indicate IP payload

eth.data = bytes;
eth.fcs = 32'h0000_0000;

ip.transport.push_back(eth);
eth.payload.push_back(ip);

// Pre-tx callbacks (for delay and ethernet-level error injection)…

this.out_chan.put(eth);
eth.notify.wait_for(vmm_data::ENDED);

this.in_chan.complete();

// Post-encapsulation callbacks (for functional coverage)…

this.in_chan.remove();
end
endtask

When setting the header fields in the lower-layer protocol, you can use values from the higher-layer protocol (like setting the len_typ field to 0x0800 above, indicating an IP payload), you can use values configured in the encapsulator (e.g. a routing table), or the values can be randomly generated with appropriate constraints:

if (!route.exists(ip.da)) begin
bit [47:0] da = {$urandom, $urandom};  // $urandom is only 32-bit

da[41:40] = 2'b00; // Unicast, global address
route[ip.da] = da;
end
eth.da = route[ip.da];

The protocol layers observed by your DUT are then defined by the combination and order of these encapsulation transactors.

vmm_scheduler instances may also be used at various points in the layering to combine multiple streams (maybe carrying different protocol stacks and layers) into a single stream.
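To make the composition concrete, here is a rough sketch of how an environment might chain these transactors with channels. This is not code from the post: the ip_frame_atomic_gen and eth_frame_driver classes, and all constructor arguments other than the standard vmm_channel (name, instance) pair, are assumptions for illustration only.

class layered_env extends vmm_env;
  ip_frame_channel  ip_chan;   // from `vmm_channel(ip_frame)
  eth_frame_channel eth_chan;  // from `vmm_channel(eth_frame)

  ip_frame_atomic_gen ip_gen;     // hypothetical, from `vmm_atomic_gen(ip_frame, ...)
  ip_on_ethernet      ip_on_eth;  // the encapsulator shown above
  eth_frame_driver    eth_drv;    // hypothetical lower-layer driver

  virtual function void build();
    super.build();
    ip_chan  = new("IP chan", "env");
    eth_chan = new("ETH chan", "env");

    // generator -> encapsulator -> driver, linked by the channels;
    // inserting another encapsulator in the chain adds another protocol layer
    ip_gen = new("ip_gen");
    ip_gen.out_chan = ip_chan;

    ip_on_eth = new("ip_on_eth", "env");   // constructor assumed
    ip_on_eth.in_chan  = ip_chan;
    ip_on_eth.out_chan = eth_chan;

    eth_drv = new("eth_drv", eth_chan);    // constructor assumed
  endfunction
endclass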

Posted in Modeling, Modeling Transactions, Phasing, Structural Components, SystemVerilog, Tutorial | 3 Comments »
