The ESL Edge


Are you Positive about that Verification Approach?

There have recently been several published articles relating to System Level Virtual Prototypes (SLVPs) and the use of software to verify them. For example, Synopsys’ Achim Nohl and Frank Schirrmeister penned an article titled Software Driven Verification. In it they say “…more and more designers not only use embedded software to define functionality, but also use it to verify the surrounding hardware.” They continue: “Such a shift is quite fundamental and has the potential to move the balance from today’s verification, which is primarily done using SystemVerilog, to verification using embedded software on the processors.” The paper goes on to show the various roles that the virtual prototype can play in the verification process. It is a well-written paper, but it leaves out an enormous issue that must be addressed, one that I have written about in the past but clearly need to come back to.

When we think about the way in which verification is performed today, we start with individual blocks written in RTL. We create testbenches for these and attempt to get 100% coverage of all of the functionality that we expect that block to perform. We may make some assumptions about how the block is to be used, but in many cases the developer does not really know what they are. An extreme case of this is the development of a piece of IP intended to be sold to other companies. Here the way in which the block is to be used is totally unknown, except that its usage should conform to a set of documents provided along with the IP block or by an industry standard. For a general-purpose interface block this may include many thousands of configuration register bits, different ways in which a bus can communicate with it, and much more. One way to think about it is that there are few constraints on the inputs of the block, and this means that all possible functionalities need to be verified.
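The scale of that “all possible functionalities” obligation can be made concrete with a little arithmetic. The numbers below are purely illustrative (not taken from any particular IP block), and the sketch is in Python rather than a verification language, but the conclusion holds for any block with thousands of unconstrained configuration bits:

```python
# Rough arithmetic on the configuration space of an unconstrained IP block.
# The register count and simulation rate are illustrative assumptions.

config_bits = 2048              # "many thousands of configuration register bits"
configurations = 2 ** config_bits

# Even at a billion configurations simulated per second, exhaustive
# coverage is out of reach: compare against the age of the universe.
sims_per_second = 1e9
universe_age_s = 4.3e17         # roughly 13.8 billion years, in seconds

reachable_exhaustively = configurations <= sims_per_second * universe_age_s
print(reachable_exhaustively)   # → False
```

This is why block-level verification leans on functional coverage models and constrained-random stimulus rather than exhaustive enumeration.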

Now, as that block is integrated into a sub-system, the ways in which it can be used become more constrained: certain aspects of it become fixed, and that makes certain pieces of functionality in the block unreachable. Thus, for one particular instantiation of that block, some of the unit testing that was done was unnecessary. Had we known about the environment it was to be used in, we could have saved that effort. Unfortunately, if that block is used in several locations in the design, each instantiation may use a different subset of its functionality, and thus we have to ensure that the full union of that functionality is covered. Moreover, if we hope to re-use that block in a future design, it is probably worth verifying the whole thing, because making assumptions about the usage environment can lead to disasters such as the Ariane 5 rocket failure.
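The “union of functionality” point is easy to state precisely as a set computation. Here is a minimal sketch; the block features and instance names are invented for illustration:

```python
# Per-instantiation usage is a subset of the block's full feature set;
# the verification obligation for a shared block is the union across
# all instantiations. Feature and instance names are hypothetical.

full_features = {"dma", "irq", "burst", "byte_access", "low_power", "ecc"}

usage_by_instance = {
    "uart_bridge": {"byte_access", "irq"},
    "video_in":    {"dma", "burst", "irq"},
    "flash_ctrl":  {"dma", "byte_access", "ecc"},
}

# Features that must be verified for this design: the union across instances.
required = set().union(*usage_by_instance.values())

# Features that unit testing could have skipped -- for this design only.
unreachable = full_features - required

print(sorted(required))     # → ['burst', 'byte_access', 'dma', 'ecc', 'irq']
print(sorted(unreachable))  # → ['low_power']
```

Note that the saving only materializes if no instantiation, present or future, ever touches the skipped features, which is exactly the assumption that is dangerous to make.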

Now let’s move up to larger systems, those containing a processor. Several years back there was a debate about whether a microprocessor model should be included in the design when performing verification. The argument was that if you left the processor out, the simulation would run faster and you could pump in whatever bus cycles you wanted to fully verify the design. This could be fed by constrained-random pattern generation and it would be easier to get to verification closure. The other side argued that the use of an ISS model does not slow down verification in any perceptible way, and that you can write tests in software much more quickly, or even use parts of the actual software, such as drivers. This means that you start to see real-life traffic rather than make-believe bus operations. So which is right? Actually they are both right, and this brings us back to the major point. When you execute actual software, you have constrained the design in that it only needs to perform the exact functionality necessary to execute that software. If you change the software, you may change the way in which it uses the hardware, and you may now execute cases that you did not execute before. There have been many cases where only certain versions of software will run on specified versions of hardware. This is a result of having done the system verification using only software. Simply put, we are under-verifying the hardware.
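The gap between the two styles of stimulus can be sketched in a few lines. This is a toy model in Python, not a real testbench, and the register names and driver sequence are invented; the point is only that a fixed piece of software exercises a small, repeatable subset of what processor-less bus stimulus can reach:

```python
import random

# A toy register map for a peripheral; the names are hypothetical.
REGISTERS = ["CTRL", "STATUS", "DATA", "DMA_ADDR", "DMA_LEN", "TEST_MODE"]

def driver_traffic():
    """Bus accesses a fixed software driver would issue: a constrained,
    repeatable subset of what the hardware can accept."""
    return [("write", "CTRL"), ("read", "STATUS"), ("write", "DATA")]

def constrained_random_traffic(n, seed=0):
    """Processor-less stimulus: any register, either direction."""
    rng = random.Random(seed)
    return [(rng.choice(["read", "write"]), rng.choice(REGISTERS))
            for _ in range(n)]

sw_regs = {reg for _, reg in driver_traffic()}
rand_regs = {reg for _, reg in constrained_random_traffic(100)}

# Software-driven verification only touches what the driver touches...
print(sorted(sw_regs))
# ...while random bus cycles reach registers the software never uses
# (under-verification) and may also drive sequences no processor could
# produce (over-verification).
print(sorted(rand_regs - sw_regs))
```

Change the driver, and `sw_regs` changes with it, which is exactly why a new software release can expose hardware that was never exercised.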

On the other hand, if we take the processor out of the equation and drive bus cycles, we may be verifying many conditions that could never happen in real life, because the processor is incapable of driving that sequence of operations with that timing. So this is a situation where we are over-verifying the hardware. What, then, is the happy middle ground? The industry has not come up with an answer to that yet.

SLVPs are a game changer in many ways. They can take what used to be a bottom-up verification process and make it a top-down verification process. We can start by verifying that the abstract model of the hardware is capable of running the software, and thus ensure that the overall specification is correct before we even begin implementation. That is something we have never been able to do before. We can substitute abstract blocks within the SLVP with implementation blocks to ensure those blocks match the requirements as defined by the virtual model — but we can never stop doing block-level verification. If we do not relax the constraints at each step in the verification process, then we are likely to have software updates suddenly expose hardware problems. Of course, if you never intend to modify the software once the product has shipped, then you can avoid block-level verification, since the system has already been sufficiently verified by running that software; but in most cases that is not true.
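The substitution step above amounts to checking that an implementation block, driven with the same stimulus, produces the same observable results as its abstract model. A minimal sketch of that idea, using toy Python stand-ins rather than real TLM models or RTL cosimulation:

```python
# Sketch of checking an implementation block against its abstract SLVP
# model: drive both with identical stimulus and compare results.
# Both "models" here are toy stand-ins for illustration only.

def abstract_adder(a, b):
    """Untimed, ideal 16-bit behavior as captured in the virtual prototype."""
    return (a + b) & 0xFFFF

def impl_adder(a, b):
    """Stand-in for the implementation block (e.g. RTL via cosimulation)."""
    return (a + b) % 0x10000

def check_equivalence(stimulus):
    """Return every stimulus pair where the two models disagree."""
    return [(a, b) for a, b in stimulus
            if abstract_adder(a, b) != impl_adder(a, b)]

stimulus = [(0, 0), (1, 2), (0xFFFF, 1), (0x8000, 0x8000)]
print(check_equivalence(stimulus))  # → [] when the implementation matches
```

The catch, as argued above, is that such a check is only as strong as the stimulus: matching under one software workload says nothing about cases that workload never drives.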

The terms I use, which may not be ideal, are these: when we verify that a system actually performs a task as defined by the specification, I call that an act of positive verification. So running actual software is positive verification. When we perform verification that attempts to ensure that all bugs are removed from a block or system, I call that negative verification. In real life, positive and negative verification have to be balanced, and that balance point depends on the type and usage of the system. Do you have better terms for these? If so, please let me know.

Brian Bailey – keeping you covered

2 Responses to “Are you Positive about that Verification Approach?”

  1. Mike Bradley Says:

    As far as the terms positive/negative verification go, I use a different nomenclature:

    “Validation” is to test that the device/system conforms to the specification.

    “Verification” is to test that there are no bugs, or that the system/block responds in an acceptable manner when used in a strange way.

    I would also add that SLVPs are very useful for testing system performance: for example, when the perfect storm of inputs occurs, maxing out peripheral usage, interrupt generation, etc. This cannot be done completely with just a bus functional model, since the reaction time and load of the processor are not taken into account.

  2. admin Says:

    Hi Mike. If we are considering the top level, then I agree with you that the terms validation and verification are the perfect fit. When going through the integration stages, we have something in between. You could say that it is validation of the partial specifications, but if the block is to be used in multiple places or in multiple designs, then you may need to do more verification than would actually be necessary for validation in any one instance.
    The second reason I introduced the terms is that I want to make people realize that there is a difference between the two verification approaches and that the right balance has to be found. Similarly, a fully validated design is not sufficient; it must be both validated and verified to some extent in order to be robust.

© 2017 The ESL Edge | Entries (RSS) and Comments (RSS)