The ESL Edge


To the Virtual Prototype and Beyond

Ah, the joys of summer vacations and scheduling around them. I was disappointed that Synopsys was scheduled to issue a press release while I was on vacation – especially since it covered one of my favorite topics – virtual prototypes. To make matters worse, by the time I got back, their key guy was about to start his vacation – but Mark Serughetti agreed to spend some time talking to me, and we had a great chat.

Now, given that everyone else will have already regurgitated the release and the plentiful other pieces of information that Synopsys normally supplies with its releases, I will not bore you with that, except to set some context. Synopsys recently bought several companies, including CoWare and VaST, and a few years before that, Virtio. Virtualizer brings those technologies together into a product that focuses on analysis, debug, and flow integration. All models developed for those prior products are fully compatible with Virtualizer, but going forward the preferred modeling language is SystemC TLM, although any language can be used under the hood for even more performance.


They have identified two primary user groups for the product – those who create models and platforms and those who use them. Different capabilities are necessary for the two groups. They supply a library of models to go along with the product, plus some reference designs that can help people get a prototype up and running more quickly than starting from scratch.

When a virtual prototype is developed for SW development, most people today attempt to use it in the same way they would use a development board: they attach a debugger to it and start doing functional verification of the SW. But a virtual prototype is capable of doing so much more, and it will take time for the methodologies to change. In addition, the prototype is the place where the HW and SW come together, and this enables a whole new set of things that people could do. For example, power is not dictated by either HW or SW individually; it is affected by the way they interact with each other. The analysis capabilities that Synopsys is providing concentrate on this type of need while allowing customers' existing methodologies (compilers, debuggers, etc.) to be used for functional verification.
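The point about power emerging from HW/SW interaction can be made concrete with a toy sketch. Everything below is invented for illustration – the class, the register semantics, and the mW figures are not from any Synopsys product – but it shows the basic idea: the hardware model accrues energy as a side effect of whatever state the software leaves it in, so the power number is a property of the combined system, not of either side alone.

```python
# Illustrative sketch only: a toy "virtual prototype" peripheral whose
# energy consumption depends on how the software drives it.
# All names and power numbers are hypothetical.

class PeripheralModel:
    """Hardware block with per-state power draw (mW, made-up values)."""
    POWER_MW = {"idle": 1.0, "active": 40.0}

    def __init__(self):
        self.state = "idle"
        self.energy_mj = 0.0

    def write_reg(self, value):
        # "Software" turns the block on or off through a register write.
        self.state = "active" if value else "idle"

    def advance(self, ms):
        # Energy = power * time, accrued in whatever state SW left us in.
        self.energy_mj += self.POWER_MW[self.state] * ms / 1000.0


def run_workload(dev, schedule):
    """schedule: list of (reg_value, duration_ms) pairs issued by software."""
    for value, ms in schedule:
        dev.write_reg(value)
        dev.advance(ms)
    return dev.energy_mj


# 10 ms active, 90 ms idle: neither the HW model nor the SW schedule
# determines the energy on its own; only the combination does.
dev = PeripheralModel()
print(run_workload(dev, [(1, 10), (0, 90)]))
```

A real virtual prototype would of course time-annotate this far more carefully (state-transition costs, clock gating, and so on), but the structure – software stimulus driving a power-annotated hardware model – is the same.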

So why does it take so long for methodologies to change significantly? Mark explained that up until now, the big issues have been the availability of models and the ease with which a platform can be put together. Many of those problems have now been mitigated, which makes it more likely that more companies will start using prototypes. But there are also secondary issues – group communications, roles, and responsibilities in the user organizations – that slow down adoption, especially for the new types of capabilities we are now beginning to see. Who creates and maintains the models, what models do they need to create, and what levels of abstraction are going to be used? Companies are now gaining enough experience to decide on these issues and to identify what they expect of the platform. Different users have different needs, and a way has to be found to integrate those varying needs in an economic manner.

I asked Mark about the role he thinks Synopsys has in defining new flows for the system level. This includes things such as the definition of good verification practices for this level, how hybrid prototyping needs to be set up and used, and the impact this can have on how a model is transformed over time. He said that the customers are basically the ones who need to decide what flows work for them, and the tool must support those flows rather than define them. Only when there is significant alignment in the industry does it make sense to bring this into a standardized flow with added forms of automation. I think he is spot on with this; it is part of the learning curve associated with a new abstraction and moving into ground that has never been covered before. We are where RTL methodologies were about 20 years ago, and at this point we are all in learning mode. The important thing is to be open, flexible, and to listen. I think that is exactly what Synopsys is doing today and they are taking the right approach, but as an industry we have to work on moving forward as quickly as we can. We need a system-level verification methodology that includes how stimulus is generated, how coverage is defined, and all that good stuff, but across multiple levels of abstraction using many different models. We also need to consider things other than functional verification, now that we have introduced capabilities for power and performance alongside functionality distributed amongst hardware and software. Treating them as separate entities is only part of the picture.

Brian Bailey – keeping you covered

8 Responses to “To the Virtual Prototype and Beyond”

  1. nosnhojn Says:

    Brian, question for you… I agree with you when you say:

    “But there are also secondary issues such as group communications, roles, and responsibilities in the user organizations that slow down adoption, especially for the new types of capabilities we are now beginning to see.”

    Seeing as how you’ve probably got about as much experience as anyone with modeling and prototyping… how would you rate the impact of these secondary issues? Are organizations waiting for the technology to advance before they look at how communications, roles, responsibilities, etc need to change or is the technology ready and it’s these secondary issues that are holding things back?


  2. admin Says:

    They are somewhat dependent on each other, but at this time I think the limiter is mostly the organizational issues. I can remember when we came out with Seamless for HW/SW co-verification (that was well over 10 years ago): we were often the ones to introduce the head of the hardware team to the head of the software team. They did not communicate at all. Things have progressed, but the SW guys still see the role of a virtual prototype as a platform for verification of code, not as a platform for optimization, architectural exploration – of the software – or other tasks that would make them equal partners in the system-level decision making. So, we won't see incredible tool development in that area until the basics start to be adopted, and it will take even longer for standardized methodologies to emerge.

  3. nosnhojn Says:

    Brian, follow-up question… do you see a path for how organizational impediments to virtual prototyping are cleared? What has to happen? Is it part of a solution that comes from tool vendors? Is it something that has to be initiated by the hardware team or the software team? Or initiated collectively, perhaps? Any insight there?

  4. admin Says:

    In our industry, one of two things makes change happen: either 1) something fails miserably and an organization has to change to prevent it from happening again, or 2) the change offers such an obvious advantage – better still, a competitive advantage – that adoption becomes compelling. Power may be the agent of change. A product where the software runs 10% faster will probably not sell many more units, but a product that has 10% better power consumption might. This assumes that each optimization is equally achievable; I believe that power can probably be optimized a lot more than performance.

  5. Gaurav Jalan Says:

    Hi Brian,

    When you say power, are you referring to the dynamic power dissipated while running a particular application on a given architecture? We are still lagging on a standard power modeling process, aren't we?

  6. Everett Lumpkin Says:

    Can virtual prototyping for software make inroads at the costs (for models and tools) demanded by EDA companies? Embedded development in general expects tools at price points of less than $1000 (compilers, debuggers, lint tools). I watched in horror as a company imploded man-years of model development and microcontroller supplier relationships for the want of a license renewal. The benefits (quality and time) and cost savings spoke for themselves, but the capital expenses just didn’t compare against the “normal” expenses associated with going backwards to do things the old way.

  7. Brian Says:

    In response to Gaurav Jalan: Exactly – plus, with power management hardware that is controlled by software, the software has to optimize itself in a dynamic manner.
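    The "software optimizing itself dynamically" idea is the kind of decision loop a DVFS-style governor runs. The sketch below is purely hypothetical – the frequency levels and thresholds are invented, not from any real kernel or silicon – but it shows the shape of a policy that could be exercised on a virtual prototype against modeled load traces long before hardware exists.

```python
# Hypothetical sketch of software-driven power management: an
# on-demand-style governor that picks an operating point from recent
# CPU load. Frequency levels and thresholds are invented for illustration.

FREQ_LEVELS_MHZ = [200, 600, 1200]  # assumed available operating points

def pick_level(utilization):
    """Map recent CPU utilization (0.0-1.0) to an operating frequency."""
    if utilization > 0.8:
        return FREQ_LEVELS_MHZ[-1]   # heavy load: race to the top speed
    if utilization > 0.3:
        return FREQ_LEVELS_MHZ[1]    # moderate load: middle point
    return FREQ_LEVELS_MHZ[0]        # mostly idle: drop to the floor

# Replaying a modeled utilization trace through the policy:
trace = [0.1, 0.5, 0.9, 0.2]
print([pick_level(u) for u in trace])  # -> [200, 600, 1200, 200]
```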

  8. Brian Says:

    There are two sets of costs – the model development costs and the model utilization costs. Model development costs are probably borne by the hardware group, and given that this is quite possibly a model they will be using for architectural exploration or even high-level synthesis, then yes, they will pay traditional EDA prices for them. The costs to run the models have to be in line with the costs that the software group traditionally expects – if they continue to use them just for debug. However, if the models enable them to do things that they could never do before and in turn make the product better – more competitive – then it is possible that they could be expected to pay more for them. Some software groups are willing to pay for emulation and FPGA prototyping, so I think saying that tools have to be almost free is perhaps clinging to a cliché of the past that may be changing. People will pay for value; most existing software tools are commodities.


© 2017 The ESL Edge
