Ah, the joys of summer vacations and scheduling around them. I was disappointed that Synopsys had a press release scheduled while I was on vacation – especially when it was on one of my favorite topics – virtual prototypes. To make matters worse, by the time I got back, their key guy was about to start his vacation – but Mark Serughetti agreed to spend some time talking to me, and we had a great chat.
Now, given that everyone else will have already regurgitated their release and the plentiful other information that Synopsys normally supplies with its releases, I will not bore you with that, except to put some context in place. Synopsys bought several companies recently, including CoWare and VaST, and a few years before that, Virtio. Virtualizer is their bringing together of those technologies – a product that focuses on analysis, debug, and flow integration. All models developed for those prior products are fully compatible with this product, but going forward the preferred modeling language is SystemC TLM, although any language can be used under the hood for even more performance.
They have identified two primary user groups for the product – those who create models and platforms, and those who use them. Different capabilities are necessary for those two groups. They supply a library of models to go along with the product, plus some reference designs that can help people get a prototype up and running more quickly than starting from scratch.
When a virtual prototype is developed for SW development, most people today attempt to use it in the same way they would use a development board: they attach a debugger to it and start doing functional verification of the SW. But a virtual prototype is capable of doing so much more, and it will take time for the methodologies to change. In addition, the prototype is the place where the HW and SW come together, and this provides a whole new set of things that people could do. For example, power is not dictated by either HW or SW individually; it is affected by the way they interact with each other. The analysis capabilities that Synopsys is providing concentrate on this type of need, while allowing existing methodologies (compilers, debuggers, etc.) to be used for functional verification.
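The point that power emerges from the HW/SW interaction, rather than from either side alone, can be shown with a toy model. To be clear, this is my own sketch, not anything from Synopsys's tooling, and all the power numbers and state names are hypothetical: the same hardware power states yield noticeably different energy totals depending on how the software schedules the same amount of work.

```python
# Toy model: system energy is a property of the HW/SW interaction.
# All power numbers below are hypothetical, purely for illustration.

POWER_MW = {"active": 120.0, "idle": 15.0, "sleep": 1.0}  # assumed HW states

def energy_uj(trace):
    """Energy in microjoules for a trace of (hw_state, duration_ms) pairs.
    mW * ms = uJ, so no unit conversion is needed."""
    return sum(POWER_MW[state] * ms for state, ms in trace)

# The same software workload (40 ms of active computation in a 100 ms
# window), scheduled two different ways by the software:
burst       = [("active", 40), ("sleep", 60)]     # batch the work, then sleep
interleaved = [("active", 10), ("idle", 15)] * 4  # spread the work out

print(energy_uj(burst))        # 4860.0 uJ
print(energy_uj(interleaved))  # 5700.0 uJ -- ~17% more: same HW, same work
```

Neither the hardware's power-state table nor the software's instruction count changed between the two runs; only the interaction did. That is the kind of question a virtual prototype with power analysis can answer long before silicon or a development board exists.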
So why does it take so long for methodologies to change significantly? Mark explained that until now, the big issues have been the availability of models and the ease with which a platform can be put together. Many of those problems have now been mitigated, which makes it more likely that more companies will start using prototypes. But there are also secondary issues, such as group communications, roles, and responsibilities within the user organizations, that slow down adoption, especially for the new types of capabilities we are now beginning to see. Who creates and maintains the models, what models do they need to create, and what levels of abstraction are going to be used? Companies are now gaining enough experience to decide on these issues and to identify what they expect of the platform. Different users have different needs, and a way has to be found to integrate those varying needs in an economical manner.
I asked Mark about the role he thinks Synopsys has in defining new flows for the system level. This includes things such as the definition of good verification practices at this level, how hybrid prototyping needs to be set up and used, and the impact this can have on how a model is transformed over time. He said that the customers are basically the ones who need to decide what flows work for them, and that the tool must support those flows rather than the tool defining the flow. Only when there is significant alignment in the industry does it make sense to bring this into a standardized flow with added forms of automation. I think he is spot on with this; it is part of the learning process associated with a new abstraction level, moving into ground that has never been covered before. We are where RTL methodologies were about 20 years ago, and at this point we are all in learning mode. The important thing is to be open, flexible, and to listen. I think that is exactly what Synopsys is doing today, and they are taking the right approach, but as an industry we have to work on moving forward as quickly as we can. We need a system-level verification methodology that includes how stimulus is generated, how coverage is defined, and all that good stuff, but across multiple levels of abstraction using many different models. We also need to consider things other than functional verification, now that power and performance, as well as functionality, are distributed amongst hardware and software. Treating them as separate entities is only part of the picture.
Brian Bailey – keeping you covered