A couple of weeks ago I saw a press release go by. The title was “Faraday Accelerates the Development of SoCs with Model-Based Design”. They claimed that this approach helped them speed up simulation by more than 200X and reduce gate count by more than 50%. Not bad, I thought. But then I stopped to think for a bit and asked myself – why would model-based design do that? Do I correctly understand what model-based design actually means?
First step – I went to Wikipedia to see what they had to say. There I found out that text-based tools are inadequate for the complex nature of modern control systems. They go on to say that “Because of the limitations of graphical tools, design engineers previously relied heavily on text-based programming and mathematical models. However, developing these models was difficult, time-consuming, and highly prone to error. In addition, debugging text-based programs was a tedious process, requiring much trial and error before a final fault-free model could be created…”.
In other words, because of the limitations of graphical methods we used text methods, but those proved too difficult to use, so we should now use model-based techniques. OK – so what are model-based tools? Well, it appears that “These challenges are overcome by the use of graphical modeling tools […]. These tools provide a very generic and unified graphical modeling environment, they reduce the complexity of model designs by breaking them into hierarchies of individual design blocks.” Now wait a minute! We used hierarchy even when we did gate-level design, which was graphical, and hierarchy by itself does nothing to manage complexity. About the only useful thing I found there was “Designers can thus achieve multiple levels of model fidelity by simply substituting one block element with another.” Ah, now that is an idea we have aspired to for quite a while, but the problem has always been the way in which interfaces map into hardware. An interface in the software world can remain the same even when the abstraction of the model itself changes, but in the hardware world we have to refine the interfaces just as much as we refine the model contents.
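To make that substitution idea concrete, here is a minimal C++ sketch – names and numbers are mine, purely for illustration, not from any tool or standard. A single abstract interface lets the rest of the design swap a fast untimed functional model for a crudely timed one without any other changes:

```cpp
// Hypothetical sketch: one abstract interface, two models at different fidelity.
struct AdderIf {
    virtual ~AdderIf() = default;
    virtual int add(int a, int b) = 0;       // the functional contract
    virtual int cycles_consumed() const = 0; // timing fidelity, if modeled at all
};

// Untimed functional model: maximum simulation speed, no timing detail.
struct FunctionalAdder : AdderIf {
    int add(int a, int b) override { return a + b; }
    int cycles_consumed() const override { return 0; } // timing not modeled
};

// Approximately-timed model: same behavior, plus an invented cycle estimate.
struct TimedAdder : AdderIf {
    int cycles = 0;
    int add(int a, int b) override { cycles += 2; return a + b; } // assume 2 cycles/op
    int cycles_consumed() const override { return cycles; }
};

// The surrounding design sees only AdderIf, so fidelity is a plug-in choice.
int run_workload(AdderIf& dut) {
    int sum = 0;
    for (int i = 1; i <= 4; ++i) sum = dut.add(sum, i);
    return sum; // 1 + 2 + 3 + 4 = 10 regardless of which model is bound
}
```

This works so cleanly precisely because the interface is a software construct; the hardware difficulty described above is that the interface itself must also be refined as the model descends toward implementation.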
Cadence recently took a stab at this problem in their (OK, I was one of the co-authors) book titled “TLM-Driven Design and Verification Methodology”. Here a subset of the OSCI TLM 1.0 standard was identified and then extended with some of the notions from TLM 2.0. The result was an interface description that could be used to connect models at the TLM level and was also synthesizable. The same interface that connected functional blocks together could also connect a functional block to an interface model, such as an APB interface; in this way the interfaces were physically refined while the original functional interfaces faded away in the synthesis process. Of course, there were some tricky bits along the way. For example, many blocks needed to know how the data was to be made available, and the use of simplistic interfaces did not allow that “knowledge” to be passed across the interfaces so that they could be optimized.
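The rebinding trick can be sketched in plain C++ – this is my own simplification in the spirit of a TLM 1.0 blocking put, not the standard’s actual API or the book’s code. A producer written against the interface works unchanged whether it is bound directly to another block or to a stand-in for a bus interface model:

```cpp
#include <queue>

// Illustrative only: a tiny blocking-put interface, loosely TLM-1 flavored.
template <typename T>
struct PutIf {
    virtual ~PutIf() = default;
    virtual void put(const T& t) = 0;
};

// Direct functional connection: the producer puts straight into a FIFO.
template <typename T>
struct Fifo : PutIf<T> {
    std::queue<T> q;
    void put(const T& t) override { q.push(t); }
};

// The same interface bound instead to a bus adapter; a crude stand-in for
// something like an APB initiator, which refines each transaction into bus
// activity before forwarding the payload.
struct ApbWriteAdapter : PutIf<int> {
    Fifo<int>& downstream;
    int bus_writes = 0; // counts modeled bus transfers (the refinement point)
    explicit ApbWriteAdapter(Fifo<int>& d) : downstream(d) {}
    void put(const int& t) override {
        ++bus_writes;      // model the bus transfer
        downstream.put(t); // deliver the payload exactly as before
    }
};
```

The producer never changes; only the binding does. The “tricky bit” mentioned above shows up here too: an interface this simple carries no knowledge of burst sizes or data layout, so the adapter cannot optimize the transfers it generates.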
I have also heard from the software world that synthesizing code that involved interfaces, or required OS calls, generally resulted in inefficient implementations for much the same reasons. The other problem is that software synthesis does not have the constraints necessary to allow for anything more than localized analysis and optimization, whereas in the hardware world we deliberately constrain what can be described precisely so that such optimizations become possible.
So, going back to the press release: it is clear that what they are really describing are the gains that come from employing abstraction. They talk about how abstraction enables them to explore the design architecturally, and that is what provides the increase in simulation performance. Architectural exploration can also explain the reduced gate counts. While a 50% reduction in gate count is certainly not typical of the figures I hear, it is possible.
In summary, I don’t think this has anything to do with model-based design. It has to do with the gains associated with moving to a higher level of abstraction, and it really doesn’t matter whether you get there textually, graphically, or with models – you are likely to see significant productivity and optimization gains. Long live whatever development process you prefer!
Brian Bailey – keeping you covered.