The ESL Edge


Grappling with Model-Based Design

A couple of weeks ago I saw a press release go by. The title was “Faraday Accelerates the Development of SoCs with Model-Based Design”. They claimed that this helped them speed up simulation by more than 200X and reduce gate count by more than 50%. Not bad, I thought. But then I stopped to think for a bit and asked myself: why would model-based design do that? Do I correctly understand what model-based design actually means?

First step – I went to Wikipedia to see what they had to say. There I found out that text-based tools are inadequate for the complex nature of modern control systems. They go on to say that “Because of the limitations of graphical tools, design engineers previously relied heavily on text-based programming and mathematical models. However, developing these models was difficult, time-consuming, and highly prone to error. In addition, debugging text-based programs was a tedious process, requiring much trial and error before a final fault-free model could be created…”.

In other words, because of the limitations of graphical methods we used text methods, but these are too difficult to use, so we should use model-based techniques. OK – so what are model-based tools? Well, it appears that “These challenges are overcome by the use of graphical modeling tools […]. These tools provide a very generic and unified graphical modeling environment, they reduce the complexity of model designs by breaking them into hierarchies of individual design blocks.” Now wait a minute! We have used hierarchy since gate-level design, which was graphical, and hierarchy by itself does not reduce complexity in any way. About the only useful thing I found there was “Designers can thus achieve multiple levels of model fidelity by simply substituting one block element with another.” Ah, now that is an idea we have aspired to for quite a while, but the problem has always been in the way in which interfaces map in hardware. An interface in the software world can remain the same even when the abstraction of the model itself changes, but in the hardware world we have to refine the interfaces just as much as we refine the model contents.

Cadence recently took a stab at this problem in their (OK, I was one of the co-authors) book titled “TLM-Driven Design and Verification Methodology”. Here a subset of the OSCI TLM 1.0 standard was identified and then extended with some of the notions from TLM 2.0. The result was an interface description that could be used to connect models at the TLM level and was also synthesizable. The same interface that was used to connect functional blocks together could be used to connect a functional block to an interface model, such as an APB interface; thus the interfaces were physically refined while, at the same time, the original functional interfaces faded away in the synthesis process. Of course, there were some tricky bits along the way. For example, many blocks needed to know how the data was to be made available, and the use of simplistic interfaces did not allow that “knowledge” to be passed across the interfaces so that they could be optimized.

I have also heard from the software world that synthesizing code that involves interfaces, or requires OS calls, generally results in inefficient implementations, for much the same reasons. The other problem is that software synthesis does not impose many of the constraints necessary to allow anything more than localized analysis and optimization, whereas in the hardware world we deliberately constrain what can be described precisely so that such optimizations become possible.

So, going back to the press release. It is clear that what they are really describing are the gains that come from employing abstraction. They talk about how abstraction enables them to explore the design architecturally and that provides the increase in simulation performance. Architectural exploration can also explain why they had reduced gate counts. While a 50% reduction in gate count is certainly not typical of the figures I hear, it is possible.

In summary, I don’t think this has anything to do with model-based design. It has to do with the gains associated with moving to a higher level of abstraction, and it really doesn’t matter whether you do that textually, graphically, or with models: you are likely to see significant productivity and optimization gains. Long live whatever development process you prefer!

Brian Bailey – keeping you covered.

2 Responses to “Grappling with Model-Based Design”

  1. Gene Bushuyev Says:

    It’s definitely true that there is only one method known to mankind for dealing with growing complexity using the same resources (the human brain), and that is increasing the level of abstraction. But I don’t believe the assumption that “it really doesn’t matter if you want to do that textually, graphically, or model-based” is valid. There are natural qualities of the human brain that make one way of representing abstraction more suitable than another. I believe the human brain is much better equipped to deal with images than with verbiage. In my unscientific experiments with the GBL library I could definitely see how much easier it was for me to reason about, modify, experiment with, and remember the graphical representation of the system than the generated code. The reason, I think, is that a graphical representation is more compact than the corresponding code, and it clearly and explicitly expresses relationships and dependencies that are implicit in code (name, position, type, scope, etc.). A graphical representation can be more easily customized to emphasize important details and de-emphasize unimportant ones. It also naturally discourages clutter, while muddled code is a common problem that requires experience and discipline to avoid. Additionally, reasoning in terms of system behavior using event-based declarative programming techniques in model-based design is much easier than in terms of traditional imperative programming. The wonderful quality of an event is that it is completely oblivious to the consequences of its own cause, thus providing many of the blessings of very loose coupling.

  2. Ken Karnofsky Says:

    You’re asking great questions about Model-Based Design. The Wikipedia page isn’t all that helpful in answering them, however. It’s not inaccurate, but it reflects a collection of perspectives and leaves the reader grasping for the essential reasons that lead companies to adopt Model-Based Design. I’d rewrite a lot of it if it were up to me, but that’s not the way Wikipedia works.

    Abstraction is certainly a big part of the value of Model-Based Design. But it’s not a matter of graphical vs. textual. In Faraday’s case, they were able to use graphical languages (Simulink for time-driven systems, and Stateflow for finite state machines) as well as a textual language (MATLAB for algorithms). It’s not just one abstraction, but several domain-specific ones in one environment. Faraday engineers used these multidomain system models to simulate and optimize their design. Also, Model-Based Design isn’t simply about working at a higher level of abstraction, but also providing ways to leverage models with different levels of abstraction, such as reusing C-code or HDL models in a Simulink model.

    In Model-Based Design, the use of system models as executable specifications throughout the workflow is just as important as the abstraction. Faraday used automatic C code generation from their models to produce a programmer’s view of the design for software development and architecture exploration. They also used automatic HDL code generation to develop FPGA prototypes. Throughout the process, the system model allowed them to continuously test their design and implementation against the system specification they had captured in the model.

    Faraday’s results are impressive, but they’re not alone. In 2010, Embedded Market Forecasters (EMF) conducted an analysis to determine the impact of Model-Based Design on the total cost of development (TCD). Data was collected from companies worldwide from automotive, aerospace, communications, industrial automation, and medical industries. The results show that:

    1) Development teams who used Model-Based Design had an average 37% lower TCD compared to teams that did not use Model-Based Design.
    2) The number of developers used per project was smaller for Model-Based Design across all geographic areas and all vertical markets examined.
    3) The percent of developer months lost to design cancellation or projects behind schedule was consistently less for Model-Based Design development projects across all verticals and geographic areas.

    You can view a webinar with additional results from this study, presented by Dr. Jerry Krasner of EMF.

    Ken Karnofsky


© 2018 The ESL Edge
