Model-Driven Development Is Key to Low Power

Mentor Graphics explains why the model is the design at a system engineering forum hosted by Portland State University (PSU), INCOSE, and the IEEE.

Panelists from industry, national laboratories, and Portland State University's System Engineering graduate program recently gathered for an open forum on model-driven engineering. The event was hosted in collaboration with PSU, the International Council on Systems Engineering (INCOSE), and the IEEE.

Among those invited to speak was Bill Chown, Product and Marketing Manager from Mentor Graphics’ System Design group. He spoke about Model-Driven Development (MDD), a contemporary approach in which the model is the design and implementation is directly derived from the model.

“At first glance, Model-Driven Development might seem a long way from low-power design, where typical approaches focus on execution speeds, power regions, and switching efficiency,” explained Chown in a follow-up conversation. “But major gains in conserving power will only be made from optimization at the architectural level, when whole functions can be reassigned, implemented efficiently, or even eliminated completely.”

How does an MDD approach work within the typical life-cycle development of a product or system? What follows is a partial explanation, provided by Chown.

+++++++++++++++++++++

Most systems (or products) start out with an idea – one that is initially ill-formed, incomplete. How can we refine that idea, develop the thoughts, and arrive at something that others will appreciate – and that can be implemented most effectively within the system design constraints?

First, we need to introduce some clarity and start to define what is required – and what a “requirement” really is. Once we can describe the requirements, we can pass them on to the ever-broader team that it takes to implement today’s products. And it is their interpretation of those requirements that will actually shape the product that eventually emerges from the process.

By building a concept “model,” designers can explore the problem as described and start to elaborate it.

As we look at the idea, we ask questions, challenge concepts, and build on the initial premise. A concept model helps by setting out those concepts and enabling the queries.

As we do this, we begin to extract actual requirements – “the system shall weigh less than 300 g,” “energy consumed in operation cannot exceed 3,000 mWh,” etc. – and generate the initial requirements “specification.” We will not model the implementation of these requirements initially, but will factor them into the system design as constraints to track through to implementation.
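
As a concrete illustration, the sketch below shows one way such requirements might be captured as named, machine-checkable constraints that travel with the design through the flow. It is a minimal C++ sketch of the idea only; the structure, names, and limit values are illustrative assumptions, not taken from any particular tool or product.

    // Minimal sketch: requirements captured as named, machine-checkable
    // constraints so they can be tracked from specification to implementation.
    // The names and limits here are illustrative, not from any real product.
    #include <iostream>
    #include <string>
    #include <vector>

    struct Constraint {
        std::string name;   // e.g. "mass"
        double      limit;  // upper bound the design must stay under
        std::string unit;   // e.g. "g", "mWh"
    };

    // The initial requirements "specification": not modeled yet, but carried
    // along as constraints to validate once estimates become available.
    std::vector<Constraint> requirements = {
        {"mass",             300.0,  "g"},
        {"operating energy", 3000.0, "mWh"},
    };

    bool check(const Constraint& c, double estimate) {
        bool ok = estimate < c.limit;
        std::cout << c.name << ": " << estimate << " " << c.unit
                  << (ok ? " (meets " : " (violates ")
                  << c.limit << " " << c.unit << " limit)\n";
        return ok;
    }

    int main() {
        // Early estimates from the concept model; refined as the design matures.
        check(requirements[0], 280.0);
        check(requirements[1], 3400.0);  // flagged early, while change is cheap
    }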

From the initial concept model can be derived a potential solution – or, more probably, several potential solution choices. These models may be minor variants on the concept or radically different approaches. But we can hold them all to the same requirements, review each candidate approach with the source of the original idea (typically a person), and start to ask the next set of questions.

The goal here is to start to evolve a solution specification that can be shared with stakeholders, can be acted upon in the downstream design flow, and will satisfy the constraints once we are in a position to validate them.

Executable Specification

The term “executable specification” is often raised as an approach to addressing the disconnect between textual customer-facing initial needs and an actionable, concrete, and deliverable design specification. However, there can be – and usually needs to be – several steps to build up an executable specification that is useful and can support those goals.

An executable specification needs to deliver on the two key topics introduced previously: show what we have, and enable initial questions to be asked. Answering those questions and responding – whether with changes to the concept or early design, or by highlighting significant features of the eventual product – are essential capabilities of a good executable specification. A well-understood specification can lead to a successful design and an effective design process that can be reused time and again on the path to implementation. This, in turn, permits consideration of the key design constraints at a stage where changes can still be made – and, with a true model-driven flow, lets them be rapidly developed into measurable implementations.
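
The sketch below illustrates the idea at its simplest: a purely behavioral model that can be executed and interrogated, with a requirement expressed as a runtime check. The filter behavior, names, and bounds are hypothetical stand-ins chosen only for illustration.

    // Minimal sketch of an "executable specification": a high-level behavioral
    // model that can be run, questioned, and checked against constraints long
    // before implementation detail exists. All names and numbers are illustrative.
    #include <cassert>
    #include <iostream>
    #include <vector>

    // Abstract behavior only: what the system does, not how it is built.
    // Hypothetical example: a sensor filter whose output must stay in range.
    std::vector<double> specified_behavior(const std::vector<double>& samples) {
        std::vector<double> out;
        double acc = 0.0;
        for (double s : samples) {
            acc = 0.9 * acc + 0.1 * s;      // smoothing, chosen for illustration
            out.push_back(acc);
        }
        return out;
    }

    int main() {
        std::vector<double> stimulus = {0.0, 10.0, 10.0, 10.0, 10.0};
        auto response = specified_behavior(stimulus);

        // "Ask initial questions" of the spec by executing it:
        for (double v : response) {
            assert(v >= 0.0 && v <= 10.0);  // a requirement expressed as a check
            std::cout << v << "\n";
        }
    }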

Partition

Assignment of functionality to an implementation path is often performed very early in the design process – and therefore without sufficient information to make an informed decision. Furthermore, once selected, that path tends to become the only choice. Later learning is very hard to incorporate, leaving the design architecture prematurely frozen.

Typically, we do not know enough to make the optimum partitioning choices. Design decisions are thus frozen into less-than-optimal configurations, and constraints become hard to meet by adjusting final implementations alone. To improve this step and enable a more effective and flexible flow, more information is essential – as is a process architecture that allows for change.

Here, the use of models – the fundamental premise of MDD, and in fact already present in many of our current processes – can make the crucial difference. Models allow us to exercise the proposed partitioning and ask the next set of questions. Models can also grow and evolve to add key information, and thus develop to enable the design validation needed.
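
A minimal sketch of what “exercising the proposed partitioning” can look like in practice follows; the candidate partitions, power and latency figures, and budget are all invented placeholders rather than measured data.

    // Minimal sketch of model-based partition exploration: each candidate
    // assignment of a function to hardware or software carries rough estimates,
    // so alternatives can be compared (and revisited) before anything is frozen.
    #include <iostream>
    #include <string>
    #include <vector>

    struct Partition {
        std::string description;
        double est_power_mw;    // rough architectural power estimate
        double est_latency_us;  // rough end-to-end latency estimate
    };

    int main() {
        std::vector<Partition> candidates = {
            {"filter in SW on CPU",          120.0, 40.0},
            {"filter in dedicated HW block",  35.0,  5.0},
            {"filter on DSP coprocessor",     60.0, 12.0},
        };

        const double power_budget_mw = 80.0;  // constraint from the requirements

        for (const auto& p : candidates) {
            bool fits = p.est_power_mw <= power_budget_mw;
            std::cout << p.description << ": " << p.est_power_mw << " mW, "
                      << p.est_latency_us << " us"
                      << (fits ? "" : "  <-- exceeds power budget") << "\n";
        }
        // Because the choice is a model, a rejected candidate can be revisited
        // later when better estimates arrive, rather than being frozen in.
    }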

Functional Virtual Platform

The next questions relate to overall system functionality and behavior – initially at a high abstraction level, but extending to recognize implementation characteristics. To find out whether we have assembled the full set of design elements required, and to see whether they interact with each other as expected, we should use a virtual platform. At this initial stage, the virtual platform does not need great detail, timing, or similar constraints, but it does need to execute rapidly enough to exercise all of the functionality. What the virtual platform does enable – at a significant and important stage in the process – is a checkpoint between the partitioned disciplines to ensure continuing consistency.
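
That consistency checkpoint can be as simple as running the partitioned pieces against the reference behavior from the executable specification, as in this illustrative sketch (all three functions are placeholders):

    // Minimal sketch of the consistency checkpoint: the partitioned pieces
    // (here, a stand-in "hardware" function and a "software" function) are run
    // against the same stimulus and compared with the executable-spec reference.
    #include <cassert>
    #include <iostream>

    int reference_model(int x) { return x * 2; }   // from the executable spec
    int hw_block_model(int x)  { return x << 1; }  // candidate hardware behavior
    int sw_task_model(int x)   { return x + x; }   // candidate software behavior

    int main() {
        for (int stimulus = 0; stimulus < 100; ++stimulus) {
            int expected = reference_model(stimulus);
            // The checkpoint: partitioned disciplines must stay consistent
            // with each other and with the specification.
            assert(hw_block_model(stimulus) == expected);
            assert(sw_task_model(stimulus) == expected);
        }
        std::cout << "Partitioned models remain consistent with the spec.\n";
    }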

SystemC has become a popular basis for an executable virtual platform that can incorporate the artifacts of hardware behavior, such as bus cycles and timing, without having to go into the detail and execution performance burden of fully elaborated RTL design. Performance can be estimated, system conflicts identified, and resources planned. In an effective MDD flow, the necessary building blocks of the virtual platform can be generated from the higher-level models used in the executable specification, linked together with standard library models like processor instruction-set simulators, and run.
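
As a rough sketch of such a building block – assuming the open-source Accellera SystemC library is available (compile with -lsystemc), and with purely illustrative behavior – two loosely-timed modules can be connected through a FIFO channel and run:

    // Minimal SystemC sketch for a functional virtual platform: a producer
    // hands transactions to a consumer through a FIFO, with loose timing
    // annotation standing in for bus cycles. Behavior is illustrative only.
    #include <systemc.h>
    #include <iostream>

    SC_MODULE(Producer) {
        sc_fifo_out<int> out;
        void run() {
            for (int i = 0; i < 4; ++i) {
                wait(sc_time(10, SC_NS));   // loosely-timed bus-cycle stand-in
                out.write(i);
            }
        }
        SC_CTOR(Producer) { SC_THREAD(run); }
    };

    SC_MODULE(Consumer) {
        sc_fifo_in<int> in;
        void run() {
            while (true) {
                int v = in.read();
                std::cout << sc_time_stamp() << ": consumed " << v << "\n";
            }
        }
        SC_CTOR(Consumer) { SC_THREAD(run); }
    };

    int sc_main(int, char*[]) {
        sc_fifo<int> channel(2);  // bounded channel between the two modules
        Producer p("p");
        Consumer c("c");
        p.out(channel);
        c.in(channel);
        sc_start(sc_time(100, SC_NS));
        return 0;
    }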

This virtual platform can then be augmented with wrappers that add more comprehensive timing, execution-speed, and power knowledge – enabling validation of the constraints identified in the initial requirements and executable specification.
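
One simple way to picture such a wrapper: annotate each operation of a functional model with an estimated energy cost, leaving the behavior itself untouched. The per-operation figures and budget below are placeholders for illustration.

    // Minimal sketch of a power-annotation wrapper: the functional model is
    // unchanged, while the wrapper accumulates per-operation energy estimates
    // so the requirement from the executable specification can be validated.
    #include <iostream>

    struct PowerWrapper {
        double energy_mwh = 0.0;
        double per_op_mwh;           // characterized (or estimated) cost per op

        explicit PowerWrapper(double cost) : per_op_mwh(cost) {}

        int operate(int x) {
            energy_mwh += per_op_mwh;  // annotate, don't alter, the behavior
            return x * 2;              // stand-in for the wrapped function
        }
    };

    int main() {
        PowerWrapper hw_candidate(0.8);   // e.g. hardware-block estimate
        PowerWrapper sw_candidate(3.5);   // e.g. software-on-CPU estimate

        for (int i = 0; i < 1000; ++i) {
            hw_candidate.operate(i);
            sw_candidate.operate(i);
        }

        const double budget_mwh = 3000.0; // constraint from the requirements
        std::cout << "HW: " << hw_candidate.energy_mwh << " mWh"
                  << (hw_candidate.energy_mwh <= budget_mwh ? "" : " (over)") << "\n";
        std::cout << "SW: " << sw_candidate.energy_mwh << " mWh"
                  << (sw_candidate.energy_mwh <= budget_mwh ? "" : " (over)") << "\n";
    }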

Should the architecture be found to miss key needs, a Model-Driven Development flow enables revisiting the prior partitioning choices, adjusting the architecture to provide alternative implementation solutions, and rapidly regenerating a new virtual-platform configuration. Multiple architectures can be assessed and the required functionality delivered for validation – enabling consideration of power-sensitive design areas and deployment of more efficient techniques before committing to a fixed target platform.
