Archive for December, 2012

STMicroelectronics Pushes SOI While Leaving the Mobile Space

Thursday, December 20th, 2012

Why is one of Europe’s leading semiconductor IDMs pushing into leading-edge, 28-nm FD-SOI technology while leaving a market where such technology might be useful?

It was a chance meeting that made me wonder about two recent announcements from one of the world’s largest semiconductor companies.

Last week, I attended an IEDM briefing in which STMicroelectronics presented silicon-verified data to further confirm the manufacturability of its 28-nm Fully Depleted Silicon-on-Insulator (FD-SOI) technology (see “FinFETs or FD-SOI?“). Ed Sperling, Editor-in-Chief for SemiMD, summed it up this way:

“What’s particularly attractive about FD-SOI is that it can be implemented at the 28-nm node for a boost in performance and a reduction in power. The mainstream process node right now is 40 nm. And while Intel introduced its version of a finFET transistor called Tri-Gate at 22 nm, TSMC and GlobalFoundries plan to introduce it at the next node—whether that’s 16 nm or 14 nm. That leaves companies facing a big decision about whether to move all the way to 16/14 nm to reap the lower leakage of finFETs, whether to move to 20 nm on bulk, or whether to stay longer at 28 nm with FD-SOI.”

Joel Hartmann, Executive VP Front-End Manufacturing & Process R&D, STMicroelectronics, presents SoC-level, 28-nm Planar Fully Depleted silicon results at IEDM 2012.

I didn’t realize it until later that week, but – on the same day as its 28-nm FD-SOI technology announcement – STMicroelectronics stated that it would curtail its presence in the mobile-handset space by winding down the ST-Ericsson partnership. As Chris Ciufo noted in his “All Things Embedded” blog, STMicroelectronics will remain in only two market domains: Sense & Power and Automotive, and Embedded Processing. “For the former, device categories include MEMS, sensors, power discretes, advanced analog, automotive powertrain, automotive safety (such as Advanced Driver Assistance Systems [ADASs]), automotive body, and the red-hot In-Vehicle Infotainment (IVI) category,” wrote Ciufo.

In the embedded processing market, the company will “focus on the core of the electronics systems” and ditch wireless broadband. Target areas include microcontrollers, imaging, digital consumer, application processors, and digital ASICs.

Considered together, these two announcements raise the following question: If STMicroelectronics is only interested in the sensor, automotive, and “embedded” markets, why does the company need to work at leading-edge process nodes like 28-nm FD-SOI? This question arose during a recent chance meeting with Juergen Jaeger, Sr. Product Manager at Cadence Design Systems.

Jaeger suggested a possible answer by noting that Moore’s Law generally provides cost savings, along with power and performance benefits, at smaller process nodes. “This makes sense for both automotive infotainment and networking technologies,” explained Jaeger. “But it doesn’t make too much sense for gearbox, engine, anti-lock brakes, or steering systems, since they need high reliability and tolerance.” Those requirements tend to restrict such devices to mature, fully tested process geometries.

Jaeger reminded me that infotainment systems-on-a-chip (SoCs) are very complex devices requiring integrated network and wireless systems – in addition to an array of audio/video codecs that must drive multiple LCD screens within today’s cars.

Additionally, STMicroelectronics’ move to FD-SOI is one way to mitigate the risk facing leading-edge bulk CMOS processes. As Sperling observed, “At 28 nm and beyond, however, bulk has run out of steam, which is why Intel has opted for finFETs.” Meanwhile, FD-SOI offers power and performance benefits while staying on today’s planar-transistor manufacturing processes.

In the end, the push toward FD-SOI technology at existing 28-nm nodes may play well into a number of low-power and high-performance chip markets. This is not a path without risk. But it does highlight the accelerating convergence of SOI and bulk CMOS at leading-edge nodes. And it should reinforce STMicroelectronics’ strong position in the automotive infotainment space.

Originally posted on “IP Insider.”

Model-Driven Development Is Key to Low Power

Thursday, December 6th, 2012

Mentor Graphics explains why the model is the design at a system engineering forum hosted by Portland State University (PSU), INCOSE, and the IEEE.

Panelists from industry, national laboratories, and Portland State University’s System Engineering graduate program recently gathered for an open forum on model-driven engineering. The event was hosted in collaboration with PSU, the International Council on Systems Engineering (INCOSE), and the IEEE.

Among those invited to speak was Bill Chown, Product and Marketing Manager from Mentor Graphics’ System Design group. He spoke about Model-Driven Development (MDD), a contemporary approach in which the model is the design and implementation is directly derived from the model.

“At first glance, Model-Driven Development might seem a long way from low-power design, where typical approaches focus on execution speeds, power regions, and switching efficiency,” explained Chown in a follow-up conversation. “But major gains in conserving power will only be made from optimization at the architectural level, when whole functions can be reassigned, implemented efficiently, or even eliminated completely.”

How does an MDD approach work within the typical life-cycle development of a product or system? What follows is a partial explanation, provided by Chown.

+++++++++++++++++++++

Most systems (or products) start out with an idea – one that is initially ill-formed and incomplete. How can we refine that idea, develop the thoughts, and arrive at something that others will appreciate – and that can be implemented most effectively within the system design constraints?

First, we need to bring some clarity and start to define what is required – and what a “requirement” really is. Once we can describe the requirements, we can pass them on to the ever-broader team that it takes to implement today’s products. It is their interpretation of those requirements that will actually shape the product that eventually emerges from the process.

By building a concept “model,” designers can explore the problem as described and start to elaborate it.

As we look at the idea, we ask questions, challenge concepts, and build on the initial premise. A concept model helps by setting out those concepts and enabling the queries.

As we do this, we begin to extract actual requirements – “the system shall weigh less than 300 g,” “energy consumed in operation cannot exceed 3000 mWh,” etc. – and generate the initial requirements “specification.” We will not model the implementation of these requirements initially, but will factor them into the system design as constraints to track through to implementation.
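As a purely illustrative reading of this step, the short sketch below captures such requirements as named, machine-checkable constraints that a candidate model’s estimates can be tested against. The Constraint structure and check_constraints function are hypothetical names, not part of any particular tool.

```cpp
#include <iostream>
#include <string>
#include <vector>

// Hypothetical illustration: capture textual requirements as named,
// machine-checkable constraints tracked through to implementation.
struct Constraint {
    std::string name;   // e.g. "mass"
    double      limit;  // upper bound in the stated unit
    std::string unit;   // e.g. "g", "mWh"
};

// Compare estimates produced by a candidate model against the constraints.
bool check_constraints(const std::vector<Constraint>& specs,
                       const std::vector<double>& estimates) {
    bool ok = true;
    for (std::size_t i = 0; i < specs.size(); ++i) {
        if (estimates[i] > specs[i].limit) {
            std::cout << "VIOLATION: " << specs[i].name << " = "
                      << estimates[i] << " " << specs[i].unit
                      << " exceeds limit " << specs[i].limit << "\n";
            ok = false;
        }
    }
    return ok;
}

int main() {
    std::vector<Constraint> specs = {
        {"mass", 300.0, "g"},                    // "shall weigh less than 300 g"
        {"energy in operation", 3000.0, "mWh"},  // "cannot exceed 3000 mWh"
    };
    // Estimates from an early concept model (illustrative numbers only).
    std::vector<double> estimates = {280.0, 3150.0};
    return check_constraints(specs, estimates) ? 0 : 1;
}
```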

From the initial concept model, a potential solution can be derived – or, more probably, several potential solution choices. These models may be minor variants on the concept or radically different approaches. But each can be evaluated against the same requirements, reviewed with the source of the original idea (typically a person), and used to frame the next set of questions.

The goal here is to start to evolve a solution specification that can be shared with stakeholders, can be acted upon in the downstream design flow, and will satisfy the constraints once we are in a position to validate them.

Executable Specification

The term “executable specification” is often raised as an approach to addressing the disconnect between textual customer-facing initial needs and an actionable, concrete, and deliverable design specification. However, there can be – and usually needs to be – several steps to build up an executable specification that is useful and can support those goals.

An executable specification needs to deliver on the two key capabilities introduced previously: showing what we have and enabling initial questions to be asked. Answering those questions and responding – whether with changes to the concept, to the early design, or to significant features of the eventual product – are essential capabilities of a good executable specification. A well-understood specification can lead to a successful design and to an effective design process that can be reused time and again on the path to implementation. This, in turn, permits key design constraints to be considered at a stage where changes can still be made – and, with a true model-driven flow, rapidly developed into measurable implementations.
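To make “executable” concrete, here is a minimal, hypothetical sketch of what an early executable specification of a single system function might look like: pure behavior that stakeholders can run and query, with no timing or implementation detail. The display-dimming policy and every name in it are invented for illustration.

```cpp
#include <cassert>
#include <iostream>

// Illustrative executable specification of one system function:
// an automatic display-dimming policy, expressed as pure behavior.
enum class Display { Off, Dim, Full };

Display display_policy(double ambient_lux, bool user_active) {
    if (!user_active)       return Display::Off;  // save power when idle
    if (ambient_lux < 50.0) return Display::Dim;  // dark room: dim the screen
    return Display::Full;
}

int main() {
    // "Asking questions" of the specification by executing it.
    assert(display_policy(10.0, false) == Display::Off);
    assert(display_policy(10.0, true)  == Display::Dim);
    assert(display_policy(500.0, true) == Display::Full);
    std::cout << "specification behaves as intended for the queried cases\n";
}
```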

Partition

Assignment of functionality to an implementation path is often performed very early in the design process – and therefore without sufficient information to make an informed decision. Furthermore, once selected, that path tends to become the only choice. Later learning is very hard to incorporate, leaving the design architecture prematurely frozen.

Typically, we do not know enough to make optimum partitioning choices. Design decisions are thus frozen into less-than-optimal configurations, and constraints become hard to meet by adjusting final implementations. To improve this step and enable a more effective and flexible flow, more information is essential – as is a process architecture that accommodates change.

Here, the use of models – the fundamental premise of MDD, and one already present in many of our current processes – can make the crucial difference. Models allow us to exercise the proposed partitioning and ask the next set of questions, as in the sketch below. Models can grow and evolve to add key information, and thus develop to enable the design validation needed.
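One minimal sketch of what exercising a proposed partitioning could look like, assuming each function carries rough per-target estimates: try every hardware/software assignment exhaustively and report which ones meet the budgets. All function names, numbers, and budgets here are hypothetical.

```cpp
#include <bitset>
#include <iostream>
#include <vector>

// Coarse partitioning model: each function has rough estimates for a
// software mapping and a hardware mapping (all numbers hypothetical).
struct Function {
    const char* name;
    double sw_time_ms, sw_energy_mj;  // estimate if mapped to software
    double hw_time_ms, hw_energy_mj;  // estimate if mapped to hardware
};

int main() {
    std::vector<Function> funcs = {
        {"video_decode", 40.0, 120.0, 5.0, 30.0},
        {"audio_mix",     8.0,  20.0, 2.0, 15.0},
        {"ui_render",    12.0,  35.0, 4.0, 25.0},
    };
    const double time_budget_ms = 30.0, energy_budget_mj = 120.0;

    // Each bit in the mask assigns one function to hardware (1) or software (0).
    for (unsigned mask = 0; mask < (1u << funcs.size()); ++mask) {
        double t = 0, e = 0;
        for (std::size_t i = 0; i < funcs.size(); ++i) {
            bool hw = mask & (1u << i);
            t += hw ? funcs[i].hw_time_ms   : funcs[i].sw_time_ms;
            e += hw ? funcs[i].hw_energy_mj : funcs[i].sw_energy_mj;
        }
        if (t <= time_budget_ms && e <= energy_budget_mj)
            std::cout << "feasible partition (HW mask " << std::bitset<3>(mask)
                      << "): " << t << " ms, " << e << " mJ\n";
    }
}
```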

Functional Virtual Platform

The next questions relate to overall system functionality and behavior – initially at a high abstraction level, but extending to recognize implementation characteristics. To find out whether we have assembled the full set of design elements required, and whether they interact with each other as expected, we should use a virtual platform. At this initial stage, the virtual platform does not need great detail, timing, or similar refinements. But it does need to execute rapidly enough to exercise all of the functionality. What the virtual platform does enable – at an important stage in the process – is a checkpoint between the partitioned disciplines to ensure continuing consistency.

SystemC has become a popular basis for an executable virtual platform that can incorporate the artifacts of hardware behavior, such as bus cycles and timing, without taking on the detail and execution-performance burden of fully elaborated RTL design. Performance can be estimated, system conflicts identified, and resources planned. In an effective MDD flow, the necessary building blocks of the virtual platform can be generated from the higher-level models used in the executable specification, linked together with standard library models such as processor instruction-set simulators, and run.
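To give a flavor of such a platform – this is a generic sketch, not a depiction of any vendor’s flow – the loosely timed SystemC example below models a traffic generator and a processing block exchanging data over a bounded channel, with latencies annotated via wait(). Module names and delays are invented for illustration.

```cpp
#include <systemc.h>

// Loosely timed sketch: producer and consumer communicate over a FIFO
// channel, with timing annotated via wait() rather than modeled in RTL.
SC_MODULE(Producer) {
    sc_fifo_out<int> out;
    SC_CTOR(Producer) { SC_THREAD(run); }
    void run() {
        for (int i = 0; i < 4; ++i) {
            wait(10, SC_NS);      // modeled production latency
            out.write(i);
        }
    }
};

SC_MODULE(Consumer) {
    sc_fifo_in<int> in;
    SC_CTOR(Consumer) { SC_THREAD(run); }
    void run() {
        for (;;) {
            int v = in.read();    // blocks until data is available
            wait(25, SC_NS);      // modeled processing latency
            std::cout << sc_time_stamp() << ": consumed " << v << "\n";
        }
    }
};

int sc_main(int, char*[]) {
    sc_fifo<int> chan(2);         // bounded channel models back-pressure
    Producer p("p"); p.out(chan);
    Consumer c("c"); c.in(chan);
    sc_start(200, SC_NS);         // run long enough to drain the traffic
    return 0;
}
```

The bounded FIFO is what makes resource contention visible: shrinking it to depth 1 immediately changes the timing profile, which is exactly the kind of question a virtual platform lets us ask before any RTL exists.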

This virtual platform can then be augmented with wrappers that add more comprehensive timing, execution-speed, and power knowledge – enabling validation of the constraints identified in the initial requirements and executable specification.
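One simple way to picture such a wrapper, sketched below with hypothetical names and numbers, is to decorate a functional model so that each transaction also accumulates an energy estimate, which can then be checked against the budget captured in the requirements.

```cpp
#include <iostream>

// Illustrative wrapper idea: add power bookkeeping around a functional
// model without changing its behavior. Names and numbers are hypothetical.
class DecoderModel {
public:
    void decode_frame() { /* functional behavior only */ }
};

class PoweredDecoder {
    DecoderModel base_;
    double energy_mj_ = 0.0;
    static constexpr double kEnergyPerFrameMj = 0.8;  // assumed estimate
public:
    void decode_frame() {
        base_.decode_frame();
        energy_mj_ += kEnergyPerFrameMj;  // annotate, don't alter behavior
    }
    double energy_mj() const { return energy_mj_; }
};

int main() {
    PoweredDecoder dec;
    for (int i = 0; i < 100; ++i) dec.decode_frame();
    const double budget_mj = 100.0;  // from the requirements specification
    std::cout << "estimated energy: " << dec.energy_mj() << " mJ ("
              << (dec.energy_mj() <= budget_mj ? "within" : "exceeds")
              << " budget)\n";
}
```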

Should the architecture be found to miss key needs, a Model-Driven Development flow enables revisiting prior partitioning choices, adjusting the architecture to provide alternative potential implementation solutions, and rapidly regenerating a new virtual-platform configuration. Multiple architectures can be assessed and the required functionality delivered for validation – enabling power-sensitive design areas to be considered, and more efficient techniques to be deployed, before committing to a fixed target platform.