
Archive for September, 2012

What Drives ASIC Prototyping with FPGAs in 2012 and Beyond?

Thursday, September 27th, 2012

The latest results from the annual CDT survey point to changes in the reasons behind ASIC prototyping – from hardware, software, and systems to IP.

This year’s Chip Design Trends (CDT) “ASIC/ASSP FPGA-based Prototyping” (2012) survey reinforced past trends while providing a few surprises. The survey yielded much data, so let’s start with a high-level overview.

In 2012, hardware-software co-design and co-verification were again the number-one reason for ASIC designers to use FPGA-based prototypes (see Figure 1). Not surprisingly, hardware chip verification was the second leading driver, followed by software and then system verification.

Figure 1: Current and planned reasons why ASIC/ASSP chip designers use FPGA-based prototypes. Courtesy of Chip Design Trends (CDT)

A surprise came when designers were asked about future planned projects. All of the above current motivators were still there. But respondents indicated that software development would fall behind IP development and verification as an important issue. This probably means that IP development and verification has proven to be a sore spot for today’s designers.

How do these trends for 2012 compare to years past? Hardware-software co-design and co-verification remain the biggest reason for the FPGA prototyping of ASICs, followed by hardware-chip verification (see Figure 2). In 2012, software development continues to climb as an important driver while system-integration issues fall. IP development and verification has mixed results, suggesting that this factor requires further investigation. I’ll try to cross-correlate the IP trend with other data in a future article.

Figure 2: Shown are cumulative reasons for FPGA prototyping – from 2008 through 2012. Courtesy of Chip Design Trends (CDT)

Hot Chips, Si IPOs, Wright’s Law and Desert Spaceports

Friday, September 21st, 2012

John and Sean talk about the Hot Chips show, the decline of silicon IPOs, Wright’s Law vs. Moore’s Law, and spaceports in the desert.

Industry Trends and Experts

John Blyler Interview on ChipEstimate.TV

 

Originally published on “IP Insider.”

IDF 2012 Shifts Focus to Cloud and Mobility

Tuesday, September 11th, 2012

A wide range of processor types – from datacenter servers to smartphones – should enable the accelerated growth of software applications for Intel-based devices.

Once again, the opening keynote at the Intel Developer Forum (IDF) was a visually dazzling event. But something was missing. To understand what, you need to compare this year’s event with the previous one.

Last year – at IDF 2011 – Intel CEO Paul Otellini talked about the ongoing transformations in transistor technology. Mainly, he focused on the growing consumer market for embedded products. Those transformations have been driven by the ever-increasing availability of transistors and by device improvements, such as 3D structures and ever-shrinking process geometries.

This year – at IDF 2012 – Intel’s Architecture Group VP and GM, David “Dadi” Perlmutter, explained how computing is reshaping everything from datacenter cloud computing to mobile devices. He showcased Intel’s ongoing efforts with developers to create applications spanning the cloud and intelligent systems that would “touch everyone on Mother Earth.” Connecting global users in this way requires a wide spectrum of processor technology, from the mobile-oriented Medfield “Atom” (millions of transistors) to the server-grade Xeon (billions of transistors).

Today, both of these devices are in production. Medfield-based smartphones are available in Asia and Europe. Xeon E5 servers are found in many of today’s datacenters. Interestingly, during the post-keynote question-and-answer session, Perlmutter emphasized that the Xeon E5 wasn’t intended as a replacement for Intel’s high-performance-computing (HPC) Itanium processor.

A common thread between IDF 2011 and IDF 2012 is the Ultrabook. These very thin, low-power laptops are powered by Intel Core processors. One of the more impressive demonstrations benchmarked the third-generation Core processor, or Ivy Bridge, against the upcoming fourth-generation Core processor, which is based on the Haswell microarchitecture (see Figure).

In this graphics-intensive demonstration, the Haswell architecture outperformed Ivy Bridge by about one-half.

One device missing from this year’s event was the Claremont, an experimental prototype processor. This Near Threshold Voltage (NTV) processor uses a novel, ultra-low-voltage circuit powered by a postage-stamp-sized solar cell. The Claremont was demonstrated during the 2011 keynote. This class of processor operates close to the transistor’s turn-on or threshold voltage – hence the NTV name.

Several weeks ago, in mid-August 2012, Intel Labs presented an update of a Claremont-based processor prototype at the Hot Chips forum. The speaker talked about the energy benefits of NTV computing using Intel’s IA-32, 32-nm CMOS processor technology. An important goal for the Claremont prototype was to extend the processor’s dynamic performance – from NTV to higher, more common computing voltages (as in the smartphone-based Medfield) while maintaining energy efficiency.

This year’s keynote theme was about the wide range of products – from smartphones to datacenter servers – being connected by a spiral of software. Developers were encouraged to make a difference to the world by creating useful products based on this range of technology.

Datapath Designs, Near Threshold Voltages, and Deeply Depleted Channels

Friday, September 7th, 2012

What do these tongue-twisting technical phrases have in common? They were all part of the morning session on the last day of the Hot Chips forum.

The catch-all title of “Technology and Scalability” was appropriate for the morning session of the last day at the Hot Chips forum. Michael Parker from Altera began the session by highlighting advances in the floating-point accuracy of field-programmable gate array (FPGA) devices. FPGAs are inherently better at fixed-point calculations, in part due to their routing architecture. Accurate floating-point calculations, by contrast, depend heavily on multiplier density, since they make extensive use of adders, multipliers, and trigonometric and other math functions. Often, these functions are pulled from libraries, resulting in inefficient multiplier implementations.

Last day at Hot Chips 2012

According to Parker, Altera took a different approach by using a new floating-point fused datapath implementation instead of the existing IEEE-based method. The datapath approach removes the typical normalization and de-normalization steps required in the multiplier-based IEEE representation.

However, the datapath approach only achieves this high floating-point accuracy on smaller matrix functions (like FFTs), where GFLOPS-per-watt efficiency and low latency – made possible by sufficient on-chip memory – are the primary requirements.
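
To make the fused-datapath idea concrete, here is a rough software analogy (a minimal sketch of the concept only, not Altera’s implementation; the variable names and vector sizes are my own): rounding after every multiply and add, as a chain of discrete IEEE-754 operators does, loses a little accuracy at each step, whereas accumulating in a wider, exact form and normalizing once at the end preserves it.

# Minimal sketch of the fused-datapath idea (illustration only, not Altera's design).
# A chain of discrete IEEE-754 operators rounds after every multiply and add;
# a fused datapath keeps intermediates in a wider form and normalizes once.
from fractions import Fraction
import random

random.seed(42)
a = [random.uniform(-1.0, 1.0) for _ in range(100000)]
b = [random.uniform(-1.0, 1.0) for _ in range(100000)]

# Discrete flow: every multiply and add is rounded back to machine precision.
discrete = 0.0
for x, y in zip(a, b):
    discrete += x * y

# Fused-style flow: accumulate exactly, round (normalize) to a float only once.
fused = float(sum(Fraction(x) * Fraction(y) for x, y in zip(a, b)))

# With 64-bit floats the gap is tiny; single-precision hardware pipelines see a
# much larger benefit from deferring normalization.
print(discrete, fused, abs(discrete - fused))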

Next up was Gregory Ruhl, who shared Intel Lab’s efforts to develop a Claremont-based processor prototype. He talked about the energy benefits of Near Threshold Voltage (NTV) computing using Intel’s IA-32, 32-nm CMOS processor technology.

Readers may remember the NTV processor (code-named “Claremont”) from last year’s Intel Developer Forum. My tweet from that show referenced a demonstration in which the solar-powered Claremont played a short video clip of a playful kitten:

Dark_Faust: #IDF2011 Cat dies when rainy days cut sun to solar-powered Intel processor. Very cool. #semieda #eda

Figure 2: Intel’s researchers have created a prototype chip that could allow a computer to power up on a solar cell the size of a postage stamp.

The Claremont relies on an ultra-low-voltage circuit to greatly reduce energy consumption. This class of processor operates close to the transistor’s turn-on or threshold voltage – hence the NTV name. Threshold voltages vary with transistor type. Typically, though, they are low enough to be powered by a postage-stamp-sized solar cell.
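
For a rough sense of why near-threshold operation saves so much energy (the voltages below are illustrative assumptions, not figures from the Hot Chips talk): dynamic switching energy scales roughly with the square of the supply voltage.

E_dyn ≈ C_eff · V_DD²
E_NTV / E_nominal ≈ (0.45 V / 1.0 V)² ≈ 0.2

In other words, running near threshold can cut switching energy per operation by roughly 5x, before accounting for leakage and the lower clock frequencies that low-voltage operation imposes.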

The other goal for the Claremont prototype was to extend the processor’s dynamic performance – from NTV to higher, more common computing voltages – while maintaining energy efficiency.

Ruhl’s results showed that the technology works for ultra-low-power applications that require only modest performance – from SoCs and graphics to sensor hubs and many-core CPUs. Reliable NTV operation was achieved using unique, IA-based circuit-design techniques for logic and memories.

Further development is needed to create standard NTV circuit libraries for common low-voltage CAD methodologies. Apparently, NTV designs require a re-characterized, constrained standard-cell library to operate reliably at such low corner voltages.

Finishing the session on “Technology and Scalability” was a presentation by Robert Rogenmoser from SuVolta, a semiconductor company focused on reducing CMOS power consumption. Rogenmoser talked about ways to reduce transistor variability for low-power, high-performance chips.

Transistor variability at today’s smaller process geometries comes from the usual sources: wafer-level yield variation and local transistor-to-transistor mismatch. Such variability has forced the semiconductor industry to look at new transistor technologies, especially for lower-power chips.

What is the solution? Rogenmoser discussed the pros and cons of three transistor alternatives: FinFET or TriGate; fully depleted (FD) silicon-on-insulator (SoI); and deeply depleted channel (DDC) transistors (see Figure 3). FinFET/TriGate devices promise high drive current, but they face manufacturing, cost, and intellectual-property (IP) challenges. The latter point refers to the IP changes required to support the new 3D transistor-gate structures.

According to Rogenmoser, FD-SoI transistor technology enjoys the benefits of undoped channels, but it lacks multi-voltage capability and has a limited supply chain. In his view, DDC transistors are the best solution. The process offers straightforward insertion into bulk planar CMOS – especially from 90 nm down to 20 nm and below. In terms of performance, DDC transistors show less variability and tighter corners, and they require simpler manufacturing steps. Equally important, he explained, is the ease of migrating existing IP to the DDC process.

Rogenmoser concluded by explaining how DDC technology can bring common low-power design techniques back to advanced nodes (e.g., dynamic voltage and frequency scaling, body biasing, and low-voltage operation).
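
As a back-of-the-envelope illustration of why these techniques matter (the operating points below are assumed for illustration only): dynamic power scales roughly as the square of the supply voltage times the clock frequency, so scaling voltage and frequency together gives better-than-linear savings.

P_dyn ≈ α · C · V_DD² · f
P(0.8 V, 1.0 GHz) / P(1.0 V, 2.0 GHz) ≈ (0.8)² × (1.0 / 2.0) ≈ 0.32

That is, halving the clock while lowering the supply cuts dynamic power to roughly a third, which is the basic appeal of dynamic voltage and frequency scaling.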

Figure 3a: FinFET – TriGate transistor technology
Figure 3b: FD-SoI transistor technology
Figure 3c: DDC transistor technology

 

Next week: The Rest of the Story