Chip Design Magazine



Posts Tagged ‘DAC’


Has The Time Come for SOC Embedded FPGAs?

Tuesday, August 30th, 2016

Shrinking technology nodes at lower product costs, plus the rise of compute-intensive IoT applications, brighten Menta’s e-FPGA outlook.

By John Blyler, IP Systems


The following are edited portions of my video interview at the Design Automation Conference (DAC) 2016 with Menta’s business development director, Yoan Dupret. – JB

John Blyler's interview with Yoan Dupret from Menta

Blyler: Your technology enables designers to include an FPGA almost anywhere on a System-on-Chip (SoC). How does your approach differ from others that purport to do the same thing?

Dupret: Our technology enables placement of a Field Programmable Gate Array (FPGA) onto a silicon ASIC, which is why we call it an embedded FPGA (e-FPGA). How are we different from others? First, let me explain why others have failed in the past while we are succeeding now.

In the past, the time just wasn’t right, and the cost of developing the SOC was still too high. Today, those challenges are fading. This has been confirmed by our customers and by GSA studies that explain the importance of having some programmable logic inside an ASIC.

Now, the time is right. We have spent the last few years focusing on research and development (R&D) to strengthen our tools and architectures and to build out competencies. Toolwise, we have a more robust and easier-to-use GUI, and our architecture has gone through several changes since the first generation.

Our approach uses standard cell-based ASICs, so we are not disruptive to the EDA tool flow of our customers. Our hard IP just plugs into the regular chip design flow using all of the classical techniques for CMOS design. Naturally, we support testing with standard scan-chain tests and impressive test coverage. We believe our FPGA performance is better than the competition’s in terms of the number of lookup tables per unit area, frequency, and power consumption.

Blyler: Are you targeting a specific area for these embedded FPGAs, e.g., IoT?

Dupret: IoT is one of the markets we are looking at, but it is not the only one. Why? Because the embedded FPGA fabric can go anywhere you have RTL that is intensively parallel (see Figure 1). For example, we are working on cryptographic algorithms inside the e-FPGA for IoT applications. We have traction on filters for digital radios (IIR and FIR filters), which is another IoT application. Further, we have customers in the industrial and automotive audio and image processing space.

Figure 1: SOC architecture with e-FPGA core, which is programmed after the tape-out. (Courtesy of Menta)

Do you remember when Intel bought Altera, a large FPGA company? This acquisition was, in part, for Intel’s High Performance Computing (HPC) applications. Now they have several big FPGAs from Altera right next to very high frequency processing cores. But there is another way to achieve this level of HPC. For example, a design could consist of a very big, parallel-intensive HPC architecture with a lot of lower frequency CPUs, and next to each of these CPUs you could have an e-FPGA.

Blyler: At DAC this year, there are a number of companies from France. Is there something going on there? Will it become the next Silicon Valley?

Dupret: Yes, that is true. There are quite a few companies doing EDA. Others are doing IP, some of which are well known. For example, Dolphin is based in Grenoble and is also part of the ecosystem there.

Blyler: That’s great to see. Thank you, Yoan.

To learn more about Menta’s latest technology: “Menta Delivers Industry’s Highest Performing Embedded Programmable Logic IP for SoCs.”

One EDA Company Embraces IP in an Extreme Way

Tuesday, June 7th, 2016

Silvaco’s acquisition of IPextreme points to the increasing importance of IP in EDA.

By John Blyler, Editorial Director

One of the most promising directions for future electronic design automation (EDA) growth lies in semiconductor intellectual property (IP) technologies, noted Laurie Balch in her pre-DAC (previously Gary Smith) analysis of the EDA market. As if to confirm this observation, EDA tool provider Silvaco just announced the acquisition of IPextreme.

At first glance, this merger may seem like an odd match. Why would an EDA tool vendor specializing in the highly technical analog and mixed-signal chip design space want to acquire an IP discovery, management and security company? The answer lies in the past.

According to Warren Savage, former CEO of IPextreme, the first inklings of a foundation for the future merger began at DAC 2015. The company had a suite of tools and an ecosystem that enabled IP discovery, commercialization and management. What it lacked was a strong sales channel and supporting infrastructure.

Conversely, Silvaco’s EDA tools were used by other companies to create customized analog chip IP. This has been the business model for most of the EDA industry, where EDA companies engineer and market their own IP. Only a small portion of the IP created by this model has been made commercially available to all.

According to David Dutton, the CEO of Silvaco, the acquisition of IPextreme’s tools and technology will allow them to unlock their IP assets and deliver this underused IP to the market. Further, this strategic acquisition is part of Silvaco’s 3-year plan to double its revenues by focusing – in part – on strengthening its IP offerings in the IoT and automotive vertical markets.

Savage will now lead the IP business for Silvaco. The primary assets from IPextreme will now be part of Silvaco, including:

  • Xena – A platform for managing both the business and technical aspects of semiconductor IP.
  • Constellations – A collective of independent, likeminded IP companies and industry partners that collaborate at both the marketing and engineering levels.
  • ColdFire processor IP and various interface cores.
  • “IP Fingerprinting” – A package that allows IP owners to “fingerprint” their IP so that customers can easily discover it in their own and others’ chip designs using “DNA analysis” software, without the need for GDSII tags.

The merger should be mutually beneficial for both companies. For example, IPextreme and its Constellation partners will now have access to a worldwide sales force and associated infrastructure resources.

On the other hand, Silvaco will gain the tools and expertise to commercialize their untapped IP cores. Additionally, this will complement the existing efforts of customers who use Silvaco tools to make their own IP.

As the use of IP grows, so will the need for security. To date, it has been difficult for companies to tell the brand and type of IP in their chip designs. This problem can arise when engineers unknowingly “copy and paste” IP from one project to another. The “IP fingerprinting” technology developed by IPextreme creates a digital representation of all the files in a particular IP package. This representation is entered into a Core Store that can then be used by other semiconductor companies to discover what internal and third-party IP is contained in their chip designs. This provides a way for companies to protect against the accidental reuse of their IP.

According to Savage, there is no way to reverse engineer a chip design from the fingerprinted digital representation.
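The fingerprint-and-discover flow described above can be sketched with ordinary cryptographic hashing, which also illustrates why the representation cannot be reverse engineered: a hash is one-way. This is only an illustrative sketch; IPextreme’s actual “DNA analysis” algorithm is not public, and the function names and store layout below are invented for the example.

```python
import hashlib
from pathlib import Path

def fingerprint_package(root):
    """Hash every file in an IP package directory.

    Returns {relative_path: sha256_digest}. The digests identify the
    files without revealing their contents (hashing is one-way).
    """
    prints = {}
    for path in sorted(Path(root).rglob("*")):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            prints[str(path.relative_to(root))] = digest
    return prints

def discover_ip(design_fingerprints, store):
    """Return names of registered IP packages whose files appear in a
    chip design, by intersecting digests with the fingerprint store."""
    seen = set(design_fingerprints.values())
    return {name for name, pkg in store.items()
            if seen & set(pkg.values())}
```

In this model, a vendor would register the output of `fingerprint_package` in a shared store; an integrator runs the same hash over a design tree and calls `discover_ip` to learn which third-party IP the design contains, with neither side exposing source files.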

Many companies seem to have a disconnect between the engineering, legal and business sides of the company. This disconnect causes a problem when engineers use IP without any idea of the licensing agreements attached to that IP.

“The problem is gaining the attention of big IP providers who are worried about the accidental reuse of third-party IP,” notes Savage. “Specifically, it represents a liability exposure problem.”

For smaller IP providers, having their IP fingerprints in the Core Store could potentially mean increased revenue as more instances of their IP become discoverable.

In the past, IP security measures have been implemented with limited success using hard and soft tags. (see, “Long Standards, Twinkie IP, Macro Trends, and Patent Trolls”) But tagging chip designs in this way was never really implemented in the major EDA place-and-route tools, like Synopsys’s IC Compiler. According to Savage, even fabs like TSMC don’t follow the Accellera tagging system, but have instead created their own security mechanisms.

For added security, IPextreme’s IP Fingerprinting technology does support the tagging information, notes Savage.

Our Day at DAC – Day 1 (Monday)

Monday, June 2nd, 2014

Here are brief observations on noteworthy presentations, cool demonstrations and hallway chats from the editorial staff covering “Day 1” at DAC 2014 – John Blyler, Gabe Moretti and Hamilton Carter.


DAC Report from Hamilton Carter:

Puuurrrple, so much purple!  The stage at the packed Synopsys, Samsung, ARM briefing this morning was backed by ceiling-to-floor Synopsys-purple curtains.  The Samsung vision video played on the two large screens on either side of the stage.  To steal a phrase from “Love Actually”, Samsung’s vision is that “touch-screens are… everywhere”.  Among the envisioned apps were a touch-screen floor for your kids’ room, complete with planetarium app; a touch-screen window for your Town Car, so you can adjust the thermostat in the car as your driver taxis you to your destination; and finally a touch-screen gadget for the kitchen that, when laid flat, weighs the food and registers the number of calories in the amount you’ve sliced off on its cutting-board touch screen, displays the recipe you’re using when upright, and finally, get ready for it… checks the ‘safety’ of your food, displaying an all-clear icon complete with a rad-safe emblem.  Apparently the future isn’t completely utopian!

Phil Dworsky, director of strategic alliances for Synopsys, introduced the three featured speakers, Kelvin Low of Samsung, Glenn Dukes of Synopsys, and Rob Aitken from ARM, and things got under way.  The key message of the presentation was that the Samsung/Synopsys/ARM collaboration on 14 nm 3D FinFET technology is ready to go.  The technology has been rolled out on 30 test chips and 5 customer chips that are going into production.

Most of the emphasis was on the 14 nm process node, but the speakers were also quick to point out that the 28 nm node isn’t going away anytime soon.  With its single patterning and reduced power consumption, it’s seen as a perfect fit for mobile devices that don’t need the cutting edge of performance yet.

Interesting bits:

  • It was nice to visit with Sanjay Gupta, previously of IBM Austin, who is now at Qualcomm, San Diego.
  • While smart phones have been outshipping PCs for a while, tablets are now predicted to outship PCs starting in 2015.
  • Brian Bailey of verification fame was one of the raffle winners.  He’s now a part of the IoT!
  • IoT predictions are still in the Carl Sagan range: there will be ‘billions and billions’.
  • Samsung’s foundry partner GLOBALFOUNDRIES has a fab, Fab 8, in Saratoga County, NY.
  • Last year’s buzzword was ‘metric driven’; this year’s, so far, is ‘ecosystem’.  The vision being plugged is collaborations of companies and/or tools that work as a ‘seamless, [goes without saying], ecosystem’.

Catching up with Amiq

I got to catch up with Christian from Amiq this morning.  Since they’re planted squarely in the IDE business, Amiq gets the fun job of working directly with silicon design and verification engineers.  Their products on display this year include their Eclipse-based work environment with support for e and SystemVerilog built in, their verification-code-centric linting tool Verissimo, and their documentation generation system Specador.

IC Manage

I’m always drawn in by a good ‘wrap a measurable, or at least documentable, flow around your design process’ story, so I dropped by the IC Manage booth this morning.

Their product encapsulates many of the vagaries of the IC development flow into a configuration management tool.  The backbone of the tool can be customized to the customer’s specific flow via scripts, and it provides a real-time updated HTML based view of what engineers are up to as project development unfolds.


DAC Report from Gabe Moretti:

Power Management and IP

Moscone South is all about IP and low power.  This is the 51st DAC and my 34th.  Time flies.  The most intimidating thing is that the Apple Developers Forum is going on at the same time, and they have TV trucks and live interviews on the street.  We, of course, do not.  It was nice to hear Antun Domic as one of the two keynote speakers this morning.  His discussion of how the latest EDA tools are used to produce designs fabricated with processes as old as 180 nanometers was refreshing.  In general, people equate the latest EDA tools with the latest semiconductor process.  Yet one needs to manage power even at 180 nanometers.

Chip Estimate runs a series of talks from IP developers in its booth.  I listened to Peter McGuinness of Imagination Technologies talk about advances in image processing.  It was interesting to hear him talk about lane-departure warning as an automotive feature employing such technology.  Now I know how it works in one of my cars.  On the other hand, to hear how the retail industry is planning to use facial recognition to choose for me what I should be interested in purchasing is not so reassuring.  But, then again, its use in robotics applications is fascinating.


DAC Report from John Blyler:

I. IP Panel: The founders of several successful private IP companies shared their experiences with an audience of nearly 50 attendees. The panelists included CAST, IPextreme, Methods2Business, and Recore Systems. The main takeaways were that starting an IP company takes passion and a plan.  But neither will work if you don’t have some product to offer and a few key relationships in the industry. (Warren said you need 3 key customers to start.) I’ll write more about this panel later. Here’s a link to the pre-DAC position statements from the panelists.

II. NI and Cadence – The Best of Both Worlds

George Zafiropoulos, VP, Solutions Marketing at National Instruments (NI)-AWR, has brought his many years of chip design and verification experience from the EDA industry to NI. He spoke at the DAC Cadence Theater about post- and pre-silicon verification being the best of both worlds. Those worlds consist of NI, which has traditionally been used for post-silicon verification testing, and Cadence, which is known for pre-silicon design and verification. George has proposed the use of NI test hardware and software to do pre-silicon verification in combination with Cadence’s emulation tools, i.e., Palladium. This proposed combination of tools elicited many questions from the audience, who were more familiar with the pre-silicon tools than the post-silicon testers. Verification languages were an issue for those who had never used the Mindstorms or other NI graphical tool suites. I’m sure we’ll learn more about this potential partnership between the NI and Cadence tool suites.

III. Visionary Talk by Wally Rhines, CEO, Mentor Graphics (prior to the afternoon keynote):

The title said it all: “EDA Grows by Solving New Problems.” Wally’s vision focused on how the EDA industry will grow even with the constraints of its relatively flat revenue. As he noted back in his 2004 DAC keynote, the largest growth in EDA tools is associated with the adoption of new methodologies, e.g., ESL, DFM, and FPGAs. Further, tools that support new methodologies have been the main drivers of growth in the PCB and semiconductor worlds.

“EDA needs to tap into new budgets … for emulation, embedded software … and in new markets,” explained Rhines. “The automotive industry is at the same stage of development as was the chip design industry in the 1970s. Their development process will have to be automated and with new tools.”

Another growth market will be hardware cyber security.

What Color is Your Semiconductor IP Box?

Thursday, June 14th, 2012

Black, white and even grey box testing techniques from the world of hardware and software integration are finding a place in semiconductor IP subsystems.

Much has been said about the need to incorporate software into chip and board level hardware design. Cadence’s EDA360 vision is but one example of the realization that silicon and software must be co-developed to achieve the optimal system design.

Integrating the disciplines of hardware (chip) and software design is not an easy task. It requires a systems engineering approach throughout the entire development life cycle, but particularly during the design and integration phase. While hardware and software engineers are experts in their respective “white box” domains, System Engineers have experience integrating the resulting “black box” subsystems.

Let’s be clear on terminology. “White box” refers to a method of testing where the internal workings of electronic components or software code are known. This is where hardware or software domain-specific experience and knowledge are needed.

Black box testing refers to functional tests where the internal workings of the hardware or software subsystems are unknown. In Black Box testing, only the input and output parameters of the subsystem are known. This is the realm of Systems Engineering.

To successfully integrate hardware and software subsystems, a Systems Engineer must have a working knowledge of both domains. He or she must be able to communicate effectively with both the hardware and software engineers during the white-box testing that precedes full system integration during the black-box testing phase.

Further, in the shrinking time-to-market (TTM) windows of today’s electronic systems, software development must start before hardware is fully available. This need has resulted in a co-design methodology between hardware and software which in turn affects traditional white- and black-box testing. The Systems Engineer must be involved in both co-design and co-testing to ensure validation and verification of system level requirements.

What does all of this mean to the world of semiconductor IP design? One of the problems with integrating large blocks of third-party IP – think ARM cores – is that signals may cross different clock domains. By design, semiconductor IP cores – logic, cell or chip layout – are black boxes for the SoC integration team. How does a Systems Engineer ensure successful integration in a purely black-box environment? One way is to have access to one or two relevant white-box parameters, resulting in what has been dubbed grey-box testing.

Grey-box testing is black-box testing with some knowledge of internal data structures and algorithms. It can be thought of as selective white-box testing, but without full access to the source code. Black-box models provide input and output signals, but nothing else. In IP integration, a grey-box model may provide a single level of register logic to enable inter-block analysis. This additional knowledge should provide greater test coverage that results in fewer end-product failures. One example of the grey-box approach was provided by Blue Pearl’s recent partnership with Xilinx – announced during DAC 2012 – to develop grey-box testing for ARM core-based System-on-Chips (SoCs).
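The distinction can be made concrete with a toy software model of an IP block. Everything below is invented for illustration (a real flow would exercise RTL, not Python): the black-box test drives only the pins, while the grey-box test additionally peeks at a single internal register.

```python
class Counter:
    """Toy IP block: a 4-bit counter that increments when `enable` is high."""

    def __init__(self):
        self._reg = 0              # internal state, hidden in the black-box view

    def clock(self, enable):
        """One clock cycle: update state, return the output pin value."""
        if enable:
            self._reg = (self._reg + 1) & 0xF
        return self._reg

def black_box_test(dut):
    # Only the pins are visible: drive `enable`, observe the output.
    return [dut.clock(1) for _ in range(3)] == [1, 2, 3]

def grey_box_test(dut):
    # Same stimulus, plus visibility into one internal register, which
    # lets the test confirm the 4-bit wrap-around that pure I/O
    # observation over a short run would miss.
    for _ in range(16):
        dut.clock(1)
    return dut._reg == 0           # peek: counter wrapped back to zero
```

A white-box test, by contrast, would inspect the full implementation of `clock` itself, which is exactly what the SoC integration team usually cannot do with third-party IP.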

White-, black- and grey-box testing strategies are but one of many issues faced in SoC IP integration. (see, “Experts at the Table: IP Subsystems”) Yet all of these issues are but a subset of the challenges encountered in the integration of larger hardware and software systems – e.g., at the board, module and top system level. The goal is that successful approaches are applied throughout all levels of the system hierarchy.


If you’d like to explore these and other hardware-software integration issues, then you might enjoy attending this online course which starts on June 25, 2012.

Originally published on “IP Insider.”



Images of Day 3 at DAC 2012

Friday, June 8th, 2012

Day 3 of DAC, captured in pictures and captions.

Honorable mentions (other companies that I visited or meant to visit):

Australian Semiconductor Technology Company

  • VWorks – Atanas Parashkevov, CTO and VP. Part of the Australian Semiconductor Technology Company (ASTC)
  • Calypto – Shawn McCloud, VP of Marketing


Systems, Software and IP Merge at DAC

Monday, June 4th, 2012

Here’s the first day of DAC, captured in pictures and captions.


SoC Costs Cut by Multi-Platform Design

Friday, June 1st, 2012

Upward SoC cost trend blunted as designers reused software, used verified IP, and cut the number of blocks, reports long-time EDA analyst Gary Smith.

During last year’s Design Automation Conference (DAC), EDA-veteran analyst Gary Smith predicted that it cost slightly over $75 million to design the average high-end System-on-Chip (SoC). This was way over the $50 million targeted by IDM-fabless companies and even further from the $25 million start-up level preferred by funding institutions.

Shortly after that prediction, several companies reported building SoCs at around the $40 million level. How did they beat the estimate? First, they used previously developed software. Second, they used IP that came with verification suites. Lastly, these companies significantly decreased the number of SoC blocks – below the preferred five core blocks. Taken together, these three factors constituted a methodology nicknamed the Multi-Platform Based Design approach.

In essence, this approach was based on the integration of existing platforms enhanced with a new application level to add competitive advantage. The greatest cost savings was realized from the reduction of new core designs.

The multi-platform based design platform has three levels: functional, foundation and application. The functional level represents the core of the SoC design, the broadest of the three platforms. It typically comes from a third party, e.g., an ARM Cortex-A9 processing system, and is not geared to a specific industry or product. If it comes from an in-house design, then it consists of all reused cores. This level provides no competitive advantage, since it uses third-party cores or IP.

The Foundation platform, also usually from a third-party vendor, provides only a slight industry or market differentiation. Most foundation cores are focused on the mobile and consumer electronic markets, e.g., Nvidia’s Tegra 3, TI’s OMAP and Qualcomm’s Snapdragon platforms. While enabling differentiation for a particular market segment – often the mobile or consumer electronic markets – foundation cores still provide only a small competitive advantage. Together, the functional and foundation platforms make up between 75 and 90 percent of the total gates in the SoC design.

At the top of the multi-platform based design is the application level, which provides the most market differentiation. This level consists of in-house or proprietary designs, e.g., IP or software from car-maker Audi’s navigation and infotainment systems. The drawback is that this level has the shortest product life cycle.

Applications that are popular can move from the application-level to the foundation level, as in the case of GPS and GPU SoCs. Foundation suppliers then begin to include these popular IPs in their regular offerings. If the application involves processing – like a GPU – then it may even evolve into the functional-level.

Those companies that create a popular application offering have a sustainable advantage that becomes very hard for competitors to surpass. Smith cited the example of the PC market. IBM developed the original PC, but within a decade Intel had taken over the market thanks to its platform approach. Now, as processing has shifted to low-power mobile devices, Intel’s platform has been surpassed by ARM’s.

Smith suggested that the good news for DAC is that the platform companies will find welcome business for their IP in the evolving system-level EDA market.

Points-of-Interest in the “DAC Zone”

Thursday, May 31st, 2012

If I had my choice, these are the papers and events that I would attend at the upcoming Design Automation Conference (DAC).

As Sean “Rod Serling” O’Kane intones: “… you’re moving into a land of both substance and possibilities … You’ve just crossed over into the DAC Zone.”


In that same spirit, I’ve scoured the upcoming DAC schedule to find the papers and events of both substance and possibilities. What follows is my list of activities that grabbed my attention – my DAC “must-sees.”

There is just one problem: I’m not the captain of my fate at trade shows. Typically, my schedule is decided by others. But if your fate is freer, then I humbly submit these entries for your consideration in “the DAC Zone.”


Sunday (June 3, 2012)

7pm – Come hear the 24th annual update on the state of EDA by Gary Smith.

This year’s talk will focus on multi-platform designs and how these platforms are dramatically cutting the cost of design. (Location: Marriott Hotel, Salon 6) 


Monday (June 4, 2012)

8:30am – System-Level Exploration of Power, Temperature, Performance, and Area for Multicore Architectures

Summary: With the proliferation of multicore architectures, system designers critically need simulation tools to perform early design space exploration of different architectural configurations. Designers typically need to evaluate the effect of different applications on power, performance, temperature, area and reliability of multicore architectures. (Location: 305, Tutorial repeats at 11:30am and 3:30pm)

11:30 am – Dr. John Heinlein of ARM will present the “IP Talks!” keynote. ( booth #1202) 

12:15 pm – A celebration of the 10th Anniversary of OpenAccess – Si2 Open Luncheon (Location: 303)

1:00 pm – Xilinx’s Tim Vanevenhoven will probably talk about the challenges of FPGA IP integration. Tim is an engaging speaker. Be sure to ask him about his recent cart-athlon experience. (Booth 1202)

3:15pm - Pavilion Panel: The Mechanics of Creativity

What does it take to be an idea machine? Design is an inherently creative process, but how can we be creative on demand? How can we rise above mundane tasks with flashes of brilliance? Discover secrets of technical and business creativity and calculated risk taking, and share stories of innovation. (Location: Booth #310)

Moderator: Karen Bartleson from Synopsys, Inc.

Speakers: Dee McCrorey from Risktaking for Success LLC; Sherry Hess from AWR Corp.;    Lillian Kvitko from Oracle


Tuesday (June 5, 2012)

8:30 am - Keynote: Scaling for 2020 Solutions 

Comparing the original ARM design of 1985 to today’s latest microprocessors, ARM’s Mike Muller will look at how far design has come, what EDA has contributed to enabling these advances in systems, hardware, operating systems, and applications, and how business models have evolved over 25 years. He will then speculate on the needs for scaling designs into solutions for 2020, from tiny embedded sensors through to cloud-based servers, which together enable the Internet of Things. He will look at the major challenges that need to be addressed to design and manufacture these systems and propose some solutions. (Location: 102/103)

10am – Pavilion Panel: Hogan’s Heroes: Learning from Apple

Apple. We admire their devices, worship their creators and praise their stock in our portfolios. Apple is synonymous with creative thinking, new opportunities, perseverance and wild success. Along the road, Apple set new technical and business standards. But how much has the electronics industry, in particular EDA, “where electronics begins,” learned from Apple? It depends. (Location: Booth #310)

Moderator: Jim Hogan from Tela Innovation, Inc.

Speakers: Jack Guedjf from Tensilica, Inc.; Tom Collopy from Aggios, Inc.; and Jan Rabaey – Univ. of California, Berkeley


(Why did the DAC committee schedule these two powerful talks at the same time?)

10am – Software and Firmware Engineering for Complex SoCs

Summary: Early software development is crucial for today’s complex SoCs, where the overall software effort typically eclipses the hardware effort. Further, delays in software directly impact the time to market of the end product. The presentations in this session explore how to architect ASIPs for wireless applications, how to bridge RTL and firmware development, and approaches in pre-silicon software development. (Location: 106)

Speakers from IMEC, Marvell, and Intel

11am – (Research Paper) Design Automation for Things Wet, Small, Spooky, and Tamable - Realizing Reversible Circuits Using a New Class of Quantum Gates

Summary: The future of design automation may well be in novel technologies and in new opportunities. This session begins with design techniques that in the past may have applied exclusively to electronic design automation, but now are applied to the wet (microfluidics), the small (nanoelectronics), and the spooky (quantum). The papers cover routing and placement, pin assignment, cell design, and technology mapping applied to microfluidics biochips, quantum gates, and silicon nanowire transistors. (Location: 300)

1:30pm – Can EDA Combat the Rise of Electronic Counterfeiting?

Summary: The Semiconductor Industry Association (SIA) estimates that counterfeiting costs the US semiconductor companies $7.5B in lost revenue, and this is indeed a growing global problem. Repackaging the old ICs, selling the failed test parts, as well as gray marketing, are the most dominant counterfeiting practices. Can technology do a better job than lawyers? What are the technical challenges to be addressed? What EDA technologies will work: embedding IP protection measures in the design phase, developing rapid post-silicon certification, or counterfeit detection tools and methods? (Location: 304)

– I’ve been discussing this area with growing interest:

1:30pm – 9.1: Physics Matters: Statistical Aging Prediction under Trapping/Detrapping

With shrinking device sizes and increasing design complexity, reliability has become a critical issue. Besides traditional reliability issues for power delivery networks and clock signals, new challenges are emerging. This session presents papers that cover a wide spectrum of reliability issues including long-term device aging, verification of power and 3-D ICs, and high-integrity, low-power clock networks. (Location: 300)


2pm – Stephen Maneatis of True Circuits will undoubtedly highlight trends in low node PLL and DLL IP, a critical element in all ICs.


4pm – Self-Aware and Adaptive Technologies: The Future of Computing Systems? — 14.1: Self-Aware Computing in the Angstrom Processor

Summary: This session will present contributions from industry and universities toward the realization of next-generation computing systems based on Self-Aware computing. Self-Aware computing is an emerging system design paradigm aimed at overcoming the exponentially increasing complexity of modern computing systems and improving performance, utilization, reliability, and programmability. In a departure from current systems which are based on design abstractions that have persisted since the 1960s which place significant burden on programmers and chip designers, Self-Aware systems mitigate complexity by observing their own runtime behavior, learning, and taking actions to optimize behaviors automatically. (Location: 304)




Wednesday (June 6, 2012)


9:15am – Dark Side of Moore’s Law

Semiconductor companies double transistor counts every 22 months, yet device prices stay relatively the same. This has been a windfall for customers but not for chip makers, whose design costs increase exponentially with every new cycle. Venture capitalist Lucio Lanza and panelists will discuss what it will take to bring design costs and profitability back into harmony with Moore’s Law. (Location: Booth #310)

Moderator: Lucio Lanza – Lanza TechVentures

Speakers: John Chilton from Synopsys; consultant Behrooz Abdi; and Steve Glaser from Xilinx








9:30am – Low-Power Design and Power Analysis –  22.2: On the Exploitation of the Inherent Error Resilience of Wireless Systems under Unreliable Silicon

For some applications, it is sometimes worth giving up a limited amount of precision or reliability if that leads to significant power savings. Similarly, being able to operate “off the grid” means one needs to give up the certainty of traditional power sources to enable power harvesting opportunities. The papers in this session illustrate the trade-offs inherent in operating in extreme low-power regimes. (Location: 306)


10:45am – Keynote: Designing High Performance Systems-on-Chip

Experience state-of-the-art design through the eyes of two experts who help shape these advanced chips! In this unique dual keynote, IBM’s Joshua Friedrich and Intel’s Brad Heaney will discuss the design process at two leading companies. The speakers will cover key challenges, engineering decisions, and design methodologies for achieving top performance and turn-around time. The presentations describe where EDA meets practice at the most advanced nodes, so they will be of interest to designers and EDA professionals alike. (Location: 102/103)


1:30pm – Design Challenges and EDA Solutions for Wireless Sensor Networks

The good folks at CEA-LETI, Grenoble, France, aim to present a complete overview of state-of-the-art technologies and key research challenges in the design and optimization of wireless sensor networks (WSN). It will specifically cover ultra-low-power (ULP) computing architectures and circuits, system-level design methods, power management, and energy-scavenging mechanisms for WSN. A key aspect of this special session is the interdisciplinary nature of the challenges in WSN design, which span basic hardware components through software and so require active engagement from academic and industrial professionals in EDA, computer and electrical engineering, computer science, and telecommunication engineering. (Location: 304)


3pm – Synopsys’s John Swanson speaks on verification IP. Afterward, Cadence’s Susan Peterson will talk on the same topic. Might be worth listening to see how the two EDA giants differentiate themselves. (Booth 1202)


3:30pm – Cadence’s Susan Peterson will address the audience on verification IP. You’ll probably want to catch the prior Synopsys presentation, too.


3:30pm – Pavilion Panel: Teens Talk Tech

High school students tell us how they use the latest tech gadgets, and what they expect to be using in three to five years. They give insights into the next killer applications and what they would like to see in the next generation of hot new electronics products that we should be designing now. (Location: Booth #310)

Moderator: Kathryn Kranen from Jasper Design Automation

Speakers: Students from Menlo High School, Atherton, CA


4pm – Breaking out of EDA: How to Apply EDA Techniques to Broader Applications

Throughout its history, myriads of innovations in EDA (Electronic Design Automation) have enabled high performance semiconductor products with leading edge technology. Lately we have observed several research activities where EDA innovations have been applied to broader applications with complex nature and the large scale of data sets. The session provides some tangible results of these multi-disciplinary works where non-traditional EDA problems directly benefit from the innovation of EDA research. The examples of non-EDA applications vary from bio-medical applications to smart water to human computing. (Location: 304)


4:30pm – Pavilion Panel: Hardware-Assisted Prototyping and Verification: Make vs. Buy?

As ASIC and ASSP designs reach billions of gates, hardware-assisted verification and/or prototyping is becoming essential, but what is the best approach? Should you buy an off-the-shelf system or build your own? What criteria – time-to-market, cost, performance, resources, quality, ease of use – are most important? Panelists will share their real world design trade-offs. (Location: Booth #310)

Moderator: Gabe Moretti from Gabe on EDA

Speakers: Albert Camilleri from Qualcomm, Inc.; Austin Lesea from Xilinx, Inc.; and Mike Dini from The Dini Group, Inc.




Thursday (June 7, 2012)


11am – Keynote: My First Design Automation Conference – 1982

C. L. Liu talks about his first DAC experience: It was June 1982 that I had my first technical paper in the EDA area presented at the 19th Design Automation Conference. It was exactly 20 years after I completed my doctoral study, and exactly 30 years ago today. I would like to share with the audience how my prior educational experience prepared me to enter the EDA field and how my EDA experience prepared me for the other aspects of my professional life.


1:30pm – It’s the Software, Stupid! Truth or Myth?

It’s tough to differentiate products with hardware. Everyone uses the same processors, third-party IP, and foundries; now it’s all about software. But is this true? Since user response, power consumption, and support of standards rely on hardware, one camp claims software is only as good as the hardware it sits on. Opponents argue that software differentiates mediocre products from great ones. A third view says only exceptional design of both hardware and software creates great products – and the tradeoffs make great designers. Watch industry experts debate whether it’s really all about software. (Location: 305)

Chair: Chris Edwards from the Tech Design Forum

Speakers: Serge Leef from Mentor Graphics Corp.; Chris Rowen from Tensilica, Inc.; Debashis Bhattacharya from FutureWei Technologies, Inc.; Kathryn S. McKinley from Microsoft Research, Univ. of Texas; and Eli Savransky from NVIDIA Corp.


3:30pm – Parallelization and Software Development: Hope, Hype, or Horror?

With the fear that the death of scaling is imminent, hope is widespread that parallelism will save us. Many EDA applications are described as “embarrassingly parallel,” and parallel approaches have certainly been effectively applied in many areas. Before the panel begins, come hear perspectives on software development and the challenges associated with writing good software, challenges that are only exacerbated by the growing need to write robust, testable, and efficient parallel applications. Then watch the panelists debate future productive directions and dead ends in developing and deploying parallel algorithms. Find out if claims of super speedups are exaggerated and if the investment in parallel algorithms is worth the high development cost. (Location: 305)

Chair: Igor Markov from the Univ. of Michigan

Speakers: Anirudh Devgan from Cadence Design Systems, Inc.; Kunle Olukotun from Stanford Univ.; Daniel Beece from IBM Research; Joao Geada from CLK Design Automation, Inc.; and Alan J. Hu from the Univ. of British Columbia
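The “embarrassingly parallel” workloads the panel alludes to – independent corner runs, Monte Carlo samples, and the like – reduce to a parallel map with no inter-task communication. A minimal Python sketch (the toy delay model and corner values are invented for illustration, not drawn from any real flow):

```python
from concurrent.futures import ThreadPoolExecutor

def gate_delay(corner):
    """Toy delay model: delay grows at low voltage and high temperature.
    Coefficients are illustrative placeholders only."""
    voltage, temp_c = corner
    return 1.0 / voltage + 0.001 * temp_c

# Three hypothetical PVT corners: (voltage, temperature in C).
corners = [(0.9, 125), (1.0, 25), (1.1, -40)]

# Each corner is independent of the others, so a plain parallel map
# suffices -- no locks, no shared state, no task ordering.
with ThreadPoolExecutor(max_workers=4) as pool:
    delays = list(pool.map(gate_delay, corners))

worst = max(delays)  # sign-off cares about the slowest corner
```

Real EDA kernels would use processes or GPU threads rather than Python threads, but the structure – partition, map, reduce – is the same.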


3:30pm – Research Paper: Wild And Crazy Ideas

It cannot get any crazier! Your friends on Facebook verify your designs. Your sister is eavesdropping on your specification. Do not take “no” for implication. Build satisfying circuits with noise. Let spin-based synapses make your head spin. Use parasitics to build 3-D brains. (Location: 308)

– 53.1: CrowdMine: Towards Crowdsourced Human-Assisted Verification

Chair:   Farinaz Koushanfar from Rice Univ.

Speakers: Wenchao Li from the Univ. of California, Berkeley; Sanjit A. Seshia from the Univ. of California, Berkeley; and Somesh Jha from the Univ. of Wisconsin



Works in progress


55.18 — Using a Hardware Description Language as an Alternative to Printed Circuit Board Schematic Capture

This paper proposes using hardware description languages (HDLs) for PC board schematic entry. Doing so provides benefits already known to ASIC and FPGA designers, including the ability to design using standard and open languages, to edit designs with familiar text editors, to use source code control systems for collaboration and for tracking and managing design changes, and to rely on IDEs to help in the design entry process. This talk will introduce PHDL – an HDL specifically developed for PC board design capture – and describe examples of its initial use for PC board designs.

Speakers from Brigham Young Univ.
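The core idea – a board is just parts, pins, and nets, all expressible as version-controllable text – can be sketched without PHDL itself. The following toy Python model is not PHDL syntax, only an illustration of text-based schematic capture; all class and net names are hypothetical:

```python
class Net:
    """A named electrical node; records which (part, pin) pairs touch it."""
    def __init__(self, name):
        self.name = name
        self.pins = []

class Part:
    """A board component with a reference designator and named pins."""
    def __init__(self, refdes, pin_names):
        self.refdes = refdes
        self.pins = {p: None for p in pin_names}

    def connect(self, pin, net):
        self.pins[pin] = net
        net.pins.append((self.refdes, pin))

# A trivial board: an LED driven through a series resistor.
vcc, gnd, n1 = Net("VCC"), Net("GND"), Net("N1")
r1 = Part("R1", ["1", "2"])   # series resistor
d1 = Part("D1", ["A", "K"])   # LED: anode, cathode

r1.connect("1", vcc)
r1.connect("2", n1)
d1.connect("A", n1)
d1.connect("K", gnd)
```

Because the whole design is plain text, it diffs, merges, and version-controls like any other source file – the benefit the abstract highlights over binary schematic formats.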

55.21 — TinySPICE: A Parallel SPICE Simulator on GPU for Massively Repeated Small Circuit Simulations

Nowadays, variation-aware IC designs require many thousands or even millions of repeated SPICE simulations of relatively small nonlinear circuits. In this work, we present a massively parallel SPICE simulator on GPU, TinySPICE, for efficiently analyzing small nonlinear circuits such as standard cell designs, SRAMs, etc. Our GPU implementation allows a large number of small circuit simulations to run in the GPU’s shared memory using novel circuit linearization and matrix solution techniques, and eliminates most GPU device memory accesses during the Newton-Raphson iterations, thereby enabling extremely high-throughput SPICE simulations on GPU. Compared with CPU-based SPICE simulations, TinySPICE achieves up to 264X speedups for SRAM yield analysis without loss of accuracy.

Speakers from Michigan Technological University
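The Newton-Raphson kernel that TinySPICE batches across GPU threads can be illustrated on a single node: a voltage source driving a diode to ground through a resistor. A minimal sketch of one such solve (component values are textbook-style placeholders, not taken from the paper):

```python
import math

def solve_diode_node(vs=1.0, r=1e3, i_s=1e-14, vt=0.025, v0=0.6, tol=1e-12):
    """Newton-Raphson solve of KCL at the resistor/diode node:
        (vs - v)/r = i_s * (exp(v/vt) - 1)
    i.e. resistor current into the node equals diode current out of it."""
    v = v0
    for _ in range(100):
        # Residual: net current into the node (should converge to zero).
        f = (vs - v) / r - i_s * (math.exp(v / vt) - 1.0)
        # Derivative of the residual with respect to the node voltage.
        df = -1.0 / r - (i_s / vt) * math.exp(v / vt)
        step = f / df
        v -= step
        if abs(step) < tol:
            break
    return v
```

Each such solve is small and independent across process corners or Monte Carlo samples, which is why batching thousands of them in fast on-chip shared memory pays off on a GPU.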



Originally published on – “IP Insider”

Conservation of Design Pain

Thursday, December 1st, 2011

Regardless of system-design approach, painful tradeoffs are still needed–usually during integration.


Earlier this month, Steve Leibson shared his “prognostications from the ICCAD panel” concerning the shape of things to come for the EDA and chip design industry.


The part of this blog that caught my attention was the comments made by Patrick Groeneveld, Magma’s Chief Technologist and the General Chair for DAC 2012. Groeneveld acknowledged two paths to handling chip design complexity: partitioning and reuse. But he believed that both paths were evils, since they introduce inefficiencies into the overall design.


Leibson disagreed, pointing out that the divide-and-conquer method was a tried and true approach dating back to the Roman Empire. “…it’s an approach that seems to have withstood the test of time. However, a divide-and-conquer strategy does indeed lead to suboptimal design in terms of efficient resource use. I just don’t know of any engineering discipline that avoids such inefficiencies when tackling projects of comparable complexity. Is it hubris to think that electrical engineering and chip design are somehow different?”


Both Groeneveld and Leibson offer classic arguments to the age-old problem of dealing with complexity. There are no new solutions to this dilemma, only a re-shifting of unpleasant trade-offs. In a broader sense, this re-shifting can be thought of as maintaining the “Conservation of Design Pain.” I use the word “design” for brevity and rhythm. To be correct, I should have used “development,” since the pain is spread across the full system/product life cycle, from design through manufacturing.



This law of “pain” acknowledges the shifting of difficult decisions to different parts of the development cycle, depending upon the methodology. For example, both partitioning and reuse are useful techniques that overcome certain design complexities by increasing the design pain in other areas, namely, in integration.


Centuries of systems engineering confirm that most systems work best when they have low coupling and high cohesion between subsystems. This is a golden rule in the partitioning between (and within) hardware and software systems. Reuse follows the same rule, with the added advantage of functionally verified blocks of design.


By reducing complexity, both partitioning and reuse simplify the work of design engineers. For example, by utilizing code or hardware reuse, engineers don’t have to design everything, which affords them more time to concentrate on their areas of expertise. This leads to greater specialization, which can be good.


But it also leads to a greater need for reintegration and often increases the complexity of interfaces. This effectively shifts the “pain” from the module to the interface subsystem.


Shifting pain from one part of the development cycle to another is the inevitable result of dealing with complexity. If recent trends are any indication, then the integration engineers are in for a world of hurt.

Why Gamers Matter to DAC

Thursday, June 10th, 2010

If you ask the organizers of the Design Automation Conference (DAC) about the future of the show, they’ll point to a strong number of attendees. If you ask the same question of DAC exhibitors, without exception they will all point to the already high and steadily rising costs of show floor space – not to mention union labor and booth support costs.


These two interrelated trends raise certain questions. For example, how will the continued shrinking of the show floor at DAC affect the conference as a whole, in terms of attractiveness to attendees as well as their registration costs? The continuing consolidation of the EDA market doesn’t help, as it thins out the number of potential exhibitors.


What about start-up companies? First, there is evidence to suggest that the number of EDA start-ups is shrinking, due in part to a decrease in venture capital investments. Second, most start-ups simply cannot afford to exhibit at DAC.


What about the technical papers and presentations at DAC? More and more companies are spending their show dollars to generate and support these valuable technical sessions. That’s great, but it suggests that DAC may change back into a much smaller IEEE conference. The effect of such a change on exhibitors is obvious – they will be greatly reduced. How such changes would affect the number of attendees is unclear.


How about virtualizing the DAC, i.e., migrating the show into a virtual conference? While this is hardly a new idea, its time may have finally arrived. The benefits are too compelling to ignore – reduced costs for both exhibitors and attendees, while possibly leading to an increase in registration numbers.


Granted, these efforts are already underway, but with mixed results. I have yet to find a conference attendee in any space who prefers virtual conferences to the real thing. This may be a generational issue, but that has yet to be proven. Simply put, most folks find virtual conferences dull and poorly supported by presenters and exhibitors, which leads to poor attendance. Part of the problem is one of technology: you need a fast machine and specialized equipment to bring the virtual attendee experience closer to the real thing.


But the lack of “sensory” experience – called augmented reality in the mainstream – is one of the topics of another show taking place near DAC in LA. The show is called the Electronic Entertainment Expo, or E3. Yes – it’s a gamer’s show. But it is also the place where Microsoft will be highlighting the new Xbox man-machine interface, available in time for Christmas 2010. Dubbed Project Natal, it’s nothing short of an affordable full-body recognition platform. Not surprisingly, hardware heavyweights Intel and nVidia will be making strategic announcements at the show as well.


This technology will affect the way we experience virtual conferences in the near future. As always, consumer game technologies will open doors that quickly expand into other markets. In this case, gamer technology may breathe new life into highly technical, niche “virtual” conferences like DAC of the future.


Look for my coverage next week from both DAC and E3.
