Archive for May, 2012

Points-of-Interest in the “DAC Zone”

Thursday, May 31st, 2012

If I had my choice, these are the papers and events I would attend at the upcoming Design Automation Conference (DAC).

As Sean “Rod Serling” O’Kane intones: “… you’re moving into a land of both substance and possibilities … You’ve just crossed over into the DAC Zone.”

  

In that same spirit, I’ve scoured the upcoming DAC schedule to find the papers and events of both substance and possibilities. What follows is my list of activities that grabbed my attention – my DAC “must-sees.”

There is just one problem: I’m not the captain of my fate at trade shows. Typically, my schedule is decided by others. But if your fate is freer, then I humbly submit these entries for your consideration in “the DAC Zone.”

++++++++++++++++

Sunday (June 3, 2012)

7pm – Come hear the 24th annual update on the state of EDA by Gary Smith.

This year’s talk will focus on multi-platform designs and how these platforms are dramatically cutting the cost of design. (Location: Marriott Hotel, Salon 6) 

+++++++++++++++++

Monday (June 4, 2012)

8:30am – System-Level Exploration of Power, Temperature, Performance, and Area for Multicore Architectures

Summary: With the proliferation of multicore architectures, system designers critically need simulation tools to perform early design space exploration of different architectural configurations. Designers typically need to evaluate the effect of different applications on power, performance, temperature, area and reliability of multicore architectures. (Location: 305, Tutorial repeats at 11:30am and 3:30pm)
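
To make the tutorial’s premise concrete, here is a minimal sketch of early design-space exploration in Python: sweep core count and clock frequency through toy power/performance models and keep the Pareto-optimal configurations. The models and numbers are invented for illustration and have nothing to do with the presenters’ actual tools.

```python
# Toy design-space sweep: evaluate rough power/performance estimates for
# multicore configurations and keep the Pareto-optimal ones.
from itertools import product

def estimate(cores, ghz):
    perf = cores * ghz * 0.8                 # assume 80% parallel efficiency
    power = cores * (0.5 + 0.9 * ghz ** 2)   # toy dynamic-power model, watts
    return perf, power

configs = [(c, f, *estimate(c, f))
           for c, f in product([2, 4, 8, 16], [1.0, 1.5, 2.0, 2.5])]

# Keep configurations not dominated by another (higher perf AND lower power).
pareto = [a for a in configs
          if not any(b[2] >= a[2] and b[3] <= a[3] and b != a
                     for b in configs)]

for cores, ghz, perf, power in sorted(pareto):
    print(f"{cores:2d} cores @ {ghz:.1f} GHz -> perf {perf:5.1f}, power {power:5.1f} W")
```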

11:30 am – Dr. John Heinlein of ARM will present the “IP Talks!” keynote. (Chipestimate.com Booth #1202)

12:15 pm – A celebration of the 10th Anniversary of OpenAccess – Si2 Open Luncheon (Location: 303)

1:00 pm – Xilinx’s Tim Vanevenhoven will probably talk about the challenges of FPGA IP integration. Tim is an engaging speaker. Be sure to ask him about his recent cart-athlon experience. (Chipestimate.com Booth 1202)

3:15pm – Pavilion Panel: The Mechanics of Creativity

What does it take to be an idea machine? Design is an inherently creative process, but how can we be creative on demand? How can we rise above mundane tasks with flashes of brilliance? Discover secrets of technical and business creativity and calculated risk taking, and share stories of innovation. (Location: Booth #310)

Moderator: Karen Bartleson from Synopsys, Inc.

Speakers: Dee McCrorey from Risktaking for Success LLC; Sherry Hess from AWR Corp.; and Lillian Kvitko from Oracle

+++++++++++++++++++++

Tuesday (June 5, 2012)

8:30 am – Keynote: Scaling for 2020 Solutions

Comparing the original ARM design of 1985 to today’s latest microprocessors, ARM’s Mike Muller will look at how far design has come, what EDA has contributed to these advances in systems, hardware, operating systems, and applications, and how business models have evolved over 25 years. He will then speculate on the needs of scaling designs into solutions for 2020, from tiny embedded sensors through to cloud-based servers, which together enable the internet of things. He will look at the major challenges that must be addressed to design and manufacture these systems and propose some solutions. (Location: 102/103)

10am – Pavilion Panel: Hogan’s Heroes: Learning from Apple

Apple. We admire their devices, worship their creators and praise their stock in our portfolios. Apple is synonymous with creative thinking, new opportunities, perseverance and wild success. Along the road, Apple set new technical and business standards. But how much has the electronics industry, in particular EDA, “where electronics begins,” learned from Apple? It depends. (Location: Booth #310)

Moderator: Jim Hogan from Tela Innovations, Inc.

Speakers: Jack Guedj from Tensilica, Inc.; Tom Collopy from Aggios, Inc.; and Jan Rabaey – Univ. of California, Berkeley

 

(Why did the DAC committee schedule these two powerful talks at the same time?)

10am – Software and Firmware Engineering for Complex SoCs

Summary: Early software development is crucial for today’s complex SoCs, where the overall software effort typically eclipses the hardware effort. Further, delays in software directly impact the time to market of the end product. The presentations in this session explore how to architect ASIPs for wireless applications, how to bridge RTL and firmware development, and approaches in pre-silicon software development. (Location: 106)

Speakers from IMEC, Marvell, and Intel

11am – (Research Paper) Design Automation for Things Wet, Small, Spooky, and Tamable - Realizing Reversible Circuits Using a New Class of Quantum Gates

Summary: The future of design automation may well be in novel technologies and in new opportunities. This session begins with design techniques that in the past may have applied exclusively to electronic design automation, but now are applied to the wet (microfluidics), the small (nanoelectronics), and the spooky (quantum). The papers cover routing and placement, pin assignment, cell design, and technology mapping applied to microfluidic biochips, quantum gates, and silicon nanowire transistors. (Location: 300)

1:30pm – Can EDA Combat the Rise of Electronic Counterfeiting?

Summary: The Semiconductor Industry Association (SIA) estimates that counterfeiting costs US semiconductor companies $7.5B in lost revenue, and it is a growing global problem. Repackaging old ICs, selling failed test parts, and gray marketing are the most dominant counterfeiting practices. Can technology do a better job than lawyers? What are the technical challenges to be addressed? What EDA technologies will work: embedding IP protection measures in the design phase, developing rapid post-silicon certification, or counterfeit detection tools and methods? (Location: 304)

– I’ve been discussing this area with growing interest:

1:30pm – 9.1: Physics Matters: Statistical Aging Prediction under Trapping/Detrapping

With shrinking device sizes and increasing design complexity, reliability has become a critical issue. Besides traditional reliability issues for power delivery networks and clock signals, new challenges are emerging. This session presents papers that cover a wide spectrum of reliability issues including long-term device aging, verification of power and 3-D ICs, and high-integrity, low-power clock networks. (Location: 300)

 

2pm – Stephen Maneatis of True Circuits will undoubtedly highlight trends in advanced-node PLL and DLL IP, critical elements in all ICs.

 

4pm – Self-Aware and Adaptive Technologies: The Future of Computing Systems? — 14.1: Self-Aware Computing in the Angstrom Processor

Summary: This session will present contributions from industry and universities toward the realization of next-generation computing systems based on Self-Aware computing. Self-Aware computing is an emerging system design paradigm aimed at overcoming the exponentially increasing complexity of modern computing systems and improving performance, utilization, reliability, and programmability. In a departure from current systems, which are built on design abstractions that have persisted since the 1960s and place a significant burden on programmers and chip designers, Self-Aware systems mitigate complexity by observing their own runtime behavior, learning, and taking action to optimize behavior automatically. (Location: 304)
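
As a rough illustration of the paradigm described above – not the Angstrom project’s actual mechanism – here is a toy observe-decide-act loop in Python that watches its own (synthetic) power readings and adapts a hypothetical clock divider to stay inside a budget. All names and numbers are assumptions.

```python
# Minimal observe-decide-act sketch: the system measures its own behavior
# and adjusts a knob (a hypothetical clock divider) to meet a power budget.
import random

POWER_BUDGET = 10.0   # watts (assumed)
divider = 1           # 1 = full speed

for epoch in range(8):
    # Observe: stubbed, noisy synthetic power reading.
    power = 12.0 / divider + random.uniform(-0.5, 0.5)
    throughput = 100.0 / divider

    # Decide and act: compare the observation against the goal.
    if power > POWER_BUDGET:
        divider += 1                              # slow down to save power
    elif power < 0.5 * POWER_BUDGET and divider > 1:
        divider -= 1                              # speed up if ample headroom

    print(f"epoch {epoch}: {power:.1f} W, {throughput:.0f} ops/s, divider={divider}")
```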

 

 

+++++++++++++++++++

Wednesday (June 6, 2012)

 

9:15am – Dark Side of Moore’s Law

Semiconductor companies double transistor counts every 22 months, yet device prices stay relatively flat. This has been a windfall for customers but not for chip makers, whose design costs rise exponentially with each new cycle. Venture capitalist Lucio Lanza and panelists will discuss what it will take to bring design costs and profitability back into harmony with Moore’s Law. (Location: Booth #310)

Moderator: Lucio Lanza – Lanza TechVentures

Speakers: John Chilton from Synopsys; consultant Behrooz Abdi; and Steve Glaser from Xilinx

 

 

 


9:30am – Low-Power Design and Power Analysis – 22.2: On the Exploitation of the Inherent Error Resilience of Wireless Systems under Unreliable Silicon

For some applications, it is worth giving up a limited amount of precision or reliability in exchange for significant power savings. Similarly, operating “off the grid” means giving up the certainty of traditional power sources in exchange for power-harvesting opportunities. The papers in this session illustrate the trade-offs inherent in operating in extreme low-power regimes. (Location: 306)

 

10:45am – Keynote: Designing High Performance Systems-on-Chip

Experience state-of-the-art design through the eyes of two experts who help shape these advanced chips! In this unique dual keynote, IBM’s Joshua Friedrich and Intel’s Brad Heaney will discuss the design process at two leading companies. The speakers will cover key challenges, engineering decisions, and design methodologies for achieving top performance and turn-around time. The presentations describe where EDA meets practice at the most advanced nodes, so they will be of keen interest to designers and EDA professionals alike. (Location: 102/103)

 

1:30pm – Design Challenges and EDA Solutions for Wireless Sensor Networks

The good folks at CEA-LETI, Grenoble, France, aim to present a complete overview of state-of-the-art technologies and key research challenges in the design and optimization of wireless sensor networks (WSN). The session will specifically cover ultra-low-power (ULP) computing architectures and circuits, system-level design methods, power management, and energy-scavenging mechanisms for WSN. A key aspect of this special session is the interdisciplinary nature of the challenges, which range from basic hardware components to software design and so require the active engagement of academic and industrial professionals across EDA, computer and electrical engineering, computer science, and telecommunication engineering. (Location: 304)

 

3pm – Synopsys’s John Swanson speaks on verification IP. Afterward, Cadence’s Susan Peterson will talk on the same topic. It might be worth listening to hear how the two EDA giants differentiate themselves from one another. (Chipestimate.com Booth 1202)

 

3:30pm – Cadence’s Susan Peterson will address the audience on verification IP. You’ll probably want to catch the prior Synopsys presentation, too.

 

3:30pm – Pavilion Panel: Teens Talk Tech

High school students tell us how they use the latest tech gadgets, and what they expect to be using in three to five years. They give insights into the next killer applications and what they would like to see in the next generation of hot new electronics products that we should be designing now. (Location: Booth #310)

Moderator: Kathryn Kranen from Jasper Design Automation

Speakers: Students from Menlo High School, Atherton, CA

 

4pm – Breaking out of EDA: How to Apply EDA Techniques to Broader Applications

Throughout its history, myriad innovations in EDA (Electronic Design Automation) have enabled high-performance semiconductor products at leading-edge technology. Lately we have observed several research activities in which EDA innovations are applied to broader applications of a complex nature involving large-scale data sets. The session provides tangible results of this multidisciplinary work, in which non-traditional EDA problems benefit directly from EDA research. The examples of non-EDA applications range from bio-medical applications to smart water to human computing. (Location: 304)

 

4:30pm – Pavilion Panel: Hardware-Assisted Prototyping and Verification: Make vs. Buy?

As ASIC and ASSP designs reach billions of gates, hardware-assisted verification and/or prototyping is becoming essential, but what is the best approach? Should you buy an off-the-shelf system or build your own? What criteria – time-to-market, cost, performance, resources, quality, ease of use – are most important? Panelists will share their real world design trade-offs. (Location: Booth #310)

Moderator: Gabe Moretti from Gabe on EDA

Speakers: Albert Camilleri from Qualcomm, Inc.; Austin Lesea from Xilinx, Inc.; and Mike Dini from The Dini Group, Inc.

 

 

 +++++++++++++++++

Thursday (June 7, 2012)

 

11am – Keynote: My First Design Automation Conference – 1982

C. L. Liu talks about his first DAC experience: It was in June 1982 that I had my first technical paper in the EDA area presented at the 19th Design Automation Conference. That was exactly 20 years after I completed my doctoral study and exactly 30 years ago today. I would like to share with the audience how my prior educational experience prepared me to enter the EDA field and how my EDA experience prepared me for the other aspects of my professional life.

 

1:30pm – It’s the Software, Stupid! Truth or Myth?

It’s tough to differentiate products with hardware. Everyone uses the same processors, third party IP and foundries; now it’s all about software.  But, is this true?  Since user response, power consumption and support of standards rely on hardware, one camp claims software is only as good as the hardware it sits on. Opponents argue that software differentiates mediocre products from great ones. A third view says only exceptional design of both hardware and software creates great products – and the tradeoffs make great designers. Watch industry experts debate whether it’s really all about software. (Location: 305)

Chair: Chris Edwards from the Tech Design Forum

Speakers: Serge Leef from Mentor Graphics Corp.; Chris Rowen from Tensilica, Inc.; Debashis Bhattacharya from FutureWei Technologies, Inc.; Kathryn S. McKinley from Microsoft Research, Univ. of Texas; and Eli Savransky from NVIDIA Corp.

 

3:30pm – Parallelization and Software Development: Hope, Hype, or Horror?

With the fear that the death of scaling is imminent, hope is widespread that parallelism will save us. Many EDA applications are described as “embarrassingly parallel,” and parallel approaches have certainly been effectively applied in many areas. Before the panel begins, come hear perspective on software development and the challenges associated with writing good software that are only exacerbated by the growing need to write robust, testable, and efficient parallel applications. Then watch the panelists debate future productive directions and dead ends to developing and deploying parallel algorithms. Find out if claims to super speedups are exaggerated and if the investment in parallel algorithms is worth the high development cost. (Location: 305)

Chair: Igor Markov from the Univ. of Michigan

Speakers: Anirudh Devgan from Cadence Design Systems, Inc.; Kunle Olukotun from Stanford Univ.; Daniel Beece from IBM Research; Joao Geada from CLK Design Automation, Inc.; and Alan J. Hu from the Univ. of British Columbia
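
For readers new to the jargon, here is a minimal sketch of what “embarrassingly parallel” means in an EDA setting: thousands of independent evaluations with no shared state, which a process pool scales across cores almost linearly. The delay model below is an invented stand-in for a real simulator call.

```python
# Embarrassingly parallel Monte Carlo sweep: each sample is independent,
# so a process pool distributes the work with no coordination needed.
from concurrent.futures import ProcessPoolExecutor
import random

def evaluate(seed):
    rng = random.Random(seed)
    # Stand-in for one independent simulation run (e.g., one MC sample).
    vth = rng.gauss(0.45, 0.03)          # sampled threshold voltage
    delay = 1.0 / max(0.9 - vth, 1e-6)   # toy delay model
    return delay

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        delays = list(pool.map(evaluate, range(10_000), chunksize=256))
    print(f"worst-case delay over {len(delays)} samples: {max(delays):.3f}")
```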

 

3:30pm – Research Paper: Wild And Crazy Ideas

It cannot get any crazier! Your friends on Facebook verify your designs. Your sister is eavesdropping on your specification. Do not take “no” for implication. Build satisfying circuits with noise. Let spin-based synapses make your head spin. Use parasitics to build 3-D brains. (Location: 308)

– 53.1: CrowdMine: Towards Crowdsourced Human-Assisted Verification

Chair:   Farinaz Koushanfar from Rice Univ.

Speakers: Wenchao Li from the Univ. of California, Berkeley; Sanjit A. Seshia from the Univ. of California, Berkeley; and Somesh Jha from the Univ. of Wisconsin

 

+++++++++++++++++

Works in Progress

 

55.18 — Using a Hardware Description Language as an Alternative to Printed Circuit Board Schematic Capture

This paper proposes using hardware description languages (HDLs) for PC board schematic entry. Doing so provides benefits already familiar to ASIC and FPGA designers, including design in standard and open languages, editing with familiar text editors, the availability of source-code control systems for collaboration and for tracking and managing design changes, and the use of IDEs to aid design entry. This talk will introduce PHDL, an HDL developed specifically for PC board design capture, and describe examples of its initial use for PC board designs.

Speakers from Brigham Young Univ.
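
PHDL’s actual syntax isn’t reproduced here. As a language-neutral sketch of the underlying idea – connectivity captured as plain, diffable text rather than a binary schematic – the Python below models parts and nets as simple data; the part names and pin numbers are invented.

```python
# Text-based board capture sketch: parts and nets live in a plain file
# that a source-control system can diff, the way an HDL-based flow would.
parts = {
    "U1": {"type": "LM317", "pins": {1: "ADJ", 2: "OUT", 3: "IN"}},
    "R1": {"type": "RES_0603_240", "pins": {1: "A", 2: "B"}},
}

nets = {
    "VIN":  [("U1", 3)],
    "VOUT": [("U1", 2), ("R1", 1)],
    "ADJ":  [("U1", 1), ("R1", 2)],
}

# Emit a flat netlist, the artifact a layout tool would consume.
for net, conns in nets.items():
    pins = ", ".join(f"{ref}.{pin}" for ref, pin in conns)
    print(f"NET {net}: {pins}")
```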

55.21 — TinySPICE: A Parallel SPICE Simulator on GPU for Massively Repeated Small Circuit Simulations

Today’s variation-aware IC designs require many thousands or even millions of repeated SPICE simulations of relatively small nonlinear circuits. In this work, we present TinySPICE, a massively parallel GPU-based SPICE simulator for efficiently analyzing small nonlinear circuits such as standard cell designs, SRAMs, etc. Our implementation performs large numbers of small circuit simulations in the GPU’s shared memory using novel circuit linearization and matrix solution techniques, eliminating most device memory accesses during the Newton-Raphson iterations and thereby enabling extremely high-throughput SPICE simulation. Compared with CPU-based SPICE simulations, TinySPICE achieves up to 264X speedups for SRAM yield analysis without loss of accuracy.

Speakers from Michigan Technological University
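
Here is a rough sketch of the idea TinySPICE exploits, with NumPy standing in for the GPU kernels: solve the same tiny nonlinear circuit for many parameter samples at once using a vectorized Newton-Raphson loop. The circuit (a resistor driving a diode) and all constants are illustrative, not the paper’s benchmarks.

```python
# Batched Newton-Raphson: one tiny nonlinear circuit (resistor feeding a
# diode), solved for 100,000 resistor samples simultaneously.
import numpy as np

N = 100_000
rng = np.random.default_rng(0)
vdd = 1.8
r = rng.normal(1e3, 50, N)        # per-sample resistor variation (ohms)
i_s, vt = 1e-14, 0.02585          # diode saturation current, thermal voltage

v = np.full(N, 0.6)               # initial guess for the diode voltage
# Newton-Raphson on f(v) = (vdd - v)/r - i_s*(exp(v/vt) - 1)
for _ in range(50):
    f = (vdd - v) / r - i_s * (np.exp(v / vt) - 1.0)
    df = -1.0 / r - (i_s / vt) * np.exp(v / vt)
    v -= f / df

print(f"diode voltage: mean {v.mean():.4f} V, sigma {v.std():.2e} V")
```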

 

+++++++++

Originally published on Chipestimate.com – “IP Insider”

Does Innovation Lie Beyond Software?

Tuesday, May 22nd, 2012

The romance between hardware and software sours, then sweetens after a “threesome”. But no divorce is planned. – Review by Chris Ciufo

Today, it is fashionable to suggest that innovation lies beyond what is common. But perhaps innovation lies in seeing the familiar in a new way.

John von Neumann (1903-57)

In the beginning, there was hardware. But hardware soon tired of its single-purpose flip-flops, NAND gates and 555 timers. It longed for a playmate, something to give its existence greater meaning. In desperation, hardware called upon the great architect Von Neumann for help. Being a compassionate yet playful deity, Von Neumann created software.

For a while, all was good. But in time, software grew beyond hardware. Basic firmware evolved into operating systems and eventually application programs which were completely detached from the hardware. For its part, hardware was happy because it could do so many more things than before. But the two wares seemed to be growing apart.

Pleased with its detachment, software soon found that it could model hardware. This capability created an entirely new industry called electronic design automation (EDA). Modeling enabled an architectural idea known as reuse which led to the business of intellectual property (IP).

Software enjoyed its new life until something terrible happened. Hardware became commoditized – cheap, standardized and very reliable. Once that happened, software began to feel lonely and bored. In despair, it called upon the old architect for a new playmate. But Von Neumann had been replaced by a polymorphic being known as the Consumer. This rather vague and easily confused architect didn’t care about hardware or software. It only cared about the experience provided by its interaction with both hardware and software.

It seemed as if neither software nor hardware were as important as before. Software sighed, wondering if there was something beyond it. Would software – and maybe even hardware – have to serve a different purpose?

Software decided to model this problem. How could the wares provide the experience sought by the Consumer? According to the model, familiar concepts such as power, performance, size and cost would be critical factors in achieving this experience. This, in turn, meant that software and hardware would have to work together in ways that they had never done before.

Finally, software understood. To achieve the experience desired by the Consumer, software and hardware would have to innovate, i.e., to play together in new and different ways. This realization pleased software very much. But before telling hardware the good news, software thanked the Consumer for making its life more interesting again. Unfortunately, the Consumer had already become bored and was busy inventing blended reality. But that’s another story.

++++++++
Originally published on Chipestimate.com “IP Insider”





Print’s Role in Semiconductor IP Design

Friday, May 18th, 2012

Print content continues its steady decline in our everyday lives. But what is the real impact on semiconductor IP designs?

Few semiconductor IP designers use print exclusively in the development of their Systems-on-Chip (SoCs). But print still plays a part in the creation of complex chips. For how much longer?

Maybe not much longer, according to a number of different perspectives that emerged this week:

What role does print play in the design of your SoCs? Drop me a quick line to let me know!

 

+++++++++

Originally published on Chipestimate.com – “IP Insider”

DDM and PLM Tools Challenge Semiconductor IP Reuse

Tuesday, May 15th, 2012

Recent data suggest both the value and the shortcomings of design data management (DDM) and project lifecycle management (PLM) tools for improving IP reuse.

Last time, I focused on the potential long revenue tail of chip design afforded by the extraction, packaging, and selling of semiconductor IP. IPextreme is one example of a company that enables the extraction, packaging, and creation of licensable IP products. Websites like the GSA portal, IPextreme’s Constellation platform, and Chipestimate.com can help with distribution and potential sales – among other things.

Yet all of these companies, with the exception of Chipestimate.com, live outside the realm of the chip development process. Chipestimate.com provides estimation tools like InCyte Chip Estimator and Cadence Chip Planning System (CCPS) that allow designers to make IP trade-offs in terms of power, performance and even cost. But who can help manage the actual process of chip design?

The answer lies in the world of design data management (DDM) tools, a broad category that encompasses such companies as ICManage, Cliosoft, Methodics, Numetrics, Satin Technology and others. Reaching even higher levels of design abstraction are hardware-software tools aimed at complete system-level development. (That’s a discussion for another day – or perhaps a month of days.)

Let’s return to the abstraction level of chip design and semiconductor IP creation. A recent study found that one of the top factors driving the use of chip-level design management systems is “IP Reuse/Logistics Management (43%).” This data comes from the annual Global Design Management report sponsored by ICManage. Further, the year-over-year data suggests that improved IP reuse and logistics management remain elusive goals for most SoC developers.

The report also found shortcomings in existing tool offerings, noting that the most critical feature for IP reuse/logistics management is bug notification and tracing (50%), followed closely by integrating and assembling the IP in the design (48%) and efficiently making internal IP available for reuse (47%).

Regardless of the shortcomings, any tools that can help manage the growing complexity of the chip design and IP reuse processes are welcomed in our industry.

References:

  • Collaboration Penalty Is Steep For Engineers - System-Level Design sat down to discuss chip-design productivity and quality issues with Srinath Anantharaman, president and founder of Cliosoft; Ronald Collett, president and CEO of Numetrics Management Systems; and Michel Tabusse, CEO and co-founder of Satin Technologies.
  •  The IP Blame Game - The topic of IP quality in the SoC era is difficult to define, and solutions to problems relating to IP quality, verification, and use are hard to find. Debates rage between IP users, suppliers, and EDA vendors about where the responsibility lies for making quality IP available for use and re-use in an efficient, predictable, and scalable manner.
  • EDA Extends Board Design into Manufacturing - A recent EDA and PCB acquisition represents a significant merger between the worlds of electronic and mechanical manufacturing.
++++++++
Originally published on Chipestimate.com “IP Insider”

Low-Power Undercurrents at GlobalPress 2012

Thursday, May 10th, 2012

While not the primary theme at this year’s Globalpress eSummit 2012, low-power concerns were present in almost every presentation, as these snippets reveal.

Altera – Jeff Waters, Senior VP and GM

  • HardCopy (structured ASIC product) can further reduce power as compared to FPGAs by hardwiring a good portion of the chip. For reference, a chip company needs 30 million units at $10 per unit for an ASIC implementation to make sense.
  • Servers are becoming more application specific to handle social media, financial, and other segments. The growth in these segments means that servers must also become more power sensitive. One approach – used by IBM – is to mix CPUs with specialized accelerators to help reduce power by removing general-purpose processors. [Interesting footnote: Intel is working with FPGA vendors Altera and Achronix to develop both desktop and server chips.]

 

Tensilica – Chris Rowen, PhD, CTO

  • For mobile phone designs, voice requirements are outpacing both Moore’s Law and battery technology. Designers will need to innovate rather than just ride the wave of silicon technology (Moore’s Law). Mobile phones need increasingly lower power matched with higher performance. Unfortunately, battery technology only improves by a couple of percent per year (see the quick compounding check after this list).
  • Advanced audio and voice methods for mobile devices have become much more DSP-intensive to handle noise control (as in a car), beam-forming microphone arrays, and always-on voice recognition. The latter needs low latency but also low power. One approach is local extraction of phonemes (the individual sounds used to create speech) using Hidden Markov Models, a standard technique in speech recognition. These devices need to be “always on” so you can have the illusion of being “always off.”
  • The host CPU in a smart phone cannot keep up with audio requirements, and power is critical. An ARM Cortex processor is a great CPU, but not a great DSP. There is a big performance gap – roughly 15x – between running an audio codec on an optimized DSP versus a general-purpose CPU.
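
A quick compounding check on Rowen’s point, with assumed rates of 2x silicon capability every two years versus roughly 3% yearly battery improvement:

```python
# Compounding comparison: silicon capability versus battery density.
# Both rates are assumptions for illustration, not Rowen's figures.
years = 10
silicon = 2 ** (years / 2.0)      # ~2x every 2 years
battery = 1.03 ** years           # ~3% per year

print(f"after {years} years: silicon ~{silicon:.0f}x, battery ~{battery:.1f}x")
print(f"gap: ~{silicon / battery:.0f}x must come from design innovation")
```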

 

… Editorial in progress – more to follow …

Google’s Software Process Challenges Semiconductor IP

Friday, May 4th, 2012

A recent FCC report blames Google’s software development process for its WiFi privacy breach. How stable is the process for semiconductor IP creation?

The findings were shocking! An FCC report concluded that a lone engineer was to blame for Google’s most controversial breach of online privacy. The report highlighted “apparently serious shortcomings in Google’s software development process.”

“These include claims from Google engineers that they were free to add code to a project without supervision if they thought they “could improve it”, a failure to follow through on a recommendation to have the privacy matter screened by one of the company’s in-house lawyers, and the pre-approval by a senior manager of a document before it was even written.” – Financial Times

This is simply shocking! How could these horrendous missteps have happened? Perhaps to meet deadlines, improve product performance, or boost developer morale?

Certainly these shortcomings never happen at other software development companies! After all, what manager would approve changing the code to improve performance? (Answer: Almost any.) Or what engineer or manager wouldn’t enjoy meeting with legal beagles to explain a technical issue? (Answer: Almost all.) And what manager hasn’t pre-approved some paperwork to get a project going or back on track? (Answer: The great majority.) In the case of the documentation, the FCC report doesn’t indicate whether the document was a software specification, user guide or product brochure.

My somewhat sarcastic point is that all of these shortcomings in Google’s software development process are common practices – sometimes even best practices. Anyone who has led a team of software developers in the real world – as I have – can attest to the occasional transgression to meet deadlines, stay on budget, improve the product or just get the job done.

Even the most serious allegation in the FCC report seems inconclusive. The search company originally blamed the privacy breach on a lone engineer who intended to write software to collect WiFi network data, not personal information. The validity of that claim should be easy enough to discern by looking at the code. A peer review of the software would quickly confirm the developer’s guilt or innocence.

Blaming the software development process is a tried-and-true way to divert responsibility from management. Neither the FCC report nor Google’s responses provide much insight. One is tempted to ask how Google measures the maturity of their internal software process. Do they use a standard Capability Maturity Model Integration (CMMI) approach or something similar? Searching Google for the answer is frustrating at best. Try it.

Characteristics of the Capability Maturity Model "best practices" for software development.

Some readers may wonder what all of this has to do with semiconductor IP development. Software development challenges are headaches faced by all engineers, programmers and managers – from applications and middleware down through firmware and even – gasp! – chip-specific RTL. Have you ever wondered whether a method exists to evaluate the software development process of semiconductor IP? Not the verification of IP functionality, but the validation of the development process itself?

Warren Savage, CEO of IPextreme, shared insights into this question and others during an interview at the recent GlobalPress eSummit. Look for Savage’s comments in my next blog.

 

++++++++++

Originally posted on “IP Insider”