
Chip Design Magazine



Archive for December, 2011

APAC Surges Ahead in Global IP Market

Wednesday, December 21st, 2011

A recent report by Technavio Insights confirms strong growth in IP usage and development in Asian countries. What will this mean to the future of chip design?


It has been a notable year for the semiconductor intellectual property (IP) industry. Technavio Insights, a research platform of Infiniti Research, expects the global semiconductor IP market to grow at 7.75 percent year-over-year until FY2014. This growth is traced to continuous advancement in chips and electronic devices, in addition to demand for wireless, analog and optical technologies.


Total revenue for the global semiconductor IP market is driven by three key regions: the Americas; Europe, the Middle East and Africa (EMEA); and Asia Pacific (APAC). (Courtesy of TechNavio Analysis)


The birthplace of the world’s semiconductor industry – the Americas – continues to contribute the largest share of total revenue for the global IP industry. TechNavio reports that most of the IP vendors in the EMEA region cater to customers either in the US or in the APAC region. This is why the EMEA region contributes only 26 percent of the overall global revenue.


Despite its relatively late entry into the semiconductor market, the APAC region has already outgrown the EMEA region in terms of market share. A rapid increase in IP-related activities has spurred this growth, as has the shifting of R&D centers from the US and Europe to minimize production costs.


Strong growth in the Asian IP market is one of the reasons for the introduction of major IP portals in Japan and China. These portals will also make local Asian IP available for global consumption in the design of future chips. Such trends will lend an interesting twist to questions of IP security, theft, quality and verification. All of which will make for interesting future blogs.



Technical Trade-offs Leave Long Tail

Thursday, December 15th, 2011

Architectural trade-offs – typically resulting in IP – made early in a design can affect a company’s market participation for years to come.


RF Telescope at Arecibo Picks up Dr. Who

Monday, December 12th, 2011

Yes – it’s a fake story. Still, it would be interesting to see the mathematics showing the reflected signal strength from 25 light years away.
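For the curious, a back-of-envelope radar-equation estimate shows just how hopeless detecting such a reflection would be. Every figure below – transmitter power, antenna gain, the reflector’s cross-section, Arecibo’s aperture – is an illustrative assumption, not a number from the story:

```python
import math

# Assumed figures for a rough estimate (none come from the original story):
P_t = 1e5           # transmitted power, W (~100 kW VHF TV transmitter)
G_t = 10.0          # transmit antenna gain (broadcast antennas are low-gain)
c = 299_792_458.0   # speed of light, m/s
d = 25 * 9.4607e15  # 25 light years, in metres
sigma = 1e6         # assumed radar cross-section of the reflecting body, m^2
A_e = 73_000.0      # Arecibo's ~305 m dish as effective aperture, m^2

# Monostatic radar equation: received power falls off as 1/d^4
# because the signal spreads out on the way there AND on the way back.
P_r = P_t * G_t * sigma * A_e / ((4 * math.pi) ** 2 * d ** 4)
print(f"received power ≈ {P_r:.3e} W")
```

The result is on the order of 10^-55 W – dozens of orders of magnitude below any receiver’s noise floor, which is exactly why the story could only ever have been an April Fools’ joke.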

Why is this story fake? As pointed out on Skeptic Friends, the first clue was the date: April 1, 2009. Second, the story site merely looked like a BBC webpage; the URL was not actually hosted by the BBC.

Still, I thought it was real enough to write the following post: “Set the Way-Back machine for 41 MHz, Mr. Peabody. I’m going to listen to 50 year old Dr Who reruns from 25 light years away! (Thx to Paula for pointing this out.)”

47 Year Old Television Signals Bouncing Back to Earth

“While searching deep space for extra-terrestrial signals, scientists at the Arecibo Observatory in Puerto Rico have stumbled across signals broadcast from Earth nearly half a century ago.”

It’s too bad that the story is fake AND that it cited Arecibo, an important U.S. asset that is struggling for funding. Here’s a story from my first visit to this remarkable and remote research facility: “Remote RF Telescope Brings Sci-Fi to Reality”

RF Telescope at Arecibo, Puerto Rico.


Voltage Spikes Lead to Deeper Integration

Friday, December 9th, 2011

The move by Silicon Labs toward power systems integration on MCUs points to a larger trend followed by Silicon Blue and IMEC, among others.

Efficient voltage regulation is critical to the design of ultra-low powered systems.

I was reminded of this point during a recent interview with Silicon Labs. The company had just announced improvements to both its microcontroller unit (MCU) and wireless MCU for power-sensitive embedded applications. Silicon Labs claimed that their low-power technology enables 40 percent less system current draw and up to 65 percent longer battery life than competing MCU products.

The system that draws the least energy from the battery will achieve the lowest power usage – all other conditions being equal. One way to reduce energy usage is to employ highly efficient voltage conversion techniques that draw less current – both in steady state and in transient or “spiking” scenarios. Improved energy efficiency was a key part of the recent Silicon Labs announcement, but it also led to an interesting side discussion about energy scavenging systems.
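The two percentages in the announcement are consistent with simple arithmetic: battery life scales inversely with average current draw. A quick sanity check, using a hypothetical coin cell and baseline current for illustration:

```python
# Illustrative figures (assumed, not from the announcement):
capacity_mah = 220.0   # e.g. a CR2032 coin cell
baseline_ma = 1.0      # hypothetical average system current

# "40 percent less system current draw"
improved_ma = baseline_ma * (1 - 0.40)

# Battery life (hours) = capacity / average current
life_baseline_h = capacity_mah / baseline_ma
life_improved_h = capacity_mah / improved_ma

gain = life_improved_h / life_baseline_h - 1
print(f"battery life gain: {gain:.0%}")  # → 67%
```

A 40 percent cut in current yields a 1/0.6 ≈ 1.67× lifetime, i.e. roughly the “up to 65 percent longer battery life” that Silicon Labs claimed.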

One way to make battery power last longer is through power efficiency. Another is by restoring energy to the battery, e.g., with an alternative energy system. I asked Silicon Labs if its improved MCU platforms would interface with energy scavengers.

Keith Odland, the company’s MCU marketing manager, explained that the challenge in interfacing with energy scavenging devices lies with the power inputs. As an example, he cited piezoelectric elements – common in scavenging systems. Even though these devices output only microjoules of energy, they can still create voltage spikes on the order of tens to hundreds of volts.

The voltage regulation techniques that the company has incorporated into their MCUs to improve energy efficiency will also help prepare them to handle future energy scavenging systems. “All of these improvements will … accommodate non-traditional energy sources – things like switching regulators that are boost converters; switching regulators that are buck converters; wide operating ranges; linear regulating systems and temporary energy storage devices.”

Odland did caution that, while intriguing, many energy scavenging devices don’t yet have the economic drivers to push them beyond what is available in most battery platforms. “The exceptions are devices embedded into bridge suspensions and things on top of radio towers that have high servicing costs,” he said. Today, it is still cheaper to replace a $0.15 battery than to design a new energy scavenging system.

He noted that the market is beginning to see more creative energy scavenging systems enter the mainstream, e.g., tire pressure monitoring systems. (See “Power Bits: Smarter Tires, After CMOS”)

Another alternative power source that is gaining momentum is solar. Here, too, ultra-low power is critical to success, as was recently demonstrated by Citizen Watch’s selection of Silicon Blue’s ultra-low-power FPGA IP for its solar-powered Eco-Drive Satellite Wave watch. Citizen claims that this is the world’s first light-powered GPS-synchronized watch.

Regardless of when alternative energy sources like solar and scavengers go mainstream, having high-efficiency energy conversion capabilities integrated into the same chip as the processor will help designers both now and in the future.

Conservation of Design Pain

Thursday, December 1st, 2011

Regardless of system-design approach, painful trade-offs are still needed – usually during integration.


Earlier this month, Steve Leibson shared his “prognostications from the ICCAD panel” concerning the shape of things to come for the EDA and chip design industry.


The part of this blog that caught my attention was the comments made by Patrick Groeneveld, Magma’s Chief Technologist and the General Chair for DAC 2012. Groeneveld acknowledged two paths to handling chip design complexity: partitioning and reuse. But he argued that both paths were evil, since they introduce inefficiencies into the overall design.


Leibson disagreed, pointing out that the divide-and-conquer method was a tried and true approach dating back to the Roman Empire. “…it’s an approach that seems to have withstood the test of time. However, a divide-and-conquer strategy does indeed lead to suboptimal design in terms of efficient resource use. I just don’t know of any engineering discipline that avoids such inefficiencies when tackling projects of comparable complexity. Is it hubris to think that electrical engineering and chip design are somehow different?”


Both Groeneveld and Leibson offer classic arguments on the age-old problem of dealing with complexity. There are no new solutions to this dilemma, only a re-shifting of unpleasant trade-offs. In a broader sense, this re-shifting can be thought of as maintaining the “Conservation of Design Pain.” I use the word “design” for brevity and rhythm. To be correct, I should have used “development,” since the pain is spread across the full system/product life cycle, from design through manufacturing.



This law of “pain” acknowledges the shifting of difficult decisions to different parts of the development cycle, depending upon the methodology. For example, both partitioning and reuse are useful techniques that overcome certain design complexities by increasing the design pain in other areas, namely, in integration.


Centuries of systems engineering confirm that most systems work best when they have low coupling and high cohesion between subsystems. This is a golden rule in the partitioning between (and within) hardware and software systems. Reuse follows the same rule, with the added advantage of functionally verified blocks of design.
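The low-coupling/high-cohesion rule can be illustrated with a minimal software sketch of a two-subsystem partition. All of the names below are hypothetical, invented purely for illustration:

```python
# Low coupling / high cohesion in miniature: each module keeps its own
# concerns to itself (cohesion) and the two talk only through one narrow,
# well-defined interface type (coupling). Hypothetical names throughout.

from dataclasses import dataclass

@dataclass(frozen=True)
class SensorReading:
    """The ENTIRE interface between the two subsystems."""
    millivolts: int

class AnalogFrontEnd:
    """Cohesive: everything about signal acquisition lives here."""
    def sample(self) -> SensorReading:
        raw_mv = 1234  # stand-in for a real ADC read
        return SensorReading(raw_mv)

class Controller:
    """Depends only on SensorReading, never on how it was produced."""
    THRESHOLD_MV = 1000
    def over_threshold(self, reading: SensorReading) -> bool:
        return reading.millivolts > self.THRESHOLD_MV

afe, ctrl = AnalogFrontEnd(), Controller()
print(ctrl.over_threshold(afe.sample()))  # → True
```

Because the `Controller` sees only the narrow `SensorReading` type, the acquisition block can be redesigned – or replaced by a reused, pre-verified block – without touching the control logic; the integration pain concentrates at the interface, exactly as the post argues.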


By reducing complexity, both partitioning and reuse simplify the work of design engineers. For example, by employing code or hardware reuse, engineers don’t have to design everything, which affords them more time to concentrate on their area of expertise. This leads to greater specialization, which can be good.


But it also leads to a greater need for reintegration and often increases the complexity of interfaces. This effectively shifts the “pain” from the module to the interface subsystem.


Shifting pain from one part of the development cycle to another is the inevitable result of dealing with complexity. If recent trends are any indication, integration engineers are in for a world of hurt.