
Chip Design Magazine



Archive for June, 2012

Nano-technology in a Macro-world

Friday, June 29th, 2012

Imec, Belgium’s R&D leader, will join with global partners at Semicon West to showcase nanotechnology advances in almost every imaginable market.

This year at the beginning of the Semicon West conference, the best and the brightest from Belgium’s premier nanotechnology R&D center and their worldwide partners will gather again to share their latest successes.

Last year’s Imec Technology Forum (ITF) showcased emerging nanotechnology across a wide range of markets, such as ICT, healthcare, imaging & visualization, and energy. For me, two technology areas really stood out. One was advances in process, sensor and wireless design (see “IP That Senses and Cares”). The other concerned developments in the RF world – a key enabler for our modern connected world (see “MEMS And Packaging Hold Keys To Radio Connectivity”).

Looking to build upon last year’s successful attendance, the Imec Technology Forum (ITF) will again be jointly organized with SEMI in conjunction with Semicon West. This year it is sponsored by the MEMS Industry Group (MIG), Flanders Investment & Trade (FIT), Semiconductor Industry Association (SIA) and International Electronics Manufacturing Initiative (iNEMI).

Luc Van den hove, Imec’s president and CEO, will set the stage at this year’s event with opening remarks about connections, collaboration and innovation. Industry speakers will share their expertise throughout the event. Specific topics include logic and memory scaling beyond 15nm; silicon photonics; CMOS in life science applications; CMOS for wearable diagnostics and monitoring; and the next consumer applications.

ITF takes place on Monday (7/9/12) in Salons 8 and 9 at the downtown Marriott Marquis. The forum is by invitation only, for CEO-, CTO- and executive-staff-level professionals. Look for highlights from the forum in my blog after the event.

Originally posted on “IP Insider.”


Intelligent Embedded Systems elude Definition

Friday, June 29th, 2012

Although a boon to semiconductor sensor, analog and RF-wireless IP providers, few practitioners seem able to clearly define intelligent embedded systems.

The term “intelligent embedded systems” is popping up more often in blogs, public relations announcements – especially from Intel – and in trade journals. But what exactly does the term mean?

The key descriptor seems to be the interconnectivity of the system. A recent IDC report defines a traditional embedded system as a fixed-function, isolated system. Conversely, an intelligent embedded system is distinguished by, among other things, high-performance, highly programmable microprocessors, internet connectivity, and a high-level operating system.

In the introduction to her Intel Press book, Satwant Kaur defines intelligent (embedded) systems by describing over 101 related implementation scenarios. These scenarios are meant to demonstrate the technology utopia made possible by “the marriage of the two – embedded and intelligent systems.”

At first glance – and even second more puzzled stare – it appears as if the phrase “intelligent embedded systems” is more of a marketing term than a useful description of the next evolution of embedded systems.

Whatever you want to call it, today’s and tomorrow’s embedded systems are growing at a staggering rate. IDC estimates that intelligent systems revenue will increase by nearly $700 billion over the next three years (see figure). It is interesting to note that IDC doesn’t include PCs or mobile phones in this forecast for intelligent systems.

One thing that I’ve learned about intelligent embedded systems is that they include a lot of sensor, analog and RF-wireless subsystems. This is great news for the semiconductor IP community, as these subsystems should represent a large boost in analog and RF-wireless IP usage and revenue.


Originally posted on “IP Insider.”

Goodbye BIOS – Hello UEFI

Wednesday, June 20th, 2012

Efforts to modernize traditional PC boot firmware lead to Intel’s collaboration with Phoenix Technologies on UEFI BIOS client and server development.

Phoenix Technologies, a long-time developer of PC Basic Input Output System (BIOS) firmware, recently announced an agreement with Intel to jointly develop the new reference Unified Extensible Firmware Interface (UEFI).

All PCs have built-in BIOS software (boot firmware) that is the first code to run when the PC is powered on. As a BIOS replacement, the UEFI specification defines the software interface between an operating system and the platform firmware. It is meant to modernize today’s traditional BIOS boot process for all PC form factors by offering processor-independent architectures and related device drivers. UEFI-capable systems are already being shipped by major PC OEMs.
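As a rough illustration of the OS-facing side of that interface, here is a minimal Python sketch of how a Linux tool might detect whether the machine booted through UEFI. The `/sys/firmware/efi` path is the kernel’s standard sysfs location for UEFI firmware data; the function name and the `sysfs_root` parameter are my own, added so the check can be exercised off-target.

```python
from pathlib import Path

def booted_via_uefi(sysfs_root: str = "/sys") -> bool:
    """Return True if the running Linux kernel reports UEFI firmware.

    The kernel creates /sys/firmware/efi only when the platform booted
    through UEFI; on a legacy-BIOS boot the directory is absent.
    """
    return (Path(sysfs_root) / "firmware" / "efi").is_dir()
```

On a UEFI machine `booted_via_uefi()` returns True; pointing `sysfs_root` at a test directory makes the same logic checkable on any system.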


Most PC motherboard suppliers license a BIOS core from a third-party company. The board suppliers then customize the BIOS to address different hardware needs. Today, major BIOS vendors include American Megatrends (AMI), Insyde Software, and Phoenix Technologies.

Phoenix’s latest BIOS product, a software tool suite called Secure Core Technology (SCT) 3.0, allows board suppliers to easily customize their BIOS for both unique hardware and user-interface requirements. SCT is targeted at servers, notebooks, desktops and embedded devices, and provides a universal build environment so a single BIOS can be used across numerous operating systems and silicon platforms.

In addition to collaboration with Intel, Phoenix has relationships with AMD, ARM, Microsoft and others.

Rich Geruson, President and CEO of Phoenix, says that the company’s BIOS products are differentiated from competitors’ by clean code, a friendly user interface and the level of engineering experience. “Our engineers typically have up to 5-7 years of experience versus our competition,” explained Geruson. “Plus, our architecture provides very fast boot time and optimizes power usage.”

According to Steve Chan, Phoenix’s CTO, the company has engagements with Intel’s PC client group to collaborate on a reference BIOS. Additionally, the company is providing engineering support to Intel’s server group. When asked if the reference BIOS technology would be used on Intel’s embedded products including mobile, Chan could only say that some of the technology could be applied to different platforms.

Phoenix was founded in 1979 and holds over 200 patents, of which 75% relate to BIOS technology.

Annual “ASIC Prototyping with FPGAs” survey

Tuesday, June 19th, 2012

There’s still time to win a $15 Amazon gift card by participating in the annual “ASIC Prototyping with FPGAs” survey. The deadline for responses is Thursday, June 28, 2012.




What Color is Your Semiconductor IP Box?

Thursday, June 14th, 2012

Black, white and even grey box testing techniques from the world of hardware and software integration are finding a place in semiconductor IP subsystems.

Much has been said about the need to incorporate software into chip and board level hardware design. Cadence’s EDA360 vision is but one example of the realization that silicon and software must be co-developed to achieve the optimal system design.

Integrating the disciplines of hardware (chip) and software design is not an easy task. It requires a systems engineering approach throughout the entire development life cycle, but particularly during the design and integration phase. While hardware and software engineers are experts in their respective “white box” domains, System Engineers have experience integrating the resulting “black box” subsystems.

Let’s be clear on terminology. “White-box” testing refers to a method of testing where the internal workings of the electronic components or software code are known. This is where hardware or software domain-specific experience and knowledge are needed.

“Black-box” testing refers to functional tests where the internal workings of the hardware or software subsystems are unknown; only the input and output parameters of the subsystem are known. This is the realm of Systems Engineering.

To successfully integrate hardware and software subsystems, a Systems Engineer must have a working knowledge of both domains. He or she must be able to communicate effectively with both the hardware and software engineers during the white-box testing that precedes full system integration during the black-box testing phase.

Further, in the shrinking time-to-market (TTM) windows of today’s electronic systems, software development must start before hardware is fully available. This need has resulted in a co-design methodology between hardware and software which in turn affects traditional white- and black-box testing. The Systems Engineer must be involved in both co-design and co-testing to ensure validation and verification of system level requirements.

What does all of this mean to the world of semiconductor IP design? One of the problems with integration of large blocks of third-party IP – think ARM cores – is that signals may cross different clocking domains. By design, semiconductor IP cores – logic, cell or chip layout – are black boxes for the SoC integration team. How does a Systems Engineer ensure successful integration in a purely black-box environment? One way is to have access to one or two relevant white-box parameters, resulting in what has been dubbed grey-box testing.

Grey-box testing is black-box testing with some knowledge of internal data structures and algorithms. It can be thought of as selective white-box testing, but without full access to the source code. Black-box models provide input and output signals, but nothing else. In IP integration, a grey-box model may provide a single level of register logic to enable inter-block analysis. This additional knowledge should provide greater test coverage, resulting in fewer end-product failures. One example of the grey-box approach was provided by Blue Pearl’s recent partnership with Xilinx – announced during DAC 2012 – to develop grey-box testing for ARM core-based Systems-on-Chip (SoCs).
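The distinction can be sketched in a few lines of Python. Everything here is illustrative – the toy counter, its register name and both tests are my own, not any vendor’s actual model – but it shows how exposing a single register level widens what the integrator can check:

```python
class SatCounter:
    """Toy stand-in for a vendor IP block: a 4-bit saturating counter."""
    MAX = 15  # 4-bit saturation limit

    def __init__(self):
        self._reg = 0           # internal state register (white-box detail)
        self.saturated = False  # output flag: the only black-box observable

    def tick(self):
        # Count up, saturating at MAX instead of wrapping around.
        if self._reg < self.MAX:
            self._reg += 1
        self.saturated = (self._reg == self.MAX)

def black_box_test(ctr):
    """Drive inputs, observe outputs; no access to internals."""
    for _ in range(20):         # 20 ticks > MAX, so the flag must assert
        ctr.tick()
    assert ctr.saturated

def grey_box_test(ctr):
    """Like black-box, but the vendor exposes one register level,
    so a mid-sequence internal state can also be checked."""
    for _ in range(5):
        ctr.tick()
    assert ctr._reg == 5        # peek at the exposed register
    assert not ctr.saturated    # output flag not yet raised
```

The black-box test can only confirm end-to-end behavior; the grey-box test can catch an internal fault (say, a double-increment) five cycles in, long before it reaches an output pin.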

White-, black- and grey-box testing strategies are but one of many issues faced in SoC IP integration (see “Experts at the Table: IP Subsystems”). Yet all of these issues are but a subset of the challenges encountered in integrating larger hardware and software systems – e.g., at the board, module and top system level. The goal is that successful approaches are applied throughout all levels of the system hierarchy.


If you’d like to explore these and other hardware-software integration issues, you might enjoy attending this online course, which starts on June 25, 2012.

Originally published on “IP Insider.”



Images of Day 3 at DAC 2012

Friday, June 8th, 2012

Day 3 of DAC, captured in pictures and captions.

Honorable mentions (other companies that I visited or meant to visit):

Australian Semiconductor Technology Company

  • VWorks – Atanas Parashkevov, CTO and VP. Part of the Australian Semiconductor Technology Company (ASTC)
  • Calypto – Shawn McCloud, VP of Marketing


Images of Day 2 at DAC

Wednesday, June 6th, 2012

Day 2 of DAC, captured in pictures and captions.

Video: ChipEstimate.TV DAC 2012: “Sean and John: Tuesday’s DAC Walkabout”

Pictures from throughout the day:

I’m missing more than a few captions. If you recognize any of the folks in these pictures, please forward their names and titles to me. Thanks!




Systems, Software and IP Merge at DAC

Monday, June 4th, 2012

Here’s the first day of DAC, captured in pictures and captions.


ASIC/ASSP Prototyping-Verification with FPGA 2012 Survey

Saturday, June 2nd, 2012

We are conducting a global survey on ASIC-ASSP Prototyping with FPGA and would appreciate about 10 minutes of your time to answer a number of key questions. Please answer all questions so we have a comprehensive data set.

Extension Media (publishers of Chip Design, Embedded Intel® Solutions, and the FPGA and CPLD Solutions Resource Catalog and PLD Designer) is conducting its annual research study on evolving trends in ASIC-ASSP prototyping and verification with FPGA platforms. We are interested in hearing from hardware, software and system engineers and architects working in the chip and embedded development communities.

In appreciation for your time, 40 respondents who complete the survey will be selected at random to receive a $15.00 gift certificate from Amazon.

The drawing will be held on July 10, 2012. To qualify, respondents must complete the survey by Thursday, June 28, 2012.

Your assistance is greatly appreciated. Thank you — John Blyler, Editorial Director


Click here to take survey


SoC Costs Cut by Multi-Platform Design

Friday, June 1st, 2012

The upward SoC cost trend has been blunted as designers reused software and verified IP and used fewer blocks, reports long-time EDA analyst Gary Smith.

During last year’s Design Automation Conference (DAC), EDA-veteran analyst Gary Smith predicted that it cost slightly over $75 million to design the average high-end System-on-Chip (SoC). This was way over the $50 million targeted by IDM-fabless companies and even further from the $25 million start-up level preferred by funding institutions.

Shortly after that prediction, several companies reported building SoCs at around the $40 million level. How did they beat the expectation? First, they used previously developed software. Second, they used IP that came with verification suites. Lastly, these companies significantly decreased the number of SoC blocks – below the preferred five core blocks. Taken together, these three factors constituted a methodology nicknamed the Multi-Platform Based Design approach.

In essence, this approach was based on the integration of existing platforms enhanced with a new application level to add competitive advantage. The greatest cost savings was realized from the reduction of new core designs.

The multi-platform based design has three levels: functional, foundation and application. The functional level represents the core of the SoC design and is the broadest of the three platforms. It typically comes from a third party – e.g., an ARM Cortex-A9 processing system – and is not geared to a specific industry or product. If it comes from an in-house design, then it consists entirely of reused cores. This level provides no competitive advantage, since it uses third-party cores or IP.

The foundation platform, also usually from a third-party vendor, provides only slight industry or market differentiation. Most foundation cores are focused on the mobile and consumer electronics markets – e.g., Nvidia’s Tegra 3, TI’s OMAP and Qualcomm’s Snapdragon platforms – and, while enabling differentiation for a particular market segment, still provide only a small competitive advantage. Together, the functional and foundation platforms make up between 75 and 90 percent of the total gates in the SoC design.
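That gate split implies how little logic is actually left for differentiation. A back-of-the-envelope sketch – the 50-million-gate total is a hypothetical figure of mine, not one from Smith’s talk, while the 75-90 percent share is his:

```python
def application_gates(total_gates: int, platform_share: float) -> int:
    """Gates left for the differentiating application level after the
    functional + foundation platforms take their share."""
    return round(total_gates * (1.0 - platform_share))

# For a hypothetical 50M-gate SoC at the 75-90% platform share,
# only 5M-12.5M gates remain for application-level differentiation.
for share in (0.75, 0.90):
    print(f"{share:.0%} platform share ->",
          application_gates(50_000_000, share), "application gates")
```

In other words, the level where a company can actually stand out may amount to a tenth of the chip.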

At the top of the multi-platform based design is the application level, which provides the most market differentiation. This level consists of in-house or proprietary designs, e.g. IP or software from car-maker Audi’s navigation and infotainment systems. The drawback is that this level has the shortest product life cycle.

Popular applications can move from the application level to the foundation level, as in the case of GPS and GPU SoCs. Foundation suppliers then begin to include these popular IP blocks in their regular offerings. If the application involves processing – like a GPU – it may even evolve into the functional level.

Those companies that create a popular application offering have a sustainable advantage that becomes very hard for competitors to surpass. Smith cited the example of the PC market: IBM developed the original PC, but within a decade Intel had taken over the market thanks to its platform approach. Now, as processing has shifted to low-power mobile devices, Intel’s platform has been surpassed by ARM’s.

Smith suggested that the good news for DAC is that the platform companies will find welcome business for their IP in the evolving system-level EDA market.