
Chip Design Magazine



Posts Tagged ‘embedded’


World of Sensors Highlights Pacific NW Semiconductor Industry

Tuesday, October 25th, 2016

Line-up of semiconductor and embedded IoT experts to speak at SEMI Pacific NW “World of Sensors” event.

The Pacific NW Chapter of SEMI will hold its Fall 2016 event highlighting the world of sensors. Mentor Graphics will host the event on Friday, October 28, 2016, from 7:30 to 11:30 am.

The event will gather experts in sensor technologies who will share their vision of the future and the impact it may have on the overall semiconductor industry. Here’s a brief list of the speaker line-up:

  • Design for the IoT Edge—Mentor Graphics
  • Image Sensors for IoT—ON Semiconductor
  • Next Growth Engine for Semiconductors—PricewaterhouseCoopers
  • Expanding Capabilities of MEMS Sensors through Advanced Manufacturing—Rogue Valley Microdevices
  • Engineering Biosensors for Cell Biology Research and Drug Discovery—Thermo Fisher Scientific

Register today to meet and network with industry peers from companies including Applied Materials, ASM America, Brewer Science, Cascade Microtech, Delphon Industries, FEI Company, Kanto, Microchip Technology, SSOE Group, VALQUA America, and many more.

See the full agenda and register today.

Has The Time Come for SOC Embedded FPGAs?

Tuesday, August 30th, 2016

Shrinking technology nodes at lower product costs, plus the rise of compute-intensive IoT applications, brighten Menta’s e-FPGA outlook.

By John Blyler, IP Systems


The following are edited portions of my video interview at the Design Automation Conference (DAC) 2016 with Menta’s business development director, Yoan Dupret. – JB

John Blyler's interview with Yoan Dupret from Menta

Blyler: Your technology enables designers to include an FPGA almost anywhere on a System-on-Chip (SOC). How is your approach different from others that purport to do the same thing?

Dupret: Our technology enables placement of a Field-Programmable Gate Array (FPGA) onto a silicon ASIC, which is why we call it an embedded FPGA (e-FPGA). How are we different from others? First, let me explain why others failed in the past while we are succeeding now.

In the past, the time just wasn’t right, and the cost of developing an SOC was still too high. Today, those challenges are fading. This has been confirmed by our customers and by GSA studies that explain the importance of having some programmable logic inside an ASIC.

Now, the time is right. We have spent the last few years focusing on research and development (R&D) to strengthen our tools, architectures and to build out competencies. Toolwise, we have a more robust and easier to use GUI and our architecture has gone through several changes from the first generation.

Our approach uses standard cell-based ASICs, so we are not disruptive to the EDA tool flow of our customers. Our hard IP just plugs into the regular chip design flow using all of the classical techniques for CMOS design. Naturally, we support testing with standard scan-chain tests and impressive test coverage. We believe our FPGA performance is better than the competition’s in terms of number of lookup tables per unit area, frequency, and power consumption.

Blyler: Are you targeting a specific area for these embedded FPGAs, e.g., IoT?

Dupret: IoT is one of the markets we are looking at, but it is not the only one. Why? Because the embedded FPGA fabric can actually go anywhere you have RTL, which is intensively parallel by nature (see Figure 1). For example, we are working on cryptographic algorithms inside the e-FPGA for IoT applications. We also have traction with filters for digital radios (IIR and FIR filters), another IoT application. Further, we have customers in the industrial and automotive audio and image processing space.

Figure 1: SOC architecture with e-FPGA core, which is programmed after the tape-out. (Courtesy of Menta)

Do you remember when Intel bought Altera, the large FPGA company? That acquisition was, in part, for Intel’s High Performance Computing (HPC) applications. Now they have several big FPGAs from Altera right next to very high-frequency processing cores. But there is another way to achieve this level of HPC. For example, a design could consist of a very big, parallel-intensive HPC architecture with a lot of lower-frequency CPUs, and next to each of these CPUs you could have an e-FPGA.
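The digital-radio filters Dupret mentions are a good illustration of why such workloads suit programmable fabric: every tap of a FIR filter is an independent multiply-accumulate that can be computed in parallel. Here is a minimal software model of a direct-form FIR filter, purely illustrative and not Menta’s implementation; the coefficients are hypothetical:

```python
def fir_filter(samples, coeffs):
    """Direct-form FIR filter: y[n] = sum_k coeffs[k] * x[n - k].

    Each tap's multiply-accumulate is independent of the others,
    which is why this structure maps naturally onto parallel
    FPGA fabric.
    """
    n_taps = len(coeffs)
    # Zero-pad the history so the first outputs have a full window.
    padded = [0.0] * (n_taps - 1) + list(samples)
    return [
        sum(coeffs[k] * padded[i + n_taps - 1 - k] for k in range(n_taps))
        for i in range(len(samples))
    ]

# A 3-tap moving average (the simplest FIR) smoothing a step input.
out = fir_filter([1, 1, 1, 1], [1 / 3, 1 / 3, 1 / 3])
print(out)  # ramps from 1/3 up to 1.0 as the window fills
```

In hardware, each of the three tap multiplications would be a separate DSP or LUT resource running concurrently, rather than a sequential loop.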

Blyler: At DAC this year, there are a number of companies from France. Is there something going on there? Will it become the next Silicon Valley?

Dupret: Yes, that is true. There are quite a few companies doing EDA. Others are doing IP, some of which are well known. For example, Dolphin is based in Grenoble and is part of the ecosystem there.

Blyler: That’s great to see. Thank you, Yoan.

To learn more about Menta’s latest technology: “Menta Delivers Industry’s Highest Performing Embedded Programmable Logic IP for SoCs.”

Review of Jama, ARM Techcon and TSMC OIP Shows

Friday, November 14th, 2014

October issues of the “Silicon Valley High-Tech Traveler Log” – with Sean O’Kane and John Blyler

Three events from TSMC, ARM and JAMA Software highlight the breadth and depth of IP development that (hopefully) results in manual-less consumer apps.

A few weeks ago, I attended three shows – Jama Software’s Product Delivery Summit, TSMC’s Open Innovation Platform (OIP), and ARM’s Techcon. While each event was markedly different, there was an unintentional common thread: all three dealt with the interplay between hardware and software IP systems – albeit at different levels of the supply chain.

Each of these shows characterized that interplay in different ways. For TSMC, it was a focus on deep semiconductor manufacturing-related IP. Conversely, Jama Software dealt with product delivery issues for which embedded hardware and application software played a major role. Embedded software on boards running the company’s flagship processors and ecosystem IP hardware peripherals was the focus at the ARM Techcon. Why are these various instantiations of IP important?

Read the rest of the story at: IP-Based Technology without Manuals?

Chinese Embedded-Design Contest Offers Insight

Wednesday, January 9th, 2013

Reviewing the list of first-, second-, and third-place winners reveals the technology direction of China’s university/industrial embedded-development community.

In recent years, many have speculated about the trends in China’s home-grown technology and related global-patent problems. Few real insights have emerged. According to the often-cited 2011 Financial Times Alphaville report:

“… we suspect that these data mostly just tell the typical story of China’s rise up the technology value chain and its use of industrial policy to accelerate growth on the back of already-existing technologies.”

Perhaps another way to gauge the direction of China’s internal technology is by looking at designs coming out of the country’s university-industrial complex.

Although admittedly biased, one place to start is the Intel Cup Undergraduate Electronic Design Contest – Embedded System Design Invitational. The biennial event was initiated by the Chinese government, is hosted by Shanghai Jiao Tong University, and has been solely sponsored by Intel Corp. since 2002.

The contest provides an opportunity for undergraduate students to design a working system based on an assigned Intel embedded platform over a period of three months. Each team consists of three members and a faculty mentor.

This year’s winning project was a Chinese sign-language translation system that helps deaf individuals communicate with the hearing world.

What do the runner-up projects tell us about the direction of Chinese technology? Unfortunately, not much. As you can see from the list of first-, second-, and third-place winners (see Appendix), most designs seem typical of embedded projects in the U.S. and Europe. Perhaps more telling was the interesting wording and specific topical focus of various projects, such as the following:

  • Fresh Food Every Day from Intel ATOM-Processor Icebox
  • Fairy in the car
  • Prison On Fire – A monitoring system based on the “Internet of things” and video-analysis technology
  • Happy Chess Player

The list of university-Intel partnered projects seems little different from similar contests held in other parts of the world. This seems to confirm the findings of the earlier analysis by the Financial Times.

Still, this similarity verifies the universal importance of embedded applications in the medical, automotive, and consumer markets. For semiconductor intellectual-property (IP) system-on-a-chip (SoC) designers, this emphasizes the importance of designing chips that easily integrate with board-level hardware and software. The march toward system-level design (e.g., Cadence’s EDA360 approach) continues!


Originally published on “IP Insider.”


Many Cores but Little Parallelism

Friday, July 27th, 2012

Has the move to thin and light mobile devices sidetracked the much-hyped rise of parallel programming for many-core chips?

Has the much-touted move to many-core systems resulted in an increase of parallel code creation? This question was recently posed by Andrew Binstock, editor-in-chief at Dr. Dobb’s: “Will Parallel Code Ever Be Embraced?”

Why should semiconductor intellectual property (IP) designers care about changes in the world of parallel code creation? Inductive reasoning would suggest that less parallel code means a decreased need for many-core systems and related integration circuitry, which in turn means less processor and interface IP is needed.

One of the biggest challenges in parallel code development is dealing with resource concurrency. The original goal of many parallel-coding tools was to make this concurrency easier for the programmers, thus encouraging better use of multi-core processing hardware. Binstock believes these efforts have fallen short, except perhaps in the world of server systems. “No one is threading on the client side unless they are writing games, work for an ISV, or are doing scientific work,” he explained.
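The concurrency burden Binstock describes is easy to illustrate. The sketch below (plain Python, purely illustrative) shows the kind of shared-state discipline – here a lock around a read-modify-write – that client-side programmers have largely declined to take on:

```python
import threading

counter = 0
lock = threading.Lock()

def worker(iterations):
    """Increment a shared counter; the lock serializes the
    read-modify-write that would otherwise race."""
    global counter
    for _ in range(iterations):
        with lock:
            counter += 1

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000 -- deterministic only because of the lock
```

Remove the lock and the final count becomes unpredictable; getting this right across a large codebase is exactly the effort the parallel-coding tools were meant to reduce.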

Does this mean that future designs will revert to single-core processing architectures? This seems unlikely, since faster single-core processors at the smallest geometry nodes suffer from serious power leakage and parasitic challenges. That is but one reason why many-core processing architectures emerged in the first place.

While the many-core chips are not going away, neither does parallel processing code seem to be increasing. What does the future hold?

Binstock suggests a trend where several stacks of low-voltage ARM-type chips run on tiny inexpensive systems that use far less power than Intel’s Xeon chips. I don’t know why he compares the ARM chips to Intel’s server-grade Xeon as opposed to the more appropriate client-grade Atom chip.

He envisions a system where the PC (client) becomes a “server” of sorts to a bunch of smaller embedded machines, each hosting its own app. One example might be an ARM chip running a browser, while another runs a multimedia application, etc. Each application program would run on its own processor, eliminating any problems of concurrency. Thus, many low power, low performance cores can be used without the need for any parallel code development. Instead of scaling up (more threads in one process), developers have scaled out (more processes).
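The scale-out model described above can be sketched in a few lines – here in Python, with one OS process standing in for one core per app. The app names and workloads are hypothetical:

```python
import multiprocessing as mp

def app(name, work):
    # Each "app" runs in its own process: private memory, no shared
    # state, so no locks or parallel-code discipline are needed.
    return name, sum(i * i for i in range(work))

if __name__ == "__main__":
    # One process per app, mirroring one low-power core per app.
    jobs = [("browser", 10_000), ("media", 20_000), ("mail", 5_000)]
    with mp.Pool(processes=len(jobs)) as pool:
        results = pool.starmap(app, jobs)
    for name, value in results:
        print(name, value)
```

Nothing here requires threads or synchronization; parallelism falls out of process isolation, which is the “scaled out” approach.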

Interestingly, this is exactly the model that emerged several years ago when Intel introduced its first embedded dual-core system. One core would run a Windows or Linux operating system while the other would operate a machine on an assembly line. (See “Dual Core Embedded Processors Bring Benefits And Challenges.”)

It would seem that embedded designers see little need to move beyond the use of a basic client-server architecture for software design, as opposed to the much hyped parallel coding model.

For a sanity check, I asked Intel’s Max Domeika for his thoughts on the trends on embedded/client parallel coding. Here’s what he had to say:

“I understand what Andrew is saying. I’d characterize it as a bit disillusioned with the multicore hype cycle that the industry went through. I’d claim the industry shifted its focus from multicore to mobile and its focus on thin and light. (I work on mobile now and not multicore.) Thin and light doesn’t yet need and can’t have as many cores as a big desktop/server.  So one possible narrative is that the part of the industry that would want to scale up is sidetracked by mobile. These mobile devices do offer multiple cores and do support multithreading. So, I think it will happen over time, just slower than perhaps some of the folks would think/like.”


Originally published on “IP Insider.”

Software Entrepreneur Helps Guide EDA Giant

Friday, July 20th, 2012

Cadence’s Jim Ready, founder of MontaVista, tries to bridge the software understanding gap between EDA designers and embedded-application developers.

Many of us are still trying to grasp what it means for electronic design automation (EDA) tool vendors to expand into the world of non-RTL software development. It is an old story, namely, that of a predominantly hardware-oriented company trying to embrace the seemingly counter-culture of the software development community.

Regardless, Cadence seems to be serious about its intentions to successfully integrate the software mindset into its hardware culture. Recent evidence of this intention is found in the hiring of Jim Ready, founder of MontaVista – an embedded Linux company. According to Richard Goering’s recent interview, Ready was brought into Cadence to help advise Lip-Bu Tan, the CEO, and others about software issues related to embedded systems.

Goering’s interview with Ready is an excellent read that helps clarify the role that EDA vendors will play in the embedded space, e.g., via virtual platforms, co-development, and even open systems support. [Q&A: Jim Ready Discusses EDA Connection to Embedded Software Development]

This discussion was reminiscent of another interview with Ready, one that I conducted years ago while he was still at MontaVista. At that time, Cadence’s EDA360 concept of SOC, software, and system realization had yet to be delineated in a manifesto. My main concern during that interview was to understand how an open-source development platform like Linux could support proprietary operating systems (OSs) like Intel’s Moblin (remember that one?) and others. What follows are the slightly edited portions of that interview that remain relevant to Ready’s current comments.

Embedded Linux Faces Low Power Demand and Open Source Commercialization – Embedded Linux magazine, 2009.

Blyler: Earlier, you said that a complete embedded development platform is analogous to an ASSP in the semiconductor world. By analogy, MontaVista’s “Linux 6” product becomes an application-specific OS management environment that includes Linux, the Moblin RTOS, and customer-unique software programs.

Ready: Going back to the ASSP analogy, you might say that our (Linux 6) embedded Linux development environment is sort of the TSMC for software. Intel is now licensing the Atom core at the TSMC foundry so others can build their own system-on-chip (SoC) based on the Atom architecture. One reason that they do this is because Intel cannot predict all the different configurations of Atom that people might use. We experience the same challenge with our embedded Linux platform. Users have the capability through the integration environment … to configure and maintain their own instance of Linux, Moblin, and/or open source software that is unique to their requirements, to their products.

Blyler: Is it like an Integrated Development Environment (IDE) but for the operating system?

Ready: It’s more around source code management and change and build management systems, but on steroids. If you go out and grab one of those instances, because of the churn of open source, the probability of it actually working is very low. It can range from “working perfectly” to “oh my gosh” because there are dead links. It’s hard for volunteers to keep this going.

(A complete embedded development platform) is the configuration management and infrastructure for the assemblage of all the software that we supply in an open system, such that customers can insert their own selections from open source and/or their own stuff in an environment that keeps it all consistent and makes builds repeatable. What we provide is fully tested. It’s under our control and works.

It’s about getting this front end of very intriguing open source into a more regularized and commercialized – in a sense, more normal – software process that people would expect to have for their software. If one presumes that open source is just perfect software out there for the taking, it’s not true.

Originally published on “IP Insider.”



Intelligent Embedded Systems Elude Definition

Friday, June 29th, 2012

Although a boon to semiconductor sensor, analog, and RF-wireless IP providers, intelligent embedded systems are something few practitioners seem able to clearly define.

The term “intelligent embedded systems” is popping up more often in blogs, public-relations announcements (especially from Intel), and trade journals. But what exactly does the term mean?

The key descriptor seems to be the interconnectivity of the system. A recent IDC report defines a traditional embedded system as a fixed-function, isolated system. Conversely, an intelligent embedded system is distinguished by its high-performance, highly programmable microprocessors, internet connectivity, and high-level operating system, among other things.

In the introduction to her Intel Press book, Satwant Kaur defines intelligent (embedded) systems by describing over 101 related implementation scenarios. These scenarios are meant to demonstrate the technology utopia made possible by “the marriage of the two – embedded and intelligent systems.”

At first glance – and even second more puzzled stare – it appears as if the phrase “intelligent embedded systems” is more of a marketing term than a useful description of the next evolution of embedded systems.

Whatever you want to call it, today’s and tomorrow’s embedded systems are growing at a staggering rate. IDC estimates that intelligent-systems revenue will increase by nearly $700 billion over the next three years (see figure). It is interesting to note that IDC doesn’t include PCs or mobile phones in this forecast for intelligent systems.

One thing that I’ve learned about intelligent embedded systems is that they include a lot of sensor, analog and RF-wireless subsystems. This is great news for the semiconductor IP community, as these subsystems should represent a large boost in analog and RF-wireless IP usage and revenue.


Originally published on “IP Insider.”

Embedded Software Developers Need Their Space

Friday, May 20th, 2011

By John Blyler

Improvements to several IDEs should make life a bit easier for time-constrained, globally separated, and processor-centric embedded-software developers.

At this year’s Embedded Systems Conference (ESC) event, software-development environments took center stage for both a prominent electronic-design-automation (EDA) tool and an integrated-circuit (IC) hardware company. In fact, the EDA-systems tool vendor, Mentor Graphics, won an award for its efforts to improve integrated development environments (IDEs) for embedded designers.

During ESC 2011, VDC Research Group, Inc. (VDC) gave the annual best-in-show “Embeddy” software award to Mentor Graphics’ Embedded Sourcery System Analyzer. The Embeddy is given to the company that’s announcing the most cutting-edge product or service for embedded-software developers and system engineers.

According to Mentor, the System Analyzer tool is designed to help embedded-software developers “visualize and analyze system data to identify and debug or decode problem areas easily and improve design performance.” System Analyzer is part of the Embedded Sourcery Codebench suite, an IDE based upon the open-source GNU tool chain. The IDE supports the embedded development of specific processors including NetLogic Microsystems’ XLP multicore processor, Freescale’s Kinetis, and Xilinx’s Zynq.

The traditional approach to debugging software code relies on breakpoints and printf() statements, which are used to isolate troublesome code blocks and examine the data stored up to that point. System Analyzer improves on this process by collecting trace and profile data from a variety of sources within the system. This information is plotted against a timeline as well as in relationship to other system activity. As a result, embedded developers should be able to debug code problems with greater ease and efficiency.
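The trace-based idea can be illustrated with a toy sketch – not System Analyzer itself, just the underlying concept of timestamped events laid out on a common timeline. The function names are hypothetical:

```python
import time

trace = []  # (timestamp, event) pairs -- a toy trace buffer

def emit(event):
    # Record a timestamped event instead of scattering printf()s.
    trace.append((time.perf_counter(), event))

def read_sensor():
    # A hypothetical workload whose timing we want to see.
    emit("sensor:start")
    time.sleep(0.01)  # stand-in for slow device I/O
    emit("sensor:done")

read_sensor()

# Post-mortem: lay the events out on a timeline relative to the first.
t0 = trace[0][0]
for ts, event in trace:
    print(f"{(ts - t0) * 1e3:8.3f} ms  {event}")
```

Because every event carries a timestamp, it can be correlated with other system activity after the fact, which is the advantage trace tools have over one-off print statements.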

Although Microchip didn’t win an award at ESC for improving its software-development environment, the company was given the 2010 EDN Innovation Award for its human-machine-interface technology—specifically, the mTouch Metal-Over-Capacitive touch-sensing technology.

The company’s open-source MPLAB X IDE improvements are noteworthy because of the addition of several features including the following:

  • Team-collaboration tools for bug tracking and source-code control
  • Support for multiple, simultaneous compiler versions
  • Code completion and context menus via advanced editor


MPLAB X is based on the Oracle-sponsored, open-source NetBeans platform, which has a wide range of third-party plug-ins. The company’s IDE platform supports all of its 8-, 16-, and 32-bit microcontrollers and related digital signal controllers and memory devices.

Today’s software developers rely on IDEs to do their jobs. Thanks to EDA and IC companies, those workspaces just got an improved view.

Impressions from ESC 2011

Friday, May 6th, 2011

By John Blyler

Here are my rough impressions from the last four days of attendance at the Embedded Systems Conference.

The weather was warm and inviting as it shone through the large windows at the San Jose McEnery Convention Center. Inside, the show floor was full of exhibits.

Attendance on the show floor felt a bit light, but it was consistently even through each day. The training and education sessions were reportedly well attended.

The main draw on the show floor was the huge museum-like, skeletal display of a T-Rex.  The dinosaur exhibition, provided by Green Hills Software, was very cool but did seem a tad out of place.

By contrast, the many robots located throughout – and sometimes roaming – the show floor were also cool. They were definitely in place for an embedded conference.

Since I spent a large amount of my time in meetings, I missed the main keynote delivered by Steve Wozniak, co-founder of Apple Computer, Inc. No matter, since Wozniak is also the keynote speaker at next month’s Design Automation Conference. I’ll hear his message at DAC.

Let me stay on the DAC-ish theme of EDA companies to cover their announcements at this week’s ESC event.

Most of the press – well, those of us left standing after the great changes in the media and publication world – were in attendance at Cadence’s in-booth press conference. The purpose of the gathering was to announce the next installment of the company’s EDA360 strategy, namely, “System Realization.”

The words described a need and approach for hardware and software SoC co-design and co-verification. The facts seemed to be the integration of Cadence’s very successful emulation platform (Palladium) with its higher-level FPGA and virtual prototyping systems. As I listened to Senior VP and CMO John Bruggeman’s flawless delivery of the message, I couldn’t help but think back to Cadence’s earlier attempts at co-design, namely ESL and SPW. It didn’t help that the ESC announcement was short on specific details, since the “System Realization” activities were still in early engagements with customers.

Not quite as spectacular(1) but still noteworthy were Mentor’s announcements of improvements toward its long-term goal of owning the system space. Here, “system” encompasses SoC design, manufacturing, packaging, and board design and manufacturing through mechatronics. The company’s announcement focused on the software side of the system, namely, a new integrated development environment based on the open-industry GNU tool chain.

Synopsys had a small booth at ESC, but provided no major announcements at the show.

But who goes to ESC to learn about the latest news from the EDA community? It’s the embedded space that counts. I’ll report details about all the embedded news later next week.

(1) What I meant to write was “Not quite the spectacle…” Even editors make misteaks. I mean, mistakes. — JB

ATOM Leader Leaves Intel

Tuesday, March 22nd, 2011

By John Blyler

Departure of Intel’s Senior VP and GM of the Ultra Mobility Group may cast doubt upon or show commitment to the company’s embedded mobile market strategy.

The man closely associated with Intel’s ATOM-based push into the mobile computing world has apparently resigned from his post as chief of the company’s Ultra Mobility Group (UMG). For now, Anand Chandrasekher’s position will be co-managed by Intel Architecture Group’s Mike Bell and Dave Whalen.

Chandrasekher’s resignation has renewed speculation about Intel’s commitment to the mobile market. The company is competing directly with current mobile IP processor leader ARM. Also, Intel recently faced a software setback when cell-phone giant Nokia announced a partnership with Microsoft over Intel’s MeeGo mobile operating system platform. (See “OSCON Shows Breadth of Open Source Software.”)

Intel issued the following comments: “Intel remains committed to this business,” said David Perlmutter, executive vice president and Intel Architecture Group general manager. “We continue to make the investments needed to ensure that the best user experience on smartphones and handhelds runs on Intel Architecture, and to ship a phone this year. We’d like to take this opportunity to thank Anand for numerous contributions to Intel over his 24-year career here, and wish him well in his future endeavors.”

Anand Chandrasekher - Senior Vice President and General Manager of Intel's Ultra Mobility Group (UMG)
