Here’s the situation: A colleague calls you in the morning about a small, six-hour local embedded computing show. You decide to attend, but arrive in the middle of a technical session. What do you do? Attend the remainder of the session, or tour the small exhibitor floor?
I chose the latter and talked with as many exhibitors as possible. It turned out to be time well spent, since most exhibitors have a good understanding of the technology behind their products. Here are the “tidbits” of technology and design that I picked up while attending the “Real-Time & Embedded Computing Conference” in Portland, OR.
Power-over-Ethernet technology eliminates the need for extra power cables in areas that require peripherals like a display, e.g., supermarkets, vehicles and airplanes. The benefits of PoE are well understood. But how much power does PoE actually provide, and why?
Originally, Power-over-Ethernet (PoE) delivered 14 W. Today, PoE provides 30 W over Cat 5/7 cable, with a future goal of 45 W over Cat 7, according to Thomas Winslow, Western Area Sales Manager for PFU Systems, a Fujitsu company. Why the increase in PoE power, especially in a market where lower power is the guiding design principle? Winslow believes that Intel – among others – is pushing the 45 W version of PoE. Here’s the reasoning:
–> Intel’s embedded Atom processor (~5 W) + control/video chipset (~15 W) + LCD driver and peripherals (~15 to 20 W) = ~40 W or greater
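That back-of-the-envelope budget is easy to check in a few lines. The component figures below are Winslow’s rough estimates from above, and the 30 W / 45 W limits are the PoE figures he cited; the script itself (and its names) is just my illustrative sketch:

```python
# Back-of-the-envelope PoE budget check using the figures quoted above.
# Taking the upper end of the 15-20 W peripheral estimate shows why
# today's 30 W PoE falls short and a 45 W version is being pushed.

components_w = {
    "embedded Atom processor": 5.0,
    "control/video chipset": 15.0,
    "LCD driver and peripherals": 20.0,  # upper end of the 15-20 W range
}

poe_limits_w = {
    "PoE today": 30.0,        # current Cat 5/7 figure cited by Winslow
    "PoE future goal": 45.0,  # proposed Cat 7 figure
}

total_w = sum(components_w.values())
print(f"Total draw: ~{total_w:.0f} W")

for name, limit in poe_limits_w.items():
    verdict = "fits" if total_w <= limit else "exceeds budget"
    print(f"{name} ({limit:.0f} W): {verdict}")
```

At the upper end of the estimates, the node draws about 40 W: over today’s 30 W budget, but within the proposed 45 W one.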
Why is the chipset wattage higher than the power consumption of the processor? One reason is functionality. The Input/Output Controller chipset also contains the graphics engines for many common applications. On a PC, where graphics are a high priority, the “chipset” contains fairly sophisticated graphics functionality. When you move to the server world, data transfer becomes the priority. “There are no real-time graphics to worry about,” explained Winslow. That’s why the data throughput on a server chipset is faster than on a PC, where video manipulation consumes most of the power.
Did you know that the size and cost of the memory modules are the determining factor for the size of the board? Winslow, from PFU Systems, explained that error-correcting code (ECC) memory that fits into the smaller form factor boards, like a Type 2 COM Express, is three times as expensive as the same type of memory in a larger Type 1 COM Express board. Aside from the cost of miniaturizing the ECC memory for a small footprint, there is also the challenge of dealing with the increased heat from the smaller memory.
Carrier boards allow the customer to customize the inputs/outputs of a board while incorporating a commercial off-the-shelf (COTS), standard form factor (SFF) daughter board from an OEM, e.g., a PoE board mounted onto a carrier board. The carrier board also allows the customer to add their own IP, as well as upgrade processor technology in the future, e.g., from Intel’s Celeron to a new Dual Core system.
Last one from PFU Systems: Many embedded computing providers use only standard chips – processors, memory, interfaces, etc. In other words, they don’t have any ASICs or ASSPs. This means that these companies don’t need to deal with the headaches caused by creating, maintaining and supplying device drivers. Since all the components are standard, every operating system provides all the needed drivers. That’s a sweet deal.
How do you debug multicore systems? This is a major issue, according to Jerry Flake, US Sales Manager for Lauterbach Development Tools. The company makes embedded debug systems and supports both ARM and Intel processors. For ARM processors, the answer to the debug question is found in CoreSight, a new technology used for multicore debugging through the use of trace macrocells. Multicore architectures need a way to direct different trace sources into a single external trace interface. CoreSight can combine multiple traces from various cores into one funnel that is then output and captured by a debugger.
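The funnel concept can be modeled in a few lines: tag each core’s trace packets with a source ID, interleave them into one stream, and let the debugger demultiplex them at the other end. This is purely an illustrative sketch of the idea, not actual CoreSight programming:

```python
# Conceptual model of a trace funnel: each core's trace macrocell emits
# packets; the funnel interleaves them into a single stream, tagging each
# packet with its source so the debugger can split them back apart.
# Illustrative only - real CoreSight works at the hardware level.

from collections import defaultdict

def funnel(streams):
    """Merge per-core trace packets into one tagged stream (round-robin)."""
    merged = []
    iters = {src: iter(pkts) for src, pkts in streams.items()}
    while iters:
        for src in list(iters):
            try:
                merged.append((src, next(iters[src])))
            except StopIteration:
                del iters[src]   # this core's trace is exhausted
    return merged

def demux(merged):
    """Debugger side: recover each core's trace from the single stream."""
    out = defaultdict(list)
    for src, pkt in merged:
        out[src].append(pkt)
    return dict(out)

traces = {"core0": ["branch 0x1000", "load 0x2000"],
          "core1": ["store 0x3000"]}
stream = funnel(traces)
assert demux(stream) == traces   # nothing lost through the funnel
```

The key property, preserved here, is that a single external interface carries everything while per-core ordering survives the merge.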
It is sometimes easier to use the significant processing power of the PC to crunch signal processing data than to do the same task on the often limited processing systems found in embedded systems. That’s why Spectrum Signal Processing was highlighting a Linux-based platform that combined a PCI Express-based carrier card (there’s that term again – see above) with an Intel-based server for increased signal processing capability. Keith La Rose, Director of Sales at Spectrum, provided a lively discussion about the benefits of combining the flexibility of embedded system boards with the raw processing power of a server – a recurring theme at this show.
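As a sketch of that offload pattern: the embedded side hands raw samples to a more powerful host for the heavy math and consumes the result. Everything here (the thread-pool “host”, the naive DFT) is a stand-in of my own; a real system would ship the samples over PCI Express to the server:

```python
# Illustrative offload pattern: hand the heavy DSP work (here a naive
# O(n^2) DFT) to a stand-in "host" worker and consume the result. A real
# system would move samples over PCI Express to the Intel-based server
# rather than a local thread pool.

import cmath
from concurrent.futures import ThreadPoolExecutor

def naive_dft(samples):
    """Deliberately brute-force DFT - the kind of crunch worth offloading."""
    n = len(samples)
    return [sum(samples[k] * cmath.exp(-2j * cmath.pi * i * k / n)
                for k in range(n))
            for i in range(n)]

def offload(samples):
    """Pretend the worker is the server doing the math on our behalf."""
    with ThreadPoolExecutor(max_workers=1) as host:
        return host.submit(naive_dft, samples).result()

signal = [1.0, 0.0, -1.0, 0.0]   # one cycle of a sampled cosine
spectrum = offload(signal)
print([round(abs(x), 3) for x in spectrum])
```

The embedded board’s job reduces to capturing samples and shipping them out; the expensive transform runs on the machine with cycles to spare.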
There were several other well-known exhibitors at the show, including Advantech, VersaLogic, MontaVista, Green Hills, VIA and others. Instead of visiting the other booths, I headed for the next available technical session, presented by Jeffrey Schaffer, Sr. Field Applications Engineer for QNX (now RIM). His official topic was “Developing Next Generation HMIs for Embedded Systems.” The talk centered around the use of Adobe Flash in embedded systems. This was a timely topic in light of Apple’s ongoing refusal to support Flash on any Apple iPhone products. Jeffrey’s presentation was full of great design tidbits for embedded programmers.
A generous lunch was provided at the show, which helped ensure a good attendance by the engineering community. All in all, I was glad that I attended.