
Posts Tagged ‘automotive’

A Holistic Approach to Automotive Memory Qualification

Tuesday, January 3rd, 2017

 

The Robustness Validation approach to automotive memory components addresses the reliability and safety margins between design capability and actual application requirements.

By John Blyler, Editorial Director, JB Systems

Improved reliability is just one of the benefits claimed for the supply-chain-sensitive Robustness Validation (RV) approach to qualifying non-volatile memory (NVM) components for automotive electronic applications. The following is a summarized and paraphrased account of a paper presented by Valentin Kottler of Robert Bosch GmbH at IEEE IEDM 2016. — JB

Today’s cars contain many electronic systems to control the engine, transmission, and infotainment. Future vehicles will include more telematics to monitor performance, as well as car-to-car communication. As the number of electronic applications in the car increases, so does the need for non-volatile memories to store program code, application data and more.

Automotive applications place special requirements on electronic components, most notably regarding the temperature range in which the components must operate: automotive temperature ranges can span -40°C to +165°C. Further, harsh environmental influences such as humidity, together with long vehicle lifetimes, are significant additional requirements not typically found in industrial and consumer products. Finally, automotive standards place high demands on the quality and reliability of electronic components, subsystems and systems. For example, it is not uncommon to demand a failure rate of 1 part per million (ppm, a common measure of quality performance) for an infotainment system and a zero-defect rate over the lifetime of the car for safety systems such as braking and steering.

These expectations place an additional challenge on components that wear out during the lifetime of the car, namely non-volatile memories. Accordingly, such components need to be thoroughly qualified and validated to meet reliability and safety requirements. Adding to this challenge are the function of the electronic component and its location in the car, which together create a wide spectrum of requirements and mission profiles for memory components.

Non-Volatile Memory (NVM) Components

One of the key components in automotive electronics is non-volatile memory, from which program code, application data or configuration bits can be retrieved even after power has been turned off and back on. It is typically used for secondary and long-term storage. The size of the NVM in automotive systems can range from a few bytes to many gigabytes for infotainment video systems.

The various types of NVM add to the range of available components. For example, flash memory comes in NOR and NAND architectures, and in single-level cell (SLC) and multi-level cell (MLC) technologies. A qualification and validation approach that works for all of these types is needed.

Valentin Kottler, Robert Bosch GmbH

Automotive application requirements can differ greatly from one application to another. They drive basic memory device characteristics such as speed, write endurance, data retention time, temperature performance and cost effectiveness, noted Valentin Kottler, Robert Bosch GmbH. One application may require only a few write cycles of the entire memory; another may require the same component to sustain more than half a million write cycles. Still another application might require 30 years of data retention, which corresponds to the typical 20-year lifetime of the car plus up to 10 years of shelf time if the supplier has to pre-produce the electronics that support that application.

The simultaneous fulfillment of all these requirements may not be possible in any cost-effective way. What is needed is an application-specific approach to validation. The trade-off is that application-specific validation may need to be repeated for each new application that uses a given component, which can mean significant validation and qualification effort.

Standard approaches using fixed stress tests – like the “3 lots x 77 parts/lot” approach – cannot cover the wide spread of mission profiles and the high variety just described. The Automotive Electronics Council (AEC) AEC-Q100 standard defines a failure-mechanism-based stress test qualification for packaged integrated circuits (1). Its 3 lots x 77 parts/lot test aims at demonstrating a failure rate below 1% with 90% confidence.
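To see where those numbers come from, here is a minimal sketch of the standard zero-failure (“success run”) binomial calculation; it is a generic illustration of the statistics, not a procedure taken from AEC-Q100 itself.

```python
# Confidence that the true failure rate is below p when n parts are
# stressed and none fail: C = 1 - (1 - p)^n  (zero-failure binomial bound).

def confidence(n_parts: int, p_fail: float) -> float:
    return 1.0 - (1.0 - p_fail) ** n_parts

n = 3 * 77                   # 231 parts total
print(confidence(n, 0.01))   # ~0.90, i.e. ~90% confidence of a <1% failure rate

# The same formula shows why fixed sampling cannot address ppm targets:
# demonstrating <1 ppm at 90% confidence would need ~2.3 million failure-free parts.
```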

More importantly, this type of approach does not provide information on margins (discussed shortly), which are very important for determining the ppm failure rates in the field.

For these reasons, the standard approach needs to be complemented with a flexible qualification methodology such as the Robustness Validation approach, as described by ZVEI (2):

“A RV Process demonstrates that a product performs its intended function(s) with sufficient margin under a defined Mission Profile for its specified lifetime. It requires specification of requirements based on a Mission Profile, FMEA to identify the potential risks associated with significant failure mechanisms, and testing to failure, “end-of-life” or acceptable degradation to determine Robustness Margins. The process is based on measuring and maximizing the difference between known application requirements and product capability within timing and economic constraints. It encompasses the activities of verification, legal validation, and producer risk margin validation.”

Wikipedia defines robustness validation as follows:
“Robustness Validation is used to assess the reliability of electronic components by comparing the specific requirements of the product with the actual “real life values”. With the introduction of this methodology, a specific list of requirements (usually based on the OEM) is required. The requirements for the product can be defined in the environmental requirements (mission profiles) and the functional requirements (use cases).”

The Robustness Validation (RV) technique characterizes the intrinsic capability and limitations of the component and of its technology. It is a failure-mechanism- and technology-based approach that uses test-to-fail trials instead of test-to-pass and employs drift analysis. Further, it allows an assessment of the robustness margin of the component in the application.

For clarification, the test-to-pass approach refers to a test conducted using specific user-flow instructions. Conversely, a test-to-fail approach refers to testing a feature in every conceivable way until it fails. Test-to-pass is adequate for proof-of-concept designs, but for end products test-to-fail is necessary to address reliability, quality and safety concerns.

The benefit of the robustness validation approach is that the characterization of the device capability would only need to be done once, explained Kottler. Subsequent activities would allow for the deduction of the behavior of the memory under the various mission profiles without repeating the qualification exercise.

Robustness Margin

Robustness Validation (RV) can be used as a holistic approach to NVM qualification. One way to visualize RV is to consider two memory parameters, e.g., endurance and temperature. The intrinsic capability of the NVM may be described as an area spanned by these two parameters (see Figure 1). Within that area lie the hard requirements for the memory (the NVM spec) and for the application (the application spec). The remaining distance between the application spec and the NVM capability limit is called the “robustness margin.”

In other words, the robustness margin is a measure of the distance of the requirements to the actual test results. It is the margin between the outer limits of the customer specification and the actual performance of the component.

The importance of the robustness margin is that it determines the actual safety margin of the component as used in the application versus its failure mode.
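As a toy illustration of the idea, the margin can be viewed as the headroom between the application spec and the characterized capability. The parameter values below are made-up placeholders, and real margins are assessed per failure mechanism using acceleration models rather than simple subtraction.

```python
# Robustness margin as headroom between application spec and characterized
# NVM capability, shown for two parameters. Values are illustrative only.

app_spec       = {"endurance_cycles": 100_000, "max_temp_c": 125}
nvm_capability = {"endurance_cycles": 500_000, "max_temp_c": 150}  # from test-to-fail trials

robustness_margin = {k: nvm_capability[k] - app_spec[k] for k in app_spec}
print(robustness_margin)   # {'endurance_cycles': 400000, 'max_temp_c': 25}
```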

The overall capability of the device, including its quality and reliability, is determined and ultimately designed in throughout the product development life-cycle phases:

  • Product & technology planning
  • Development and design
  • Manufacturing and test

To prove whether the device is suitable for automotive use, data is gathered from the early design phases in addition to qualification trial data. The performance of the device is then investigated under the specific application conditions.

Robustness Validation Applied to Memory Qualification

How then do you specifically apply the Robustness Validation approach to a memory qualification? Kottler listed four basic steps in his presentation (see Figure 1). Note that Steps 2 and 3 require input from the NVM supplier. Further, the NVM supplier can run these exercises without input from Step 1 or output to Step 4. We’ll now consider each of these steps more closely.

Figure 1: Steps to apply the Robustness Validation approach to memory devices.

The first step is to identify the mission profile, which describes the loads and stresses acting on the product in actual use. These are typically temperature changes and profiles, vibration, electrical and mechanical loads, and other environmental factors. To qualify a non-volatile memory for a specific automotive application, an automotive Tier 1 supplier must therefore identify the sum of the application’s requirements on the NVM and must assess whether, and to what extent, a given NVM component will fulfill them.

To determine the mission profile, all NVM component application requirements must be collected from electronic control unit (ECU) design, manufacturing, and operation in the vehicle. This is usually done within the Tier 1 organization based on requirements from the vehicle manufacturer.
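As a simple sketch of how a collected mission profile might be captured as structured data (the field names and values are illustrative assumptions, not figures from the paper):

```python
# Illustrative mission-profile record for an NVM component.
from dataclasses import dataclass, field

@dataclass
class MissionProfile:
    vehicle_lifetime_years: float
    shelf_time_years: float
    write_erase_cycles: int
    temp_hours_c: dict = field(default_factory=dict)  # operating hours per temperature bin

infotainment_profile = MissionProfile(
    vehicle_lifetime_years=20,
    shelf_time_years=10,
    write_erase_cycles=100_000,
    temp_hours_c={-40: 200, 25: 6000, 85: 1500, 105: 300},
)
```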

The second step requires identification of all relevant failure mechanisms. Specifically, it means mapping the application requirements onto the intrinsic properties and failure modes of the NVM component. This requires the competence of the component supplier, who must share its understanding of the NVM physics and design to identify all relevant failure mechanisms. Intensive cooperation of the NVM technology and product experts with the quality and reliability teams on both the NVM supplier and Tier 1 sides is necessary to accomplish this step.

As an example, consider the typical requirements on an NVM component: data retention, re-programmability and unaltered performance as specified over the vehicle lifetime and under the various conditions of a vehicle’s harsh environment. According to Kottler’s paper, some of the corresponding failure mechanisms in a flash memory include the various charge loss mechanisms through dielectrics, charge de-trapping, read, program and erase disturbs, tunnel oxide degradation due to programming and erasing, as well as radiation-induced errors. These mechanisms are largely predetermined by choices made in the design of the NVM technology, memory cell and array architecture, as well as in the conditions and algorithms for programming, erasing and reading.

The third step focuses on trial planning and execution, with the goal of characterizing NVM capabilities and limits with respect to the previously identified failure mechanisms. As in the previous step, this requires the competence and participation of the component supplier to provide insight into the physics of the NVM, as well as into NVM quality and reliability. Accelerated life testing models, their parameters and their limitations need to be identified for each failure mechanism. The health of the NVM component with respect to each failure mechanism must be observable and must allow for drift analysis, e.g., by measuring variations of the memory cells’ threshold voltage.
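To illustrate the kind of acceleration model involved, here is a minimal sketch of a temperature-acceleration (Arrhenius) calculation. The activation energy and temperatures are generic assumptions rather than values from the paper; the actual models, parameters and validity limits must come from the NVM supplier.

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_acceleration(t_use_c: float, t_stress_c: float, ea_ev: float) -> float:
    """Acceleration factor of a high-temperature stress relative to use conditions."""
    t_use_k, t_stress_k = t_use_c + 273.15, t_stress_c + 273.15
    return math.exp((ea_ev / K_B) * (1.0 / t_use_k - 1.0 / t_stress_k))

# One hour of a 150 C retention bake corresponds to roughly a hundred hours
# at a 55 C use condition, for an assumed activation energy of 0.6 eV.
print(arrhenius_acceleration(t_use_c=55, t_stress_c=150, ea_ev=0.6))
```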

How might the drift analysis be performed and by whom, i.e., the supplier or the Tier 1 customer? For example, will the flash memory provider be asked to give the customer more component data?

According to Kottler, the drift analysis depends on the flash memory manufacturer, which measures data that is not accessible to the customer or end user. Generally, the latter does not have access to the test modes needed to obtain this data; only the manufacturer has the product characterization and test technologies for its components.

The manufacturer and customer should work together to jointly define the parameters that need to be tracked; it is a validation task. The measurements are done by the manufacturer, but the manufacturer and customer should jointly interpret the details. What the customer does not need is a blanket statement that the components have simply passed qualification. This “test to pass” approach is no longer sufficient, according to Kottler.

The trials and experiments for drift analysis need to be planned and jointly agreed upon. Their execution usually falls to the NVM supplier, being the only party with full access to the design, necessary sample structures, test modes, programs and equipment.

According to Kottler, identifying an appropriate electrical observable is of utmost importance when applying Robustness Validation (RV) to NVM. Such observables may be the memory cell threshold voltage (Vth) for NOR flash and EEPROM, or the corrected bit count for managed NAND flash memories. Both observables provide a sensitive early indication of memory health status and must therefore be accessible for qualification, production testing and failure analysis in automotive use.
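To make the drift-analysis idea concrete, here is a small sketch that tracks one such observable, the corrected bit count of a managed NAND block, against applied write/erase cycles and extrapolates toward an assumed ECC limit. The data points, the log-linear trend and the limit are illustrative assumptions only; real drift models are failure-mechanism specific and agreed with the supplier.

```python
import numpy as np

cycles         = np.array([1e3, 1e4, 5e4, 1e5])   # applied write/erase cycles
corrected_bits = np.array([2, 6, 18, 35])          # worst-case corrected bits per ECC block

# Fit a simple trend against log10(cycles) and extrapolate to the ECC capability.
slope, intercept = np.polyfit(np.log10(cycles), corrected_bits, 1)
ecc_limit = 72                                      # correctable bits per block (assumed)
cycles_at_limit = 10 ** ((ecc_limit - intercept) / slope)
print(f"Extrapolated cycles to reach the ECC limit: {cycles_at_limit:.1e}")
```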

The fourth and final step in the Robustness Validation approach involves the assessment of the reliability and robustness margin of the NVM component against the mission profile of the automotive application. The basis for this assessment is the technology reliability data and consideration of the initial design features and limitations, such as error correction code (ECC), adaptive read algorithms (e.g. read retry) and firmware housekeeping (e.g. block refresh and wear leveling), noted Kottler in his paper.

Reliability characterization at the technology and component levels does not necessarily have to be separated. Combined trials may even be recommended, e.g., for managed NAND flash, because of the complex interaction between firmware, controller and NAND flash memory.

Benefits of the Robustness Validation Approach

The Robustness Validation (RV) approach provides a straightforward way for a semiconductor company to design and validate an NVM component that is acceptable in the automotive electronics market. Using RV, the supplier enables its customers to assess the suitability of the component for their applications in the necessary detail.

The NVM qualification and characterization report that results from this approach should list the memory failure mechanisms considered and characterized. Further, the report should describe the acceleration models applied and show drift-analysis data supporting a quantitative prediction of failure rate versus stress or lifetime for each failure mode. According to Kottler, combinations of stresses are to be included as previously agreed, e.g., temperature-dependent data retention capability after write/erase endurance pre-stress.

To some, the Robustness Validation approach may appear to cause significant additional qualification work. However, most or all of these reliability investigations are part of the typical NVM product and technology characterization during the development phase. For new designs, the optimized top-down RV approach may be applied directly. For existing NVM designs, the approach must be tailored by agreement between the NVM supplier and the Tier 1 company, potentially re-running trials to complete the RV data set. Even so, some existing NVM components may not pass automotive qualification. It is therefore important to jointly assess the feasibility of automotive NVM qualification by RV prior to the design-in decision.

The end result of the RV approach is “an efficient solution to cope with the high requirements of the automotive market, requiring a close cooperation along the value creation chain,” noted Kottler.

Summary

Automotive expectations of non-volatile memory (NVM) components continue to grow due to market evolution, increasingly complex data structures, and the demand for performance and endurance. Tier 1 and NVM suppliers must cope with this challenge jointly. By considering these expectations from the beginning of product and technology development, and by providing comprehensive data, the NVM supplier can enable the automotive Tier 1 to assess the NVM’s suitability for the application under a Robustness Validation (RV) approach.

References

  1. AEC-Q100, “Failure Mechanism Based Stress Test Qualification for Integrated Circuits,” Rev. H, Sept. 2014
  2. ZVEI “Handbook for Robustness Validation of Semiconductor Devices in Automotive Applications,” 3rd edition, May 2015, pp. 4-20

 

Read the complete story and original post on “IP Insider”

Automotive HW-SW Integration – SAE Event at UofW Campus

Saturday, January 18th, 2014

How do automotive electronic designers and testers handle the complexities of hardware-software integration at the chip, board, and network levels? I’ll answer this question at the upcoming event sponsored by the Seattle Chapter of the Society of Automotive Engineers (SAE). After my talk, attendees will be able to tour the SAE Formula 1 Manufacturing area.

SAE NW January Event Details:

Date: January 23rd, 2014
Location: University of Washington (Seattle)
Mechanical Engineering Building – Room 238
Agenda: 6:00 Social – Pasta Bar
6:30 Introductions – SAE and INCOSE
6:45 Presentation – John Blyler
7:45 Tour of SAE Formula 1 Manufacturing area
RSVP: Email Mark.Shoaf@PACCAR.com to RSVP.
Dinner-Event is Free.

 

Background: The current state of the art is to integrate, verify, validate, and test automotive hardware and software with a complement of physical hardware and virtual software prototyping tools. The growth of sophisticated software tools, sometimes combined with hardware-in-the-loop devices, has allowed the automotive industry to meet shrinking time-to-market, decreasing cost, and increasing safety demands. But is this approach enough, especially when applied across the entire system of chip-board-network electronics? This talk will address this and related issues.

Software-Hardware Integration in Automotive Product Development by John Blyler

INCOSE/SAE January 23 joint event – Software and Hardware Integration in Automotive Product Development

Software-Hardware Integration of Automotive Electronics

Friday, October 11th, 2013

My SAE book collects and expands on expert papers covering automotive hardware-software electronic integration at the chip, package, and vehicle network levels.

My latest book – more of a mini-book – is now available for pre-order from the Society of Automotive Engineers. This time, I explore the technical challenges in the hardware-software integration of automotive electronics. (Can you say “systems engineering?”) I selected this topic to serve as a series of case studies for my related course at Portland State University. This work includes quotes from Dassault Systemes and Mentor Graphics.

 

Software-Hardware Integration in Automotive Product Development

Coming Soon – Pre-order Now!

Software-Hardware Integration in Automotive Product Development brings together a must-read set of technical papers on one of the most talked-about subjects among industry experts.

The carefully selected content of this book demonstrates how leading companies, universities, and organizations have developed methodologies, tools, and technologies to integrate, verify, and validate hardware and software systems. The automotive industry is no different, with the future of its product development lying in the timely integration of these chiefly electronic and mechanical systems….

 

IP Smoke Testing, PSI5 Sensors, and Security Tagging

Friday, April 19th, 2013

The growth of semiconductor IP brings challenges for subsystem verification, integration, security, and the addition of sensor standards. Can Savage clear the smoke?

Warren Savage, marathon runner and President & CEO of IP-Extreme, talks about the trends, misconceptions, and dangers resulting from the increasing popularity of semiconductor intellectual property (IP). What follows is the first portion of a two-part story.

Blyler: Let’s start by talking about trends in IP for field-programmable-gate-array (FPGA) and application-specific-integrated-circuit (ASIC) systems-on-a-chip (SoCs). What’s new?

Savage: If you look at global macro trends, you’ll see an increased amount of customization. For example, the iPhone can be customized “six ways to Sunday.” I’m starting to see something similar in the semiconductor space, where companies are differentiating themselves through IP and putting it together in different ways. Some guys – the Broadcoms and Qualcomms of the world – can do huge quantities of SoCs. But many mid- and lower-tier guys are doing more customized types of products that appeal to a certain niche market. [Editor’s Note: Makimoto’s Wave remains in its customization cycle.]

If the volumes are low enough, an FPGA with the right IP could offer a big differentiation from an off-the-shelf (ASIC) SoC. I’m seeing more of that trend and it gets stronger every year.

Blyler: The numbers are showing that IP continues to be a larger share of the revenue. How about subsystem IP? Is it starting to take off – perhaps in the vertical integration of certain types of IP?

Savage: One of the artifacts of the downturn from several years ago is that fewer engineers must do more increasingly complex things. We are seeing people buy more of our subsystem IP that includes the processors, bus infrastructure, and peripherals needed to run a real-time operating system (RTOS). People want to buy the whole thing and start with a working platform, then add their IP around that platform.

Blyler: Won’t the move toward subsystem IP lead to more verification and integration issues? Does the entire subsystem then come with a suite of tests or do you need to test each IP block individually?

Savage: The expectation is that the subsystems are fully verified and come to the designer as a black box. Typically, people don’t re-verify things of that complexity; it is too much. Plus, it has already been verified at the subsystem level by the IP provider. What the IP provider supplies is some type of integration-level test so the designer can – for lack of a better word – run a “smoke test” to ensure that the subsystem is installed properly and fundamentally working. In processor-based designs, it means you can run software like a “hello world” test that verifies all of the memory, interface, and peripheral connections.

Not a lot of extra verification tests come along with the IP. After all, the IP is expected to be used as a black box.
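For illustration only, here is a rough sketch of the kind of checks such an integration-level smoke test performs. The register map, addresses and access helpers are hypothetical, and a real test of this sort would typically ship as bare-metal firmware with the subsystem.

```python
# Hypothetical host-side sketch of a "hello world" smoke test that exercises
# memory, the bus interface and a peripheral of a delivered IP subsystem.

RAM_BASE, RAM_SIZE = 0x2000_0000, 0x1000   # assumed subsystem RAM window
UART_TX = 0x4000_0000                       # assumed UART transmit register

def smoke_test(read32, write32):
    """read32/write32 are register-access callables supplied by the test bench."""
    # Memory and bus check: write a pattern across the RAM window, read it back.
    for offset in range(0, RAM_SIZE, 4):
        write32(RAM_BASE + offset, offset ^ 0xA5A5A5A5)
    if any(read32(RAM_BASE + o) != (o ^ 0xA5A5A5A5) for o in range(0, RAM_SIZE, 4)):
        return False
    # Peripheral check: push "hello world" out of the UART one byte at a time.
    for ch in b"hello world\n":
        write32(UART_TX, ch)
    return True
```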

Blyler: With the rise of sensors in our increasingly connected world, I would expect to see more sensor-related IP. Is that the case? Or is it a microelectromechanical-systems (MEMS) technology and fabrication issue?

Savage: One of our major customers is a provider of automotive sensors. They use MEMS technology for sensors, accelerometers, etc. – as part of their products. But these MEMS chips and sensors still need to be connected into an SoC. That’s why we’re seeing interest in the Peripheral Sensor Interface 5 (PSI5) standard, which is a specific interface dedicated to automotive sensor applications. PSI5 is kind of an upgrade to the Local Interconnect Network (LIN) standard.

Here is an overview of hardware interfaces (courtesy of Vector).

For background, there’s a hierarchy of automotive interface standards. At the low end of complexity is LIN, which is typically used to control the mirrors on a car via a driver-side toggle switch. Next comes the controller-area-network (CAN) bus interface, which has a lot more bandwidth for moving data around. CAN is used for suspension, airbags, etc.

Lastly, FlexRay is a true vehicle network for real-time apps. Eventually, it will give way to a drive-by-wire or steer-by-wire implementation.

Blyler: What’s new on the security front for semiconductor IP?

Savage: On the commercial side, I’m seeing less and less concern about security. But there are some developments that will be important. For some time now, there has been an IEEE standard on hard IP tagging that allows you to track cores at the GDS level. (Editor’s Note: Hard IP is offered in a GDSII format and optimized for a specific foundry process.)

The thing that has been missing is what to do about soft IP. (Editor’s Note: Soft IP is synthesizable in a high-level language like RTL, C++, Verilog, or VHDL.) Watermarking the code is a common approach for tracking soft IP and one that we use at IP Extreme. I’ve been working with Kathy Werner, who heads a committee on soft-IP tagging. She has worked with IP at Freescale and then Accellera. Her committee is incorporating many of the same conventions into soft IP that proved successful in hard IP. The goal is that these soft-IP security mechanisms will work throughout the EDA-tool design flow to be propagated downward into the GDSII. In other words, the high-level soft-IP tags could be detected at the GDS level.



STMicroelectronics Pushes SOI While Leaving the Mobile Space

Thursday, December 20th, 2012

Why is one of Europe’s leading semiconductor IDMs pushing into leading-edge, 28-nm FD-SOI technology while leaving a market where such technology might be useful?

It was a chance meeting that made me wonder about two recent announcements from one of the world’s largest semiconductor companies.

Last week, I attended an IEDM briefing in which STMicroelectronics presented silicon-verified data to further confirm the manufacturability of its 28-nm Fully Depleted Silicon-on-Insulator (FD-SOI) technology (see “FinFETs or FD-SOI?“). Ed Sperling, Editor-in-Chief for SemiMD, summed it up this way:

“What’s particularly attractive about FD-SOI is that it can be implemented at the 28-nm node for a boost in performance and a reduction in power. The mainstream process node right now is 40 nm. And while Intel introduced its version of a finFET transistor called Tri-Gate at 22 nm, TSMC and GlobalFoundries plan to introduce it at the next node—whether that’s 16 nm or 14 nm. That leaves companies facing a big decision about whether to move all the way to 16/14 nm to reap the lower leakage of finFETs, whether to move to 20 nm on bulk, or whether to stay longer at 28 nm with FD-SOI.”

Joel Hartmann, Executive VP Front-End Manufacturing & Process R&D, STMicroelectronics, presents SoC-level, 28-nm Planar Fully Depleted silicon results at IEDM 2012.

I didn’t realize it until later that week but, on the same day as its 28-nm FD-SOI technology announcement, STMicroelectronics stated that it would curtail its presence in the mobile-handset space via the ST-Ericsson partnership. As Chris Ciufo noted in his “All Things Embedded” blog, the company will remain in only two market domains: Sense and Power and Automotive, as well as Embedded Processing. “For the former, device categories include MEMS, sensors, power discretes, advanced analog, automotive powertrain, automotive safety (such as Advanced Driver Assistance Systems [ADASs]), automotive body, and the red-hot In-Vehicle Infotainment (IVI) category,” wrote Ciufo.

In the embedded processing market, the company will “focus on the core of the electronics systems” and ditch wireless broadband. Target areas include microcontrollers, imaging, digital consumer, application processors, and digital ASICs.

Considered together, these two announcements raise the following question: If STMicroelectronics is only interested in the sensor, automotive, and “embedded” markets, why does the company need to work at leading-edge process nodes like 28-nm FD-SOI? This question arose during a recent chance meeting with Juergen Jaeger, Sr. Product Manager at Cadence Design Systems.

Jaeger suggested a possible answer by noting that Moore’s Law generally provides cost savings along with power and performance benefits at smaller process nodes. “This makes sense for both automotive infotainment and networking technologies,” explained Juergen. “But it doesn’t make too much sense for gearbox, engine, anti-lock brakes, or steering systems, since they need high reliability and tolerance.” Those requirements tend to restrict such devices to fully tested, larger-geometry nodes.

Jaeger reminded me that infotainment systems-on-a-chip (SoCs) are very complex devices requiring integrated network and wireless systems – in addition to an array of audio/video codecs that must drive multiple LCD screens within today’s cars.

Additionally, STMicroelectronics’ move to FD-SOI is one way to mitigate the risk facing leading-edge bulk CMOS processes. As Sperling observed, “At 28 nm and beyond, however, bulk has run out of steam, which is why Intel has opted for finFETs.” Meanwhile, FD-SOI offers power and performance benefits while staying on today’s planar-transistor manufacturing processes.

In the end, the push toward FD-SOI technology at existing 28-nm nodes may play well into a number of low-power and high-performance chip markets. This is not a path without risk. But it does highlight the accelerating convergence of SOI and bulk CMOS at leading-edge nodes. And it should strengthen STMicroelectronics’ strong position in the automotive infotainment space.

Originally posted on “IP Insider.”

What do Medical Devices, Facial Recognition, Genivi, and Clustering Processors have in Common?

Thursday, September 22nd, 2011

All of these very cool technologies – showcased by Intel’s ECA partners at IDF2011 – provide a clear direction for future trends in medical, consumer and automotive electronics.

Let’s start with the cluster controller and backplane technology.

Designers who require blazingly fast backplane buses are happy to see the development of PCI Express Generation 3 products. The latest version of the popular interface provides an impressive eight gigabits per second (Gbit/s) per lane, and 128 Gbit/s in designs using x16 port widths. Such performance will be welcomed in the enterprise computing, storage and communications spaces.

IDT demonstrated its latest high-performance PCIe switches alongside re-timing devices for longer-distance applications. Ken Curt, Sr. Product Marketing Manager of the Enterprise Computing Division at IDT, gave me the one-minute demonstration tour:

“Here are our newly announced Gen3 packet-switch devices. In this demonstration (see Figure 1), we are using a Gen2 server since we cannot get a Gen3 server. The Gen2 signal comes out over cable into our packet switch, which does a rate conversion from 5 Gbit/s to 8 Gbit/s. The 8-Gbit/s signal – 8 lanes in parallel – is sent to a Gen3 SATA/SAS controller card from LSI Logic.”

“Also, we are tapping off to a LeCroy bus analyzer (not shown) to verify that we are running 8 Gbit/s across 8 lanes. Further, we are showing our eye-diagram capability to see the waveform inside of our chip and optimize the signal configuration.

“For longer traces and longer cables, we also provide PCI Express Gen3 re-timing devices, which will easily extend across 30 inches of trace or backplane.”

Short, sweet and to the point. Great demo, Ken!

Figure 1: Ken Curt from IDT demonstrates a PCI Express Gen3 (converted from Gen2) data storage application at IDF2011.

 

Turning from data storage and cloud computing backplane technology, let’s now take a brief look at the medical space.

Embedded and mobile software vendor Wind River introduced a new platform for medical device development at the show. The platform, built on the company’s real-time operating system (RTOS), includes a collection of embedded software development tools, networking and middleware run-time technologies, such as IPsec, SSL, IPv6 and USB.

Having a platform is great, but experiencing the end product is even better (see Figure 2). Automated “cuff” blood pressure measuring devices are nothing new. What is new is having such automated devices meet stringent vendor qualification summary (VQS) processes – which are part of the company’s development platform.

Equally important to accurate monitoring of one’s blood pressure is displaying the information in a user-friendly format (see Figure 3). This is accomplished through a collection of development tools known as the Tilcon Graphics Suite. Products such as these are sure to find a place in the booming home-care market, as well as in hospitals and the like.

Figure 2: The Wind River folks have a great bedside manner.

Figure 3: Apparently, I’m a somewhat overweight woman with higher than normal blood pressure. Well, that’s good to know.

 

Moving on – Let’s look at the world of intelligent displays.

Emerson Networks had a great demonstration of facial recognition applications for intelligent kiosks. I believe this kiosk was running the KR8-315, a fanless embedded computer based on the Atom E640 processor running at 1.0 GHz with 1GB DDR2 and a 64GB Solid State Drive.

Figure 4: Note the “Viewer Count” and “Majority Gender” in the bottom part of the display panel. The next figure shows how these numbers are derived via facial recognition technology.

 

Figure 5: Facial recognition is used to determine “Viewer Count” and “Majority Gender.” That is Connie Schultejans from Emerson in the background.

 

Changing direction – Let’s now move to the automotive market.

Mentor Graphics is a member of the GENIVI Alliance, a non-profit industry alliance for the adoption of an In-Vehicle Infotainment (IVI) reference platform. After the recent relationship cool-off with mobile phone giant Nokia, Intel has repositioned (or re-emphasized) its MeeGo operating system platform within GENIVI. (See “ATOM Leader Leaves Intel.”)

In addition to MeeGo, Mentor also offers a complete Android-platform development environment. All of these operating systems, including Mentor’s Embedded Linux, run on Atom processors – among others. A tool suite known as Inflexion is used to create the impressive user interfaces (see Figure 6).

Figure 6: Supporting the GENIVI In-Vehicle Infotainment market, Mentor Graphics offers user interface development tools that operate on MeeGo, Android and Linux systems running on Intel Atom processors.

IP’s Silent Presence in Automotive Market

Thursday, June 16th, 2011

Even though there was no specific mention of IP at this year’s Integrated Electrical Solutions Forum (IESF), all discussions about the future growth of both infotainment systems and autonomous self-braking, parking and driving vehicle operations made clear that such growth will only be possible through a heavy reliance on chip and FPGA IP.

Not once did I hear or see the phrase “IP” while attending the 2011 Integrated Electrical Solutions Forum (IESF) in Dearborn, MI. The absence of IP nomenclature was hardly surprising, as the forum focused on Electronic/Electrical (E/E) systems design and wire harness engineering issues. Still, the value and growth of electronic hardware and software reuse were apparent throughout the event.

From the beginning of the one-day show, the growing importance of electronics in automotive systems was stressed. The first keynote speaker, John McElroy, host of the Autoline Daily show, ended his presentation by talking about cars that can brake, park and even drive by themselves. One can imagine the extensive array of sensors, networks, and analog and digital subsystems needed to accomplish these autonomous tasks.

McElroy even went so far as to say that these vehicle-to-vehicle communication-based systems would be game changers for the automotive industry and might be available by 2014. Perhaps that is why Google is a major developer in several of these initiatives. http://www.youtube.com/watch?v=X0I5DHOETFE

In today’s automobiles, electronics are the chief differentiator between competing automakers. In terms of numbers, automotive electronics per car typically include hundreds of sensors, tens of Electronic Control Unit (ECU) processing systems, miles of cable harnesses and extensive network systems – not to mention up to 10 million lines of software code.

The complexity hinted at by such numbers, coupled with the safety concerns of the industry, makes hardware and software reuse a must. Trusted hardware IP and software libraries are the most obvious way to achieve the economies of scale and shortened development cycles demanded by a consumer market.

The automotive space is still an industry of silos, spanning the electrical, mechanical, industrial and computer science disciplines. In practical terms, this means that hardware and software engineers don’t often talk with one another. Such communication challenges are why Wally Rhines, CEO of Mentor Graphics, noted in his keynote that the “biggest problems still occur in the system integration phase of the V-diagram life cycle.” The integration phase is traditionally the stage of the system or product life cycle where hardware and software subsystems first come together. Not surprisingly, this is often where interface problems first appear.

Interface definition means connecting the right bits to the right bus, activities that are best modeled in the early architectural design phases of the life cycle. Such models include virtual prototypes, which simulate the yet undeveloped hardware – usually via IP and libraries. But virtual constructs are still relatively new to the automotive industry, where prototypes are still predominantly physical. However, the complexities of electronic systems are making physical prototypes a thing of the past.

Paul Hansen, in his annual report at the end of the IESF, noted that automotive giants like Ford are relying on newer players like Bsquare, an embedded software vendor, to help create infotainment systems. Apparently, Tier 1 software providers are struggling with the hardware-software challenges of ever more complex and integrated infotainment systems. Here is yet another segment where hardware and software IP reuse can bring significant benefit.

One doesn’t need to look very far to find a growing market for automotive infotainment IP. Common elements in this segment include processors from ARM, ARC and Tensilica (among others) for audio systems, audio amplifiers, communication controllers for automotive-specific networks like CAN and for more general standards like Bluetooth, DACs, microcontrollers, memory, and even embedded security.

Automotive-related FPGA IP is also growing, such as Ethernet Audio Video Bridging (AVB) IP for network connectivity, MOST and FlexRay network controllers, and even stepper motor controller IP for the simultaneous operation of two-phase stepper motors.

IP continues to play an important role in automotive electronics, even if the phrase is seldom used at market events.

[First published on IPInsider at ChipEstimate.com ]