Chip Design Magazine




Beginning the Discussion on the Internet-of-Space

Tuesday, January 17th, 2017

A panel of experts from academia and industry assembled at the recent IEEE IMS event to answer critical questions about the role and impact of RFIC technologies.

By John Blyler, Editorial Director

Synopsis of compelling comments:

  • “Satellite communication becomes practical, low cost, and comparable to LTE only if you are at multi-Tera-bit per second capacity.”
  • “Ultimately, we are not selling the bandwidth of our system but the power.”
  • “Power harvesting on the satellite is one of the most important things we can do.”
  • “You must establish commercial-off-the-shelf (COTS) variants of your main space product line (to support both new and traditional space).”
  • “You need to consider new business models as well as new technology and processes.”

Recently, the IEEE MTT Society sponsored an initial discussion and networking session on the Internet of Space (IoS) at the 2016 International Microwave Symposium. It was billed as one of several upcoming forums to bring the IoS and IoT communities together as these technologies and systems continue to evolve. The short-term goal of this initiative is to “jump start a global technical community cutting across multiple hardware-oriented fields of interest including: aerospace systems; antennas; autonomous systems; communications; electronics; microwave/mm-wave technology; photonics; positioning, navigation and timing; power electronics, etc.”

With representation from a global community of satellite and end-user companies, the IEEE IMS 2016 Rump Session Panel explored the technical and business challenges facing the emerging industry. What exactly is the IoS? Does it include both low-earth orbit and potentially sub-orbital platforms like drones and balloons? How do microwave and RF designs differ for satellite and airborne applications? These are a few of the questions that were addressed by the panel. Part 1 of this series of reports focuses on the challenges forecasted by each of the panelists. What follows is a portion of that panel discussion. – John Blyler

Panelists and Moderators (left to right):

  • [Co-Moderator] Sanjay Raman, Professor and Associate VP, National Capital Region, Virginia Tech
  • Prakash Chitre, Comsat Laboratories, ViaSat, VP and GM
  • Hamid Hemmati, Facebook, Director of Engineering for Telecom Infrastructure
  • Lisa Coe, Director of Commercial Business Dev. for Boeing
  • David Bettinger, OneWeb, VP of Engineering, Communications Systems
  • Michael Pavloff, RUAG Space (Zürich, Switzerland), Chief Technology Officer
  • [Co-Moderator] Mark Wallace, VP and GM, Keysight

Raman (Co-Moderator): Hi. I’m joined by Mark Wallace, my co-moderator to this panel. We’re here to discuss the emerging Internet-of-Space (IoS) industry. Let’s start with Prakash Chitre from Comsat Labs.

Chitre (Comsat): I’m going to talk about the new generation of satellite systems that ViaSat has been designing, building and launching. This will give you an understanding of what we have been doing for the last 5 years and our plans for the next 5 years. The main goal for us is to provide connectivity throughout the world. Even with today’s voracious appetite for high-speed and high-volume Internet, half the world’s population of 7 billion people has no broadband Internet connection.

ViaSat has three satellites: ViaSat-1, WildBlue 1, and Anik-F2. Most of these, like Anik-F2 and WildBlue 1, were more or less traditional Ka-band satellites with about 8 Gbps in throughput. But the ViaSat-1 satellite that we designed and launched in 2011 had about 140 Gbps (see Figure 1). ViaSat-1 handles about 1 million users and covers North America (NA), including the US and Canada. It was the start of a longer vision of very high throughput satellites to cover the globe.

Figure 1: ViaSat-1 rendering (Courtesy of Comsat Labs)

We want to provide broadband communication platforms that deliver affordable high-speed Internet connectivity and video streaming via fixed, mobile and portable systems. The key thing is that we are a totally vertically integrated solution; the terminals, the gateway and the satellite all fit together to provide a very cost-effective system. We deal with geosynchronous satellite latency issues with software embedded in the terminal and the gateway to make sure we can achieve very fast page loads for media.

[Editor’s Note: Terminals link the satellite signal to fixed and mobile locations on the ground and on airborne systems. Examples of terminals include satellite TV disk systems, aviation broadband devices for Ku-, Ka-, and dual-band in-flight connectivity, emergency responder equipment, cellular extensions and the like.]
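The geosynchronous latency issue Chitre mentions follows directly from geometry. A quick sketch (assuming the standard GEO altitude of about 35,786 km and speed-of-light propagation, and ignoring processing and queuing delays) shows why page-load acceleration software is needed:

```python
# Estimate propagation delay to a geostationary satellite.
# Assumes GEO altitude of ~35,786 km and free-space propagation at c;
# real links add processing and queuing delay on top of this floor.
C = 299_792_458          # speed of light, m/s
GEO_ALT_M = 35_786e3     # GEO altitude above the equator, m

one_hop = GEO_ALT_M / C  # ground -> satellite, one way
# A web request/response crosses the link four times:
# terminal -> satellite -> gateway, then gateway -> satellite -> terminal.
rtt = 4 * one_hop
print(f"one hop: {one_hop * 1e3:.0f} ms")           # ~119 ms
print(f"request/response RTT: {rtt * 1e3:.0f} ms")  # ~477 ms
```

Nearly half a second per request/response round trip is why the terminal and gateway software pre-fetch and accelerate page loads rather than letting every object on a page pay the full GEO round trip.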

Soon we’ll be launching ViaSat-2 (see Table 1), which will provide almost 2 ½ times the capacity of ViaSat-1 while providing much greater coverage. It will bridge the North Atlantic with contiguous coverage over NA and Europe, including all the air and shipping routes.

The ViaSat-3 ultra-high capacity satellite platform is comprised of three ViaSat-3 class satellites and ground network infrastructure.  The first two satellites will focus on the Americas and Europe, Middle East and Africa (EMEA). Work is underway with delivery expected in 2019. A third satellite system is planned for the Asia-Pacific region, completing global service coverage.

In the next few years, we’ll launch ViaSat-3, which will have about three times the capacity of ViaSat-2: 1 Tbps, with much larger coverage. We have already awarded the contract to Boeing to build the bus framework for the first ViaSat-3; we are designing and building our own payload.


Satellite Name               Throughput Capacity

WildBlue 1                   8 Gbps
IPSTAR 1                     45 Gbps
KA-SAT                       70 Gbps
ViaSat-1                     140 Gbps
EchoStar XVII                100+ Gbps
NBN-Co 1a (“Sky Muster”)     80+ Gbps
ViaSat-2                     350 Gbps
ViaSat-3 Americas            1 Tbps
ViaSat-3 EMEA                1 Tbps
ViaSat-3 APAC                1 Tbps

Table 1: ViaSat Satellites

Raman (Co-Moderator): Our next speaker is Hamid Hemmati, Director of Engineering for Telecom Infrastructure at Facebook.

Hemmati: Facebook’s interest in providing Internet coverage stems from our desire to connect everyone in the world, anyone who wants to be connected. Something like 60% of the world’s people aren’t on the Internet or have a poor connection, typically a 2G connection. If they are not on the Internet, they cannot participate in the connected world.

Most of the data centers around the world are based on open source models for both hardware and software. We can develop technologies that significantly increase capacity and lower costs, and then provide them to the community to develop and implement.

In terms of the global Internet, we are interested in developed and underdeveloped countries that don’t have connectivity. Providing connectivity to underdeveloped countries is fairly tricky because the population distribution is very different between countries. For example, red means a large population of people and green means a small one (Figure 2). As you can see, these are six different countries with widely different distributions. Some have a more or less uniform distribution, while others have regions that are sparsely populated.

Figure 2: Population distribution varies according to country. (Courtesy Facebook via IMS presentation).


Population distributions differ by orders of magnitude around the world, which means there is not one solution that fits all. You can’t come up with one architecture to provide Internet connection to everyone around the world. Each country requires a unique solution, and it is more cost effective to allocate capacity where needed. But each solution comes from a combination of terrestrial links with perhaps airborne or satellite links. Satellites are only viable if you can increase the data rate significantly, to about 100 Tbps. This is the throughput required to connect the unconnected.


  • 4 billion people with 25 kbps per user (based on average capacity and that users are on the Internet simultaneously).
  • Calculation: (4×109) x (2.5 x 104) = 100 Tbps

This is a staggering number (100 Tbps), so we are talking about very large capacity for all of these populations.
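The arithmetic behind that 100 Tbps figure is a single multiplication; a short sanity check (using the 4 billion users and 25 kbps per user quoted above) reproduces it:

```python
# Aggregate capacity = number of users x average sustained rate per user.
users = 4e9        # unconnected people to be served
rate_bps = 25e3    # 25 kbps average per simultaneous user
total_bps = users * rate_bps
print(f"{total_bps / 1e12:.0f} Tbps")  # 100 Tbps
```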

Technology advancements are required to extend the capability of current commercial wireless communication units by 1 to 2 orders of magnitude. What we need to do is amass the state of the art in a number of areas: GEO satellites, LEO satellites, high-altitude platforms, and terrestrial. Satellite communication becomes practical, low cost, and comparable to LTE only if you are at multi-Tbps capacity; otherwise it is much more expensive than providing LTE. There must be a business justification to do that.

High altitude platforms (like airplanes/drones) need to be able to stay airborne for months at a time. They must be low cost to produce and maintain, plus run at 10-100 Gbps uplink/downlink/crosslink RF and optical capacity.

Meanwhile, terrestrial solutions, including fiber and wireless, are already here. It’s just that it is immensely expensive to cover an entire country with fiber. So other solutions are needed, like wireless links, tower to tower, and so forth. This is just a laundry list of what needs to be done. It doesn’t mean we at Facebook are looking at all of them; we are looking at some of them. We want to get these technologies into the hands of the implementers.

Raman (Co-Moderator): Next, let me introduce Lisa Coe, Director of Commercial Business Dev. for Boeing. Originally, James Farricker, Boeing, VP Engineering, was slated to speak on this panel. He was not able to join us.

Coe: I looked up the phrase “new space” on Wikipedia since others are talking about the traditional vs. the new space. I was asking myself if Boeing is a traditional space or new space company. Wikipedia called out Boeing as “not” new space.

[Editor’s Note: New space is often affiliated with an emergent private spaceflight industry. Specifically, the terms are used to refer to a community of relatively new aerospace companies working to develop low-cost access to space or spaceflight technologies.]

Boeing builds commercial airplanes, military jets, helicopters, the International Space Station, satellites, cyber security solutions, and everything in between. We build a lot of very different things. So when you ask us about the Internet of Space (IoS) you’ll get a very different answer. Let me try to answer it.

When an airplane disappears, like the EgyptAir flight, a lot of people ask why we don’t connect airplanes via satellites. We need to get our airplanes smarter and all connected. Passengers are already connected on aircraft with Wi-Fi. So before we push for the Internet of Things, why don’t we push to get all the airplanes connected?

Boeing is also a user of the Internet of Space. For example, we just flew an unmanned aircraft that was completely remote controlled from the ground. This is why we care about security, about hacking into these systems. How can we make the Internet of Space secure to connect more people and things?

Raman (Co-Moderator): Next we have David Bettinger, VP of Engineering, Communications Systems, at OneWeb.

Bettinger: OneWeb is trying to provide very low latency Internet access to those who don’t have access, everywhere. We are two years into the project and are quite far along. The things that ultimately make us successful are the microwave components used in our system. I’m a modem guy by nature – not an RF one. I wish all modems and baseband could stay at baseband, but of course RF is needed on the wireless side. We utilize Ku-band in our system. We also have access to Ka-band, which we use for the more focused feeder links that service the satellites.

Supporting both bands means that we need a lot of different components for different functionality. The satellite is probably the most critical for us. The only thing that makes something as crazy as launching 648 satellites feasible is if we get the cost of the satellite and the weight down significantly compared to what is actually done today. Our satellite is about the size of a washing machine, weighing roughly 150 kg. You can fit 30 of them on the launch (payload). That is what makes this work.

The only thing that makes the satellite mass work is figuring out the power problem. Ultimately, we are not selling the bandwidth of our system but the power. This is because we don’t have the luxury of a bus-sized satellite up there designed to deliver power constantly regardless of the environment, whether you are in an eclipse or not. We have to effectively manage our power with the subscribers of the service. Power harvesting on the satellite is one of the most important things we can do. It drives almost every aspect of our business case.

We have looked heavily at a lot of different semiconductor technologies, especially GaN and GaAs chip technologies. We are utilizing low noise amplifiers (LNAs) and up/down converters, among other components. Power and then cost are important. If there was anything I would ask you to keep working on, it’s the efficiency thing. We can use every bit that we can.

On the ground side, our challenges are a little bit different. We have two different ground components. One is the user terminals, like the devices that you put on your roof. They point straight up at the satellite to provide local access via an Ethernet cable, Wi-Fi or even an LTE extension. These terminals are all about cost. To crack the markets we want to crack, we need to get the cost of the CPE down, yet have a device that actually points at satellites that are moving across at about 7 km per second and switching to different satellites every 3 ½ minutes. It’s a difficult and different problem from the GEO world. Now I remember why I did GEO for 25 years before this.

[Editor’s Note: Customer-premises equipment or customer-provided equipment (CPE) is any terminal and associated equipment located at a subscriber's premises and connected with a carrier's telecommunication channel at the demarcation point ("demarc").]
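Bettinger’s “about 7 km per second” is just circular-orbit mechanics. A sketch (assuming OneWeb’s planned orbital altitude of roughly 1,200 km, a figure not given in the panel discussion) reproduces the number:

```python
import math

# Circular-orbit speed: v = sqrt(mu / r).
MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371e3           # mean Earth radius, m
ALT = 1_200e3               # assumed OneWeb altitude, m (not from the panel)

r = R_EARTH + ALT
v = math.sqrt(MU_EARTH / r)       # orbital speed, m/s
period = 2 * math.pi * r / v      # orbital period, s
print(f"orbital speed: {v / 1e3:.1f} km/s")      # ~7.3 km/s
print(f"orbital period: {period / 60:.0f} min")  # ~109 min
```

At those speeds a fixed user terminal sees each satellite sweep across its sky in minutes, which is why the CPE must continuously re-point and hand off between satellites instead of staring at a single orbital slot as in the GEO world.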

It all comes down to cost. How can we get cost and power utilization down? What tech can we use to be able to point at our satellites? We are excited about the prospect of trying to bring active-steering antennas to a mass market. I see our friends from RUAG are here (in the audience). We have done reference work looking at these different technologies. There is a lot of secret sauce in there, but I think ultimately it comes down to how you make small, cheap chips and then how you build antennas around them.

[Editor’s Note: The gateway is the other ground component. A gateway or ground station connects the satellite signal to the end user or subscriber terminals. Most satellite systems are comprised of a large number of small, inexpensive user terminals and a small number of gateway earth stations.]

Raman (Co-Moderator): Our final panelist is Michael Pavloff, CTO of RUAG Space, headquartered in Zürich, Switzerland.

Pavloff: It’s an honor to be here. How many have heard of RUAG? Maybe 30%? That’s not bad. We are a small, specialized version of Boeing based in Switzerland; we too have divisions in aviation, defense, cyber security, space, etc. I’m the CTO of the space division. We do launchers, satellite structures, mechanical-thermal systems, communication equipment and related systems. I’m glad we are here to talk about the key technology enablers that allow us to provide Internet from space cost effectively.

Costs must continue to decrease for the satellite. We saw this “New Space” world coming some years ago and had to decide whether to participate in it or not. Up to that point, our legacy markets were institutional ones like the European Space Agency, large GEO commercial telecom companies, and similar customers for whom we do a lot of RF and microwave work. Our main challenge is to make money in this business. So when you get a factor of 10 or more cost pressure on your products, you feel like giving up.

In the end, we saw that all of our traditional institutional and commercial customers were starting to ask the same question: if we are manufacturing avionics or frequency converters or computers for OneWeb (new space) at a factor of 10 or 100 less than our standard products, why can’t we do it for the European Space Agency or other government customers, namely the large satellite operators? In the end, we didn’t feel it was optional. We had to support this parallel world in which we are doing this business.

There are four main elements that are critical to getting to that capability (to support both new and traditional space). First, you should be doing high-rate production; you get a lot of cost savings that way. We have moved to a lot of high-rate production lines. For example, our RF frequency converter chip business is coming to a point where most of that product line, around 75%, will be for non-space applications. Having that type of throughput, and handling commercial, non-space-grade components and so forth, is key to getting that type of high-rate production capability.

The second critical capability is to increase the emphasis on automation. I’ll cover that shortly.

Third, you must establish commercial-off-the-shelf (COTS) variants of your main product line.

Finally, it’s important to adopt new business models, including collaboration and taking risk-sharing positions with customers. Our friends at OneWeb have been pushing us to adopt new business models. Collaboration often means co-locating and doing co-engineering. You need to consider new business models as well as new technologies and processes.

Let’s return to the automation element. RUAG has been introducing automation in a lot of different areas, from electronic and satellite panel production to out-of-autoclave composites and multi-layer insulation production. An example of the out-of-autoclave composites is our rocket launcher payload fairings (see Figure 3). [Editor’s Note: A payload fairing is a nose cone used to protect a spacecraft (launch vehicle payload) against the impact of pressure and aerodynamic heating during launch through an atmosphere.]

Figure 3: Payload fairing for the small European launcher Vega. (Courtesy of RUAG)

There should be more cost pressure put on the launchers as well. We are trying to be proactive with the composites on the launcher side to cut down costs. Reusability is a key subject in the launcher world, that is, reusing all the bits of the rocket.

From our perspective, these are the key enabling products for the Internet-of-Space (IoS):

  • Future microwave products (Q/V-band, flexible analog converters)
  • GNSS receivers for space
  • 3-D printed structures
  • COTS digital signal processors

Future microwave products have been an evolution to higher frequency bands as well as to optical. This is key to enabling some of the high-capacity throughput for the future. Another enabling area is COTS as applied to signal processors. Some customers are evolving to regenerative payloads to try to squeeze every last bit of capacity out of the system. The focus is on bandwidth for DSPs, which have to be based on COTS. GNSS receivers are enablers, as they are a key technology for the satellite bus. And, as Dave mentioned previously, mass is a real thing that we have to try to get out of these systems. One way to drive down mass is with 3-D printed structures.

In Part II of this series, the panelists are asked questions about the cost viability of the Internet of Space, LEO vs. GEO technologies, and competition with 5G and airborne platforms.

Cybernetic Human Via Wearable IOT

Tuesday, January 17th, 2017

UC Berkeley’s Dr. Rabaey sees humans becoming an extension of the wearable IoT via neuron connectivity at recent IEEE IMS event.

by Hamilton Carter and John Blyler, Editors, JB Systems

During the third week in May, more than 3000 microwave engineers from across the globe descended upon San Francisco for the International Microwave Symposium 2016. To close the week, it seemed only fitting then that the final plenary talk by Jan Rabaey was titled “The Human Intranet- Where Swarms and Humans Meet.”


Dr. Rabaey, Professor and EE Division Chair at UC Berkeley, took the stage wearing a black T-shirt, a pair of slacks, and a sports coat that shimmered under the bright stage lights. He briefly summarized the topic of his talk, as well as his research goal: turning humans themselves into the next extension of the IoT. Ultimately he hopes to be able to create human-machine interfaces that could ideally not only read individual neurons, but write them as well.

What Makes a Wearable Wearable?

The talk opened with a brief discourse on the inability thus far of wearables to capture the public’s imagination. Dr. Rabaey cited several key problems facing the technology: battery life; how wearable a device actually is; limited functionality; inability to hold user interest; and, perhaps most importantly, something he termed stove-piping. Wearable technologies today are built to communicate only with other devices manufactured by the same company. Dr. Rabaey called for an open wearables platform to enable the industry to expand at an increasing rate.

Departing from wearables to discuss an internet technology that almost everyone does use, Dr. Rabaey focused for a few moments on the smart phone. He emphasized that while the devices are useful, the bandwidth of the communications channel between the device and its human owner is debilitatingly narrow. His proposal for remedying this issue is not to further enhance the smart phone, but instead to enhance the human user!

One way to enhance the bandwidth between device and user is simply to provide more input channels. Rabaey discussed one project, already in the works, that utilizes Braille-like technology to turn skin into a tactile interface, and another project for the visually-impaired that aims to transmit visual images to the brain over aural channels via sonification.

Human limbs as prosthetics

As another powerful example of what has already been achieved in human extensibility, Dr. Rabaey, showed a video produced by the scientific journal “Nature” portraying research that has enabled quadriplegic Ian Burkhart to regain control of the muscles in his arms and hands. The video showed Mr. Burkhart playing Guitar Hero, and gripping other objects with his own hands; hands that he lost the use of five years ago. The system that enables his motor control utilizes a sensor to scan the neurons firing in his brain as researchers show him images of a hand closing around various objects. After a period of training and offline data analysis, a bank of computers learns to associate his neural patterns with his desire to close his hand. Finally, sensing the motions he would like to make, the computers fire electro-constricting arm bands that cause the correct muscles in his arm to flex and close his hand around an object. (See video: “The nerve bypass: how to move a paralysed hand“)

Human Enhancements Inside and Out

Rabaey divides human-enhancing tech into two categories: extrospective applications, like those described above, that interface the enhanced human to the outside world, and introspective applications that look inward to provide more information about the enhanced humans themselves. Turning his focus to introspective applications, Rabaey presented several examples of existing bio-sensor technology, including printed blood oximetry sensors, wound-healing bandages, and thin-film EEGs. He then described the technology that will enable his vision of the human intranet: neural dust.

The Human Intranet

In 1997, Kris Pister outlined his vision for something called smart dust: one-cubic-millimeter devices that contained sensors, a processor, and networked communications. Pister’s vision was recently realized by the Michigan Micro Mote research team. Rabaey’s proposed neural dust would take this technology a step further, providing smart-dust systems that measure a mere 10 to 100 microns on a side. At these dimensions, the devices could travel within the human bloodstream. Dr. Rabaey described his proposed human intranet as consisting of a network fabric of neural dust particles that communicate with one or more wearable network hubs. The headband-, bracelet-, or necklace-borne hub devices would handle the heavier communication and processing tasks of the system, while the neural dust would provide real-time data measured on-site from within the body. The key challenge to enabling neural dust at this point lies in determining a communications channel that can deliver the data from inside the human body at real-time speeds while consuming very little power (think picowatts).

Caution for the future

In closing, Dr. Rabaey implored the audience that in all human/computer interface devices, security must be considered at the outset and throughout the development cycle. He pointed out that internal defibrillators with wireless controls can be hacked, and therefore could be used to kill a human who uses one. While this fortunately has never occurred, he emphasized that since the possibility exists, it is key to encrypt every packet of information related to the human body. While encryption might be power-hungry in software, he stated that encryption algorithms built into ASICs can run at a fraction of the power cost. As for passwords, there are any number of unique biometric indicators that can be used, among them voice and heart rate. The danger with these biometrics, however, is that once they can be cloned or imitated, the hacker has access to a treasure trove of information, and possibly control. Perhaps the most promising biometric at present is a scan of neurons via EEG or other technology, so that as the user thinks of a new password, the machine interface can pick it up instantly and incorporate it into new transmissions.

Wrapping up his exciting vision of a bright cybernetic future, Rabaey grounded the audience with a quote from Stelarc, an Australian performance artist, made in a 2002 interview with Joanna Zylinska:

“The body has always been a prosthetic body. Ever since we developed as humanoids and developed bipedal locomotion, two limbs became manipulators. We have become creatures that construct tools, artifacts, and machines. We’ve always been augmented by our instruments, our technologies. Technology is what constructs our humanity. …, so to consider technology as a kind of alien other that happens upon us at the end of the millennium is rather simplistic.”

The more things change, the more they stay the same.

A Holistic Approach to Automotive Memory Qualification

Tuesday, January 3rd, 2017


The Robustness Validation approach in design of automotive memory components addresses reliability and safety margins between design and actual application.

By John Blyler, Editorial Director, JB Systems

Improved reliability is just one of the benefits claimed in using the supply-chain sensitive Robustness Validation (RV) approach to qualifying non-volatile memory (NVM) components for automotive electronic application. The following is a summarized and paraphrased coverage of a paper presented by the author, Valentin Kottler, Robert Bosch GmbH, at the IEEE IEDM 2016. — JB

Today’s cars have many electronic systems to control motor, transmission, and infotainment systems. Future vehicles will include more telematics to monitor performance as well as car-to-car communication. As the number of electronic applications in the car increases so does the need for non-volatile memories to store program code, application data and more.

Automotive applications place special requirements on electronic components, most noticeably regarding the temperature range in which the components must operate. Automotive temperature ranges can vary from -40 to 165 °C. Further, harsh environmental influences like humidity and long vehicle lifetimes are significant additional requirements not typically found in most industrial and consumer products. Finally, automotive standards place high requirements on electronic component, system and subsystem quality and reliability. For example, it’s not uncommon to demand a 1 part per million (ppm) failure rate for infotainment systems and a zero-defect rate over the lifetime of the car for safety systems, e.g., braking and steering systems. PPM (parts per million) is a common measurement of performance quality.

These expectations place an additional challenge on components that will wear out during the lifetime of the car, namely, non-volatile memories. Accordingly, such components need to be thoroughly qualified and validated to meet reliability and safety requirements. Adding to this challenge are both the function of the electronic component and its location in the car, all of which creates a wide spectrum of requirements and mission profiles for electronic memory components.

Non-Volatile Memory (NVM) Components

One of the key components in automotive electronics is non-volatile memory, from which program code, application data or configuration bits can be retrieved even after power has been turned off and back on. It is typically used for secondary and long-term storage. The size of the NVM in automotive systems can range from a few bytes to many gigabytes for infotainment video systems.

The various types of NVM add to the range of available components. For example, a form of NVM known as flash memory can have NOR and NAND architectures. Further, there can be single- and multi-level cell (SLC and MLC) flash memory technologies. A qualification and validation approach that works for all of these types is needed.

Valentin Kottler, Robert Bosch GmbH

Automotive application requirements can be very different from one application to another. Application requirements will affect basic memory device characteristics such as speed, write endurance, data retention time, temperature performance and cost effectiveness, noted Valentin Kottler, Robert Bosch GmbH. One particular application may require only a few write cycles of the entire memory. Another application may require the same component to write continuously for over half a million cycles. Still another application might require 30 years of data retention, which is the typical 20-year lifetime of the car plus up to 10 years of shelf time if the supplier has to pre-produce the electronics that support that application.

The simultaneous fulfillment of all these requirements may not be possible in any cost effective way. What is needed is an approach to validation that is application specific. The trade-off is that application specific validation may need to be repeated for each new application that uses a given component. This can mean significant effort in validation and qualification.

Standard approaches using fixed stress tests – like the “3 lots x 77 parts/lot” approach – will not be able to cover this wide spread of mission profiles and the high variety just described. The Automotive Electronics Council (AEC) AEC-Q100 is a failure-mechanism-based stress test qualification for packaged integrated circuits (1). The 3 lots x 77 parts/lot test aims at demonstrating a 1% failure rate with 90% confidence.
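The “1% failure rate with 90% confidence” figure follows from zero-failure binomial acceptance sampling; a short sketch shows where the 3 x 77 sample size comes from:

```python
# If the true failure rate were 1%, the probability that all
# 3 lots x 77 parts = 231 samples pass with zero failures is (1 - 0.01)^231.
# A clean pass therefore demonstrates a <1% failure rate at ~90% confidence.
parts = 3 * 77                       # 231 samples
p_fail = 0.01                        # hypothesized failure rate
p_all_pass = (1 - p_fail) ** parts   # chance of a misleading clean pass
confidence = 1 - p_all_pass
print(f"P(zero failures in {parts} parts): {p_all_pass:.3f}")  # ~0.098
print(f"demonstrated confidence: {confidence:.1%}")            # ~90%
```

Note how the test only bounds the failure rate at 1%; it says nothing about how much margin exists beyond the pass/fail threshold, which is exactly the gap Robustness Validation is meant to fill.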

More importantly, this type of approach does not provide information on margins (discussed shortly), which are very important for determining the ppm failure rates in the field.

For these reasons, the standard approach needs to be complemented with a flexible qualification methodology like the robustness validation approach as described on the ZVEI pages (2):

“A RV Process demonstrates that a product performs its intended function(s) with sufficient margin under a defined Mission Profile for its specified lifetime. It requires specification of requirements based on a Mission Profile, FMEA to identify the potential risks associated with significant failure mechanisms, and testing to failure, “end-of-life” or acceptable degradation to determine Robustness Margins. The process is based on measuring and maximizing the difference between known application requirements and product capability within timing and economic constraints. It encompasses the activities of verification, legal validation, and producer risk margin validation.”

Wikipedia defines robustness validation as follows:
“Robustness Validation is used to assess the reliability of electronic components by comparing the specific requirements of the product with the actual “real life values”. With the introduction of this methodology, a specific list of requirements (usually based on the OEM) is required. The requirements for the product can be defined in the environmental requirements (mission profiles) and the functional requirements (use cases).”

The Robustness Validation (RV) technique characterizes the intrinsic capability and limitations of the component and of its technology. It is a failure-mechanism- and technology-based approach that uses test-to-fail trials instead of test-to-pass and employs drift analysis. Further, it allows for an assessment of the robustness margin of the component in the application.

For clarification, test-to-pass refers to testing conducted against specific user-flow instructions. Conversely, test-to-fail refers to testing a feature in every conceivable way until it fails. Test-to-pass is adequate for proof-of-concept designs, but for end-product systems test-to-fail is necessary to address reliability, quality and safety concerns.

The benefit of the robustness validation approach is that the characterization of the device capability would only need to be done once, explained Kottler. Subsequent activities would allow for the deduction of the behavior of the memory under the various mission profiles without repeating the qualification exercise.

Robustness Margin

Robustness Validation (RV) can be used as a holistic approach to NVM qualification. One way to visualize RV is to consider two memory parameters, such as endurance and temperature. The intrinsic capability of the NVM may be described as an area spanned by these two parameters (see Figure 1). Within that area lie the hard requirements for the memory (NVM spec) and the application (application spec). The distance between the application spec and the NVM capability limit is called the “robustness margin.”

In other words, the robustness margin is a measure of the distance of the requirements to the actual test results. It is the margin between the outer limits of the customer specification and the actual performance of the component.

The importance of the robustness margin is that it determines the actual safety margin of the component, as used in the application, versus its failure modes.
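As a rough sketch of the idea, the robustness margin can be expressed as the per-parameter distance between the characterized device capability and the application spec; all names and numbers below are hypothetical, not taken from any datasheet:

```python
# Hypothetical figures for a two-parameter capability area
# (endurance and temperature, as in the visualization above).
app_spec = {"endurance_cycles": 100_000, "temp_max_c": 125}
nvm_capability = {"endurance_cycles": 500_000, "temp_max_c": 150}

def robustness_margin(capability: dict, requirement: dict) -> dict:
    """Margin of the characterized capability over the application spec
    for each tracked parameter (positive values mean a safe margin)."""
    return {k: capability[k] - requirement[k] for k in requirement}

margins = robustness_margin(nvm_capability, app_spec)
print(margins)  # {'endurance_cycles': 400000, 'temp_max_c': 25}
```

A negative value for any parameter would flag an application requirement that the component cannot meet, before any field failure occurs.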

The overall capability of the device, including its quality and reliability, is determined and ultimately designed in throughout the phases of the product development life cycle:

  • Product & technology planning
  • Development and design
  • Manufacturing and test

To prove whether the device is suitable for automotive usage, data is gathered from the early design phases in addition to qualification trial data. The performance of the device is then investigated under the specific application conditions.

Robustness Validation Applied to Memory Qualification

How then do you specifically apply the robustness validation approach to a memory qualification? Kottler listed four basic steps in his presentation (see Figure 1). One should note that Steps 2 and 3 require input from the NVM supplier. Further, the NVM supplier can run these exercises without input from Step 1 or output to Step 4. We’ll now consider each of these steps more closely.

Figure 1: Steps to apply the Robustness Validation approach to memory devices.

The first step is to identify the mission profile, which describes the loads and stresses acting on the product in actual use: typically temperature changes, temperature profiles, vibration, electrical and mechanical loads, or other environmental factors. To qualify a non-volatile memory for a specific automotive application, an automotive Tier 1 supplier must therefore identify the sum of the application requirements on the NVM and assess whether, and to what extent, a given NVM component will fulfill them.

To specifically determine the mission profile, all NVM component application requirements must be collected, from electronic control unit (ECU) design, manufacturing and operation in the vehicle. This is usually done within the Tier 1 organization based on requirements from the vehicle manufacturer.

The second step requires identification of all relevant failure mechanisms. Specifically, it means mapping application requirements onto the intrinsic properties and failure modes of the NVM component. This requires the component supplier to share its understanding of the NVM physics and design so that all relevant failure mechanisms are identified. Intensive cooperation of the NVM technology and product experts with the quality and reliability teams on both the NVM supplier and Tier 1 sides is necessary to accomplish this step.

As an example, consider the typical requirements on an NVM component. These include data retention, re-programmability and unaltered performance as specified over the vehicle lifetime and under the various conditions of a vehicle’s harsh environment. According to Kottler’s paper, some of the corresponding failure mechanisms in a flash memory include the various charge loss mechanisms through dielectrics; charge de-trapping; read, program and erase disturbs; tunnel oxide degradation due to programming and erasing; and radiation-induced errors. These mechanisms are already predetermined by choices made in the design of the NVM technology, memory cell and array architecture, as well as of the conditions and algorithms for programming, erasing and reading.

The third step focuses on trial planning and execution, with the goal of characterizing NVM capabilities and limits with respect to the previously identified failure mechanisms. As in the previous step, this requires the competence and participation of the component supplier to provide insight into the physics, quality and reliability of the NVM. Accelerated life testing models, their parameters and their limitations need to be identified for each failure mechanism. The health of the NVM component related to the failure mechanism must be observable and allow for drift analysis, e.g., by measuring variations in the memory cell’s threshold voltage.
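For temperature-driven mechanisms such as charge loss, a common accelerated-life model is the Arrhenius law, which converts a high-temperature bake into equivalent use-condition time. A minimal sketch; the 0.6 eV activation energy and the temperatures are illustrative assumptions, not values from Kottler’s paper:

```python
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_af(ea_ev: float, t_use_c: float, t_stress_c: float) -> float:
    """Arrhenius acceleration factor of a stress temperature relative to
    the use temperature, for a mechanism with activation energy ea_ev."""
    t_use = t_use_c + 273.15      # convert Celsius to Kelvin
    t_stress = t_stress_c + 273.15
    return math.exp((ea_ev / BOLTZMANN_EV) * (1.0 / t_use - 1.0 / t_stress))

# Illustrative: data-retention bake at 150 C versus 55 C in the vehicle
af = arrhenius_af(0.6, 55.0, 150.0)
print(f"acceleration factor ~ {af:.0f}")
```

One week of bake at the stress temperature would then cover roughly `af` weeks of retention at the use temperature, which is why the model, its parameters and its validity limits must be agreed for each mechanism.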

How might the drift analysis be performed and by whom, i.e., the supplier or the Tier 1 customer? For example, will the flash memory provider be asked to give the customer more component data?

According to Kottler, the drift analysis will depend upon the flash memory manufacturer to measure data that is not accessible to the customer/end user. Generally, the latter doesn’t have access to test modes to get this data. Only the manufacturer has the product characterization and test technologies related to their components.

The manufacturer and customer should work together to jointly define the parameters that need to be tracked; it is a validation task. The measurements are done by the manufacturer, but manufacturer and customer should jointly interpret the details. What the customer does not need is a blanket statement that the components have simply passed qualification. This “test to pass” approach is no longer sufficient, according to Kottler.

The trials and experiments for drift analysis need to be planned and jointly agreed upon. Their execution usually falls to the NVM supplier, being the only party with full access to the design, necessary sample structures, test modes, programs and equipment.

According to Kottler, the identification of an appropriate electrical observable is of utmost importance for applying Robustness Validation (RV) to NVM. Such observables may be the memory cell threshold voltage Vth for NOR flash and EEPROM, or the corrected bit count for managed NAND flash memories. Both provide sensitive early indication of the memory health status and must therefore be accessible for qualification, production testing and failure analysis in automotive.
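A drift analysis on such an observable might, as a simplified sketch, fit the measured Vth shift against write/erase cycles and extrapolate to an assumed read-margin limit. All data points and the 60 mV limit below are hypothetical, and a real analysis would work on full Vth distributions rather than a mean:

```python
import math

# Illustrative stress readouts: mean Vth shift (mV) per W/E cycle count
cycles = [1e3, 1e4, 1e5]
vth_shift_mv = [10.0, 20.0, 30.0]

# Least-squares line through shift vs. log10(cycles): shift = a*log10(N) + b
xs = [math.log10(n) for n in cycles]
mean_x = sum(xs) / len(xs)
mean_y = sum(vth_shift_mv) / len(vth_shift_mv)
a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, vth_shift_mv))
     / sum((x - mean_x) ** 2 for x in xs))
b = mean_y - a * mean_x

FAIL_SHIFT_MV = 60.0  # assumed read-margin limit (hypothetical)
# Extrapolated cycle count at which the drift reaches the limit
n_fail = 10 ** ((FAIL_SHIFT_MV - b) / a)
print(f"extrapolated endurance limit ~ {n_fail:.0f} cycles")
```

Comparing `n_fail` with the endurance requirement of the mission profile then yields a quantitative robustness margin instead of a pass/fail verdict.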

The fourth and final step in the Robustness Validation approach involves the assessment of the reliability and robustness margin of the NVM component against the mission profile of the automotive application. The basis for this assessment is the technology reliability data and consideration of the initial design features and limitations, such as error correction code (ECC), adaptive read algorithms (e.g. read retry) and firmware housekeeping (e.g. block refresh and wear leveling), noted Kottler in his paper.

Reliability characterizations at the technology and component levels do not necessarily have to be separated. Combined trials may even be recommended, e.g., for managed NAND flash, due to the complex interaction between firmware, controller and NAND flash memory.

Benefits of the Robustness Validation Approach

The Robustness Validation (RV) approach provides a straightforward way for a semiconductor company to design and validate an NVM component that is acceptable in the automotive electronics market. Using RV, the supplier enables its customers to assess the suitability of the component for their applications in the necessary detail.

The NVM qualification and characterization report that results from the RV approach should list the memory failure mechanisms considered and characterized. Further, the report should describe the acceleration models applied and show drift analysis data supporting a quantitative prediction of failure rate versus stress or lifetime for each failure mode. According to Kottler, combinations of stresses are to be included as previously agreed, e.g., temperature-dependent data retention capability after write/erase endurance pre-stress.

To some, the Robustness Validation approach may appear to cause significant additional qualification work. However, most or all of these reliability investigations are already part of the typical NVM product and technology characterization during the development phase. For new designs, the optimized top-down RV approach may be applied directly. For existing NVM designs, the approach must be tailored by agreement of both the NVM supplier and the Tier 1 company, potentially re-running trials to complete the RV data set. Even so, some existing NVM components may not pass automotive qualification. It is therefore important to jointly assess the feasibility of automotive NVM qualification by RV prior to the design-in decision.

“The end result of the RV approach is an efficient solution to cope with the high requirements of the automotive market, requiring a close cooperation along the value creation chain,” noted Kottler.


Automotive expectations of non-volatile memory (NVM) components continue to grow due to market evolution, increasingly complex data structures, and the demand for performance and endurance. Tier 1 and NVM suppliers must cope with this challenge jointly. By considering these expectations from the beginning of product and technology development, and by providing comprehensive data, the NVM supplier can enable the automotive Tier 1 to assess the NVM’s suitability for the application under a Robustness Validation (RV) approach.


  1. AEC-Q100: Stress Test Qualification for Integrated Circuits – Rev. H, Sep. 2014, pp. 36-30
  2. ZVEI “Handbook for Robustness Validation of Semiconductor Devices in Automotive Applications,” 3rd edition, May 2015, pp. 4-20


Read the complete story and original post on “IP Insider”