Chip Design Magazine



Beginning the Discussion on the Internet-of-Space

January 17th, 2017

A panel of experts from academia and industry assembled at the recent IEEE IMS event to answer critical questions about the role and impact of RFIC technologies.

By John Blyler, Editorial Director

Synopsis of compelling comments:

  • “Satellite communication becomes practical, low cost, and comparable to LTE only if you are at multi-Tera-bit per second capacity.”
  • “Ultimately, we are not selling the bandwidth of our system but the power.”
  • “Power harvesting on the satellite is one of the most important things we can do.”
  • “You must establish commercial-off-the-shelf (COTS) variants of your main space product line (to support both new and traditional space).”
  • “You need to consider new business models as well as new technology and processes.”

Recently, the IEEE MTT Society sponsored an initial discussion and networking session on the Internet of Space (IoS) at the 2016 International Microwave Symposium. It was billed as one of several upcoming forums to bring the IoS and IoT communities together as these technologies and systems continue to evolve. The short-term goal of this initiative is to, “jump start a global technical community cutting across multiple hardware-oriented fields of interest including: aerospace systems; antennas; autonomous systems; communications; electronics; microwave/mm-wave technology; photonics; positioning, navigation and timing; power electronics, etc.”

With representation from a global community of satellite and end-user companies, the IEEE IMS 2016 Rump Session Panel explored the technical and business challenges facing the emerging IoS industry. What exactly is the IoS? Does it include both low-earth orbit and potentially sub-orbital platforms like drones and balloons? How do microwave and RF designs differ for satellite and airborne applications? These are a few of the questions that were addressed by the panel. Part 1 of this series of reports focuses on the challenges forecasted by each of the panelists. What follows is a portion of that panel discussion. – John Blyler

Panelists and Moderators (left to right):

  • [Co-Moderator] Sanjay Raman, Professor and Associate VP, National Capital Region, Virginia Tech
  • Prakash Chitre, Comsat Laboratories, ViaSat, VP and GM
  • Hamid Hemmati, Facebook, Director of Engineering for Telecom Infrastructure
  • Lisa Coe, Director of Commercial Business Dev. for Boeing
  • David Bettinger, OneWeb, VP of Engineering, Communications Systems
  • Michael Pavloff, RUAG Space (Zürich Switzerland), Chief Technology Officer
  • [Co-Moderator] Mark Wallace, VP and GM, Keysight

Raman (Co-Moderator): Hi. I’m joined by Mark Wallace, my co-moderator to this panel. We’re here to discuss the emerging Internet-of-Space (IoS) industry. Let’s start with Prakash Chitre from Comsat Labs.

Chitre (Comsat): I’m going to talk about the new generation of satellite systems that ViaSat has been designing, building and launching. This will give you an understanding of what we have been doing for the last 5 years and our plans for the next 5 years. The main goal for us is to provide connectivity throughout the world. Even with today’s voracious appetite for high-speed and high-volume Internet, half of the world’s population of 7 billion people has no broadband Internet connection.

ViaSat has three satellites: ViaSat-1, WildBlue 1, and Anik F2. Most of these satellites, like Anik F2 and WildBlue 1, were more or less traditional Ka-band satellites with about 8 Gbps of throughput. But the ViaSat-1 satellite that we designed and launched in 2011 delivers about 140 Gbps (see Figure 1). ViaSat-1 handles about 1 million users and covers North America (NA), including the US and Canada. It was the start of a longer vision of very high throughput satellites covering the globe.

Figure 1: ViaSat-1 rendering (Courtesy of Comsat Labs)

We want to provide broadband communication platforms that deliver affordable high-speed Internet connectivity and video streaming via fixed, mobile and portable systems. The key thing is that we are a totally vertically integrated solution: the terminals, the gateway and the satellite all fit together to provide a very cost-effective system. We deal with geosynchronous satellite latency issues with software embedded in the terminal and the gateway to make sure we can load media-heavy pages very quickly.

[Editor’s Note: Terminals link the satellite signal to fixed and mobile locations on the ground and on airborne systems. Examples of terminals include satellite TV dish systems, aviation broadband devices for Ku-, Ka-, and dual-band in-flight connectivity, emergency responder equipment, cellular extensions and the like.]

Soon we’ll be launching ViaSat-2 (see Table 1), which will provide almost 2 ½ times the capacity of ViaSat-1 while providing much greater coverage. It will bridge the North Atlantic with contiguous coverage over NA and Europe, including all the air and shipping routes.

The ViaSat-3 ultra-high capacity satellite platform is comprised of three ViaSat-3 class satellites and ground network infrastructure.  The first two satellites will focus on the Americas and Europe, Middle East and Africa (EMEA). Work is underway with delivery expected in 2019. A third satellite system is planned for the Asia-Pacific region, completing global service coverage.

In the next few years, we’ll launch ViaSat-3, which will provide 1 Tbps of capacity and much larger coverage from a platform about three times smaller than ViaSat-2. We have already awarded Boeing the contract to build the bus framework for the first ViaSat-3; we are designing and building our own payload.


Satellite Name               Throughput Capacity
WildBlue 1                   8 Gbps
IPSTAR 1                     45 Gbps
KA-SAT                       70 Gbps
ViaSat-1                     140 Gbps
EchoStar XVII                100+ Gbps
NBN-Co 1a (“Sky Muster”)     80+ Gbps
ViaSat-2                     350 Gbps
ViaSat-3 Americas            1 Tbps
ViaSat-3 EMEA                1 Tbps
ViaSat-3 APAC                1 Tbps

Table 1: ViaSat Satellites

Raman (Co-Moderator): Our next speaker is Hamid Hemmati, Director of Engineering for Telecom Infrastructure at Facebook.

Hemmati: Facebook’s interest in providing Internet coverage stems from our desire to connect everyone in the world who wants to be connected. Something like 60% of the world’s people either aren’t on the Internet at all or have only a poor connection – typically a 2G connection.

Most of the data centers around the world are based on open-source models for both hardware and software. We can develop technologies that significantly increase capacity and lower costs, and then provide them to the community to further develop and implement.

In terms of the global Internet, we are interested in developed and underdeveloped countries that don’t have connectivity. Providing connectivity to underdeveloped countries is fairly tricky because population distribution varies greatly between countries. For example, red indicates a large population and green a small one (Figure 2). As you can see, the six countries shown have widely different distributions. Some are more or less uniform, while others have regions that are sparsely populated.

Figure 2: Population distribution varies according to country. (Courtesy Facebook via IMS presentation).

There are orders of magnitude of difference in population distribution around the world, which means that there is not one solution that fits all. You can’t come up with one architecture to provide Internet connection to everyone around the world. Each country requires a unique solution, and it is more cost effective to allocate capacity where needed. But each solution comes from a combination of terrestrial links with perhaps airborne or satellite links. Satellites are only viable if you can increase the data rate significantly, to about 100 Tbps. This is the throughput required to connect the unconnected.


  • 4 billion people with 25 kbps per user (based on average capacity, assuming all users are on the Internet simultaneously).
  • Calculation: (4 × 10^9 users) × (2.5 × 10^4 bps) = 10^14 bps = 100 Tbps

This is a staggering number (100 Tbps), so we are talking about very large capacity for all of these populations.
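The arithmetic above is easy to reproduce. The sketch below simply restates the panelist’s round numbers (4 billion users, 25 kbps each, all online at once) as a back-of-the-envelope check:

```python
# Back-of-the-envelope aggregate capacity estimate from the panel:
# 4 billion unconnected users, each allotted an average of 25 kbps,
# with all users assumed to be online simultaneously.
users = 4e9        # people to connect
rate_bps = 2.5e4   # average per-user rate (25 kbps), in bits per second

total_bps = users * rate_bps
print(f"Required aggregate capacity: {total_bps / 1e12:.0f} Tbps")  # 100 Tbps
```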

Technology advancements are required to extend the capability of current commercial wireless communication units by 1 to 2 orders of magnitude. What we need to do is amass the state of the art in a number of areas: GEO satellites, LEO satellites, high-altitude platforms, and terrestrial. Satellite communication becomes practical, low cost, and comparable to LTE only if you are at multi-Tbps capacity; otherwise it is much more expensive than providing LTE. There must be a business justification to do that.

High-altitude platforms (like airplanes/drones) need to be able to stay airborne for months at a time. They must be low cost to produce and maintain, plus run at 10-100 Gbps uplink/downlink/crosslink RF and optical capacity.

Meanwhile, terrestrial solutions, including fiber and wireless, are already here. It’s just that covering an entire country with fiber is immensely expensive. So other solutions are needed, like tower-to-tower wireless links and so forth. This is just a laundry list of what needs to be done. It doesn’t mean we at Facebook are looking at all of them; we are looking at some of them. We want to get these technologies into the hands of the implementers.

Raman (Co-Moderator): Next, let me introduce Lisa Coe, Director of Commercial Business Dev. for Boeing. Originally, James Farricker, Boeing, VP Engineering, was slated to speak on this panel. He was not able to join us.

Coe: I looked up the phrase “new space” on Wikipedia since others are talking about the traditional vs. the new space. I was asking myself if Boeing is a traditional space or new space company. Wikipedia called out Boeing as “not” new space.

[Editor’s Note: New space is often affiliated with an emergent private spaceflight industry. Specifically, the terms are used to refer to a community of relatively new aerospace companies working to develop low-cost access to space or spaceflight technologies.]

Boeing builds commercial airplanes, military jets, helicopters, the International Space Station, satellites, cyber security solutions, and everything in between. We build a lot of very different things. So when you ask us about the Internet of Space (IoS), you’ll get a very different answer. Let me try to answer it.

When an airplane disappears, like the EgyptAir plane, a lot of people ask why we don’t connect airplanes via satellites. We need to get our airplanes smarter and all connected. Passengers are already connected on aircraft with Wi-Fi. So before we push for the Internet of Things, why don’t we push to get all the airplanes connected?

Boeing is also a user of the Internet of Space. For example, we just flew an unmanned aircraft that was completely remote controlled from the ground. This is why we care about security, about hacking into these systems. How can we make the Internet of Space secure to connect more people and things?

Raman (Co-Moderator): Next we have David Bettinger, VP of Engineering, Communications Systems, at OneWeb.

Bettinger: OneWeb is trying to provide very low latency Internet access to those who don’t have access, everywhere. We are two years into the project and are quite far along. The things that ultimately make us successful are the microwave components used in our system. I’m a modem guy by nature – not an RF one. I wish all modems and baseband could stay at baseband, but of course RF is needed on the wireless side. We utilize Ku-band in our system. We also have access to Ka-band, which provides the more focused feeder links that service the satellites.

Supporting both bands means that we need a lot of different components for different functionality. The satellite is probably the most critical for us. The only thing that makes something as crazy as launching 648 satellites feasible is getting the cost and weight of the satellite down significantly compared to what is done today. Our satellite is about the size of a washing machine, weighing roughly 150 kg. You can fit 30 of them in one launch payload. That is what makes this work.

The only thing that makes satellite mass work is if you figure out the power problem. Ultimately, we are not selling the bandwidth of our system but the power. This is because we don’t have the luxury of a bus sized satellite up there that is designed to power constantly regardless of the environment, whether you are in an eclipse or not. We have to effectively manage our power with the subscribers of the service. Power harvesting on the satellite is one of the most important things we can do. It drives almost every aspect of our business case.

We have looked heavily at a lot of different semiconductor technologies, especially GaN and GaAs chip technologies. We are utilizing low noise amplifiers (LNAs) and up/down converters, among other components. Power and then cost are important. If there were anything I would ask you to keep working on, it’s efficiency. We can use every bit that we can.

On the ground side, our challenges are a little bit different. We have two different ground components. One is the user terminals, like the devices that you put on your roof. They point straight up at the satellite to provide local access via an Ethernet cable, Wi-Fi or even an LTE extension. These terminals are all about cost. To crack the markets we want to crack, we need to get the cost of the CPE down, yet have a device that actually points at satellites moving across the sky at about 7 km per second and changing to a different satellite every 3 ½ minutes. It’s a difficult problem, and a different one from the GEO world. Now I remember why I did GEO for 25 years before this.

[Editor’s Note: Customer-premises equipment or customer-provided equipment (CPE) is any terminal and associated equipment located at a subscriber's premises and connected with a carrier's telecommunication channel at the demarcation point ("demarc").]
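The roughly 7 km/s satellite speed quoted above can be sanity-checked with basic orbital mechanics. The sketch below assumes a circular LEO orbit at about 1,200 km altitude (our illustrative assumption, not a figure given by the panelist):

```python
import math

# Sanity check of the ~7 km/s satellite speed a user terminal must track,
# assuming a circular low-earth orbit at roughly 1,200 km altitude.
MU_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0      # mean Earth radius, m
altitude = 1_200_000.0     # assumed orbit altitude, m

r = R_EARTH + altitude
v = math.sqrt(MU_EARTH / r)            # circular orbital velocity, m/s
period_min = 2 * math.pi * r / v / 60  # orbital period, minutes

print(f"orbital velocity ~ {v / 1000:.1f} km/s")   # ~7.3 km/s
print(f"orbital period  ~ {period_min:.0f} min")   # ~109 min
```

At these speeds a terminal sees any one satellite for only a small slice of its roughly 109-minute orbit, which is consistent with re-pointing to a new satellite every few minutes.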

It all comes down to cost. How can we get cost and power utilization down? What tech can we use to be able to point at our satellites? We are excited about the prospect of bringing actively steered antennas to a mass market. I see our friends from RUAG are here (in the audience). We have done reference work looking at these different technologies. There is a lot of secret sauce in there, but I think ultimately it comes down to how you make small, cheap chips and then how you build antennas around them.

[Editor’s Note: The gateway is the other ground component. A gateway or ground station connects the satellite signal to the end user or subscriber terminals. Most satellite systems are comprised of a large number of small, inexpensive user terminals and a small number of gateway earth stations.]

Raman (Co-Moderator): Our final panelist is Michael Pavloff, CTO of RUAG Space, headquartered in Zürich, Switzerland.

Pavloff: It’s an honor to be here. How many have heard of RUAG? Maybe 30%? That’s not bad. We are a small, specialized version of Boeing based in Switzerland, with divisions in aviation, defense, cyber security, space, etc. I’m the CTO of the space division. We do launchers, satellite structures, mechanical-thermal systems, communication equipment and related systems. I’m glad we are here to talk about the key technology enablers that allow us to provide Internet cost effectively from space.

Costs must continue to decrease for the satellite. We saw this “New Space” world coming some years ago and had to decide whether to participate in it or not. Up to that point, our legacy markets were institutional ones like the European Space Agency, large GEO commercial telecom companies, and similar customers for whom we do a lot of RF and microwave work. Our main challenge is to make money in this business. So when you get a factor of 10 or more cost pressure on your products, you feel like giving up.

In the end, we saw that all of our traditional institutional and commercial customers were starting to ask the same question: if we are manufacturing avionics or frequency converters or computers for OneWeb that cost a factor of 10 or 100 less than our standard products, why can’t we do the same for the European Space Agency or other government customers, namely the large satellite operators? In the end, we didn’t feel it was optional. We had to support this parallel world in which we are doing this business.

There are four main elements that are critical to get to that capability (to support both new and traditional space). First, you should be doing high-rate production. You get a lot of cost savings that way. We have moved to a lot of high-rate production lines. For example, our RF frequency converter chip business is coming to the point where 75% of that product line will be for non-space applications. Having that type of throughput, and handling commercial, non-space-grade components and so forth, is key to getting that type of high-rate production capability.

The second critical capability is to increase the emphasis on automation. I’ll cover that shortly.

Third, you must establish commercial-off-the-shelf (COTS) variants of your main product line.

Finally, it’s important to adopt new business models, including collaboration and taking risk-sharing positions with customers. Our friends at OneWeb have been pushing us to adopt new business models. Collaboration often means co-locating and doing co-engineering. You need to consider new business models as well as new technologies and processes.

Let’s return to the automation element. RUAG has been introducing automation into a lot of different areas, from electronic and satellite panel production to out-of-autoclave composites and multi-layer insulation production. An example of the out-of-autoclave composites is our rocket launcher payload fairings (see Figure 3). [Editor’s Note: A payload fairing is a nose cone used to protect a spacecraft (launch vehicle payload) against the impact of pressure and aerodynamic heating during launch through an atmosphere.]

Figure 3: Payload fairing for the small European launcher Vega. (Courtesy of RUAG)

There should be more cost pressure put on the launchers as well. We are trying to be proactive with the composites on the launcher side to cut down costs. Reusability is a key subject in the launcher world, that is, reusing all the parts of the rocket.

From our perspective, these are the key enabling products for the Internet-of-Space (IoS):

  • Future microwave products (Q/V-band, flexible analog converters)
  • GNSS receivers for space
  • 3-D printed structures
  • COTS digital signal processors

Future microwave products have been evolving to higher frequency bands as well as to optical. This is key to enabling some of the high-capacity throughput of the future. Another enabling area is COTS as applied to signal processors. Some customers are evolving to regenerative types to try to squeeze every last bit of capacity out of the system; the focus is on bandwidth for DSPs, which have to be based on COTS. GNSS receivers are enablers as they are a key technology for the satellite bus. And, as Dave mentioned previously, mass is something we have to drive out of these systems. One way to drive down mass is with 3-D printed structures.

In Part II of this series, the panelists are asked questions about the cost viability of the Internet of Space, LEO vs. GEO technologies, and competition with 5G and airborne platforms.

Cybernetic Human Via Wearable IOT

January 17th, 2017

UC Berkeley’s Dr. Rabaey sees humans becoming an extension of the wearable IoT via neuron connectivity at recent IEEE IMS event.

by Hamilton Carter and John Blyler, Editors, JB Systems

During the third week in May, more than 3000 microwave engineers from across the globe descended upon San Francisco for the International Microwave Symposium 2016. To close the week, it seemed only fitting then that the final plenary talk by Jan Rabaey was titled “The Human Intranet- Where Swarms and Humans Meet.”


Dr. Rabaey, Professor and EE Division Chair at UC Berkeley, took the stage wearing a black T-shirt, a pair of slacks, and a sports coat that shimmered under the bright stage lights. He briefly summarized the topic of his talk, as well as his research goal: turning humans themselves into the next extension of the IoT. Ultimately he hopes to be able to create human-machine interfaces that could ideally not only read individual neurons, but write them as well.

What Makes a Wearable Wearable?

The talk opened with a brief discourse on the inability thus far of wearables to capture the public’s imagination. Dr. Rabaey cited several key problems facing the technology: battery life; how wearable a device actually is; limited functionality; inability to hold user interest; and, perhaps most importantly, something he termed stove-piping. Wearable technologies today are built to communicate only with other devices manufactured by the same company. Dr. Rabaey called for an open wearables platform to enable the industry to expand at an increasing rate.

Departing from wearables to discuss an internet technology that almost everyone does use, Dr. Rabaey focused for a few moments on the smart phone. He emphasized that while the devices are useful, the bandwidth of the communications channel between the device and its human owner is debilitatingly narrow. His proposal for remedying this issue is not to further enhance the smart phone, but instead to enhance the human user!

One way to enhance the bandwidth between device and user is simply to provide more input channels. Rabaey discussed one project, already in the works, that utilizes Braille-like technology to turn skin into a tactile interface, and another project for the visually-impaired that aims to transmit visual images to the brain over aural channels via sonification.

Human limbs as prosthetics

As another powerful example of what has already been achieved in human extensibility, Dr. Rabaey showed a video produced by the scientific journal “Nature” portraying research that has enabled quadriplegic Ian Burkhart to regain control of the muscles in his arms and hands. The video showed Mr. Burkhart playing Guitar Hero and gripping other objects with his own hands; hands that he lost the use of five years ago. The system that enables his motor control utilizes a sensor to scan the neurons firing in his brain as researchers show him images of a hand closing around various objects. After a period of training and offline data analysis, a bank of computers learns to associate his neural patterns with his desire to close his hand. Finally, sensing the motions he would like to make, the computers fire electro-constricting arm bands that cause the correct muscles in his arm to flex and close his hand around an object. (See video: “The nerve bypass: how to move a paralysed hand“)

Human Enhancements Inside and Out

Rabaey divides human-enhancing tech into two categories: extrospective applications, like those described above, that interface the enhanced human to the outside world, and introspective applications that look inward to provide more information about the enhanced humans themselves. Turning his focus to introspective applications, Rabaey presented several examples of existing bio-sensor technology including printed blood oximetry sensors, wound-healing bandages, and thin-film EEGs. He then described the technology that will enable his vision of the human intranet: neural dust.

The Human Intranet

In 1997, Kris Pister outlined his vision for something called smart dust: one-cubic-millimeter devices that contained sensors, a processor, and networked communications. Pister’s vision was recently realized by the Michigan Micro Mote research team. Rabaey’s proposed neural dust would take this technology a step further, providing smart-dust systems that measure a mere 10 to 100 microns on a side. At these dimensions, the devices could travel within the human bloodstream. Dr. Rabaey described his proposed human intranet as consisting of a network fabric of neural dust particles that communicate with one or more wearable network hubs. The headband/bracelet/necklace-borne hub devices would handle the more heavy-duty communication and processing tasks of the system, while the neural dust would provide real-time data measured on-site from within the body. The key challenge to enabling neural dust at this point lies in determining a communications channel that can deliver the data from inside the human body at real-time speeds while consuming very little power (think picowatts).

Caution for the future

In closing, Dr. Rabaey implored the audience that in all human/computer interface devices, security must be considered at the onset and throughout the development cycle. He pointed out that internal defibrillators with wireless controls can be hacked, and therefore could be used to kill the human who uses one. While this fortunately has never occurred, he emphasized that since the possibility exists, it is key to encrypt every packet of information related to the human body. While encryption might be power-hungry in software, he stated that encryption algorithms built into ASICs can run at a fraction of the power cost. As for passwords, any number of unique biometric indicators can be used, among them voice and heart rate. The danger with these biometrics, however, is that once they can be cloned or imitated, the hacker has access to a treasure trove of information, and possibly control. Perhaps the most promising biometric at present is a scan of neurons via EEG or other technology, so that as the user thinks of a new password, the machine interface can pick it up instantly and incorporate it into new transmissions.

Wrapping up his exciting vision of a bright cybernetic future, Rabaey grounded the audience with a quote from the Australian performance artist Stelarc, made in a 2002 interview with Joanna Zylinska:

“The body has always been a prosthetic body. Ever since we developed as humanoids and developed bipedal locomotion, two limbs became manipulators. We have become creatures that construct tools, artifacts, and machines. We’ve always been augmented by our instruments, our technologies. Technology is what constructs our humanity. …, so to consider technology as a kind of alien other that happens upon us at the end of the millennium is rather simplistic.”

The more things change, the more they stay the same.

A Holistic Approach to Automotive Memory Qualification

January 3rd, 2017


The Robustness Validation approach in design of automotive memory components addresses reliability and safety margins between design and actual application.

By John Blyler, Editorial Director, JB Systems

Improved reliability is just one of the benefits claimed in using the supply-chain-sensitive Robustness Validation (RV) approach to qualifying non-volatile memory (NVM) components for automotive electronic applications. The following is summarized and paraphrased coverage of a paper presented by its author, Valentin Kottler, Robert Bosch GmbH, at the IEEE IEDM 2016. — JB

Today’s cars have many electronic systems to control the engine, transmission, and infotainment systems. Future vehicles will include more telematics to monitor performance as well as car-to-car communication. As the number of electronic applications in the car increases, so does the need for non-volatile memories to store program code, application data and more.

Automotive applications place special requirements on electronic components, most noticeably regarding the temperature range in which the components must operate. Automotive temperature ranges can vary from -40 to 165 °C. Further, harsh environmental influences like humidity, along with long vehicle lifetimes, add significant requirements not typically found in most industrial and consumer products. Finally, automotive standards place high requirements on electronic component, system and subsystem quality and reliability. For example, it’s not uncommon to demand a 1 part per million (ppm) failure rate for infotainment systems and a zero defect rate over the lifetime of the car for safety systems, e.g., braking and steering. Parts per million (ppm) is a common measure of performance quality.

These expectations place an additional challenge on components that will wear out during the lifetime of the car, namely, non-volatile memories. Accordingly, such components need to be thoroughly qualified and validated to meet reliability and safety requirements. Adding to this challenge are both the function of the electronic component and its location in the car, all of which creates a wide spectrum of requirements and mission profiles for electronic memory components.

Non-Volatile Memory (NVM) Components

One of the key components in automotive electronics is non-volatile memory, from which program code, application data or configuration bits can be retrieved even after power has been turned off and back on. It is typically used for secondary and long-term storage. The size of the NVM in automotive systems can range from a few bytes to many gigabytes for infotainment video systems.

The various types of NVM add to the range of available components. For example, flash memory, a common form of NVM, comes in NOR and NAND architectures. Further, there are single- and multi-level cell (SLC and MLC) flash memory technologies. A qualification and validation approach that works for all of these types is needed.

Valentin Kottler, Robert Bosch GmbH

Automotive application requirements can be very different from one application to another. Application requirements will affect the basic performance characteristics of the memory device, such as speed, write endurance, data retention time, temperature performance and cost effectiveness, noted Valentin Kottler, Robert Bosch GmbH. One particular application may require only a few write cycles of the entire memory. Another may require the same component to sustain over half a million write cycles. Still another might require 30 years of data retention, which corresponds to the typical 20-year lifetime of the car plus up to 10 years of shelf time if the supplier has to pre-produce the electronics that support that application.

The simultaneous fulfillment of all these requirements may not be possible in any cost effective way. What is needed is an approach to validation that is application specific. The trade-off is that application specific validation may need to be repeated for each new application that uses a given component. This can mean significant effort in validation and qualification.

Standard approaches using fixed stress tests – like the “3 lots x 77 parts/lot” approach – will not be able to cover the wide spread of mission profiles and the high variety just described. The Automotive Electronics Council (AEC) AEC-Q100 is a failure-mechanism-based stress test qualification for packaged integrated circuits (1). The 3 lots x 77 parts/lot test aims at demonstrating a failure rate below 1% with 90% confidence.
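The statistics behind that sample size can be reproduced in a few lines. With zero failures allowed among n parts, a population whose true failure rate is p passes the test with probability (1 - p)^n; demonstrating p below 1% at 90% confidence requires that pass probability to fall below 10% at p = 1%. A quick sketch (our illustration, not part of the paper):

```python
# Why "3 lots x 77 parts/lot" corresponds to a 1% failure rate
# demonstrated at 90% confidence.
lots, parts_per_lot = 3, 77
n = lots * parts_per_lot   # 231 samples total

p = 0.01                   # hypothesized true failure rate (1%)
pass_prob = (1 - p) ** n   # chance a 1%-defective lot passes with zero failures

print(f"n = {n}, P(pass | p = 1%) = {pass_prob:.3f}")  # ~0.098, i.e. below 10%
```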

More importantly, this type of approach does not provide information on margins (discussed shortly), which are very important for determining the ppm failure rates in the field.

For these reasons, the standard approach needs to be complemented with a flexible qualification methodology like the robustness validation approach as described on the ZVEI pages (2):

“A RV Process demonstrates that a product performs its intended function(s) with sufficient margin under a defined Mission Profile for its specified lifetime. It requires specification of requirements based on a Mission Profile, FMEA to identify the potential risks associated with significant failure mechanisms, and testing to failure, “end-of-life” or acceptable degradation to determine Robustness Margins. The process is based on measuring and maximizing the difference between known application requirements and product capability within timing and economic constraints. It encompasses the activities of verification, legal validation, and producer risk margin validation.”

Wikipedia defines robustness validation as follows:
“Robustness Validation is used to assess the reliability of electronic components by comparing the specific requirements of the product with the actual “real life values”. With the introduction of this methodology, a specific list of requirements (usually based on the OEM) is required. The requirements for the product can be defined in the environmental requirements (mission profiles) and the functional requirements (use cases).”

The Robustness Validation (RV) technique characterizes the intrinsic capability and limitations of the component and of its technology. It is a failure-mechanism- and technology-based approach using test-to-fail trials instead of test-to-pass, and employing drift analysis. Further, it allows for an assessment of the robustness margin of the component in the application.

For clarification, the test-to-pass approach refers to testing an application using specific user-flow instructions. Conversely, a test-to-fail approach refers to testing a feature in every conceivable way. Test-to-pass is an adequate approach for proof-of-concept designs, but for end-product systems test-to-fail is necessary to address reliability, quality and safety concerns.

The benefit of the robustness validation approach is that the characterization of the device capability would only need to be done once, explained Kottler. Subsequent activities would allow for the deduction of the behavior of the memory under the various mission profiles without repeating the qualification exercise.

Robustness Margin

Robustness Validation (RV) can be used as a holistic approach to NVM qualification. One way to visualize RV is to consider two memory parameters, i.e., endurance and temperature. The intrinsic capability of the NVM may be described as an area spanned by these two parameters (see Figure 1). Within that area are the hard requirements for the memory (NVM spec) and the application (application spec). The distance between the application spec and the NVM capability limit is called the “robustness margin.”

In other words, the robustness margin is a measure of the distance of the requirements to the actual test results. It is the margin between the outer limits of the customer specification and the actual performance of the component.
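As a toy illustration of that definition (all numbers hypothetical, not drawn from the article or any datasheet), the margin can be read off parameter by parameter:

```python
# Hypothetical capability and requirement figures for illustration only.
nvm_capability = {"endurance_cycles": 1_000_000, "max_temp_C": 150}
app_spec       = {"endurance_cycles":   500_000, "max_temp_C": 125}

# Robustness margin per parameter: the distance between the application
# requirement and the characterized capability limit of the component.
margin = {k: nvm_capability[k] - app_spec[k] for k in app_spec}
print(margin)  # {'endurance_cycles': 500000, 'max_temp_C': 25}
```

A component whose margin shrinks toward zero on any axis is operating close to its intrinsic limit, which is exactly what a fixed pass/fail qualification would not reveal.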

The importance of the robustness margin is that it determines the actual safety margin of the component as used in the application versus its failure mode.

The overall capability of the device, including its quality and reliability, is determined and ultimately designed in throughout the product development life-cycle phases:

  • Product & technology planning
  • Development and design
  • Manufacturing and test
In order to prove whether the device is suitable for automotive usage, data is gathered from the early design phases in addition to qualification trial data. The performance of the device under the specific application conditions is then investigated.

Robustness Validation Applied to Memory Qualification

How then do you specifically apply the robustness validation approach to a memory qualification? Kottler listed four basic steps in his presentation (see Figure 1). Note that Steps 2 and 3 require input from the NVM supplier, who can run these exercises without input from Step 1 or output to Step 4. We’ll now consider each of these steps more closely.

Figure 1: Steps to apply the Robustness Validation approach to memory devices.

The first step is to identify the mission profile, which describes the loads and stresses acting on the product in actual use. These are typically changes in temperature, temperature profiles, vibration, electrical and mechanical loads, or other environmental factors. In order to qualify a non-volatile memory for a specific automotive application, an automotive Tier 1 supplier must therefore identify the sum of the application requirements on the NVM and must assess whether, and to what extent, a given NVM component will fulfill them.

To specifically determine the mission profile, all NVM component application requirements must be collected, from electronic control unit (ECU) design, manufacturing and operation in the vehicle. This is usually done within the Tier 1 organization based on requirements from the vehicle manufacturer.
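A sketch of what that collection step might look like in code. The sources and figures here are invented for illustration — real mission profiles are far richer — but the key operation is the same: the profile must envelope the worst case from every requirement source:

```python
from dataclasses import dataclass

@dataclass
class NvmRequirement:
    source: str              # e.g. ECU design, manufacturing, vehicle operation
    write_cycles: int
    retention_years: float
    max_temp_C: float

# Hypothetical inputs collected across the Tier 1 organization.
reqs = [
    NvmRequirement("ECU manufacturing",      10,  0.5, 260.0),  # reflow soldering
    NvmRequirement("vehicle operation", 500_000, 20.0, 125.0),
    NvmRequirement("pre-produced stock",      1, 30.0,  40.0),  # shelf time
]

# The mission profile is the envelope (worst case) over all sources.
mission_profile = {
    "write_cycles":    max(r.write_cycles for r in reqs),
    "retention_years": max(r.retention_years for r in reqs),
    "max_temp_C":      max(r.max_temp_C for r in reqs),
}
print(mission_profile)
```
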

The second step requires identification of all relevant failure mechanisms. Specifically, it means mapping application requirements to the intrinsic properties and failure modes of the NVM component. This requires the competence of the component supplier, who must share an understanding of the NVM physics and design to identify all relevant failure mechanisms. Intensive cooperation of the NVM technology and product experts with the quality and reliability teams on both the NVM supplier and Tier 1 sides is necessary to accomplish this step.

As an example, consider the typical requirements of an NVM component. These include data retention, re-programmability and unaltered performance as specified over the vehicle lifetime and under various conditions in the harsh environment of a vehicle. According to Kottler’s paper, some of the corresponding failure mechanisms in a flash memory include the various charge loss mechanisms through dielectrics, charge de-trapping, read, program and erase disturbs, tunnel oxide degradation due to programming and erasing, as well as radiation-induced errors. These mechanisms are largely predetermined by choices made in the design of the NVM technology, the memory cell and array architecture, and the conditions and algorithms for programming, erasing and reading.

The third step focuses on trial planning and execution with the goal of characterizing NVM capabilities and limits with respect to the previously identified failure mechanisms. As in the previous step, this requires the competence and participation of the component supplier to provide insight into the physics of the NVM, as well as into its quality and reliability. Accelerated life-cycle testing models, parameters and model limitations need to be identified for each failure mechanism. The health of the NVM component related to the failure mechanism must be observable and allow for drift analysis, e.g., by measuring variations in the memory cells’ threshold voltage.

How might the drift analysis be performed and by whom, i.e., the supplier or the Tier 1 customer? For example, will the flash memory provider be asked to give the customer more component data?

According to Kottler, the drift analysis depends on the flash memory manufacturer to measure data that is not accessible to the customer/end user. Generally, the latter doesn’t have access to the test modes needed to get this data. Only the manufacturer has the product characterization and test technologies related to its components.

The manufacturer and customer should work together to jointly define the parameters that need to be tracked. It is a validation task. The measurements are done by the manufacturer, but the manufacturer and customer should jointly interpret the details. What the customer doesn’t need is a blanket statement that the components have simply passed qualification. This “test to pass” approach is no longer sufficient, according to Kottler.

The trials and experiments for drift analysis need to be planned and jointly agreed upon. Their execution usually falls to the NVM supplier, being the only party with full access to the design, necessary sample structures, test modes, programs and equipment.

According to Kottler, the identification of an appropriate electrical observable is of utmost importance for applying Robustness Validation (RV) to NVM. Such observables may be the memory cell threshold voltage Vth for NOR flash and EEPROM, or the corrected bit count for managed NAND flash memories. Both observables provide sensitive early indication of the memory’s health status and must therefore be accessible for qualification, production testing and failure analysis in automotive.
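A drift analysis on such an observable can be sketched as a fit-and-extrapolate exercise. The Vth readouts and the 4.5 V read-fail threshold below are invented for illustration; a real analysis would use the supplier's measured distributions and a physically justified drift law:

```python
import math

# Hypothetical Vth readouts (V) recorded at points during an endurance trial.
cycles = [1e3, 1e4, 1e5, 1e6]
vth    = [5.00, 4.92, 4.85, 4.76]

# Least-squares fit of Vth against log10(cycles).
xs = [math.log10(c) for c in cycles]
n = len(xs)
mx, my = sum(xs) / n, sum(vth) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, vth))
         / sum((x - mx) ** 2 for x in xs))
intercept = my - slope * mx

# Extrapolate the drift to the assumed read-fail threshold of 4.5 V.
fail_cycles = 10 ** ((4.5 - intercept) / slope)
print(f"{fail_cycles:.2e} cycles to threshold crossing")
```

The point of the exercise is the one Kottler makes: the observable drifts long before the part fails, so tracking it during power cycling turns a pass/fail test into a quantitative margin measurement.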

The fourth and final step in the Robustness Validation approach involves the assessment of the reliability and robustness margin of the NVM component against the mission profile of the automotive application. The basis for this assessment is the technology reliability data and consideration of the initial design features and limitations, such as error correction code (ECC), adaptive read algorithms (e.g. read retry) and firmware housekeeping (e.g. block refresh and wear leveling), noted Kottler in his paper.
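For thermally activated mechanisms, the acceleration model used in this assessment is often Arrhenius-type. A sketch with a placeholder activation energy (0.7 eV is illustrative, not a value from Kottler's paper; the real Ea comes from characterizing the specific failure mechanism):

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def arrhenius_af(t_stress_C: float, t_use_C: float, ea_eV: float = 0.7) -> float:
    """Acceleration factor between a stress-test temperature and the
    use temperature for a thermally activated failure mechanism."""
    t_s = t_stress_C + 273.15
    t_u = t_use_C + 273.15
    return math.exp(ea_eV / K_B * (1.0 / t_u - 1.0 / t_s))

# E.g. a 1000 h bake at 150 C, assessed against 85 C operation:
af = arrhenius_af(150, 85)
print(af, 1000 * af)  # factor and equivalent use-condition hours
```
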

Reliability characterization at the technology and component levels does not necessarily have to be separated. Combined trials may even be recommended, e.g. for managed NAND flash, due to the complex interaction between firmware, controller and NAND flash memory.

Benefits of the Robustness Validation Approach

The Robustness Validation (RV) approach provides a straightforward way for a semiconductor company to design and validate an NVM component that is acceptable in the automotive electronics market. Using RV, the supplier enables its customers to assess the suitability of the component for their applications in the necessary detail.

The NVM qualification and characterization report that results from the RV approach should list the memory failure mechanisms considered and characterized. Further, the report should describe the acceleration models applied and show drift analysis data supporting a quantitative prediction of failure rate versus stress or lifetime for each failure mode. According to Kottler, combinations of stresses are to be included as previously agreed, e.g. temperature-dependent data retention capability after write/erase endurance pre-stress.

To some, the Robustness Validation approach may appear to cause significant additional qualification work. However, most or all of these reliability investigations are part of the typical NVM product and technology characterization during the development phase. For new designs, the optimized top-down RV approach may be applied directly. For existing NVM designs, the approach must be tailored by agreement of both the NVM supplier and the Tier 1 company, potentially re-running trials to complete the RV approach. Even so, some existing NVM components may not meet automotive qualification. It is therefore important to jointly assess the feasibility of the automotive NVM qualification by RV prior to the design-in decision.

“The end result of the RV approach is an efficient solution to cope with the high requirements of the automotive market, requiring a close cooperation along the value creation chain,” noted Kottler.


The automotive expectations of non-volatile memory (NVM) components continue to grow due to market evolution, increasingly complex data structures and the demand for performance and endurance. Tier 1 and NVM suppliers must cope with this challenge jointly. By considering these expectations from the beginning of product and technology development, and by providing comprehensive data, the NVM supplier can enable the automotive Tier 1 to assess the NVM suitability for the application under a Robustness Validation (RV) approach.


  1. AEC-Q100: Stress Test Qualification for Integrated Circuits – Rev. H, Sep. 2014, pp. 36-30
  2. ZVEI “Handbook for Robustness Validation of Semiconductor Devices in Automotive Applications,” 3rd edition, May 2015, pp. 4-20


Read the complete story and original post on “IP Insider”

New Event Focuses on Semiconductor IP Reuse

November 28th, 2016

Unique exhibition and trade show levels the playing field for customers and vendors as semiconductor intellectual property (IP) reuse grows beyond EDA tools.

By John Blyler, Editorial Director, JB Systems

The sale of semiconductor intellectual property (IP) has outpaced that of Electronic Design Automation (EDA) chip design tools for the first time, according to the Electronic System Design Alliance’s MSS report on Q3 2015 sales. Despite this growth, there has been no industry event dedicated solely to semiconductor IP – until now.

The IP community in Silicon Valley will witness an inaugural event this week, one that will enable IP practitioners to exchange ideas and network while providing IP buyers with access to a diverse group of suppliers. REUSE 2016 will debut on December 1, 2016 at the Computer History Museum in Mountain View, CA.

I talked with one of the main visionaries of the event, Warren Savage, General Manager of IP at Silvaco, Inc. Most professionals in the IP industry will remember Savage as the former CEO of IPextreme, plus the organizer of the Constellations group and the “Stars of IP” social event held annually at the Design Automation Conference (DAC).

IPextreme’s Constellations group is a collection of independent semiconductor IP companies and industry partners that collaborate at both the marketing and engineering levels for mutual benefit. The idea was for IP companies to pool resources and energy to do more than they could do on their own.

This idea has been extended to the REUSE event, which Savage has humorously described as the steroid-enhanced version of the former Constellations sponsored “Silicon Valley IP User Group” event.

“REUSE 2016 includes the entire world of semiconductor IP,” explains Savage. “This is a much bigger event that includes not just the Constellation companies but everybody in the IP ecosystem. Our goal is to reach about 350 attendees for this inaugural event.”

The primary goal for REUSE 2016 is to create a yearly venue that brings both IP vendors and customers together. Customers will be able to meet with vendors not normally seen at the larger but less IP-focused conferences. To best serve the IP community, the founding members decided that the event’s venue should be a combination of exhibition and trade show, where exhibitors present technical content during the trade show portion of the event.

Perhaps the most distinguishing aspect of REUSE is that the exhibition hall will only be open to companies that license semiconductor design and verification IP or related embedded software.

“Those were the guiding rules about the exhibition,” noted Savage. “EDA (chip design) companies, design services or somebody in an IP support role would be allowed to sponsor activities like lunch. But we didn’t want them taking attention away from the main focus of the event, namely, semiconductor IP.”

The other unique characteristic of this event is its sensitivity to the often unfair advantages that bigger companies have over smaller ones in the IP space. Larger companies can use their financial advantage to appear more prominent and even superior to smaller but well established firms. In an effort to level the playing field, REUSE has limited all booth spaces in the exhibition hall to a table. Both large and small companies will have the same size area to highlight their technology.

This year’s event is drawing from the global semiconductor IP community with participating companies from the US, Europe, Asia and even Serbia.

The breadth of IP-related topics covers system-on-chip (SOC) IP design and verification for both hardware and software developers. Jim Feldhan, President and CEO of Semico Research, will provide the event’s inaugural keynote address on trends driving IP reuse. In addition to the exhibition hall with over 30 exhibitors, there will be three tracks of presentations held throughout the day at REUSE 2016 on December 1, 2016 at the Computer History Museum in Mountain View, CA. See you there!

Originally posted on “IP Insider”

World of Sensors Highlights Pacific NW Semiconductor Industry

October 25th, 2016

Line-up of semiconductor and embedded IOT experts to talk at SEMI Pacific NW “World of Sensors” event.

The Pacific NW Chapter of SEMI will be holding their Fall 2016 event highlighting the world of sensors. Mentor Graphics will be hosting the event on Friday, October 28, 2016 from 7:30 to 11:30 am.

The event will gather experts in the sensor professions who will share their vision of the future and the impact it may have on the overall semiconductor industry. Here’s a brief list of the speaker line-up:

  • Design for the IoT Edge—Mentor Graphics
  • Image Sensors for IoT—ON Semiconductor
  • Next Growth Engine for Semiconductors—PricewaterhouseCoopers
  • Expanding Capabilities of MEMS Sensors through Advanced Manufacturing—Rogue Valley Microdevices
  • Engineering Biosensors for Cell Biology Research and Drug Discovery—Thermo Fisher Scientific

Register today to meet and network with industry peers from companies including Applied Materials, ASM America, Brewer Science, Cascade Microtech, Delphon Industries, FEI Company, Kanto, Microchip Technology, SSOE Group, VALQUA America and many more.

See the full agenda and Register today.

IEEE Governance in Division

September 20th, 2016

Will a proposed amendment modernize the governance of one of the oldest technical societies or transfer power to a small group of officials?

By John Blyler, Editorial Director

As a member of the IEEE, I recently received an email concerning a proposed change to the society’s constitution that might fundamentally impact the governance of the entire organization. Since that initial email, there have been several messages from various societies within the IEEE that either oppose or support this amendment.

To gain a broader perspective on the issue, I asked the current IEEE President-Elect and well-known EDA colleague, Karen Bartleson, for her viewpoint concerning the opposition’s main points of contention. Ms. Bartleson supports the proposed changes. What follows is a portion of her response. – JB

Opposition: The amendment could enable:

  • a small group to take control of IEEE

Support: Not at all. There is no conspiracy going on – the Boards of Directors from 2015 and 2016 are not sinister. They want the best for the IEEE.

  • transferring of power from over 300,000 members to a small group of insiders,

Support: Not at all. Currently, the Board of Directors is not elected by the full membership of IEEE. Allowing all members to elect their Board is fairer than the process today.

  • removing regional representation from the Board of Directors thereby making it possible that, e.g., no Asian or European representatives will be on the Board of Directors – thus breaking the link between our sections and the decisions the Board will make,

Support: No. The slate for the Board of Directors will better ensure geographic diversity. Today, Region 10 – which is 25% of membership – gets only 1 seat on the Board of Directors. Today, there are 7 seats reserved exclusively for the USA.

  • removing technical activities representation from the Board of Directors thereby diminishing the voices of technology in steering IEEE’s future,

Support: No. There will be plenty of opportunity for technical activities to be represented on the Board of Directors.

  • moving vital parts of the constitution to the bylaws – which could be subject to change by a small group, on short notice.

Support: This is not a new situation. Today, the bylaws can be changed by the Board on short notice. For instance, the Board could decide to eliminate every Region except one. But the Board is not irresponsible and wouldn’t do this without buy-in from the broader IEEE.

The society has created a public page concerning this proposed amendment.

It is the responsibility of all IEEE members to develop an informed opinion and vote by October 3, 2016, in the annual election.



Has The Time Come for SOC Embedded FPGAs?

August 30th, 2016

Shrinking technology nodes at lower product costs plus the rise of compute-intensive IOT applications help Menta’s e-FPGA outlook.

By John Blyler, IP Systems


The following are edited portions of my video interview at the Design Automation Conference (DAC) 2016 with Menta’s business development director, Yoan Dupret. – JB

John Blyler's interview with Yoan Dupret from Menta

Blyler: Your technology enables designers to include an FPGA almost anywhere on a System-on-Chip (SOC). How is your approach different from others that purport to do the same thing?

Dupret: Our technology enables placement of a Field Programmable Gate Array (FPGA) onto a silicon ASIC, which is why we call it an embedded FPGA (e-FPGA). How are we different from others? First, let me explain why others have failed in the past while we are succeeding now.

In the past, the time just wasn’t right. Further, the cost of developing an SOC was still too high. Today, all of those challenges are changing. This has been confirmed by our customers and by GSA studies that explain the importance of having some programmable logic inside an ASIC.

Now, the time is right. We have spent the last few years focusing on research and development (R&D) to strengthen our tools, architectures and to build out competencies. Toolwise, we have a more robust and easier to use GUI and our architecture has gone through several changes from the first generation.

Our approach uses standard cell-based ASICs, so we are not disruptive to the EDA tool flow of our customers. Our hard IP just plugs into the regular chip design flow using all of the classical techniques for CMOS design. Naturally, we support testing with standard scan chain tests and impressive test coverage. We believe our FPGA performance is better than the competition’s in terms of lookup tables per unit area, frequency, and power consumption.

Blyler:  Are you targeting a specific area for these embedded FPGAs, e.g., IOT?

Dupret: IOT is one of the markets we are looking at, but it is not the only one. Why? Because the embedded FPGA fabric can actually go anywhere you have RTL, which is intensively parallel (see Figure 1). For example, we are working on cryptographic algorithms inside the e-FPGA for IOT applications. We have traction with filters for digital radios (IIR and FIR filters), which is another IOT application. Further, we have customers in the industrial and automotive audio and image processing space.

Figure 1: SOC architecture with e-FPGA core, which is programmed after the tape-out. (Courtesy of Menta)

Do you remember when Intel bought Altera, a large FPGA company? This acquisition was, in part, for Intel’s High Performance Computing (HPC) applications. Now they have several big FPGAs from Altera right next to very high frequency processing cores. But there is another way to achieve this level of HPC. For example, a design could consist of a very big, parallel-intensive HPC architecture with a lot of lower-frequency CPUs, and next to each of these CPUs you could have an e-FPGA.

Blyler: At DAC this year, there are a number of companies from France. Is there something going on there? Will it become the next Silicon Valley?

Dupret: Yes, that is true. There are quite a few companies doing EDA. Others are doing IP, some of which are well known. Dolphin, for example, is based in Grenoble and is part of the ecosystem there.

Blyler: That’s great to see. Thank you, Yoan.

To learn more about Menta’s latest technology: “Menta Delivers Industry’s Highest Performing Embedded Programmable Logic IP for SoCs.”

Increasing Power Density of Electric Motors Challenges IGBT Makers

August 23rd, 2016

Mentor Graphics answers questions about failure modes and simulation-testing for IGBT and MOSFET power electronics in electronic and hybrid-electronic vehicles (EV/HEV).

By John Blyler, Editorial Director

Most news about electric and hybrid vehicles (EV/HEV) electronics focuses on the processor-based engine control and the passenger infotainment systems.  Of equal importance is the power electronics that support and control the actual vehicle motors. On-road EVs and HEVs operate on either AC induction or permanent magnet (PM) motors. These high-torque motors must operate over a wide range of temperatures and in often electrically noisy environments. The motors are driven by converters that generally contain a main IGBT or power MOSFET inverter.

The constant power cycling that occurs during the operation of the vehicle significantly affects the reliability of these inverters. Design and reliability engineers must simulate and test the power electronics for thermal reliability and lifecycle performance.

To understand more about the causes of inverter failures and the tests that reveal them, I presented the following questions to Andras Vass-Varnai, Senior Product Manager for the MicReD Power Tester 600A in Mentor Graphics’ Mechanical Analysis Division. What follows is a portion of his responses. – JB


Blyler: What are some of the root causes of failures for power devices in EV/HEV devices today, namely, for insulated gate bipolar transistors (IGBTs), MOSFETs, transistors, and chargers?

Vass-Varnai: As the chip and module sizes of power devices shrink while the required power dissipation stays the same or even increases, the power density in power devices increases, too. These increasing power densities require careful thermal design and management. The majority of failures are thermal related: the temperature differences between the material layers within an IGBT or MOSFET structure, plus the differences in the coefficients of thermal expansion of those layers, lead to thermo-mechanical stress.

The failure will ultimately develop at these layer boundaries or interconnects, such as the bond wires, die attach, base plate solder, etc. (see Figure 1). Our technology can induce the failure mechanisms using active power cycling and can track a failure as it develops using high-resolution electrical tests, from which we derive thermal and structural information.
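The CTE-mismatch mechanism described here can be sized with the standard first-order estimate σ ≈ E·Δα·ΔT. The material values below are generic textbook figures, not from the interview, and real modules require full thermo-mechanical simulation; the sketch only shows why an 80 K power-cycle swing is enough to fatigue a silicon-on-copper joint:

```python
# First-order thermo-mechanical stress at a layer boundary:
# sigma ~ E * delta_alpha * delta_T  (illustrative textbook values).
E_copper = 110e9   # Young's modulus of a copper baseplate, Pa
alpha_si = 2.6e-6  # CTE of silicon, 1/K
alpha_cu = 16.5e-6 # CTE of copper, 1/K
delta_T  = 80.0    # temperature swing of one power cycle, K

sigma = E_copper * (alpha_cu - alpha_si) * delta_T  # Pa
print(sigma / 1e6)  # ~122 MPa per cycle at the interface
```

Stresses of this order, repeated over millions of power cycles, are what drive the bond-wire and solder fatigue failures discussed above.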

Figure 1: Cross-section of an IGBT module.

Blyler: Reliability testing during power cycling improves the reliability of these devices. How was this testing done in the past? What new technology is Mentor bringing to the testing approach?

Vass-Varnai: The way we see it, the tests were traditionally done in a very simplified way: companies used tools to stress the devices with power cycles, but these technologies were not combined with in-progress characterization. They started the tests, then stopped to see if any failure had happened (using X-ray microscopy, ultrasonic microscopy, sometimes dissection), then continued the power cycling. Testing this way took much more time and more user interaction, and there was a chance that the device would fail before one had the chance to take a closer look at the failure. In some more sophisticated cases, companies tried to combine the tests with some basic electrical characterization, but none of these were as sophisticated and complete as today’s power testers. One major advantage of today’s technology is the high-resolution (about 0.01 °C) temperature measurement and the structure function technology, which helps users precisely identify the structural layer in which the failure develops and its effect on the thermal resistance, all embedded in the power cycling process.

The combination with simulation is also unique. In order to calculate the lifetime of the car, one needs to simulate very precisely the temperature changes in an IGBT for a given mission profile. In order to do this, the simulation model has to behave exactly as the real device both for steady state and transient excitations. The thermal simulation and testing system must be capable of taking real measurement data and calibrating the simulation model for precise behavior.

Blyler: Can this tester be used for both (non-destructive) power-cycle stress screening as well as (destructive) testing the device all the way to failure? I assume the former is the wider application in EV/HEV reliability testing.

Vass-Varnai: The system can be used for non-destructive thermal metrics measurements (junction temperature, thermal resistance) and also for active power cycling (which is a stress test), and can track automatically the development of the failure (see Figure 2).

Figure 2: Device voltage change during power cycling for three tested devices in Mentor Graphics MicReD Power Tester 1500A

Blyler: How do you make IGBT thermal lifetime failure estimations?

Vass-Varnai: We use a combination of thermal software simulation and hardware testing solution specifically for the EV/HEV market. Thermal models are created using computational fluid dynamics based on the material properties of the IGBT under test. These models accurately simulate the real temperature response of the EV/HEV’s dynamic power input.

Blyler: Thank you.

For more information, see the following: “Mentor Graphics Launches Unique MicReD Power Tester 600A Solution for Electric and Hybrid Vehicle IGBT Thermal Reliability.”

Bio: Andras Vass-Varnai obtained his MSc degree in electrical engineering in 2007 at the Budapest University of Technology and Economics. He started his professional career at the MicReD group of Mentor Graphics as an application engineer. Currently, he works as a product manager responsible for the Mentor Graphics thermal transient testing hardware solutions, including the T3Ster product. His main topics of interest include thermal management of electric systems, advanced applications of thermal transient testing, characterization of TIM materials, and reliability testing of high power semiconductor devices.


One EDA Company Embraces IP in an Extreme Way

June 7th, 2016

Silvaco’s acquisition of IPextreme points to the increasing importance of IP in EDA.

By John Blyler, Editorial Director

One of the most promising directions for future electronic design automation (EDA) growth lies in semiconductor intellectual property (IP) technologies, noted Laurie Balch in her pre-DAC (previously Gary Smith) analysis of the EDA market. As if to confirm this observation, EDA tool provider Silvaco just announced the acquisition of IPextreme.

At first glance, this merger may seem like an odd match. Why would an EDA tool vendor that specializes in the highly technical analog and mixed-signal chip design space want to acquire an IP discovery, management and security company? The answer lies in the past.

According to Warren Savage, former CEO of IPextreme, the first inklings of a foundation for the future merger began at DAC 2015. The company had a suite of tools and an ecosystem that enabled IP discovery, commercialization and management. What it lacked was a strong sales channel and supporting infrastructure.

Conversely, Silvaco’s EDA tools were used by other companies to create customized analog chip IP. This has been the business model for most of the EDA industry, where EDA companies engineer and market their own IP. Only a small portion of the IP created by this model has been made commercially available to all.

According to David Dutton, the CEO of Silvaco, the acquisition of IPextreme’s tools and technology will allow them to unlock their IP assets and deliver this underused IP to the market. Further, this strategic acquisition is part of Silvaco’s 3-year plan to double its revenues by focusing – in part – on strengthening its IP offerings in the IOT and automotive vertical markets.

Savage will now lead the IP business for Silvaco. The primary assets from IPextreme will now be part of Silvaco, including:

  • Xena – A platform for managing both the business and technical aspects of semiconductor IP.
  • Constellations – A collective of independent, likeminded IP companies and industry partners that collaborate at both the marketing and engineering levels.
  • ColdFire processor IP and various interface cores.
  • “IP Fingerprinting” – A package that allows IP owners to “fingerprint” their IP so that customers can easily discover it in their own and others’ chip designs using “DNA analysis” software, without the need for GDSII tags.

The merger should be mutually beneficial for both companies. For example, IPextreme and its Constellation partners will now have access to a worldwide sales force and associated infrastructure resources.

On the other hand, Silvaco will gain the tools and expertise to commercialize their untapped IP cores. Additionally, this will complement the existing efforts of customers who use Silvaco tools to make their own IP.

As the use of IP grows, so will the need for security. To date, it has been difficult for companies to tell the brand and type of IP in their chip designs. This problem can arise when engineers unknowingly “copy and paste” IP from one project to another. The “IP fingerprinting” technology developed by IPextreme creates a digital representation of all the files in a particular IP package. This representation is entered into a Core Store that can then be used by other semiconductor companies to discover what internal and third-party IP is contained in their chip designs.  This provides a way for companies to protect against the accidental reuse of their IP.
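Savage describes the technology only at a high level, and the actual “DNA analysis” algorithm is proprietary. As a rough sketch of the general idea only (my own illustration, not IPextreme’s implementation), a package fingerprint can be built by hashing each file in the package, and discovery then reduces to comparing digest sets against a store of known fingerprints:

```python
# Toy illustration of file-level IP fingerprinting. This is NOT
# IPextreme's algorithm; it only shows why a hash-based fingerprint
# lets you detect reuse without being able to reverse engineer the IP.
import hashlib
from pathlib import Path


def fingerprint_package(root: str) -> set[str]:
    """Hash every file in an IP package into a set of digests."""
    digests = set()
    for path in sorted(Path(root).rglob("*")):
        if path.is_file():
            digests.add(hashlib.sha256(path.read_bytes()).hexdigest())
    return digests


def match_score(design_fp: set[str], known_ip_fp: set[str]) -> float:
    """Fraction of a known IP's file digests found in a chip design."""
    if not known_ip_fp:
        return 0.0
    return len(design_fp & known_ip_fp) / len(known_ip_fp)
```

Because SHA-256 is one-way, the stored digests reveal nothing about the file contents, which is consistent with Savage’s claim that a design cannot be reverse engineered from the fingerprint. A real system would have to be far more robust than this toy version: a single changed byte alters a file’s hash, so any reformatted or lightly edited file would escape a naive whole-file matcher.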

According to Savage, there is no way to reverse engineer a chip design from the fingerprinted digital representation.

Many companies seem to have a disconnect between the engineering, legal and business side of their company. This disconnect causes a problem when engineers use IP without any idea of the licensing agreements attached to that IP.

“The problem is gaining the attention of big IP providers who are worried about the accidental reuse of third-party IP,” notes Savage. “Specifically, it represents a liability exposure problem.”

For smaller IP providers, having their IP fingerprint in the Core Store could potentially mean increased revenue as more instances of their IP become discoverable.

In the past, IP security measures have been implemented with limited success using hard and soft tags. (See “Long Standards, Twinkie IP, Macro Trends, and Patent Trolls.”) But tagging chip designs in this way was never really implemented in the major EDA place-and-route tools, like Synopsys’s IC Compiler. According to Savage, even fabs like TSMC don’t follow the Accellera tagging system, but have instead created their own security mechanisms.

For added security, IPextreme’s IP Fingerprinting technology does support the tagging information, notes Savage.

Trends in Hyper-Spectral Imaging, Cyber-Security and Auto Safety

April 25th, 2016

Highlights from SPIE Photonics, Accellera’s DVCon and Automotive panels focus on semiconductor’s changing role in emerging markets.

By John Blyler, Editorial Director

Publisher John Blyler talks with Chipestimate.TV executive director Sean O’Kane during the monthly travelogue of the semiconductor and embedded systems industries. In this episode, Blyler shares his coverage of two major conferences: SPIE Photonics and Accellera’s Design and Verification Conference (DVCon). He concludes with the risk emphasis in automotive electronics from a recent market panel. Please note that what follows is not a verbatim transcription of the interview. Instead, it has been edited and expanded for readability. Cheers — JB

O’Kane: Earlier this year, you were at the SPIE Photonic show in San Francisco. Did you see any cool tech?

Blyler: As always, there was a lot to see at the show covering photonic and optical semiconductor-related technologies. One thing that caught my attention was the continuing development of hyperspectral cameras.  For example, start-up SCiO has prototyped a pocket-sized molecular scanner based on spectral imaging that tells you about the makeup of your food.

Figure 1: SCiO Molecular scanner based on spectral imaging technology.

O’Kane: That sounds like the Star Trek Tricorder. Mr. Spock would be proud.

Blyler: Very much so. I talked with Imec’s Andy Lambrechts at the Photonics show.  They have developed a process that allows them to deposit spectral filter banks in both the visible and near-infrared range on the same CMOS sensor. That’s the key innovation for shrinking the size and – in some cases – the power consumption. It’s very useful for quickly determining the health of agricultural crops. And all thanks to semiconductor technology.


Figure 2: Imec Hyperspectral imaging technology for agricultural crop markets.

O’Kane: Recently, you attended the Design and Verification Conference (DVCon). This year, it was Mentor Graphics’ turn to give the keynote. What did the CEO Wally Rhines talk about?

Blyler: His presentations are always rich in data and trends slides. What caught my eye were his comments about cyber security.

Figure 3: Wally Rhines, CEO of Mentor Graphics, giving the DVCon2016 keynote.

O’Kane: Did he mention Beckstrom’s law?

Blyler: You’re right! Soon, the Internet of Things (IoT) will expand the security need to almost everything we do, which is why Beckstrom’s law is important:

Beckstrom’s Laws of Cyber Security:

  1. Everything that is connected to the Internet can be hacked.
  2. Everything is being connected to the Internet.
  3. Everything else follows from the first two laws.

Naturally, the semiconductor supply chain wants some assurance that chips are resistant to hacking. That’s why chip designers need to pay attention to three levels of security breaches: Side-Channel Attacks (on-chip countermeasures); Counterfeit Chips (supply-chain security); and Malicious Logic Inside the Chip (Trojan detection).

EDA tools will become the core of the security framework, but not without changes. For example, verification will move from its traditional role to an emerging one:

  • Traditional role: Verifying that a chip does what it is supposed to do
  • Emerging role: Verifying that a chip does nothing it is not supposed to do

This is a nice lead into safety-critical design and verification systems. Safety-critical design requires that both the product development process and the related software tools introduce no potentially harmful effects into the system, the product, or its operators and users. One example of this is the emerging certification standards in the automotive electronics space, namely, ISO 26262.

O’Kane: How does this safety standard impact engineers developing electronics in this space?

Blyler: Recently, I put that question to a panel of experts from automotive, semiconductor and systems companies (see Figure 4). During our discussion, I noted that the focus on functional safety seems like yet another “Design-for-X” methodology, where “X” is the activity that you did poorly during the last product iteration, like requirements, testing, etc. But ISO 26262 is a rigorous, risk-based safety compliance standard for future automobile systems – not a passing fad.


Figure 4: Panel on design of automotive electronics hosted by Jama Software – including experts from Daimler, Mentor Graphics, Jama and Synopsys.

Mike Bucala from Daimler put it this way: “The ISO standard is different than other risk standards because it focuses on hazards to persons that result from the malfunctioning behavior of EE systems – as opposed to the risk of failure of a product. For purposes of liability and due care, reducing that risk implies a certain rigor in documentation that has never been there before.”

O’Kane: Connected cars are getting closer to becoming a reality.  Safety will be a critical issue for regulatory approval.

Blyler: Indeed. Achieving that approval will encompass all aspects of connectivity – from connected systems within the automobile to other drivers, roadway infrastructure and the cloud. I think many consumers tend to focus on only the self-driving and parking aspects of the evolving autonomous vehicles.

Figure 5: CES2016 BMW self-parking connected car.

It’s interesting to note that connected car technology is nothing new. It’s been used in the racing industry for years at places like the Sonoma Raceway near San Francisco, CA. The high-performance race cars are constantly collecting, conditioning and sending data throughout different parts of the car, to the driver and finally to the telemetry-based control centers where the pit crews reside. This is quite a bit different from the self-driving and parking aspects of consumer autonomous vehicles.

Figure 6: Indy car race at Sonoma Raceway.



