Chip Design Magazine



Posts Tagged ‘IoT’


How can the Chip Community Improve the Industry for IOT Designers?

Monday, March 13th, 2017

Meeting the prediction of 20 billion IOT devices by 2020 will require the semiconductor industry to streamline its processes for up-and-coming chip designers.

By John Blyler, Editorial Director, IOT Embedded Systems

Part I of this article covered the difficulties in designing System-on-Chip (SOC) devices for the Internet-of-Things (IOT) market, as explained by Jim Bruister, CEO of SOC Solutions, during his talk at the inaugural REUSE event. In Part II, we will examine ways for the semiconductor and electronics industries to improve the development process for the next generation of IOT designers. — JB

Quotable Quotes:

  • … the semiconductor community needs to market outside of its traditional channels, for example, to the “Field and Stream” or perhaps the “Sports Illustrated” communities.
  • … licensing agreements represent a real problem for buyers especially those that must buy IP from multiple vendors.
  • … a general contractor type of person is needed for the emerging IOT design industry.
  • … (could) open source be used to get IOT designers started especially with FPGAs? 

How, then, do we improve as an industry to ensure success for IOT chip designers? Bruister believes there are 5 pieces that need to be in place. First among those is a proactive ecosystem, one that consists of more than just a few companies getting together and sharing their names on websites.

Secondly, the ecosystem must consist of IP providers, design houses and even the foundries whose goal is to offer real SOC reference designs for the IOT community.

Information marketing focused on the IoT business channels is the third needed item. Bruister emphasized that the semiconductor community needs to market outside of its traditional channels, for example, to the “Field and Stream” or perhaps the “Sports Illustrated” communities. The semiconductor world needs to reach out to those places where the next generation of SOC designers will live.

Fourthly, a general contractor type of position is needed in the IOT SOC ecosystem. By analogy, a general contractor is the person that helps you build a house. The general contractor has the experience and connections to bring in and coordinate the activities of the framer, electrician, plumber and others needed to build a house. The same type of person is needed for IOT designers.

At this point in the presentation, an attendee from the audience noted the general contractor should probably own all of the tools for the “building of a house” analogy to work. Bruister looked at the problem differently, explaining that the general contractor for a house doesn’t typically own all the tools.

“I see the general contractor (for IOT design) more like a consultant that selects the design house and helps you pick the IP,” explained Bruister. “There are design houses that play that role, but it’s not a smooth flow of activities from start to finish for doing an IOT design. That’s where I think a general contractor or coordinator could help.”

The last thing needed to improve the IOT design process is one-stop shopping with a common licensing model. Today there is no standard licensing model, and there probably never will be one, said Bruister. But licensing agreements represent a real problem for buyers, especially those that must buy IP from multiple vendors. Current models take far too long to license the IP, get it in-house, and evaluate it. There needs to be consolidation in how IP is licensed. Bruister suggested a boilerplate IP license that could contain 90% of the common elements required in a license.

Bruister concluded by saying that the semiconductor industry needs to figure out a way to simplify the whole IOT design process. This statement prompted a question about the use of open source tools and IP as a possible solution. The questioner noted that open source could be used to get IOT designers started especially with FPGAs.

Bruister wondered if there were enough open source folks to significantly help with the 20 billion IOT devices predicted by 2020. Nikos Zervas, CEO of CAST, who was in the audience, noted that relying on open source may be problematic with the millions of dollars involved in chip design. He questioned who would stand behind the open source tools in such a case.

But the questioner was persistent, saying that even major chip IP providers like ARM don’t pay for the blunders of the chip designer. He cited software as another example where, in his opinion, nothing is really warranted.

Bruister tried to address the question by looking at the big picture. For the coming IOT design challenges, there will be one camp of providers who believe that one hundred different design types will be good for all devices. The opposing camp will believe that each design situation will require some customization, e.g., to include energy harvesting capabilities. Both groups will be large and vocal. The IOT device market will be so big that it will have lots of variability.

“But the common thread is that it takes way too long to design IOT devices,” said Bruister. “There is no way we can reach that many devices with such long design and IP licensing processes. Expensive tools are always going to be an issue. I don’t think you can get away from that unless the big EDA vendors decide to go with a ‘pay as you design’ model. They have resisted that for years.”

It may be difficult to simplify the process for less SOC-experienced IOT designers, but we must try if the IOT market is to realize its potential.

Why is Chip Design for IOT so Hard?

Tuesday, February 28th, 2017

Internet-of-Things (IOT) designers face a different set of challenges from their traditional ASIC and SOC brethren. Will the market be ready?

By John Blyler, Editorial Director, IOT Embedded Systems

Quotable Quotes:

  • … we’ll need 10,000 plus IOT designers. Where will they come from?
  • …a majority of IOT designers will have little experience in traditional SOC design.
  • … SOC industry newcomers will suffer from “new IOT designer anxiety disorder” or “New IDeA Disorder”
  • … need modular architectures that are specific to IOT devices.

It’s a daunting task for an inexperienced company to create a custom chip, ASIC or SOC to implement its new “bright idea” IoT product. The company’s engineers face equal challenges in developing, manufacturing, and getting the chip delivered on time. With this introduction, Jim Bruister, President of SoC Solutions, began his talk at the inaugural REUSE show about the overwhelming number of tools, skill sets, costs, IP acquisitions and industry associations needed to navigate the chip design and delivery process. He examined how the industry presently supports new chip development and where it needs to go in the future to streamline the process for the inexperienced companies that will no doubt fuel the coming IoT boom.

Bruister started his talk by considering the drivers of IOT in a market predicted to include 20 billion devices by the year 2020. On the business side, IOT will be driven by data and subscription models. But while IOT devices will be enablers for data businesses, the devices won’t be the real money makers. Instead, revenues will flow from data and related analysis. Most of the IOT devices will compete under strong price pressures resulting in cheaper products with tight profit margins.

Further challenging the revenues from physical IOT devices will be the lack of high-end users. For example, many IOT devices will not be fashionable wearables for fitness, as most of the world’s population is struggling with basic needs such as indoor plumbing. They have neither the money for nor interest in wearable devices. Still, IOT technology will represent a huge electronic market.

“There will be tens of thousands of new IOT businesses in my opinion,” explained Bruister. “This implies at least as many IOT device designers will be needed, or about 10,000 plus. Where will these designers come from?”

It’s reasonable to assume that IOT designers will come from existing system, software, field programmable gate array (FPGA), Printed Circuit Board (PCB) and semiconductor industries. A larger portion will probably come from the FPGA markets while a much smaller amount will come from the semiconductor space.

A majority of IOT companies will be startups, incubated from universities. Naturally, companies will recruit college graduates and interns to do a lot of the work. This means that a majority of designers will have little experience in traditional SOC design.

“What is the likely approach that these college graduates will take to IOT design?” asked Bruister. His view was that these designers would first turn to Google searches on terms like SOC, chip or ASIC. They will look in trade magazines like EETimes, EDN, Sports Illustrated, Field & Stream and others. They will probably look for SOC experts and semiconductor consultants, but there won’t be enough of such gurus to go around.

IP portals like Design & Reuse (D&R) and others will be consulted only if the college graduate IOT designers know about them. Similarly, these designers might even contact a few design houses if they are aware of them.

One of the big challenges will be the difficulty in maneuvering a typical SOC flow with its many critical steps (see Figure 1). Also, there are over 1,200 IP cores from over 400 IP vendors from which the IOT designer must choose (see Figure 2). He or she will quickly realize that the front-end design tools are quite expensive, e.g., for synthesis, timing, etc. The back-end tools for place and route and packaging are even more expensive and require tools experts just to run them.

Figure 1: Vendor complexity and cost that IOT designers will face for their SOCs. (Courtesy SOC Solutions)


Figure 2: Snapshot of current semiconductor IP vendors.

The challenges of SOC design complexity, numerous IP vendors, varying licensing agreements and expensive front-end and back-end tools will result in a “new IOT designer anxiety disorder,” or “New IDeA Disorder,” Bruister noted humorously. The IOT designer will be overwhelmed with too much information (TMI). Where can the designer get help?

Bruister believes that practical education is an important missing piece of the IOT design puzzle. The new inductee will need many “how-to” guides, e.g., an IOT SOC Design for Dummies book. He or she will need a better place to find information than a Google search. Unfortunately, there are just not enough SOC consultants to go around for the 10,000+ designers that will be needed for devices to go into 20 billion products. Instead, IOT designers will need an easy, fast, and inexpensive way to design a chip from concept to first silicon. This process will require both easy-to-use development platforms and many reference designs to get things started.

Let’s consider a typical SOC architecture containing a CPU, bus structure, peripherals, and interfaces for radios, baseband processing and sensors (see Figure 3). This architecture probably represents about 80% to 90% of those to be used in most small IOT devices.

What accounts for the remaining 10% to 20% difference between IOT devices? The type of communication will be one difference, for example, Bluetooth, Wi-Fi, proprietary radios or optical methods. Also, IOT devices will probably have different types of sensors, such as accelerometers, MEMS devices, strain gauges, etc. But Bruister believes that the most important differentiator may lie with the power management unit. IOT devices will have a wide range of power duty cycles, requiring the devices to turn on every millisecond, minute, hour or even day and then go back to sleep. Thus, power management will have to be customized for each different type of operational requirement.
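The duty-cycle point can be checked with back-of-the-envelope arithmetic. The Python sketch below uses hypothetical current, timing, and battery figures (none are from the talk) to show how strongly the wake interval drives average current, and hence battery life:

```python
# Rough duty-cycle arithmetic for an IoT node (all figures hypothetical).

def average_current_ua(active_ms, period_s, active_ua, sleep_ua):
    """Average current (uA) for a device that wakes for active_ms
    out of every period_s seconds."""
    duty = (active_ms / 1000.0) / period_s
    return duty * active_ua + (1.0 - duty) * sleep_ua

# A sensor node that wakes for 10 ms, drawing 8 mA active and 2 uA asleep:
for period in (1, 60, 3600, 86400):        # wake every second/minute/hour/day
    avg = average_current_ua(10, period, 8000, 2)
    days = 220_000 / avg / 24              # hypothetical 220 mAh coin cell
    print(f"wake every {period:>6} s -> {avg:8.2f} uA avg, ~{days:,.0f} days")
```

Waking every second versus every hour changes the average draw by more than an order of magnitude, which is why a one-size-fits-all power management unit is unlikely to serve every operational profile.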

Figure 3: A typical System-on-Chip (SOC) architecture. (Courtesy SOC Solutions)

All of these challenges mean that the bar on design abstraction must be raised. Modular architectures will need to be specific to IOT devices. This may result in class libraries for hardware.

“I think we need to raise the bar on design abstraction,” noted Bruister. “We need modular architectures that are specific to IOT devices. And we need what I call a set of class libraries for hardware for both analog and digital subsystems. These subsystems will be abstracted away to make it easier for IOT designers to plug and play amongst these different models. Also, there will need to be complementary software abstractions, e.g., APIs, HAL layers and such. Design abstractions are common in Arduino and Raspberry Pi platforms.”
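As a software analogy for these “class libraries for hardware,” the Python sketch below is purely illustrative (all class and method names are invented, not any vendor’s actual API). The point is the shape of the abstraction: every subsystem model exposes the same interface, so the differentiating blocks plug into the common platform without disturbing it.

```python
# Illustrative "plug and play" subsystem abstraction for an IoT SoC platform.
# All names here are hypothetical; this mirrors the idea, not a real library.
from abc import ABC, abstractmethod

class Subsystem(ABC):
    """Common interface every pluggable hardware block model implements."""
    @abstractmethod
    def describe(self) -> str: ...

class Radio(Subsystem):
    def __init__(self, protocol: str):
        self.protocol = protocol
    def describe(self) -> str:
        return f"radio: {self.protocol}"

class Sensor(Subsystem):
    def __init__(self, kind: str):
        self.kind = kind
    def describe(self) -> str:
        return f"sensor: {self.kind}"

class SocPlatform:
    """The ~80-90% common base; differentiating blocks plug in."""
    def __init__(self):
        self.blocks = []
    def plug(self, block: Subsystem) -> "SocPlatform":
        self.blocks.append(block)
        return self                     # allow chained configuration
    def summary(self):
        return [b.describe() for b in self.blocks]

design = SocPlatform().plug(Radio("BLE")).plug(Sensor("accelerometer"))
print(design.summary())
```

Swapping the radio for Wi-Fi or the sensor for a strain gauge touches only the plugged block, which is the ease-of-reuse property the talk argues IOT designers will need.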

The good news is that many of the pieces to create a successful IOT ecosystem are in place. There is a large selection of quality IP suppliers, many of which attended REUSE 2016. Complementing the IP vendors are a number of design houses with a lot of good experience and solutions, noted Bruister. Silicon aggregators like eSilicon and the foundries necessary to actually build the 20 billion IOT devices round out the existing ecosystem.

Part II of this article will examine ways for the semiconductor and electronics industries to improve the design process for the next generation of IOT designers.

Beginning the Discussion on the Internet-of-Space

Tuesday, January 17th, 2017

A panel of experts from academia and industry assembled at the recent IEEE IMS event to answer critical questions about the role and impact of RFIC technologies.

By John Blyler, Editorial Director

Synopsis of compelling comments:

  • “Satellite communication becomes practical, low cost, and comparable to LTE only if you are at multi-Tera-bit per second capacity.”
  • “Ultimately, we are not selling the bandwidth of our system but the power.”
  • “Power harvesting on the satellite is one of the most important things we can do.”
  • “You must establish commercial-off-the-shelf (COTS) variants of your main space product line (to support both new and traditional space).”
  • “You need to consider new business models as well as new technology and processes.”

Recently, IEEE MTT Society sponsored an initial discussion and networking session on the Internet of Space (IoS) at the 2016 International Microwave Symposium. It was billed as one of several upcoming forums to bring the IoS and IoT communities together as these technologies and systems continue to evolve. The short term goal of this initiative is to, “jump start a global technical community cutting across multiple hardware-oriented fields of interest including: aerospace systems; antennas; autonomous systems; communications; electronics; microwave/mm-wave technology; photonics; positioning, navigation and timing; power electronics, etc.”

With representation from a global community of satellite and end-user companies, the IEEE IMS 2016 Rump Session Panel explored the technical and business challenges facing the emerging industries. What exactly is the IoS? Does it include both low-earth orbit and potentially sub-orbital platforms like drones and balloons? How do microwave and RF designs differ for satellite and airborne applications? These are a few of the questions that were addressed by the panel. Part 1 of this series of reports focuses on the challenges forecasted by each of the panelists. What follows is a portion of that panel discussion. – John Blyler

Panelists and Moderators (left to right):

  • [Co-Moderator] Sanjay Raman, Professor and Associate VP, National Capital Region, Virginia Tech
  • Prakash Chitre, Comsat Laboratories, ViaSat, VP and GM
  • Hamid Hemmati, Facebook, Director of Engineering for Telecom Infrastructure
  • Lisa Coe, Director of Commercial Business Dev. for Boeing
  • David Bettinger, OneWeb, VP of Engineering, Communications Systems
  • Michael Pavloff, RUAG Space (Zürich Switzerland), Chief Technology Officer
  • [Co-Moderator] and Mark Wallace, VP and GM, Keysight

Raman (Co-Moderator): Hi. I’m joined by Mark Wallace, my co-moderator to this panel. We’re here to discuss the emerging Internet-of-Space (IoS) industry. Let’s start with Prakash Chitre from Comsat Labs.

Chitre (Comsat): I’m going to talk about a new generation of satellite systems that ViaSat has been designing, building and launching. This will give you an understanding of what we have been doing for the last 5 years and our plans for the next 5 years. The main goal for us is to provide connectivity throughout the world. Even with today’s voracious appetite for high-speed and high-volume Internet, half of the world’s population of 7B people doesn’t have any broadband Internet connection.

ViaSat has three satellites: ViaSat-1, WildBlue 1, and Anik-F2. Most of these satellites, like the Anik-F2 and WildBlue 1, were more or less traditional Ka-Band satellites with 8 Gbps (in throughput). But the ViaSat-1 satellite that we designed and launched in 2011 had about 140 Gbps (see Figure 1). ViaSat-1 handles about 1 million users and covers North America (NA), including the US and Canada. It was the start of a longer vision of very high throughput satellites to cover the globe.

Figure 1: ViaSat-1 rendering (Courtesy of Comsat Labs)

We want to provide broadband communication platforms that deliver affordable high-speed Internet connectivity and video streaming via fixed, mobile and portable systems. The key thing is that we are a totally vertically integrated solution; the terminals, the gateway and the satellite all fit together to provide a very cost effective system. We deal with geosynchronous satellite latency issues with software embedded in the terminal and the gateway to make sure we can do very fast page loads from media.

[Editor’s Note: Terminals link the satellite signal to fixed and mobile locations on the ground and on airborne systems. Examples of terminals include satellite TV disk systems, aviation broadband devices for Ku-, Ka-, and dual-band in-flight connectivity, emergency responder equipment, cellular extensions and the like.]

Soon we’ll be launching ViaSat-2 (see Table 1), which will provide almost 2 ½ times the capacity of ViaSat-1 while providing much greater coverage. It will bridge the North Atlantic with contiguous coverage over NA and Europe, including all the air and shipping routes.

The ViaSat-3 ultra-high capacity satellite platform is comprised of three ViaSat-3 class satellites and ground network infrastructure.  The first two satellites will focus on the Americas and Europe, Middle East and Africa (EMEA). Work is underway with delivery expected in 2019. A third satellite system is planned for the Asia-Pacific region, completing global service coverage.

In the next few years, we’ll launch ViaSat-3, which will be about one-third the size of ViaSat-2 yet offer 1 Tbps capacity and much larger coverage. We have already awarded the contract to Boeing to build the bus framework for the first ViaSat-3. We are designing and building our own payload.


Satellite Name                Throughput Capacity
WildBlue                      8 Gbps
IPSTAR 1                      45 Gbps
KA-SAT                        70 Gbps
ViaSat-1                      140 Gbps
EchoStar XVII                 100+ Gbps
NBN-Co 1a (“Sky Muster”)      80+ Gbps
ViaSat-2                      350 Gbps
ViaSat-3 Americas             1 Tbps
ViaSat-3 EMEA                 1 Tbps
ViaSat-4 APAC                 1 Tbps
Table 1: ViaSat Satellites

Raman (Co-Moderator): Our next speaker is Hamid Hemmati, Director of Engineering for Telecom Infrastructure at Facebook.

Hemmati: Facebook’s interest in providing Internet coverage stems from our desire to connect everyone in the world, anyone that wants to be connected. Something like 60% of the world’s people aren’t on the Internet or have only a poor connection, typically 2G. If they are not on the Internet, then they cannot be connected.

Most of the data centers around the world are based on open source models for both hardware and software. We can devote technologies to significantly increase the capacities and lower costs and then provide it to the community to then develop and implement.

In terms of the global Internet, we are interested in developed and underdeveloped countries that don’t have connectivity. Providing connectivity to underdeveloped countries is fairly tricky because the population distribution varies greatly between countries. For example, red means a large population and green means a small population (Figure 2). As you can see, these are six different countries with widely different distributions. Some have a more or less uniform distribution while others have regions that are sparsely populated.

Figure 2: Population distribution varies according to country. (Courtesy Facebook via IMS presentation).


There is a magnitude of difference in population distribution around the world, which means that there is not one solution that fits all. You can’t come up with one architecture to provide Internet connection to everyone around the world. Each country requires a unique solution. It is more cost effective to allocate capacity where needed. But each solution comes from a combination of terrestrial links with perhaps airborne or satellite links. Satellites are only viable if you can increase the data rate significantly to about 100 Tbps. This is the throughput required to connect the unconnected.


  • 4 billion people with 25 kbps per user (based on average capacity, with users on the Internet simultaneously).
  • Calculation: (4 × 10⁹ users) × (2.5 × 10⁴ bps) = 10¹⁴ bps = 100 Tbps

This is a staggering number (100 Tbps), so we are talking about very large capacity for all of these populations.
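The arithmetic behind the 100 Tbps figure is easy to verify:

```python
# Check the capacity estimate quoted above:
# 4 billion users, 25 kbps average per user, all online simultaneously.
users = 4e9
rate_bps = 25e3            # 25 kbps per user

total_bps = users * rate_bps
print(f"{total_bps / 1e12:.0f} Tbps")   # 100 Tbps
```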

Technology advancements are required to extend the capability of current commercial wireless communication units by 1 to 2 orders of magnitude. What we need to do is amass the state of the art in a number of areas: GEO satellites, LEO satellites, high altitude platforms, and terrestrial. Satellite communication becomes practical, low cost, and comparable to LTE only if you are at multi-Tbps capacity; otherwise it is much more expensive than providing LTE. There must be a business justification to do that.

High altitude platforms (like airplanes/drones) need to be able to stay airborne for months at a time. They must be low cost to produce and maintain, plus run at 10-100 Gbps uplink/downlink/crosslink RF and optical capacity.

Meanwhile, terrestrial technologies, including fiber and wireless, are already here. It’s just that it is immensely expensive to cover an entire country with fiber. So other solutions are needed, like wireless links, tower to tower, and so forth. This is just a laundry list of what needs to be done. It doesn’t mean we at Facebook are looking at all of them. We are looking at some of them. We want to get these technologies into the hands of the implementers.

Raman (Co-Moderator): Next, let me introduce Lisa Coe, Director of Commercial Business Dev. for Boeing. Originally, James Farricker, Boeing, VP Engineering, was slated to speak on this panel. He was not able to join us.

Coe: I looked up the phrase “new space” on Wikipedia since others are talking about the traditional vs. the new space. I was asking myself if Boeing is a traditional space or new space company. Wikipedia called out Boeing as “not” new space.

[Editor’s Note: New space is often affiliated with an emergent private spaceflight industry. Specifically, the terms are used to refer to a community of relatively new aerospace companies working to develop low-cost access to space or spaceflight technologies.]

Boeing builds commercial airplanes, military jets, helicopters, the International Space Station, satellites, cyber security solutions, and more. We build a lot of very different things. So when you ask us about the Internet of Space (IOS) you’ll get a very different answer. Let me try to answer it.

When an airplane disappears, like the EgyptAir flight, a lot of people ask why we don’t connect airplanes via satellites. We need to get our airplanes smarter and all connected. Passengers are already connected on aircraft with Wi-Fi. So before we push for the Internet of Things, why don’t we push to get all the airplanes connected?

Boeing is also a user of the Internet of Space. For example, we just flew an unmanned aircraft that was completely remote controlled from the ground. This is why we care about security, about hacking into these systems. How can we make the Internet of Space secure to connect more people and things?

Raman (Co-Moderator): Next we have David Bettinger, VP of Engineering, Communications Systems, at OneWeb.

Bettinger: OneWeb is trying to provide very low latency Internet access to those who don’t have access everywhere. We are two years into the project and are quite far along. The things that ultimately make us successful are the microwave components used in our system. I’m a modem guy by nature – not an RF one. I wish all modems and baseband could stay at baseband, but of course RF is needed on the wireless side. We utilize Ku-band in our system. We also have access to Ka-band, which provides the more pointed feeder links that service the satellites.

Supporting both bands means that we need a lot of different components for different functionality. The satellite is probably the most critical for us. The only thing that makes something as crazy as launching 648 satellites feasible is if we get the cost of the satellite and the weight down significantly compared to what is actually done today. Our satellite is about the size of a washing machine, weighing roughly 150 kg. You can fit 30 of them on the launch (payload). That is what makes this work.
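A quick calculation with the figures Bettinger quotes (648 satellites, 30 per launch, roughly 150 kg each) shows the scale of the launch campaign:

```python
# Launch-campaign arithmetic from the figures quoted above.
import math

constellation = 648    # satellites planned
per_launch = 30        # satellites per launch vehicle
sat_mass_kg = 150      # approximate mass per satellite

launches = math.ceil(constellation / per_launch)
payload_kg = per_launch * sat_mass_kg
print(f"{launches} launches, ~{payload_kg} kg of satellites per launch")
```

Roughly 22 launches, each lofting about 4.5 metric tons of washing-machine-sized spacecraft, which is why satellite cost and weight dominate the business case.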

The only thing that makes the satellite mass work is if you figure out the power problem. Ultimately, we are not selling the bandwidth of our system but the power. This is because we don’t have the luxury of a bus-sized satellite up there that is designed to deliver power constantly regardless of the environment, whether you are in an eclipse or not. We have to effectively manage our power with the subscribers of the service. Power harvesting on the satellite is one of the most important things we can do. It drives almost every aspect of our business case.

We have looked heavily at a lot of different silicon technologies, especially GaN and GaAs chip technologies. We are utilizing low noise amplifiers (LNAs) and up/down converters, among other components. Power and then cost are important. If there was anything I would ask you to keep working on, it’s the efficiency thing. We can use every bit that we can.

On the ground side, our challenges are a little bit different. We have two different ground components. One is the user terminals, like the devices that you put on your roof. They point straight up at the satellite to provide local access via an Ethernet cable, Wi-Fi or even an LTE extension. These terminals are all about cost. To crack the markets we want to crack, we need to get the cost of the CPE down yet have a device that actually points at satellites that are moving across the sky at about 7 km per second, changing to a different satellite every 3 ½ minutes. It’s a difficult and different problem from the GEO world. Now I remember why I did GEO for 25 years before this.

[Editor’s Note: Customer-premises equipment or customer-provided equipment (CPE) is any terminal and associated equipment located at a subscriber's premises and connected with a carrier's telecommunication channel at the demarcation point ("demarc").]
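The handover cadence Bettinger describes implies a rough in-track spacing for the constellation. From the figures quoted above (about 7 km/s and a new satellite every 3 ½ minutes), this is simple arithmetic:

```python
# Rough in-track satellite spacing implied by the quoted figures.
speed_km_s = 7.0           # approximate orbital ground-track speed
handover_s = 3.5 * 60      # terminal re-points every 3.5 minutes

spacing_km = speed_km_s * handover_s
print(f"~{spacing_km:.0f} km between satellites along the track")
```

About 1,470 km between satellites along the track, which is why the terminal must actively steer rather than stare at a fixed point as in the GEO world.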

It all comes down to cost. How can we get cost and power utilization down? What tech can we use to be able to point at our satellites? We are excited about the prospect of trying to bring active steering antenna to a mass market. I see our friends from RUAG are here (in the audience). We have done reference work on looking at these different technologies. There is a lot of secret sauce in there but I think ultimately it comes down to how do you make small, cheap chips and then how can you make antennas around that.

[Editor’s Note: The gateway is the other ground component. A gateway or ground station connects the satellite signal to the end user or subscriber terminals. Most satellite systems are comprised of a large number of small, inexpensive user terminals and a small number of gateway earth stations.]

Raman (Co-Moderator): Our final panelist is Michael Pavloff, CTO of RUAG Space, headquartered in Zürich, Switzerland.

Pavloff: It’s an honor to be here. How many have heard of RUAG? Maybe 30%? That’s not bad. We are a small, specialized version of Boeing based in Switzerland. We also have divisions in aviation, defense, cyber security, space, etc. I’m the CTO of the space division. We do launchers, satellite structures, mechanical-thermal systems, communication equipment and related systems. I’m glad we are here to talk about the key technology enablers that allow us to deliver Internet cost effectively from space.

Costs must continue to decrease for the satellite. We saw this “New Space” world coming some years ago and had to decide whether or not to participate in it. Up to that point, our legacy markets were institutional ones like the European Space Agency, large GEO commercial telecom companies, and similar customers for whom we do a lot of RF and microwave work. Our main challenge is to make money in this business. So when you get a factor of 10 or more cost pressure on your products, you feel like giving up.

In the end, we saw that all of our traditional institutional and commercial customers were starting to ask the same question: if we are manufacturing avionics or frequency converters or computers for OneWeb (new space) that cost a factor of 10 or 100 less than our standard products, why can’t we do it for the European Space Agency or other government customers, namely the large satellite operators? In the end, we didn’t feel it was optional. We had to support this parallel world in which we are doing this business.

There are four main elements that are critical to get to that capability (to support both new and traditional space). First, you should be doing high-rate production. You get a lot of cost savings that way. We have moved to a lot of high-rate production lines. For example, our RF frequency converter chip business is coming to a point where 75% of that product line will be for non-space applications. Having that type of throughput, and handling commercial, non-space grade components and so forth, is key to achieving that type of high-rate production capability.

The second critical capability is to increase the emphasis on automation. I’ll cover that shortly.

Third, you must establish commercial-off-the-shelf (COTS) variants of your main product line.

Finally, it’s important to adopt new business models, including collaboration and taking risk-sharing positions with customers. Our friends at OneWeb have been pushing us to adopt new business models. Collaboration often means co-locating and doing co-engineering. You need to consider new business models as well as new technologies and processes.

Let’s return to the automation element. RUAG has been introducing automation in a lot of different areas, from electronic and satellite panel production to out-of-autoclave composites and multi-layer insulation production. An example of the out-of-autoclave composites is our rocket launcher payload fairings (see Figure 3). [Editor’s Note: A payload fairing is a nose cone used to protect a spacecraft (launch vehicle payload) against the impact of pressure and aerodynamic heating during launch through an atmosphere.]

Figure 3: Payload fairing for the small European launcher Vega. (Courtesy of RUAG)

More cost pressure should be put on the launchers, as well. We are trying to be proactive with the composites on the launcher side to cut down costs. Reusability, that is, reusing all the bits of the rocket, is a key subject in the launcher world.

From our perspective, these are the key enabling products for the Internet-of-Space (IoS):

  • Future microwave products (Q/V-band, flexible analog converters)
  • GNSS receivers for space
  • 3-D printed structures
  • COTS digital signal processors

Future microwave products have been evolving toward the higher frequency bands as well as toward optical. This is key to enabling some of the high-capacity throughput of the future. Another enabling area is COTS as applied to signal processors. Some customers are evolving to regenerative payloads to try to squeeze every last bit of capacity out of the system. The focus is on bandwidth for DSPs, which have to be based on COTS. GNSS receivers are enablers as they are a key technology for the satellite bus. And, as Dave mentioned previously, mass is a real thing that we have to try to get out of these systems. One way to drive down mass is with 3-D printed structures.

In Part II of this series, the panelists answer questions about the cost viability of the Internet of Space, LEO vs. GEO technologies, and competition with 5G and airborne platforms.

Cybernetic Human Via Wearable IOT

Tuesday, January 17th, 2017

UC Berkeley’s Dr. Rabaey sees humans becoming an extension of the wearable IoT via neuron connectivity at recent IEEE IMS event.

by Hamilton Carter and John Blyler, Editors, JB Systems

During the third week in May, more than 3,000 microwave engineers from across the globe descended upon San Francisco for the International Microwave Symposium 2016. To close the week, it seemed only fitting that the final plenary talk, by Jan Rabaey, was titled “The Human Intranet: Where Swarms and Humans Meet.”


Dr. Rabaey, Professor and EE Division Chair at UC Berkeley, took the stage wearing a black T-shirt, a pair of slacks, and a sports coat that shimmered under the bright stage lights. He briefly summarized the topic of his talk, as well as his research goal: turning humans themselves into the next extension of the IoT. Ultimately he hopes to be able to create human-machine interfaces that could ideally not only read individual neurons, but write them as well.

What Makes a Wearable Wearable?

The talk opened with a brief discourse on the inability thus far of wearables to capture the public’s imagination. Dr. Rabaey cited several key problems facing the technology: battery life; how wearable a device actually is; limited functionality; inability to hold user interest; and, perhaps most importantly, something he termed stove-piping. Wearable technologies today are built to communicate only with other devices manufactured by the same company. Dr. Rabaey called for an open wearables platform to enable the industry to expand at an increasing rate.

Departing from wearables to discuss an internet technology that almost everyone does use, Dr. Rabaey focused for a few moments on the smart phone. He emphasized that while the devices are useful, the bandwidth of the communications channel between the device and its human owner is debilitatingly narrow. His proposal for remedying this issue is not to further enhance the smart phone, but instead to enhance the human user!

One way to enhance the bandwidth between device and user is simply to provide more input channels. Rabaey discussed one project, already in the works, that utilizes Braille-like technology to turn skin into a tactile interface, and another project for the visually-impaired that aims to transmit visual images to the brain over aural channels via sonification.
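The sonification idea can be illustrated with a toy model: map each row of a grayscale image to a tone whose pitch encodes vertical position and whose loudness encodes brightness. The frequency range and mapping below are illustrative assumptions, not the actual scheme used in the project Rabaey described.

```python
def sonify_rows(image, f_low=200.0, f_high=4000.0):
    """Toy image sonification.

    image: list of rows, each a list of 0-255 grayscale pixel values.
    Returns one (frequency_hz, amplitude) pair per row: higher rows map
    to higher pitches (log-spaced, like musical scales), and the row's
    mean brightness sets its amplitude on a 0..1 scale.
    """
    tones = []
    n = len(image)
    for i, row in enumerate(image):
        frac = i / max(n - 1, 1)                      # 0 at top, 1 at bottom
        freq = f_low * (f_high / f_low) ** (1.0 - frac)
        amp = sum(row) / (255.0 * len(row))
        tones.append((freq, amp))
    return tones

# A dark top row and a bright bottom row.
tones = sonify_rows([[0, 0, 0, 0], [255, 255, 255, 255]])
```

A real system would additionally schedule the tones over time and account for psychoacoustics, but the essential dimensionality reduction — image to audio — is captured here.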

Human limbs as prosthetics

As another powerful example of what has already been achieved in human extensibility, Dr. Rabaey showed a video produced by the scientific journal “Nature” portraying research that has enabled quadriplegic Ian Burkhart to regain control of the muscles in his arms and hands. The video showed Mr. Burkhart playing Guitar Hero and gripping other objects with his own hands, hands that he lost the use of five years ago. The system that enables his motor control utilizes a sensor to scan the neurons firing in his brain as researchers show him images of a hand closing around various objects. After a period of training and offline data analysis, a bank of computers learns to associate his neural patterns with his desire to close his hand. Finally, sensing the motions he would like to make, the computers fire electro-constricting arm bands that cause the correct muscles in his arm to flex and close his hand around an object. (See video: “The nerve bypass: how to move a paralysed hand“)

Human Enhancements Inside and Out

Rabaey divides human-enhancing tech into two categories: extrospective applications, like those described above, that interface the enhanced human to the outside world, and introspective applications that look inward to provide more information about the enhanced humans themselves. Turning his focus to introspective applications, Rabaey presented several examples of existing bio-sensor technology including printed blood oximetry sensors, wound-healing bandages, and thin-film EEGs. He then described the technology that will enable his vision of the human intranet: neural dust.

The Human Intranet

In 1997, Kris Pister outlined his vision for something called smart dust: one-cubic-millimeter devices that contained sensors, a processor, and networked communications. Pister’s vision was recently realized by the Michigan Micro Mote research team. Rabaey’s proposed neural dust would take this technology a step further, providing smart dust systems that measure a mere 10 to 100 microns on a side. At these dimensions, the devices could travel within the human blood stream. Dr. Rabaey described his proposed human intranet as consisting of a network fabric of neural dust particles that communicate with one or more wearable network hubs. The headband-, bracelet-, or necklace-borne hub devices would handle the heavier communication and processing tasks of the system, while the neural dust would provide real-time data measured on-site from within the body. The key challenge to enabling neural dust at this point lies in finding a communications channel that can deliver the data from inside the human body at real-time speeds while consuming very little power (think picowatts).
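A quick back-of-envelope calculation shows why that communications challenge is so severe. The numbers below are illustrative assumptions, not figures from Rabaey’s talk:

```python
# At picowatt-scale power budgets, the energy a neural-dust mote can
# spend on each transmitted bit is vanishingly small. Assumed numbers:
# a 100 pW power budget and a 1 kbps real-time telemetry stream.

def energy_per_bit_joules(power_watts, bit_rate_bps):
    """Average energy available per transmitted bit."""
    return power_watts / bit_rate_bps

budget = 100e-12   # assumed: 100 picowatts delivered to the mote
rate = 1000        # assumed: 1 kilobit per second of telemetry
e_bit = energy_per_bit_joules(budget, rate)
# On the order of 1e-13 J/bit -- several orders of magnitude below the
# nanojoules-per-bit of conventional low-power radios, which is why
# alternatives such as ultrasonic backscatter are being explored.
```

Even granting generous assumptions, conventional RF links are nowhere near this efficiency, which is why the hub devices carry the heavy lifting in Rabaey’s architecture.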

Caution for the future

In closing, Dr. Rabaey implored the audience that in all human/computer interface devices, security must be considered at the outset, and throughout the development cycle. He pointed out that internal defibrillators with wireless controls can be hacked and therefore could be used to kill their users. While this fortunately has never occurred, he emphasized that since the possibility exists, it is key to encrypt every packet of information related to the human body. While encryption might be power-hungry in software, he stated that encryption algorithms built into ASICs can run at a fraction of the power cost. As for passwords, any number of unique biometric indicators can be used, among them voice and heart rate. The danger with these biometrics, however, is that once they are cloned or imitated, the hacker has access to a treasure trove of information, and possibly control. Perhaps the most promising biometric at present is a scan of neurons via EEG or other technology, so that as the user thinks of a new password, the machine interface can pick it up instantly and incorporate it into new transmissions.
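To make the per-packet protection concrete, here is a minimal sketch of authenticated body-area telemetry packets. Real implants would use a full authenticated cipher (e.g., AES-GCM) baked into an ASIC, as suggested above; this stdlib example shows only the authentication half, using HMAC-SHA256 plus a sequence number to block replay, and its key handling is purely illustrative:

```python
import hashlib
import hmac
import os

# Illustrative shared key; a real device would have this provisioned
# securely at manufacture, not generated at runtime.
KEY = os.urandom(32)

def protect(seq, payload, key=KEY):
    """Frame a telemetry payload: 4-byte sequence number + payload + MAC."""
    header = seq.to_bytes(4, "big")
    tag = hmac.new(key, header + payload, hashlib.sha256).digest()
    return header + payload + tag

def verify(packet, key=KEY):
    """Check the MAC; return (sequence, payload) or raise on tampering."""
    header, payload, tag = packet[:4], packet[4:-32], packet[-32:]
    expected = hmac.new(key, header + payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("tampered packet")
    return int.from_bytes(header, "big"), payload

pkt = protect(1, b"heart-rate:62")
```

The receiver would additionally track the highest sequence number seen, rejecting repeats, so that a recorded packet cannot be replayed later.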

Wrapping up his exciting vision of a bright cybernetic future, Rabaey grounded the audience with a remark made by the Australian performance artist Stelarc in a 2002 interview with Joanna Zylinska:

“The body has always been a prosthetic body. Ever since we developed as humanoids and developed bipedal locomotion, two limbs became manipulators. We have become creatures that construct tools, artifacts, and machines. We’ve always been augmented by our instruments, our technologies. Technology is what constructs our humanity. …, so to consider technology as a kind of alien other that happens upon us at the end of the millennium is rather simplistic.”

The more things change, the more they stay the same.

World of Sensors Highlights Pacific NW Semiconductor Industry

Tuesday, October 25th, 2016

Line-up of semiconductor and embedded IOT experts to talk at SEMI Pacific NW “World of Sensors” event.

The Pacific NW Chapter of SEMI will be holding their Fall 2016 event highlighting the world of sensors. Mentor Graphics will be hosting the event on Friday, October 28, 2016 from 7:30 to 11:30 am.

The event will gather experts in the sensor professions who will share their vision of the future and the impact it may have on the overall semiconductor industry. Here’s a brief list of the speaker line-up:

  • Design for the IoT Edge—Mentor Graphics
  • Image Sensors for IoT—ON Semiconductor
  • Next Growth Engine for Semiconductors—PricewaterhouseCoopers
  • Expanding Capabilities of MEMS Sensors through Advanced Manufacturing—Rogue Valley Microdevices
  • Engineering Biosensors for Cell Biology Research and Drug Discovery—Thermo Fisher Scientific

Register today to meet and network with industry peers from companies including Applied Materials, ASM America, Brewer Science, Cascade Microtech, Delphon Industries, FEI Company, Kanto, Microchip Technology, SSOE Group, VALQUA America and many more.

See the full agenda and Register today.

Has The Time Come for SOC Embedded FPGAs?

Tuesday, August 30th, 2016

Shrinking technology nodes at lower product costs plus the rise of compute-intensive IOT applications help Menta’s e-FPGA outlook.

By John Blyler, IP Systems


The following are edited portions of my video interview at the Design Automation Conference (DAC) 2016 with Menta’s business development director, Yoan Dupret. – JB

John Blyler's interview with Yoan Dupret from Menta

Blyler: Your technology enables designers to include an FPGA almost anywhere on a System-on-Chip (SOC). How is your approach different from others that purport to do the same thing?

Dupret: Our technology enables placement of a Field-Programmable Gate Array (FPGA) onto a silicon ASIC, which is why we call it an embedded FPGA (e-FPGA). How are we different from others? First, let me explain why others have failed in the past while we are succeeding now.

In the past, the time just wasn’t right, and the cost of developing the SOC was still too high. Today, all of that is changing. This has been confirmed by our customers and by GSA studies that explain the importance of having some programmable logic inside an ASIC.

Now, the time is right. We have spent the last few years focusing on research and development (R&D) to strengthen our tools, architectures and to build out competencies. Toolwise, we have a more robust and easier to use GUI and our architecture has gone through several changes from the first generation.

Our approach uses standard cell-based ASICs, so we are not disruptive to the EDA tool flow of our customers. Our hard IP just plugs into the regular chip design flow using all of the classical techniques for CMOS design. Naturally, we support testing with standard scan-chain tests and impressive test coverage. We believe our FPGA performance is better than the competition’s in terms of the number of lookup tables per unit area, operating frequency, and power consumption.

Blyler:  Are you targeting a specific area for these embedded FPGAs, e.g., IOT?

Dupret: IOT is one of the markets we are looking at, but it is not the only one. Why? Because the embedded FPGA fabric can actually go anywhere you have RTL that is intensively parallel (see Figure 1). For example, we are working on cryptographic algorithms inside the e-FPGA for IOT applications. We have traction in filters for digital radios (IIR and FIR filters), which is another IOT application. Further, we have customers in the industrial and automotive audio and image processing space.

Figure 1: SOC architecture with e-FPGA core, which is programmed after the tape-out. (Courtesy of Menta)
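The IIR filters Dupret mentions are a natural fit for programmable fabric because each output depends on a short recurrence over the input stream. A minimal software model of a one-pole IIR low-pass filter, with an illustrative coefficient (Menta’s actual customer designs are not public), looks like this:

```python
def iir_lowpass(samples, alpha=0.25):
    """One-pole IIR low-pass filter: y[n] = y[n-1] + alpha * (x[n] - y[n-1]).

    alpha in (0, 1] sets the cutoff; smaller alpha means heavier smoothing.
    In an e-FPGA this recurrence would typically be implemented in
    fixed-point arithmetic with a multiply-accumulate per sample.
    """
    y, out = 0.0, []
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out

# Step input: the output ramps toward the new level instead of jumping.
smoothed = iir_lowpass([0, 0, 10, 10, 10, 10])
```

Hardening such a filter into RTL mainly means choosing a fixed-point word width and pipelining the multiply, which is exactly the kind of small, parallel datapath an embedded FPGA fabric targets.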

Do you remember when Intel bought Altera, a large FPGA company? This acquisition was, in part, for Intel’s High Performance Computing (HPC) applications. Now they have several big FPGAs from Altera right next to very high-frequency processing cores. But there is another way to achieve this level of HPC. For example, a design could consist of a very big, intensively parallel HPC architecture with a lot of lower-frequency CPUs, and next to each of these CPUs you could have an e-FPGA.

Blyler: At DAC this year, there are a number of companies from France. Is there something going on there? Will it become the next Silicon Valley?

Dupret: Yes, that is true. There are quite a few companies doing EDA. Others are doing IP, some of which are well known. For example, Dolphin is based in Grenoble and is also part of the ecosystem there.

Blyler: That’s great to see. Thank you, Yoan.

To learn more about Menta’s latest technology: “Menta Delivers Industry’s Highest Performing Embedded Programmable Logic IP for SoCs.”

Autonomous Car Patches, SoC Rebirth, IP IoT Platforms and Systems Engineering

Wednesday, December 9th, 2015

Highlights include autonomous car technology, patches, IoT Platforms, SoC hardware revitalization, IP trends and a new edition of a systems engineering classic.

By John Blyler, Editorial Director, IP and IoT Systems

In this month’s travelogue, publisher John Blyler talks with Chipestimate.TV director Sean O’Kane about the recent Renesas DevCon and trends in software security patches, hardware-software platforms, small to medium businesses creating System-on-Chips, intellectual property (IP) in the Internet-of-Things (IoT) and systems engineering management. Please note that what follows is not a verbatim transcription of the interview. Instead, it has been edited and expanded for readability. I hope you find it informative. Cheers — JB


ChipEstimate.TV — John Blyler Travelogue, November 2015

Read the transcribed, complete post on the “IP Insider” blog.



Is Hardware Really That Much Different From Software?

Sunday, November 30th, 2014

When can hardware be considered as software? Are software flows less complex? Why are hardware tools less up-to-date? Experts from ARM, Jama Software and Imec propose the answers.

By John Blyler, Editorial Director

The Internet-of-Things will bring hardware and software designers into closer collaboration than ever before. Understanding the working differences between the two technical domains in terms of design approaches and terminology is the first step in harmonizing the relationship between these occasionally contentious camps. What are these differences in hardware and software design approaches? To answer that question, I talked with technical experts including Harmke De Groot, Program Director Ultra-Low Power Technologies at Imec; Jonathan Austin, Senior Software Engineer at ARM; and Eric Nguyen, Director of Business Intelligence at Jama Software. What follows is a portion of their responses. — JB

Blyler: The Internet-of-Things (IoT) will bring a greater mix of both HW and SW IP issues to systems developers. But hardware and software developers use the same words to mean different things. What do you see as the real differences between hardware and software IP?

De Groot: Hardware IP, in which I include very low-level software, is usually optimized for a particular category of device: devices on small batteries or harvesters, medium-sized batteries as in mobile phones and laptops, or equipment connected to the mains. Software IP, especially the higher layers (middleware and up), can more easily be developed to scale and fit many platforms with less adaptation. In practice, though, scaling software for the IoT also has its limitations; very resource-limited devices require special measures. For example, having a very small sensor node retrieve data directly from the cloud and combine it with local sensor data is a partly unsolved challenge today. For mobiles, laptops and more capable devices there are reasonable (though not yet perfect) solutions for retrieving cloud data and combining it with the device’s sensor information in real time. For sensor devices with tighter resource constraints running on smaller batteries, this is not so easy, especially given heterogeneous networking challenges. Sending data to the cloud (potentially via a gateway device such as a mobile phone, laptop or special router) seems to work reasonably well, but retrieving the right data from the cloud to combine with the small sensor node’s own data for real-time use is a challenge still to be solved.

Austin: Personally, I see three significant differences between hardware and software design and tools:

  1. How hard is it to change something when you get it wrong? It is ‘really hard’ for hardware, and somewhere on a spectrum from ‘really hard’ to ‘completely trivial’ in software.
  2. The tradeoffs around adding abstraction to help deal with complexity. Software is typically able to ‘absorb’ more of this overhead than hardware. Also, in software it is far easier to optimize only the fast path; there usually isn’t much impact from an unoptimised slow path (as there would be in hardware).
  3. There are differences in the tool sets. This was an interesting part of an ongoing debate with my colleagues. We couldn’t quite get to the bottom of why it is so common for hardware projects to stick with really old tools for so long. Some possible ideas included:
  • The (hardware) flow is more complex, so getting something that works well takes longer, requires more investment and results in a higher cost to switch tools.
  • There’s far less competition in the hardware design space so things aren’t pushed as much. This point is compounded by the one above, but the two sort of play together to slow things down.
  • The tools are harder to write and more complex to use. This was contentious, but I think on balance, some of the simplicity and elegance available in software comes because people solve some really tough physical issues in the hardware tools.

So, this sort of thinking led me to an analogy: consider hardware to be very low-level software. We could have a similar debate about JavaScript productivity versus C, and I think the arguments on either side would look quite similar to the software-versus-hardware arguments.

Finally, on tools, I think it might be significant that the tools for building hardware are *software* tools, and the tools for building software are *also* software tools. If a tool for building software (say, a compiler) is broken or poor in some way, the software engineer feels able to fix it. If a hardware tool is broken in some way, the hardware engineer is less likely to feel it is easy to just switch tasks quickly and fix it. That is to say, software tools are built for software engineers by software engineers, while hardware tools are built by software engineers to be sold to companies, to be given to hardware engineers!

Nguyen: One of the historical differences relates to the way integrated system companies organized their teams. As marketing requirements came in, the systems engineers in the hardware group would lay out the overall design. Most of the required features and functionality were very electrical and mechanical in nature, where software was limited to drivers and firmware for embedded electronics.

Today, software plays a much bigger role than hardware, and many large companies have difficulty incorporating this new mindset. Software teams move at a much faster pace than hardware teams, yet have a hard time integrating with the tool sets, processes and methodologies of the hardware teams. From a management perspective, the “hardware first” paradigm has been flipped. Now it is more of a software-driven design process where the main question is how much of the initial requirements can be accomplished in software. The hardware is then seen as the enabler for the overall (end-user) experience. For example, consider Google’s Nest Thermostat. It was designed as a software experience, with the hardware brought in later.

Blyler: Thank you.

Soft (Hardware) and Software IP Rule the IoT

Tuesday, September 2nd, 2014

By John Blyler, JB Systems

Both soft (hardware) and software IP should dominate in the IoT market. But for which segments will that growth occur? See what the experts from IPExtreme, Atmel, GarySmithEDA, Semico Research and Jama Software are thinking.

The Internet-of-Things will significantly increase the diversity and amount of semiconductor IP. But what will be the specific trends among the hardware and software IP communities? Experts from both domains shared their perceptions, including Warren Savage, President and CEO of IPExtreme; Patrick Sullivan, VP of Marketing, MCU Business Unit, for Atmel; Gary Smith, Founder and Chief Analyst for Gary Smith EDA; Richard Wawrzyniak, Senior Market Analyst for ASIC & SoC at Semico Research; and Eric Nguyen, Director of Business Intelligence at Jama Software. What follows is a portion of their responses. — JB

Blyler: Do you expect an accelerated growth of both hardware and software IP (maybe subsystem IP) due to the growth of the IoT? What are the growth trends for electronic hardware and software IP?

Savage: I don’t think there is anything special about the Internet-of-Things (IoT) from an intellectual property (IP) perspective. The prospect of IoT simply means there is going to be a lot more silicon in the world as we start attaching networking to things that previously were not connected. As a natural evolution of the semiconductor market, hardware and software IP is going to keep growing and will outpace everything else for the foreseeable future. Subsystems are a natural artifact of that maturing, as well as of customers wanting to do more and more with fewer people, outsourcing whole functions of chips to be delivered by an IP supplier who is likely an expert in that subject matter.

Sullivan: The largest growth will be in software IP for hardware IP that already exists, in order to connect devices to the Internet. Developers who are not familiar with wireless applications will find themselves making connected devices, and it will be crucial for suppliers to have context-aware stacks and other IP tailored for the different IoT usage models. That is, just having a ZigBee stack is not sufficient. You need a version for healthcare, a version for lighting, and so on.

Security is also going to be an important factor, both for securing communication between IoT devices and the cloud (SSL/TLS technologies) and for authenticating that firmware images running on connected devices have not been tampered with. Addressing these needs may require additional software development for IoT devices, and potentially specialized hardware components as well.
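The firmware-authentication check Sullivan describes can be sketched in a few lines. Production systems would use asymmetric signatures (e.g., ECDSA) so devices hold no signing secret, and often a hardware root of trust; the shared-key HMAC below, with an invented vendor key, just illustrates the boot-time check:

```python
import hashlib
import hmac

# Illustrative only: a real build server would keep a private signing key,
# and the device would hold only the corresponding public key.
VENDOR_KEY = b"example-build-server-key"

def sign_image(image: bytes, key: bytes = VENDOR_KEY) -> bytes:
    """Compute the tag the build server attaches to a firmware image."""
    return hmac.new(key, image, hashlib.sha256).digest()

def boot_allowed(image: bytes, tag: bytes, key: bytes = VENDOR_KEY) -> bool:
    """Bootloader check: accept the image only if its tag verifies."""
    expected = hmac.new(key, image, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

fw = b"\x7fELF...firmware-v1.2"   # stand-in for a real firmware binary
tag = sign_image(fw)
```

Any single-bit change to the image (or the tag) makes `boot_allowed` return False, which is the property that defeats tampered firmware.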

On the hardware side, the main focus will continue to be power consumption reduction as well as range and quality improvements.

Smith: Yes, growth in hardware and software IP will increase with the IoT expansion. However, the IoT market comprises multiple segments. To get accurate growth figures you would need to explore them all (see Table).

Table: Markets for the Internet-of-Things. (Courtesy of

Wawrzyniak: I do expect some acceleration of revenues derived from IP going into IoT applications. At this point it is hard to determine just how much acceleration there will be, since we are just at the very beginning of this trend. It also will depend upon which types of IP are chosen as the ones most favored by SoC designers. For example, if designers select one of the wireless IP types as the preeminent solution, this might be more expensive (generate more IP revenue over time) than, say, ZigBee.

Given the sheer volume of IoT applications and silicon being projected, it is possible that once a specific process geometry is decided on as the optimum type to use, the IP characterized for that geometry might actually be less expensive than the same IP at another geometry. Volume will drive cost in this case. All these factors will go into figuring out how much additional IP revenue will be generated. I would say a safe estimate today would be on the order of 10%.

I also think it’s likely that IP subsystems will be created for IoT applications. Again, this depends on how complex the silicon solution will need to be. If we are talking lightbulbs, then it is hard to imagine that an IP subsystem will be needed. On the other hand, a relatively complex chip might require an IP subsystem, e.g., a sensor fusion hub. Sensors will certainly be everywhere in the IoT, so why not create a subsystem that deals with this part of the solution and ties it all together for the designer?

Hard IP will probably be more expensive than soft IP, and I would say that soft IP will be used more in these types of SoCs. I would estimate the split could be as high as 70–30 in favor of soft IP.

Nguyen: Absolutely. The growth of the IoT will not only open new markets, such as wearable technologies and home automation, but will also disrupt existing ones as software-based services are delivered through connected devices. Technology products are evolving from competitive differentiation based on electro-mechanical IP to customer-experience differentiation powered by software applications running on optimized hardware.

The trends in hardware and software IP are accelerating the rate of innovation for customer-facing products, which in turn has a direct impact throughout the supply chain. Software producers must manage the interdependencies not only across their product lines but also across the various technologies they’ll be deployed on (i.e., iOS, Android, Web, integration into third-party technology) or various subsystems. The connected aspect of these technologies allows vendors to continually update their offerings and therefore evolve the customer experience throughout the life of the physical technology.

The performance demands of continuously evolving, software-heavy products are also driving accelerated innovation throughout the supply chain, specifically in hardware components such as Systems on Chip, Systems in a Package, sensor technology, and battery/power management.

Final product producers are also accelerating release cycles and therefore driving the need to integrate sub-components more easily. This is fueling demand for Systems in a Package (SiP) technologies, which incorporate the chips, drivers, and software within a physical sub-component package that can easily be integrated into the overall system. Semiconductor companies must now coordinate the growing complexity of silicon, software, and documentation development while accelerating their ability to incorporate market feedback into product roadmaps, R&D, and ultimately manufacturing and delivery to customers, all the while ensuring they can meet per-unit cost targets.

Blyler: Thank you.

ARM TechCon, Semico Research IP Impact, and the MEMS Congress

Tuesday, November 12th, 2013

Two weeks ago, I attended the ARM Developer’s Conference and TechCon. Last week, it was the Semico Research IP Impact Forum. Later that week, I went to the MEMS Congress. This week is the Dassault Systemes 3DExperience User Group event. All of these events have three things in common:

First, they actively engage with their supply chains. Second, they are focused on being part of the Fourth Industrial Revolution [a.k.a. the Internet of Things (IoT)].

Lastly and perhaps most tellingly, companies and organizations at all of these events showed a growing awareness of the experiences created by their endeavors in both the supply chain and market.

Karen Lightman, Director of the MEMS Industry Group, speaks at the MEMS Congress.


