Chip Design Magazine

Posts Tagged ‘Semico Research’

New Event Focuses on Semiconductor IP Reuse

Monday, November 28th, 2016

Unique exhibition and trade show levels the playing field for customers and vendors as semiconductor intellectual property (IP) reuse grows beyond EDA tools.

By John Blyler, Editorial Director, JB Systems

The sale of semiconductor intellectual property (IP) has outpaced that of Electronic Design Automation (EDA) chip design tools for the first time, according to Q3 2015 sales figures from the Electronic System Design Alliance’s MSS report. Despite this growth, there has been no industry event dedicated solely to semiconductor IP – until now.

The IP community in Silicon Valley will witness an inaugural event this week, one that will enable IP practitioners to exchange ideas and network while providing IP buyers with access to a diverse group of suppliers. REUSE 2016 will debut on December 1, 2016 at the Computer History Museum in Mountain View, CA.

I talked with one of the main visionaries of the event, Warren Savage, General Manager of IP at Silvaco, Inc. Most professionals in the IP industry will remember Savage as the former CEO of IPextreme, plus the organizer of the Constellations group and the “Stars of IP” social event held annually at the Design Automation Conference (DAC).

IPextreme’s Constellations group is a collection of independent semiconductor IP companies and industry partners that collaborate at both the marketing and engineering levels for mutual benefit. The idea was for IP companies to pool resources and energy to do more than they could do on their own.

This idea has been extended to the REUSE event, which Savage has humorously described as the steroid-enhanced version of the former Constellations-sponsored “Silicon Valley IP User Group” event.

“REUSE 2016 includes the entire world of semiconductor IP,” explains Savage. “This is a much bigger event that includes not just the Constellation companies but everybody in the IP ecosystem. Our goal is to reach about 350 attendees for this inaugural event.”

The primary goal for REUSE 2016 is to create a yearly venue that brings both IP vendors and customers together. Customers will be able to meet with vendors not normally seen at the larger but less IP-focused conferences. To best serve the IP community, the founding members decided that the event’s venue should be a combination of exhibition and trade show, where exhibitors present technical content during the trade show portion of the event.

Perhaps the most distinguishing aspect of REUSE is that the exhibition hall will be open only to companies that license semiconductor design and verification IP or related embedded software.

“Those were the guiding rules about the exhibition,” noted Savage. “EDA (chip design) companies, design services or somebody in an IP support role would be allowed to sponsor activities like lunch. But we didn’t want them taking attention away from the main focus of the event, namely, semiconductor IP.”

The other unique characteristic of this event is its sensitivity to the often unfair advantages that bigger companies have over smaller ones in the IP space. Larger companies can use their financial advantage to appear more prominent and even superior to smaller but well-established firms. In an effort to level the playing field, REUSE has limited all booth spaces in the exhibition hall to a single table. Both large and small companies will have the same size area to highlight their technology.

This year’s event is drawing from the global semiconductor IP community with participating companies from the US, Europe, Asia and even Serbia.

The breadth of IP-related topics covers system-on-chip (SoC) IP design and verification for both hardware and software developers. Jim Feldhan, President and CEO of Semico Research, will provide the event’s inaugural keynote address on trends driving IP reuse. In addition to the exhibition hall with over 30 exhibitors, there will be three tracks of presentations held throughout the day at REUSE 2016 on December 1, 2016 at the Computer History Museum in Mountain View, CA. See you there!

Originally posted on “IP Insider”

Soft (Hardware) and Software IP Rule the IoT

Tuesday, September 2nd, 2014

By John Blyler, JB Systems

Both soft (hardware) and software IP should dominate in the IoT market. But in which segments will that growth occur? See what the experts from IPextreme, Atmel, Gary Smith EDA, Semico Research and Jama Software are thinking.

The Internet-of-Things will significantly increase the diversity and amount of semiconductor IP. But what will be the specific trends among the hardware and software IP communities? Experts from both domains shared their perceptions, including Warren Savage, President and CEO of IPextreme; Patrick Sullivan, VP of Marketing, MCU Business Unit, Atmel; Gary Smith, Founder and Chief Analyst of Gary Smith EDA; Richard Wawrzyniak, Senior Market Analyst for ASIC & SoC at Semico Research; and Eric Nguyen, Director of Business Intelligence at Jama Software. What follows is a portion of their responses. — JB

Blyler: Do you expect an accelerated growth of both hardware and software IP (maybe subsystem IP) due to the growth of the IoT? What are the growth trends for electronic hardware and software IP?

Savage: I don’t think there is anything special about the Internet-of-Things (IoT) from an intellectual property (IP) perspective. The prospect of IoT simply means there is going to be a lot more silicon in the world as we start attaching networking to things that previously were not connected. As a natural evolution of the semiconductor market, hardware and software IP is going to keep growing and will outpace everything else for the foreseeable future. Subsystems are a natural artifact of that maturing, as customers want to do more and more with fewer people, outsourcing whole functions of chips to an IP supplier who is likely an expert in that subject matter.

Sullivan: The largest growth will be in software IP for hardware IP that already exists, in order to connect devices to the Internet. Developers who are not familiar with wireless applications will find themselves making connected devices, and it will be crucial for suppliers to have context-aware stacks and other IP tailored to the different IoT usage models. For example, just having a ZigBee stack is not sufficient: you need a version for healthcare, a version for lighting, and so on.

Security is also going to be an important factor for both securing communication between IoT devices and the cloud (SSL/TLS technologies), and also to authenticate that firmware images running on connected devices have not been tampered with. Addressing these needs may require additional software development of IoT devices, and potentially specialized hardware components as well.
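The firmware-authentication need Sullivan raises can be illustrated with a minimal sketch. This is not code from any vendor mentioned here; it is a hypothetical Python example showing the basic flow of checking that a firmware image matches a trusted digest. A real IoT device would verify a cryptographic signature (e.g., RSA or ECDSA over the digest, with the public key in secure storage) rather than compare a bare hash, but the shape of the check is similar.

```python
import hashlib
import hmac

def verify_firmware(image: bytes, expected_digest: str) -> bool:
    """Return True if the firmware image matches the trusted SHA-256 digest."""
    actual = hashlib.sha256(image).hexdigest()
    # compare_digest performs a constant-time comparison, avoiding
    # timing side channels when checking authentication values.
    return hmac.compare_digest(actual, expected_digest)

# Hypothetical image bytes; the trusted digest would ship via a secure channel.
firmware = b"\x7fELF...example image bytes..."
trusted = hashlib.sha256(firmware).hexdigest()

print(verify_firmware(firmware, trusted))            # unmodified image passes
print(verify_firmware(firmware + b"\x00", trusted))  # tampered image fails
```

The same pattern extends to secure boot: each stage verifies the next stage’s image before transferring control to it.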

On the hardware side, the main focus will continue to be power consumption reduction as well as range and quality improvements.

Smith: Yes, growth in hardware and software IP will increase with the IoT expansion. However, the IoT market comprises multiple segments. To get accurate growth figures, you would need to explore them all (see Table).

Table: Markets for the Internet-of-Things.

Wawrzyniak: I do expect some acceleration of revenues derived from IP going into IoT applications. At this point it is hard to determine just how much acceleration there will be, since we are just at the very beginning of this trend. It will also depend upon which types of IP SoC designers favor. For example, if designers select one of the wireless IP types as the preeminent solution, then this might be more expensive (generate more IP revenue over time) than, say, ZigBee.

Given the sheer volume of IoT applications and silicon being projected, it is possible that once a specific process geometry is decided on as the optimum type to use, the IP characterized for that geometry might actually be less expensive than the same IP at another geometry. Volume will drive cost in this case. All these factors will go into figuring out how much additional IP revenue will be generated. I would say a safe estimate today would be on the order of 10%.

I also think it’s likely that IP subsystems will be created for IoT applications. Again, this depends on how complex the silicon solution will need to be. If we are talking lightbulbs, then it is hard to imagine that an IP subsystem will be needed. On the other hand, a relatively complex chip might require an IP subsystem, e.g., a sensor-fusion hub subsystem. Sensors will certainly be everywhere in the IoT, so why not create a subsystem that deals with this part of the solution and ties it all together for the designer?

Hard IP will probably be more expensive than Soft IP, so I would say that Soft IP will be used more in these types of SoCs. I would estimate it could be as high as a 70–30 split in favor of Soft IP.

Nguyen: Absolutely. The growth of IoT will not only open new markets such as wearable technologies and home automation, but will also disrupt existing markets as software-based services are delivered through connected devices. Technology products are evolving from competitive differentiation based on electro-mechanical IP to customer-experience differentiation powered by software applications running on optimized hardware.

The trends in hardware and software IP are accelerating the rate of innovation for customer-facing products, which in turn will have a direct impact throughout the supply chain. Software producers must manage the interdependencies not only across their product lines but also across the various technologies they’ll be deployed on (e.g., iOS, Android, the Web, or integration into third-party technology) or various subsystems. The connected aspect of these technologies allows vendors to continually update their offerings and therefore evolve the customer experience throughout the life of the physical technology.

The performance demands of continuously evolving, software-heavy products are also driving accelerated innovation throughout the supply chain, specifically in hardware components such as systems-on-chip, systems-in-package, sensor technology, and battery/power management.

Final-product producers are also accelerating release cycles and therefore need to integrate sub-components more easily. This is driving demand for system-in-package (SiP) technologies, which incorporate the chips, drivers, and software within a physical sub-component package that can be easily integrated into the overall system. Semiconductor companies must now coordinate the growing complexity of silicon, software, and documentation development while accelerating their ability to incorporate market feedback into product roadmaps, R&D, and ultimately manufacturing and delivery to customers, all the while ensuring they can meet per-unit cost targets.

Blyler: Thank you.

Augmented Tools Design Reality

Friday, June 28th, 2013

Augmented reality appears in consumer applications and in virtual-prototyping designs for the early validation of everything from tablets to refrigerators.

Augmented reality (AR) continues to make the news. At the recent Electronic Entertainment Expo (E3) video-game show, Sony’s PlayStation division showcased its latest AR and motion-control camera technology in the PlayStation 4.

But augmented reality is more than just a game. For many consumers, their first interaction with AR came through Google Maps and the direction arrows superimposed over the actual road. Other implementations include IBM, which has created technology that marries augmented reality with comparison-shopping. Imec, the Flemish government’s R&D nanotech giant, has created augmented-reality contact lenses for medical and cosmetic applications. Metaio’s Junaio application turns iOS- and Android-based cell phones into AR-enabled devices (see Figure 1).

Figure 1: The latest augmented-reality browser uses ordinary objects as markers to get virtual information. (Courtesy of Metaio)

These varied implementations for augmented-reality technology make it difficult to calculate the total market value for the semiconductor industry. One reason is that with just one software development kit (SDK), AR can be implemented on a number of devices.  It’s a market that is decentralized, open source, and hardware independent.

In forecasting the augmented-reality market, analysts at Semico Research considered the varied types of products that implement the technology while weighing the popularity of, and other factors driving interest in, these products. The result is that the AR market is expected to reach almost $620 billion by 2016 (see Figure 2). This analysis is part of a comprehensive and informative report published in October 2012.

Figure 2: Shown are forecasts for the total augmented-reality hardware market. (Courtesy of Semico Research)

The need for specific hardware and software tools that create AR experiences is also growing. One such tool suite from Dassault Systemes was used by designers to create a consumer application that transforms a comic book into a 3D augmented-reality experience, while another application accurately reconstructs the creation of the city of Paris.

While these AR implementations are educational, the semiconductor and embedded electronic-design community may scoff at them as mere entertainment. But this same technology is finding use in early design-validation “virtual prototypes” of everything from cell phones and tablets to refrigerators. 

Augmented Versus Virtual Reality

To understand the benefit that virtual-prototyping platforms can offer to designers, one must first appreciate the relationship between augmented and virtual reality.

“In a virtual-reality environment, you see virtual objects,” explains Vincent Merlino, High-Tech Industry Solutions Leader at Dassault Systemes. “But with augmented reality, you see augmentations in the real world.”

“Virtual reality defines a completely immersed digital environment in which reality is replicated for the user,” elaborates Trak Lord, marketing and media relations at Metaio. “Augmented reality instead utilizes reality itself as an anchor for virtual content that can be experienced as part of the real world.” Whereas virtual reality inserts the user into an entirely new and simulated reality, augmented reality instead inserts virtual elements into the user’s reality. 

The electronic-design community is using augmented reality to both design end-user applications and improve the conceptual design of future products. One example of the former comes from the automotive market, where Audi uses image recognition and Metaio’s AR software to power its Interactive Manual application. Mobile users point their smartphone devices at different surfaces and objects in the car to get instant feedback on the identity, function, and quick-start instructions of that specific feature. For example, pointing a smartphone toward windshield wipers yields “Windshield Wiper” identification, with the ability to swipe through to a brief, animated tutorial on how to use the wipers.

Designers are also using augmented reality to improve the conceptual design and early validation of future products. Here, product teams use the technology to create a virtual (as opposed to physical) prototype to gain critical end-user feedback before the design is realized in hardware or software. Consider the design of a new mobile phone or tablet (see Figure 3). Augmented reality could be used to validate the usefulness of a new form factor – size, shape, and even “feel” – of the future product. This would result in significant cost and time savings over traditional in-person focus-group meetings.

Figure 3: The editor takes a picture of a virtual prototype of a future tablet at a recent trade show.

The cost and time benefits of AR-based virtual-reality prototypes are dependent upon the complexity and size of the end-user product or system. “The bigger (or more complex) the product, the bigger the savings over a physical prototype,” remarks Merlino. “If you want a physical prototype of a large refrigerator with one color and configuration, you may spend over a million yen (or about $10k). With a virtual-reality prototype, you only spend money on the software with the advantage of many configurations and global distribution.” Augmented reality allows you to compare the virtual refrigerator alongside a real (competitive) offering (see Figure 4).

Figure 4: By comparing a competitor’s product with one created using augmented reality, hi-tech companies can validate design issues and user experiences before committing time and money on the actual product. (Courtesy of Dassault Systemes)

Almost any product could be validated with augmented reality, including the electrical and mechanical subsystems on both chips and circuit boards. Naturally, cost will ultimately determine the use of augmented reality over an actual physical prototype. Once a designer decides to use augmented reality in the virtual prototyping of a design, however, what is needed in terms of hardware and software?

Designing For Augmented Reality

To accomplish the insertion of virtual elements into a user’s reality, camera technology is used to identify and recognize real-world images and objects. Digital and virtual content is then added to them in real time.
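The compositing step described above – adding virtual content to a recognized region of a live frame – reduces, at its simplest, to alpha blending. The following is an illustrative Python/NumPy sketch, not production AR code (real pipelines run tracking and rendering on the GPU): a per-pixel mask marks where a virtual object was registered, and the virtual pixels are blended over the camera pixels there.

```python
import numpy as np

def overlay(frame: np.ndarray, virtual: np.ndarray, alpha: np.ndarray) -> np.ndarray:
    """Blend virtual content onto a camera frame.

    frame, virtual: H x W x 3 uint8 images; alpha: H x W float mask in [0, 1]
    (1 where the virtual object is opaque, 0 where the real scene shows through).
    """
    a = alpha[..., None]  # broadcast the mask across the three color channels
    blended = a * virtual.astype(np.float32) + (1.0 - a) * frame.astype(np.float32)
    return blended.astype(np.uint8)

# Toy example: a mid-gray "camera frame" and a red virtual square
frame = np.full((4, 4, 3), 128, dtype=np.uint8)
virtual = np.zeros_like(frame)
virtual[..., 0] = 255
alpha = np.zeros((4, 4))
alpha[1:3, 1:3] = 1.0  # region where a marker/object was recognized

out = overlay(frame, virtual, alpha)  # red square composited over the frame
```

Fractional alpha values in the mask give soft edges, which is how rendered 3D content is made to sit convincingly in the captured scene.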

Designers of end-user applications need to consider both the software and hardware aspects of their AR implementations. Most vendors provide augmented-reality software development kits (SDKs) that work on the majority of iOS and Android platforms. “Beyond the basic needs of front-facing camera and reasonable performance, many of the newer platforms offer new compute resources, such as programmable image processors that promise improved computer visioning capabilities,” says Lord. “The ongoing improvement in graphic-processing-unit (GPU) and general-purpose-GPU (GP-GPU) processing also provides more opportunity to improve augmented-reality user experiences.”

In addition to improved performance, hardware must provide more power-efficient depth-of-field imaging sensors and greater ease of programming for synchronized, multi-sensor data streams.

High-performance, low-power GPUs and associated computing engines are a critical part of the design of AR systems. Companies like Metaio offer dedicated hardware image processors for accelerating augmented-reality experiences. Dubbed the “AREngine,” the acceleration chip works by taking on much of the processing required to run AR experiences from the general CPU. The company claims a drastic reduction in battery power consumption and an increase in initialization speeds.

Many designers use the compute power of the existing mobile-device GPU to enhance performance and minimize power for their AR applications. This requires careful integration of the GPU, video, and camera-vision processing to ensure the best performance.

What is the difference between image and graphics processors? Image processing deals with the manipulation of images acquired through some device, like a camera. The emphasis is on analysis and enhancement of the image. Today’s popular computer vision systems require the use of image analysis.

Conversely, graphic processing deals with synthesizing images based upon geometry, lighting, materials, and textures.
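The analysis-versus-synthesis distinction can be made concrete with a small illustrative sketch (hypothetical code, chosen only to contrast the two operations): an image-processing step extracts edges from a captured image, while a graphics step synthesizes an image from a description with no camera input at all.

```python
import numpy as np

def sobel_x(img: np.ndarray) -> np.ndarray:
    """Image processing (analysis): horizontal-gradient magnitude of a
    captured grayscale image via a 3x3 Sobel kernel (valid region only)."""
    k = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float32)
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.float32)
    for y in range(h - 2):
        for x in range(w - 2):
            out[y, x] = np.sum(k * img[y:y + 3, x:x + 3])
    return np.abs(out)

def render_gradient(h: int, w: int) -> np.ndarray:
    """Graphics processing (synthesis): create a left-to-right luminance
    ramp purely from a description - no acquired image involved."""
    return np.tile(np.linspace(0, 255, w, dtype=np.float32), (h, 1))

captured = np.zeros((6, 6), dtype=np.float32)
captured[:, 3:] = 255.0            # a hard vertical edge, as a camera might see
edges = sobel_x(captured)          # analysis recovers where the edge lies
synthetic = render_gradient(6, 6)  # synthesis produces an image from geometry
```

An AR pipeline needs both: vision-style analysis to find where content belongs, and graphics-style synthesis to render the content that is blended in.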

“Augmented-reality applications usually blend live video with computer images, where 3D graphics rendering is performed using OpenGL software APIs,” explains David Harold, Senior Director of Marketing Communications at Imagination Technologies. “Powerful cores can provide high-quality 3D graphics rendering, which can then be blended into the real-world camera capture. Also, by implementing features like camera image texture streaming, GPUs are capable of processing camera images as textures to enable 3D and reality integration with minimal CPU loading.” Efficient integration of camera images into the 3D rendering flow is essential for good performance and efficiency in augmented-reality designs (see Figure 5).

Figure 5: Harold shows a tablet connected to a screen to demonstrate the real-time computer power of GPUs.


What Does The Future Hold For Designers?

One tantalizing future would be the complete simulation of both hardware and software in the virtual prototype. For example, while looking at a virtual prototype of a tablet, the designer or end user could be running an application on the tablet (like a game) or using the camera.

Another near-horizon goal is the greater incorporation of social-media input into the design process. Using AR-based prototypes, a company could include stakeholders or a larger social network to provide valuable feedback as part of a crowd-sourcing team. In many cases, this would be much easier than with a physical prototype alone.

Finally, companies are in the early stages of developing image-processing techniques for gesture-recognizing augmented reality. In one implementation, an application would superimpose imagery over the screen of a smartphone or tablet, allowing users to interact with it via hand gestures. Another implementation leverages Wi-Fi signals to detect specific hand gestures without the need for on-body sensors or cameras.

Augmented reality has quickly moved beyond gaming to the wider consumer market. The technology has created a new set of hardware and software applications that allow designers to create high-performance and low-power augmented-reality experiences. That’s the reality.

“Reality… What a Concept!” – Robin Williams


Semicon West 2012 Videos

Tuesday, July 24th, 2012

Show floor interviews with leaders from Semico Research, MEMS Industry Group, ASML, Soitec, Applied Materials, and IMEC.

Semicon West 2012 – Part 1
- Where cerulean skies shine on uncertain market trends and a MEMS director dreams of terminating the interviewer.
Interviews with:

  • Jim Feldhan, President and CEO of Semico Research
  • Karen Lightman, Managing Director for MEMS Industry Group (MIG)

Semicon West 2012 – Part 2
- Where materials matter and a French CEO talks about scaling.
Interviews with:

  • Lucas van Grinsven, Head of Communications, ASML
  • André-Jacques Auberton-Hervé, Co-founder and CEO of Soitec

Semicon West 2012 – Part 3
- Where 3D models float in the air, tall men talk about 450nm and the Flemish government, and Sean and John seek refreshment.
Interviews with:

  • Sree R. Kesapragada, PhD, Global Product Manager at Applied Materials
  • Ludo Deferm, Executive Vice President at IMEC