
New Event Focuses on Semiconductor IP Reuse

November 28th, 2016

Unique exhibition and trade show levels the playing field for customers and vendors as semiconductor intellectual property (IP) reuse grows beyond EDA tools.

By John Blyler, Editorial Director, JB Systems

The sale of semiconductor intellectual property (IP) has outpaced that of Electronic Design Automation (EDA) chip design tools for the first time, according to the Electronic System Design Alliance’s MSS report on Q3 2015 sales. Despite this growth, there has been no industry event dedicated solely to semiconductor IP – until now.

The IP community in Silicon Valley will witness an inaugural event this week, one that will enable IP practitioners to exchange ideas and network while providing IP buyers with access to a diverse group of suppliers. REUSE 2016 will debut on December 1, 2016 at the Computer History Museum in Mountain View, CA.

I talked with one of the main visionaries of the event, Warren Savage, General Manager of IP at Silvaco, Inc. Most professionals in the IP industry will remember Savage as the former CEO of IPextreme, as well as the organizer of the Constellations group and the “Stars of IP” social event held annually at the Design Automation Conference (DAC).

IPextreme’s Constellations group is a collection of independent semiconductor IP companies and industry partners that collaborate at both the marketing and engineering levels for mutual benefit. The idea was for IP companies to pool resources and energy to do more than they could do on their own.

This idea has been extended to the REUSE event, which Savage has humorously described as the steroid-enhanced version of the former Constellations sponsored “Silicon Valley IP User Group” event.

“REUSE 2016 includes the entire world of semiconductor IP,” explains Savage. “This is a much bigger event that includes not just the Constellation companies but everybody in the IP ecosystem. Our goal is to reach about 350 attendees for this inaugural event.”

The primary goal for REUSE 2016 is to create a yearly venue that brings both IP vendors and customers together. Customers will be able to meet with vendors not normally seen at the larger but less IP-focused conferences. To best serve the IP community, the founding members decided that the event should combine an exhibition with a conference-style program, in which exhibitors present technical content during the presentation portion of the event.

Perhaps the most distinguishing aspect of REUSE is that the exhibition hall will be open only to companies that license semiconductor design and verification IP or related embedded software.

“Those were the guiding rules about the exhibition,” noted Savage. “EDA (chip design) companies, design services or somebody in an IP support role would be allowed to sponsor activities like lunch. But we didn’t want them taking attention away from the main focus of the event, namely, semiconductor IP.”

The other unique characteristic of this event is its sensitivity to the often unfair advantages that bigger companies have over smaller ones in the IP space. Larger companies can use their financial advantage to appear more prominent and even superior to smaller but well-established firms. In an effort to level the playing field, REUSE has limited all booth spaces in the exhibition hall to a table. Both large and small companies will have the same size area to highlight their technology.

This year’s event is drawing from the global semiconductor IP community with participating companies from the US, Europe, Asia and even Serbia.

The breadth of IP-related topics covers system-on-chip (SOC) IP design and verification for both hardware and software developers. Jim Feldham, President and CEO of Semico Research, will provide the event’s inaugural keynote address on trends driving IP reuse. In addition to the exhibition hall with over 30 exhibitors, there will be three tracks of presentations held throughout the day at REUSE 2016 on December 1, 2016 at the Computer History Museum in Mountain View, CA. See you there!

Originally posted on Chipestimate.com “IP Insider”

World of Sensors Highlights Pacific NW Semiconductor Industry

October 25th, 2016

Line-up of semiconductor and embedded IoT experts to speak at the SEMI Pacific NW “World of Sensors” event.

The Pacific NW Chapter of SEMI will hold its Fall 2016 event highlighting the world of sensors. Mentor Graphics will host the event on Friday, October 28, 2016 from 7:30 to 11:30 am.

The event will gather experts in the sensor professions who will share their vision of the future and the impact it may have on the overall semiconductor industry. Here’s a brief list of the speaker line-up:

  • Design for the IoT Edge—Mentor Graphics
  • Image Sensors for IoT—ON Semiconductor
  • Next Growth Engine for Semiconductors—PricewaterhouseCoopers
  • Expanding Capabilities of MEMS Sensors through Advanced Manufacturing—Rogue Valley Microdevices
  • Engineering Biosensors for Cell Biology Research and Drug Discovery—Thermo Fisher Scientific

Register today to meet and network with industry peers from companies such as Applied Materials, ASM America, Brewer Science, Cascade Microtech, Delphon Industries, FEI Company, Kanto, Microchip Technology, SSOE Group, VALQUA America and many more.

See the full agenda and Register today.

IEEE Governance in Division

September 20th, 2016

Will a proposed amendment modernize the governance of one of the oldest technical societies or transfer power to a small group of officials?

By John Blyler, Editorial Director

As a member of the IEEE, I recently received an email concerning a proposed change to the society’s constitution that could fundamentally impact the governance of the entire organization. Since that initial email, there have been several messages from various societies within the IEEE that either oppose or support this amendment.

To gain a broader perspective on the issue, I asked the current IEEE President-Elect and well-known EDA colleague, Karen Bartleson, for her viewpoint concerning the opposition’s main points of contention. Ms. Bartleson supports the proposed changes. What follows is a portion of her response. – JB

Opposition: The amendment could enable:

  • a small group to take control of IEEE

Support: Not at all. There is no conspiracy going on – the Boards of Directors from 2015 and 2016 are not sinister. They want the best for the IEEE.

  • transferring of power from over 300,000 members to a small group of insiders,

Support: Not at all. Currently, the Board of Directors is not elected by the full membership of IEEE. Allowing all members to elect the Board is fairer than the current process.

  • removing regional representation from the Board of Directors thereby making it possible that, e.g., no Asian or European representatives will be on the Board of Directors – thus breaking the link between our sections and the decisions the Board will make,

Support: No. The slate for the Board of Directors will better ensure geographic diversity. Today, Region 10 – which is 25% of membership – gets only 1 seat on the Board of Directors. Today, there are 7 seats reserved exclusively for the USA.

  • removing technical activities representation from the Board of Directors thereby diminishing the voices of technology in steering IEEE’s future,

Support: No. There will be plenty of opportunity for technical activities to be represented on the Board of Directors.

  • moving vital parts of the constitution to the bylaws – which could be subject to change by a small group, on short notice.

Support: This is not a new situation. Today, the bylaws can be changed by the Board on short notice. For instance, the Board could decide to eliminate every Region except one. But the Board is not irresponsible and wouldn’t do this without buy-in from the broader IEEE.

The society has created a public page concerning this proposed amendment.

It is the responsibility of all IEEE members to develop an informed opinion and vote by October 3, 2016, in the annual election.


Has The Time Come for SOC Embedded FPGAs?

August 30th, 2016

Shrinking technology nodes at lower product costs, plus the rise of compute-intensive IoT applications, help Menta’s e-FPGA outlook.

By John Blyler, IP Systems

 

The following are edited portions of my video interview at the Design Automation Conference (DAC) 2016 with Menta’s business development director, Yoan Dupret. – JB

John Blyler's interview with Yoan Dupret from Menta

Blyler: Your technology enables designers to include an FPGA almost anywhere on a System-on-Chip (SOC). How is your approach different from others that purport to do the same thing?

Dupret: Our technology enables placement of a Field Programmable Gate Array (FPGA) onto a silicon ASIC, which is why we call it an embedded FPGA (e-FPGA). How are we different from others? First, let me explain why others have failed in the past while we are succeeding now.

In the past, the time just wasn’t right, and the cost of developing the SOC was still too high. Today, those challenges are changing. This has been confirmed by our customers and by GSA studies that explain the importance of having some programmable logic inside an ASIC.

Now, the time is right. We have spent the last few years focusing on research and development (R&D) to strengthen our tools and architectures and to build out competencies. Tool-wise, we have a more robust and easier-to-use GUI, and our architecture has gone through several changes since the first generation.

Our approach uses standard cell-based ASICs, so we are not disruptive to the EDA tool flow of our customers. Our hard IP just plugs into the regular chip design flow using all of the classical techniques for CMOS design. Naturally, we support testing with standard scan chain tests and impressive test coverage. We believe our FPGA performance is better than the competition’s in terms of lookup tables per unit area, operating frequency, and power consumption.

Blyler: Are you targeting a specific area for these embedded FPGAs, e.g., IoT?

Dupret: IoT is one of the markets we are looking at, but it is not the only one. Why? Because the embedded FPGA fabric can go anywhere you have RTL, which is intensively parallel by nature (see Figure 1). For example, we are working on cryptographic algorithms inside the e-FPGA for IoT applications. We also have traction on filters for digital radios (IIR and FIR filters), which is another IoT application. Further, we have customers in the industrial and automotive audio and image processing space.

Figure 1: SOC architecture with e-FPGA core, which is programmed after the tape-out. (Courtesy of Menta)

Do you remember when Intel bought Altera, a large FPGA company? That acquisition was, in part, for Intel’s High Performance Computing (HPC) applications. Now they have several big FPGAs from Altera right next to very high frequency processing cores. But there is another way to achieve this level of HPC. For example, a design could consist of a very big, parallel-intensive HPC architecture with a lot of lower frequency CPUs, and next to each of these CPUs you could have an e-FPGA.

Blyler: At DAC this year, there are a number of companies from France. Is there something going on there? Will it become the next Silicon Valley?

Dupret: Yes, that is true. There are quite a few companies doing EDA. Others are doing IP, some of which are well known. For example, Dolphin is based in Grenoble and is part of the ecosystem there.

Blyler: That’s great to see. Thank you, Yoan.

To learn more about Menta’s latest technology: “Menta Delivers Industry’s Highest Performing Embedded Programmable Logic IP for SoCs.”

Increasing Power Density of Electric Motors Challenges IGBT Makers

August 23rd, 2016

Mentor Graphics answers questions about failure modes and simulation-based testing for IGBT and MOSFET power electronics in electric and hybrid-electric vehicles (EV/HEV).

By John Blyler, Editorial Director

Most news about electric and hybrid vehicle (EV/HEV) electronics focuses on the processor-based engine control and the passenger infotainment systems. Of equal importance are the power electronics that support and control the actual vehicle motors. On-road EVs and HEVs operate on either AC induction or permanent magnet (PM) motors. These high-torque motors must operate over a wide range of temperatures and in often electrically noisy environments. The motors are driven by converters that generally contain a main IGBT or power MOSFET inverter.

The constant power cycling that occurs during the operation of the vehicle significantly affects the reliability of these inverters. Design and reliability engineers must simulate and test the power electronics for thermal reliability and lifecycle performance.

To understand more about the causes of inverter failures and the tests that reveal these failures, I presented the following questions to Andras Vass-Varnai, Senior Product Manager for the MicReD Power Tester 600A in Mentor Graphics’ Mechanical Analysis Division. What follows is a portion of his responses. – JB

 

Blyler: What are some of the root causes of failures for power devices in EV/HEV devices today, namely, for insulated gate bipolar transistors (IGBTs), MOSFETs, transistors, and chargers?

Vass-Varnai: As the chip and module sizes of power devices shrink while the required power dissipation stays the same or even increases, the power density in power devices increases, too. These increasing power densities require careful thermal design and management. The majority of failures are thermally related: the temperature differences between the material layers within an IGBT or MOSFET structure, combined with differences in the coefficients of thermal expansion of those layers, lead to thermo-mechanical stress.

The failure will develop ultimately at these layer boundaries or interconnects, such as the bond wires, die attach, base plate solder, etc. (see Figure 1). Our technology can induce the failure mechanisms using active power cycling and can track the failure while it develops using high resolution electric tests, from which we derive thermal and structural information.

Figure 1: Cross-section of an IGBT module.
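
To put rough numbers on the CTE-mismatch mechanism described above, here is a minimal first-order sketch (my own illustration, not Mentor’s simulation methodology): the interface stress scales roughly as E·Δα·ΔT, ignoring geometry, creep and plasticity.

```python
# First-order illustration of CTE-mismatch stress at a bonded interface.
# Simplified, hypothetical example with rough textbook values; this is not
# Mentor Graphics' thermo-mechanical model.

def thermal_stress_mpa(e_modulus_gpa, cte_a_ppm, cte_b_ppm, delta_t_c):
    """sigma ~= E * (alpha_a - alpha_b) * dT, returned in MPa."""
    delta_alpha = (cte_a_ppm - cte_b_ppm) * 1e-6      # 1/K
    return e_modulus_gpa * 1e3 * delta_alpha * delta_t_c  # GPa -> MPa

# Example: copper base plate (~17 ppm/K) against silicon (~2.6 ppm/K)
# through an 80 K power-cycle swing, with a ~40 GPa solder joint modulus.
if __name__ == "__main__":
    sigma = thermal_stress_mpa(40.0, 17.0, 2.6, 80.0)
    print(f"Approximate interface stress: {sigma:.0f} MPa per cycle")
```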

Blyler: Reliability testing during power cycling improves the reliability of these devices. How was this testing done in the past? What new technology is Mentor bringing to the testing approach?

Vass-Varnai: The way we see it, the tests were traditionally done in a very simplified way. Companies used tools to stress the devices with power cycles, but these technologies were not combined with in-progress characterization. They started the tests, then stopped to see if any failure had happened (using X-ray microscopy, ultrasonic microscopy, sometimes dissection), then continued the power cycling. Testing this way took much more time and more user interaction, and there was a chance that the device would fail before one had the chance to take a closer look at the failure. In some more sophisticated cases, companies tried to combine the tests with basic electrical characterization, but none of these approaches were as sophisticated and complete as what today’s power testers offer. One major advantage of today’s technology is the high-resolution (about 0.01°C) temperature measurement and the structure function technology, which helps users precisely identify in which structural layer the failure develops and what its effect is on the thermal resistance, all embedded in the power cycling process.
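
As a rough sketch of the in-progress failure tracking described above, the loop below flags a device once its measured thermal resistance drifts past a chosen limit. The 20% threshold, the measure_rth() callback and the toy degradation model are my own illustrative assumptions, not the MicReD tester’s actual criteria.

```python
# Sketch of failure tracking during power cycling: stop when the measured
# junction-to-case thermal resistance drifts beyond a chosen limit.
# The 20% drift limit and measure_rth() are illustrative assumptions, not
# the MicReD Power Tester's actual implementation.

def run_power_cycling(measure_rth, max_cycles=1_000_000, drift_limit=0.20):
    """measure_rth(cycle) -> thermal resistance (K/W) after that cycle."""
    baseline = measure_rth(0)
    for cycle in range(1, max_cycles + 1):
        rth = measure_rth(cycle)
        drift = (rth - baseline) / baseline
        if drift > drift_limit:
            return cycle, drift        # failure developed here
    return None, 0.0                   # survived the test

# Hypothetical usage with a toy degradation model (slow solder fatigue ramp).
if __name__ == "__main__":
    toy_rth = lambda n: 0.25 * (1.0 + 1.5e-6 * n)   # K/W
    cycles_to_failure, drift = run_power_cycling(toy_rth)
    print(f"Failure flagged after {cycles_to_failure} cycles "
          f"({drift:.1%} Rth increase)")
```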

The combination with simulation is also unique. In order to calculate the lifetime of the car, one needs to simulate very precisely the temperature changes in an IGBT for a given mission profile. In order to do this, the simulation model has to behave exactly as the real device both for steady state and transient excitations. The thermal simulation and testing system must be capable of taking real measurement data and calibrating the simulation model for precise behavior.

Blyler: Can this tester be used for both (non-destructive) power-cycle stress screening as well as (destructive) testing the device all the way to failure? I assume the former is the wider application in EV/HEV reliability testing.

Vass-Varnai: The system can be used for non-destructive thermal metrics measurements (junction temperature, thermal resistance) and also for active power cycling (which is a stress test), and can track automatically the development of the failure (see Figure 2).

Figure 2: Device voltage change during power cycling for three tested devices in Mentor Graphics MicReD Power Tester 1500A

Blyler: How do you make IGBT thermal lifetime failure estimations?

Vass-Varnai: We use a combination of thermal software simulation and hardware testing solution specifically for the EV/HEV market. Thermal models are created using computational fluid dynamics based on the material properties of the IGBT under test. These models accurately simulate the real temperature response of the EV/HEV’s dynamic power input.

Blyler: Thank you.

For more information, see the following: “Mentor Graphics Launches Unique MicReD Power Tester 600A Solution for Electric and Hybrid Vehicle IGBT Thermal Reliability.”

Bio: Andras Vass-Varnai obtained his MSc degree in electrical engineering in 2007 at the Budapest University of Technology and Economics. He started his professional career at the MicReD group of Mentor Graphics as an application engineer. Currently, he works as a product manager responsible for the Mentor Graphics thermal transient testing hardware solutions, including the T3Ster product. His main topics of interest include thermal management of electric systems, advanced applications of thermal transient testing, characterization of TIM materials, and reliability testing of high power semiconductor devices.

 

One EDA Company Embraces IP in an Extreme Way

June 7th, 2016

Silvaco’s acquisition of IPextreme points to the increasing importance of IP in EDA.

By John Blyler, Editorial Director

One of the most promising directions for future electronic design automation (EDA) growth lies in semiconductor intellectual property (IP) technologies, noted Laurie Balch in her pre-DAC (previously Gary Smith) analysis of the EDA market. As if to confirm this observation, EDA tool provider Silvaco just announced the acquisition of IPextreme.

At first glance, this merger may seem like an odd match. Why would an EDA tool vendor that specializes in the highly technical analog and mixed-signal chip design space want to acquire an IP discovery, management and security company? The answer lies in the past.

According to Warren Savage, former CEO of IPextreme, the first inklings of a foundation for the future merger began at DAC 2015. The company had a suite of tools and an ecosystem that enabled IP discovery, commercialization and management. What it lacked was a strong sales channel and supporting infrastructure.

Conversely, Silvaco’s EDA tools were used by other companies to create customized analog chip IP. This has been the business model for most of the EDA industry, where EDA companies engineer and market their own IP. Only a small portion of the IP created by this model has been made commercially available to all.

According to David Dutton, the CEO of Silvaco, the acquisition of IPextreme’s tools and technology will allow the company to unlock its IP assets and deliver this underused IP to the market. Further, this strategic acquisition is part of Silvaco’s three-year plan to double its revenues by focusing – in part – on strengthening its IP offerings in the IoT and automotive vertical markets.

Savage will now lead the IP business for Silvaco. The primary assets from IPextreme will now be part of Silvaco, including:

  • Xena – A platform for managing both the business and technical aspects of semiconductor IP.
  • Constellations – A collective of independent, likeminded IP companies and industry partners that collaborate at both the marketing and engineering levels.
  • ColdFire processor IP and various interface cores.
  • “IP Fingerprinting” – A package that allows IP owners to “fingerprint” their IP so that their customers can easily discover it in their chip designs, and in others’, using “DNA analysis” software without the need for GDSII tags.

The merger should be mutually beneficial for both companies. For example, IPextreme and its Constellation partners will now have access to a worldwide sales force and associated infrastructure resources.

On the other hand, Silvaco will gain the tools and expertise to commercialize their untapped IP cores. Additionally, this will complement the existing efforts of customers who use Silvaco tools to make their own IP.

As the use of IP grows, so will the need for security. To date, it has been difficult for companies to tell the brand and type of IP in their chip designs. This problem can arise when engineers unknowingly “copy and paste” IP from one project to another. The “IP fingerprinting” technology developed by IPextreme creates a digital representation of all the files in a particular IP package. This representation is entered into a Core store that can then be used by other semiconductor companies to discover what internal and third-party IP is contained in their chip designs.  This provides a way for companies to protect against the accidental reuse of their IP.
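
The “DNA analysis” details are IPextreme’s own, but the general idea of matching a design against a registry of file-level fingerprints can be sketched as follows. The SHA-256 digests and the in-memory store below are my illustrative assumptions, not the actual Core store format or matching algorithm.

```python
# Conceptual sketch of file-level IP fingerprint matching. The real
# IPextreme "DNA analysis" is proprietary; the hashing scheme and the
# in-memory "store" below are illustrative assumptions only.

import hashlib
from pathlib import Path

def fingerprint_package(root: Path) -> set:
    """Digest every file in an IP package into a set of hashes."""
    return {
        hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*")) if p.is_file()
    }

def identify_ip(design_dir: Path, store: dict) -> dict:
    """Report what fraction of each registered IP package appears in a design."""
    design = fingerprint_package(design_dir)
    return {
        ip_name: len(prints & design) / len(prints)
        for ip_name, prints in store.items() if prints
    }

# Hypothetical usage: register two IP packages, then scan a chip design tree.
# store = {"vendorA_usb_phy": fingerprint_package(Path("ip/usb_phy")),
#          "vendorB_ddr_ctrl": fingerprint_package(Path("ip/ddr_ctrl"))}
# print(identify_ip(Path("designs/soc_top"), store))
```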

According to Savage, there is no way to reverse engineer a chip design from the fingerprinted digital representation.

Many companies seem to have a disconnect between the engineering, legal and business side of their company. This disconnect causes a problem when engineers use IP without any idea of the licensing agreements attached to that IP.

“The problem is gaining the attention of big IP providers who are worried about the accidental reuse of third-party IP,” notes Savage. “Specifically, it represents a liability exposure problem.”

For smaller IP providers, having their IP fingerprint in the CORE store could potentially mean increased revenue as more instances of their IP become discoverable.

In the past, IP security measures have been implemented with limited success using hard and soft tags (see “Long Standards, Twinkie IP, Macro Trends, and Patent Trolls”). But tagging chip designs in this way was never really implemented in the major EDA place-and-route tools, like Synopsys’s IC Compiler. According to Savage, even fabs like TSMC don’t follow the Accellera tagging system, but have instead created their own security mechanisms.

For added security, IPextreme’s IP Fingerprinting technology does support the tagging information, notes Savage.

Trends in Hyper-Spectral Imaging, Cyber-Security and Auto Safety

April 25th, 2016

Highlights from SPIE Photonics, Accellera’s DVCon and Automotive panels focus on semiconductor’s changing role in emerging markets.

By John Blyler, Editorial Director

Publisher John Blyler talks with Chipestimate.TV executive director Sean O’Kane during the monthly travelogue of the semiconductor and embedded systems industries. In this episode, Blyler shares his coverage of two major conferences: SPIE Photonics and Accellera’s Design and Verification Conference (DVCon). He concludes with the emphasis on risk in automotive electronics from a recent market panel. Please note that what follows is not a verbatim transcription of the interview. Instead, it has been edited and expanded for readability. Cheers — JB

O’Kane: Earlier this year, you were at the SPIE Photonic show in San Francisco. Did you see any cool tech?

Blyler: As always, there was a lot to see at the show covering photonic and optical semiconductor-related technologies. One thing that caught my attention was the continuing development of hyperspectral cameras. For example, start-up SCiO has prototyped a pocket-sized molecular scanner based on spectral imaging that tells you everything about your food.

Figure 1: SCiO Molecular scanner based on spectral imaging technology.

O’Kane: That sounds like the Star Trek Tricorder. Mr. Spock would be proud.

Blyler: Very much so. I talked with Imec’s Andy Lambrechts at the Photonics show. They have developed a process that allows them to deposit spectral filter banks, in both the visible and near-infrared range, on the same CMOS sensor. That’s the key innovation for shrinking the size and – in some cases – the power consumption. It’s very useful for quickly determining the health of agricultural crops. And all thanks to semiconductor technology.

 

Figure 2: Imec Hyperspectral imaging technology for agricultural crop markets.

O’Kane: Recently, you attended the Design and Verification Conference (DVCon). This year, it was Mentor Graphics’ turn to give the keynote. What did CEO Wally Rhines talk about?

Blyler: His presentations are always rich in data and trend slides. What caught my eye were his comments about cyber security.

Figure 3: Wally Rhines, CEO of Mentor Graphics, giving the DVCon2016 keynote.

O’Kane: Did he mention Beckstrom’s law?

Blyler: You’re right! Soon, the Internet of Things (IoT) will expand the security need to almost everything we do, which is why Beckstrom’s law is important:

Beckstrom’s Laws of Cyber Security:

  1. Everything that is connected to the Internet can be hacked.
  2. Everything is being connected to the Internet.
  3. Everything else follows from the first two laws.

Naturally, the semiconductor supply chain wants some assurance that chips are resistant to hacking. That’s why chip designers need to pay attention to three levels of security breaches: side-channel attacks (on-chip countermeasures); counterfeit chips (supply-chain security); and malicious logic inside the chip (Trojan detection).

EDA tools will become the core of the security framework, but not without changes. For example, verification will move from its traditional role to an emerging one:

  • Traditional role: Verifying that a chip does what it is supposed to do
  • Emerging role: Verifying that a chip does nothing it is not supposed to do

This is a nice lead into safety-critical design and verification systems. Safety critical design requires that both the product development process and related software tools introduce no potentially harmful effects into the system, product or the operators and users. One example of this is the emerging certification standards in the automotive electronics space, namely, ISO 26262.

O’Kane: How does this safety standard impact engineers developing electronics in this space?

Blyler: Recently, I put that question to a panel of experts from automotive, semiconductor and systems companies (see Figure 4). During our discussion, I noted that the focus on functional safety seems like yet another “Design-for-X” methodology, where “X” is the activity that you did poorly during the last product iteration, like requirements, testing, etc. But ISO 26262 is a risk-based functional safety standard for future automobile systems – not a passing fad.

 

Figure 4: Panel on design of automotive electronics hosted by Jama Software – including experts from Daimler, Mentor Graphics, Jama and Synopsys.

Mike Bucala from Daimler put it this way: “The ISO standard is different than other risk standards because it focuses on hazards to persons that result from the malfunctioning behavior of EE systems – as opposed to the risk of failure of a product. For purposes of liability and due care, reducing that risk implies a certain rigor in documentation that has never been there before.”

O’Kane: Connected cars are getting closer to becoming a reality. Safety will be a critical issue for regulatory approval.

Blyler: Indeed. Achieving that approval will encompass all aspects of connectivity, from connected systems within the automobile to other drivers, roadway infrastructure and the cloud. I think many consumers tend to focus only on the self-driving and parking aspects of the evolving autonomous vehicles.

Figure 5: CES2016 BMW self-parking connected car.

It’s interesting to note that connected car technology is nothing new. It’s been used in the racing industry for years at places like the Sonoma Raceway near San Francisco, CA. The high performance race cars are constantly collecting, conditioning and sending data throughout different parts of the car, to the driver and finally to the telemetry-based control centers where the pit crews reside. This is quite a bit different from the self-driving and parking aspects of consumer autonomous vehicles.

Figure 6: Indy car race at Sonoma Raceway.


Fit-for-Purpose Tools Needed for ISO 26262 Certification

April 19th, 2016

Both the product development process and third-party tool “fit-for-purpose” certification are needed for Automotive ISO 26262.

By John Blyler, Editorial Director

Recently, Portland-based Jama Software announced a partnership with an internationally recognized ISO 26262 automotive testing body to obtain ISO 26262 “fit-for-purpose” certification. This accreditation will assure automotive OEMs and suppliers that the workflows they follow to define, build and test automotive-related products in the Jama tool suite meet critical functional safety requirements.

When asked the name of the testing body issuing the “fit-for-purpose” certification, Jama co-founder Derwyn Harris replied that the well-known organization could not be named until the certification was issued. Further, he emphasized that the certification was less for the designers and more for the compliance folks who define the process and obtain their own certification.

“This is the big difference between “fit for purpose” and having an actual certification. We will NOT be ISO 26262 certified,” he explained.

So how exactly does this certification help? Customers seeking ISO 26262 certification must make sure the tools they use, and the use cases within those tools, are evaluated to determine a Tool Confidence Level (TCL) for each workflow. The TCL is a function of the Tool Impact (TI), which indicates the possibility that a tool problem introduces a failure into the system being developed, and the Tool Error Detection (TD), which measures the likelihood that a tool problem will be detected and a suitable workaround found.

In simple words, tool vendors must be sure their software process is fit-for-purpose for functional safety development in alignment with ISO 26262 (functional safety standard for passenger vehicles).

For example, let’s assume that a company uses Jama software for traceability of critical embedded hardware-software safety requirements and associated tests. This company will have to demonstrate how they are actually using this functionality in their workflow and apply a TCL level to that flow. That TCL number along with other risk-related measures will provide a level of confidence that the tools are fit for automotive safety-focused development.

Figure: Here’s an example of traceability showing both upstream and downstream trace relationships.
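
For reference, the way TI and TD combine into a TCL can be captured in a small lookup. This is a simplified sketch based on my reading of the classification table in ISO 26262-8; it is not Jama’s process or the certification body’s tooling.

```python
# Simplified sketch of ISO 26262-8 tool confidence level (TCL) classification.
# TI1 means a tool malfunction cannot introduce or fail to detect an error;
# TD1..TD3 rank how likely such an error would be caught (high..low).
# This lookup reflects my reading of the standard, not Jama's implementation.

def tool_confidence_level(tool_impact: int, tool_error_detection: int) -> str:
    if tool_impact == 1:                 # TI1: no impact possible
        return "TCL1"
    if tool_impact == 2:                 # TI2: impact possible
        return {1: "TCL1", 2: "TCL2", 3: "TCL3"}[tool_error_detection]
    raise ValueError("TI must be 1 or 2; TD must be 1, 2 or 3")

# Example: a traceability workflow where a tool error could affect the safety
# item (TI2) but would likely be caught by downstream review (TD2).
if __name__ == "__main__":
    print(tool_confidence_level(tool_impact=2, tool_error_detection=2))  # TCL2
```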

“While customers can do this themselves there are aspects of the tool development process they don’t have control of or visibility into,” notes Harris. “Hence they either need to audit the vendor or the vendor needs a certification. So, long story short, by Jama having a certification we save our customers time and cost.”

The traceability example is but one of many safety related system functions that companies may need to re-evaluate to gain ISO 26262 certification for their product development process. But traceability is a key process function needed for today’s robust design. Customers seeking ISO 26262 product certification are often blocked because the third-party tools they use are not ISO 26262 “fit for purpose” certified.

General Chair Shares Insights on DVCon 2016

February 22nd, 2016

Chair Yatin Trivedi highlights the upcoming US chip design-verification show and differences with European and Asian DVCon events.

By John Blyler, Editorial Director

What’s new at this year’s annual semiconductor chip design and verification conference (DVCon), held Feb. 29 through Mar. 3, 2016, at the DoubleTree Hotel in San Jose, CA? How has the globalization of this event affected the primary show? “JB Systems” sat down with Yatin Trivedi, DVCon General Chair, to answer these questions. What follows is a portion of that interview. – JB

Blyler: How is DVCon doing?

Trivedi: For 2016, we expect attendance to be around 1,000 – about 800 attendees plus about 300 exhibitor personnel – which will be greater than last year. The number of exhibit booths should be about 40. People are still signing up. As usual, there will be lots of networking events with qualified engineers. I like to think of DVCon as “Facebook” live for engineers. The value of the show remains the same: attendees are able to learn from their peers.

There will be two panel sessions: one moderated by Jim Hogan on where the industry goes from here, and the other moderated by Brian Bailey on ESL. Other opportunities exist in the poster sessions, where people talk with the authors and other engineers. Everyone exchanges good pieces of information about what does and doesn’t work and under what conditions.

The exhibit floor provides a place to show attendees that vendor claims about solutions can actually be demonstrated.

There will be 37 papers at this year’s show plus a couple of invited talks. The CEO of Mentor Graphics, Wally Rhines, will present the invited keynote on Tuesday. Tutorials start on Monday with courses on Accellera standards given by Accellera committee members. Vendors will provide tutorials on Thursday to solve specific problems. Topics range from debug methodologies to the Universal Verification Methodology (UVM), SystemC, formal verification and more.

Blyler: Recently, DVCon has expanded into Europe and Asia. What is the latest information on those activities?

Trivedi: DVCon US is the flagship show. A few years back we had the first DVCon Europe and DVCon India. We started events in these regions as a way to serve specific centers of excellence. For example, a lot of automotive work is done in Europe because of the presence of BMW, Mercedes and other automotive manufacturers. Naturally, a large community of electronic designers has developed to support these companies.

Another motivating factor is that not everybody has the opportunity to travel to the US for DVCon. European Accellera board members like ST, NXP, Infineon, ARM and others convinced us of the need for a DVCon in Europe. So we put together the first conference in 2014, which had about 200 people. At last year’s event in 2015, we had over 300 attendees. The reason for the growth was pent-up interest from local communities that could not travel. The other benefit of a local DVCon was that people who could attend would be more willing to submit technical papers.

Blyler: Did the show in India grow from the same motivation as in Europe?

Trivedi: No, it happened a little differently. In India, there was already an event called the India SystemC User Group, or ISCUG, which drew about 300 people. At the same time, there existed a chip design-verification (DV) community that wasn’t exactly served by ISCUG. The merging of the Open SystemC Initiative (OSCI) with Accellera presented the opportunity for DVCon to open in India with two tracks: one for ESL or SystemC, and another on design and verification (DV). The latter track provided a new platform where DV engineers could get together. At the first show in 2014, we had about 450 attendees. Last year, in 2015, we topped 600 attendees. Over that two-year track record, that’s about 30 to 40% growth year-over-year.

Initially, we were worried that these new conferences might cannibalize the original US conference. That fear never came true because the paper submissions for the new shows came from local communities as did the volunteer organizations in terms of program and steering committees, exhibitors, etc. And the attendance came locally. It was probably something we should have done earlier.

This means that DVCon has grown globally into a worldwide community of more than 2,000 people.

Blyler: Thank you.

EDA Tool Reduces Chip Test Time With Same Die Size

February 4th, 2016

Cadence combines physically-aware scan logic with elastic decompression in new test solution. What does that really mean?

By John Blyler, Editorial Director

Cadence recently announced the Modus Test Solution suite, which the company claims will enable up to a 3X reduction in test time and up to a 2.6X reduction in compression logic wirelength. This improvement is made possible, in part, by a patent-pending, physically aware 2D Elastic Compression architecture that enables compression ratios beyond 400X without impacting design size or routing. The press release can be found on the company’s website.

What does all the technical market-ese mean? My talk with Paul Cunningham, vice president of R&D at Cadence, helps clarify the engineering behind the announcement. What follows are portions of that conversation. – JB

 

Blyler:  Reducing test times saves companies a lot of money. What common methods are used today?

Cunningham: Test compression is the technique of reducing the test data volume and test application time while retaining test coverage. XOR-based compression has been widely used to reduce test time and cost. It works by partitioning the registers in a design into more scan chains than there are scan pins. Shorter scan chains mean fewer clock cycles are needed to shift in each test pattern, reducing test time.

But there is a limit to how far test time can be reduced. If the compression ratio is too high, then test coverage is lost. Even if test coverage is not lost, the test time savings eventually dry up. In other words, as you shrink the test time, you also shrink the amount of data you can put into the compression system for fault coverage.

As I change the compression ratio, I’m making the scan chains shorter. But I’ve got more chains while the number of scan-in pins stays constant. So every time I shrink the chain, each pattern that I shift in has fewer and fewer bits, because the width of the pattern coming in is the number of scan pins and the length of the pattern coming in is the length of the scan chain. So if you keep shrinking the chain, the amount of information in each pattern decreases. At some point, there just isn’t enough information in the pattern to allow us to control the circuit to detect the faults.

Blyler: Where is the cross-over point?

Cunningham: The situation is analogous to general relativity. You know that you can never go faster than the speed of light, but as you approach the speed of light it takes exponentially more energy. The same thing is going on here. At some point, the chain becomes too short and our coverage drops. And as we approach that cliff moment, the number of patterns it takes to achieve the coverage – even if we can maintain it – increases exponentially. So you can get into the situation where, for example, you halve the length of the chain but you need twice as many patterns. At that point, your test time hasn’t actually dropped, because test time is the number of patterns times the length of the chain, and the product of those two starts to cancel out. Beyond a certain point your coverage will drop no matter what, but even as you get close to it, you start losing any benefit because you need more and more patterns to achieve the same result.
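
A back-of-the-envelope model makes this trade-off concrete. Using Cunningham’s definitions (pattern width = scan pins, pattern length = chain length, test time ≈ patterns × chain length), the sketch below shows how shrinking the chains stops helping once the pattern count has to grow to preserve coverage. All of the numbers are invented for illustration; they are not Modus data.

```python
# Toy model of the scan-compression trade-off described in the interview:
# test_time ~ pattern_count * chain_length; chain_length shrinks with the
# compression ratio, but pattern_count grows once care-bit information per
# pattern runs out. All numbers here are invented for illustration.

FLOPS = 1_000_000        # scan registers in the design
SCAN_PINS = 8            # scan-in pins (pattern width)
BASE_PATTERNS = 2_000    # patterns needed at a modest ratio

def chain_length(compression_ratio):
    chains = SCAN_PINS * compression_ratio
    return FLOPS // chains

def pattern_count(compression_ratio, knee=200):
    # Pretend coverage needs exponentially more patterns past a "knee" ratio.
    growth = 2 ** max(0, (compression_ratio - knee) // 100)
    return BASE_PATTERNS * growth

for ratio in (50, 100, 200, 400, 800):
    t = pattern_count(ratio) * chain_length(ratio)
    print(f"{ratio:>4}X: chain={chain_length(ratio):>5} bits, "
          f"patterns={pattern_count(ratio):>6}, test cycles~{t:,}")
```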

Blyler: What is the second limit to testing a chip with compression circuitry?

Cunningham: The other limit doesn’t come from the mathematics of fault detection but is related to physical implementation. In other words, the chip size limit is due to physical implementation, not mathematics (like coverage).

Most of the test community has been focused on the upper limit of test time. But even a breakthrough there wouldn’t address the physical implementation challenge. In the diagram below, you can see that the big blue spot in the middle is the XOR circuit wiring. All that wiring in the red is wiring to and from the chains. It is quite scary in size.

Blyler: So the second limit is related to the die size and wire length for the XOR circuit?

Cunningham: Yes – there are the algorithmic limits related to coverage and pattern count (mentioned earlier), and then there are the physical limits related to wire length. The industry has been stuck because of these two things. Now for the solution. Let’s talk about them in reverse order, i.e., the physical limits first.

What is the most efficient way to span two dimensions (2D) with Manhattan routing? The answer is by using a grid or lattice. [Editor’s Note: The Manhattan Distance is the distance measured between two points by following a grid pattern instead of the straight line between the points.]

So the lattice is the best way to get across two dimensions while giving you the best possible way to control circuit behavior at all points. We’ve come up with a special XOR circuit structure that unfolds beautifully into a 2D grid. So when Modus inserts compression, it doesn’t just create an XOR circuit; rather, it actually places it, taking the X-Y coordinates for those XOR gates. Thus, using 2D at 400X has the same wire length as 1D at 100X.
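
As a side note on the geometry, the toy comparison below illustrates why a lattice spans a floorplan efficiently under Manhattan routing: routing each scan-chain endpoint to the nearest node of an evenly spaced grid needs far less total wirelength than routing everything to one central compressor. This is my own geometric illustration, not Cadence’s placement algorithm.

```python
# Geometric illustration of why a 2D lattice spans a floorplan efficiently
# under Manhattan routing. Compare total wirelength from one centralized
# compressor location versus the nearest node of an evenly spaced grid.
# Toy geometry only; not Cadence's algorithm.

import random

def manhattan(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

random.seed(1)
DIE = 10_000                                   # die edge, in microns
endpoints = [(random.uniform(0, DIE), random.uniform(0, DIE))
             for _ in range(400)]              # scan-chain heads/tails

# Case 1: all chains route to one central XOR block.
center = (DIE / 2, DIE / 2)
central_wl = sum(manhattan(center, e) for e in endpoints)

# Case 2: chains route to the nearest node of a 5x5 lattice of XOR gates.
step = DIE / 5
lattice = [(step * (i + 0.5), step * (j + 0.5))
           for i in range(5) for j in range(5)]
grid_wl = sum(min(manhattan(n, e) for n in lattice) for e in endpoints)

print(f"central compressor wirelength ~ {central_wl / 1e6:.1f} m of routing")
print(f"5x5 lattice wirelength        ~ {grid_wl / 1e6:.1f} m of routing")
```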

Blyler: This seems like a marriage with place & route technology.

Cunningham: For a long time, people did logic synthesis based only on the connectivity of the gates. Then we realized that we really had to do physical synthesis. Similarly, the industry has long realized that the way we connect up the scan chains needs to be physically aware. That’s been done. But nobody made the actual compression logic physically aware. That is a key innovation in our product offering.

And it is the compression logic that is filling the chip – all that red and blue nasty stuff. That is not scan chain but compression logic.

Blyler: It seems that you’ve addressed the wire length problem. How do you handle the mathematics of the fault coverage issue?

Cunningham: The industry got stuck on the idea that, as you shrink the chains, you have shorter patterns, or a reduction in the amount of information that can be input. But why don’t we play the same game with the data we shift in? Most of the time, I do want really short scan chains, because that typically means I can pump data into the chip faster than before. But in so doing, there will be a few cases where I lose the ability to detect faults, because some faults really require precise control of values in the circuit. For those few cases, why don’t I shift in more clock cycles than I shift out?

In those cases, I really need more bits of information coming in. That can be done by making the scan deeper, that is, by adding more clock cycles. In practice, that means we need to put sequential elements inside the decompressor portion of the XOR compressor system. Thus, where necessary, I can read in more information. For example, I might scan in for 10 clock cycles but scan out (shift out) for only five clock cycles. I’m reading in more information than I’m reading out.

In every sense of the word, it is an elastic decompressor. When we need to, we can stretch the pattern to contain more information. That stretched pattern is then transposed by 90 degrees into a very wide pattern that we shove into those scan chains.
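
Here is a minimal conceptual toy of that elastic idea as I understand it: shift data in for more cycles than the chains are long, buffer the extra bits in registers inside the decompressor, then transpose the widened pattern onto the short chains. The constants and the bit-spreading scheme are invented for illustration; this is not Cadence’s actual decompressor logic.

```python
# Conceptual toy of "elastic" decompression as described in the interview:
# scan data in for MORE clock cycles than the chains are long, accumulate it
# in registers inside the decompressor, then transpose the widened pattern
# onto the short scan chains. Not Cadence's actual Modus logic.

SCAN_PINS = 4        # pattern width per input clock cycle
CHAIN_LEN = 5        # short scan chains (shift-out cycles)
NUM_CHAINS = 8

def elastic_load(serial_stream, in_cycles):
    """Collect SCAN_PINS bits per cycle for in_cycles (> CHAIN_LEN) cycles."""
    assert len(serial_stream) == SCAN_PINS * in_cycles
    # Sequential elements in the decompressor buffer one slice per cycle.
    buffered = [serial_stream[c * SCAN_PINS:(c + 1) * SCAN_PINS]
                for c in range(in_cycles)]
    flat = [bit for cycle in buffered for bit in cycle]
    # "Transpose by 90 degrees": spread the buffered bits across the chains.
    return [[flat[(r * NUM_CHAINS + c) % len(flat)] for r in range(CHAIN_LEN)]
            for c in range(NUM_CHAINS)]

pattern = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1,
           1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0,
           1, 1, 0, 0, 1, 0, 1, 0]                  # 4 pins x 10 cycles
chains = elastic_load(pattern, in_cycles=10)        # 10 cycles in, 5 out
print(f"{len(pattern)} input bits (vs {SCAN_PINS * CHAIN_LEN} without "
      f"elasticity) control {len(chains)} chains x {len(chains[0])} bits")
```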

Blyler: So you’ve combined this elastic decompressor with the 2D concept.

Cunningham: Yes – and now you have changed the testing game with 400x compression ratios and achieving up to 3X reduction in test time without impacting the wire length (chip size). We have several endorsements from key customers, too.

In summary:

  • 2D compression: Scan compression logic forms a physically aware two-dimensional grid across the chip floorplan, enabling higher compression ratios with reduced wirelength. At 100X compression ratios, wirelength for 2D compression can be up to 2.6X smaller than current industry scan compression architectures.
  • Elastic compression: Registers embedded in the decompression logic enable fault coverage to be maintained at compression ratios beyond 400X by controlling care bits sequentially across scan shift cycles.

Blyler: Thank you.
