
Archive for February, 2016

General Chair Shares Insights on DVCon 2016

Monday, February 22nd, 2016

General Chair Yatin Trivedi highlights the upcoming US chip design and verification show and how it differs from the European and Asian DVCon events.

By John Blyler, Editorial Director

What’s new at this year’s annual semiconductor chip design and verification conference (DVCon), held Feb. 29 through Mar. 3, 2016, at the Doubletree Hotel in San Jose, CA? How has the globalization of this event affected the primary show? “JB Systems” sat down with Yatin Trivedi, DVCon General Chair, to answer these questions. What follows is a portion of that interview. – JB

Blyler: How is DVCon doing?

Trivedi: For 2016, we expect attendance to be around 1,000 overall, including about 800 conference attendees and about 300 exhibitor personnel, which will be greater than last year. The number of exhibit booths should be about 40. People are still signing up. As usual, there will be lots of networking events with qualified engineers. I like to think of DVCon as “Facebook” live for engineers. The value of the show remains the same: attendees are able to learn from their peers.

There will be two panel sessions: one moderated by Jim Hogan on where the industry goes from here and the other moderated by Brian Bailey on ESL. Other opportunities exist in the poster sessions, where people talk with the authors and other engineers. Everyone exchanges useful information about what does and doesn’t work and under what conditions.

The exhibit floor provides a place to show attendees that vendor claims about solutions can actually be demonstrated.

There will be 37 papers at this year’s show plus a couple of invited talks. The CEO of Mentor Graphics, Wally Rhines, will present the invited keynote on Tuesday. Tutorials start on Monday with courses on Accellera standards given by Accellera committee members. Vendors will provide tutorials on Thursday to solve specific problems. Topics range from debug methodologies to the Universal Verification Methodology (UVM), SystemC, formal verification and more.

Blyler: Recently, DVCon has expanded into Europe and Asia. What is the latest information on those activities?

Trivedi: DVCon US is the flagship show. A few years back we held the first DVCon Europe and DVCon India. We started events in these regions as a way to serve specific centers of excellence. For example, a lot of automotive work is done in Europe because of the presence of BMW, Mercedes and other automotive manufacturers. Naturally, a large community of electronic designers has developed to support these companies.

Another motivating factor is that not everybody has the opportunity to travel to the US for DVCon. European Accellera board members like ST, NXP, Infineon, ARM and others convinced us of the need for a DVCon in Europe. So we put together the first conference in 2014, which drew about 200 people. At last year’s event in 2015, we had over 300 attendees. The reason for the growth was pent-up interest from local communities that could not travel. The other benefit of a local DVCon was that people who could attend were also more willing to submit technical papers.

Blyler: Did the show in India grow from the same motivation as in Europe?

Trivedi: No, it happened a little differently. In India, there was already an event called the India SystemC User Group, or ISCUG, which drew about 300 people. At the same time, there existed a chip design and verification (DV) community that wasn’t exactly served by ISCUG. The merging of the Open SystemC Initiative (OSCI) with Accellera presented the opportunity for DVCon to open in India with two tracks: one for ESL and SystemC, and another for design and verification. The latter track provided a new platform where DV engineers could get together. At the first show in 2014, we had about 450 attendees. Last year, in 2015, we topped 600 attendees. Over a two-year track record, that’s about 30 to 40 percent growth year over year.

Initially, we were worried that these new conferences might cannibalize the original US conference. That fear never came true, because the paper submissions for the new shows came from local communities, as did the volunteers for the program and steering committees, the exhibitors, and the attendees. It was probably something we should have done earlier.

This means that DVCon has grown into a worldwide community of more than 2,000 people.

Blyler: Thank you.

EDA Tool Reduces Chip Test Time With Same Die Size

Thursday, February 4th, 2016

Cadence combines physically-aware scan logic with elastic decompression in new test solution. What does that really mean?

By John Blyler, Editorial Director

Cadence recently announced the Modus Test Solution suite that the company claims will enable up to 3X reduction in test time and up to 2.6X reduction in compression logic wirelength. This improvement is made possible, in part, by a patent-pending, physically aware 2D Elastic Compression architecture that enables compression ratios beyond 400X without impacting design size or routing. The press release can be found on the company’s website.

What does all the technical market-ese mean? My talk with Paul Cunningham, vice president of R&D at Cadence, helps clarify the engineering behind the announcement. What follows are portions of that conversation. – JB

 

Blyler:  Reducing test times saves companies a lot of money. What common methods are used today?

Cunningham: Test compression is the technique of reducing test data volume and test application time while retaining test coverage. XOR-based compression has been widely used to cut test time and cost. It works by partitioning the registers in a design into more internal scan chains than there are scan pins; the shorter scan chains mean fewer clock cycles are needed to shift in each test pattern, which reduces test time.

But there is a limit to how far compression can reduce test time. If the compression ratio is too high, test coverage is lost. Even if coverage is not lost, the test time savings eventually dry up. In other words, as you shrink the test time you also shrink the amount of data you can feed into the compression system for fault coverage.

As I raise the compression ratio, I’m making the scan chains shorter. I’ve got more chains, while the number of scan-in pins stays constant. So every time I shrink the chains, each pattern I shift in has fewer and fewer bits, because the width of the incoming pattern is the number of scan pins and its length is the length of the scan chain. If you keep shrinking the chain, the amount of information in each pattern decreases. At some point, there just isn’t enough information in the pattern to allow us to control the circuit well enough to detect the faults.
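To make that arithmetic concrete, here is a minimal Python sketch with an invented design size and pin count (the numbers are assumptions for illustration, not figures from Cadence), showing how chain length and the bits available per pattern shrink as the compression ratio rises:

    # Toy scan-compression arithmetic (assumed design size and pin count, for illustration only)
    SCAN_REGISTERS = 1_000_000   # flip-flops stitched into scan chains
    SCAN_PINS = 10               # chip-level scan-in pins

    for ratio in (1, 10, 100, 400):
        num_chains = SCAN_PINS * ratio               # more internal chains than pins
        chain_length = SCAN_REGISTERS // num_chains  # shift cycles per pattern
        bits_per_pattern = SCAN_PINS * chain_length  # information the tester can supply
        print(f"{ratio:>4}X: chain length {chain_length:>7} cycles, "
              f"{bits_per_pattern:>9} input bits per pattern")

The shorter chains shift in faster, but the tester controls far fewer bits per pattern, which is the information squeeze described above.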

Blyler: Where is the cross-over point?

Cunningham: The situation is analogous to relativity. You can never go faster than the speed of light, but as you approach it, it takes exponentially more energy. The same thing is going on here. At some point, if the chain gets too short, coverage drops. And as we approach that cliff, the number of patterns it takes to achieve the coverage, even if we can maintain it, increases exponentially. So you can get into a situation where, for example, you halve the length of the chain but you need twice as many patterns. At that point your test time hasn’t actually dropped, because test time is the number of patterns times the length of the chain, and the product of those two starts to cancel out. Beyond a certain point your coverage simply drops, but even as you get close to it you start losing any benefit, because you need more and more patterns to achieve the same result.
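As a worked example of that trade-off (the pattern counts below are made up purely to illustrate the product of patterns and chain length, in the same spirit as the sketch above):

    # Test time scales with (number of patterns) x (chain length in shift cycles).
    def test_cycles(num_patterns: int, chain_length: int) -> int:
        return num_patterns * chain_length

    baseline = test_cycles(num_patterns=10_000, chain_length=500)  # 5,000,000 cycles
    pushed   = test_cycles(num_patterns=20_000, chain_length=250)  # 5,000,000 cycles
    print(baseline == pushed)  # True: halving the chains bought nothing near the cliff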

Blyler: What is the second limit to testing a chip with compression circuitry?

Cunningham: The other limit doesn’t come from the mathematics of fault detection but is related to physical implementation. In other words, the chip size limit is due to physical implementation, not mathematics (like coverage).

Most of the test community has been focused on the upper limit of test time. But even a breakthrough there wouldn’t address the physical implementation challenge. In the diagram below, you can see that the big blue spot in the middle is the XOR circuit wiring. All that wiring in the red is wiring to and from the chains. It is quite scary in size.

Blyler: So the second limit is related to the die size and wire length for the XOR circuit?

Cunningham: Yes. There are the algorithmic limits related to coverage and pattern count (mentioned earlier), and then there are the physical limits related to wire length. The industry has been stuck because of these two things. Now for the solution. Let’s talk about the issues in reverse order, i.e., the physical limits first.

What is the most efficient way to span two dimensions (2D) with Manhattan routing? The answer is by using a grid or lattice. [Editor’s Note: The Manhattan Distance is the distance measured between two points by following a grid pattern instead of the straight line between the points.]

So the lattice is the best way to get across two dimensions while giving you the best possible control of circuit behavior at all points. We’ve come up with a special XOR circuit structure that unfolds beautifully into a 2D grid. So when Modus inserts compression, it doesn’t just create an XOR circuit; it actually places it, assigning X-Y coordinates to those XOR gates. Thus, 2D compression at 400X has the same wire length as 1D at 100X.
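To see why a lattice spans the floorplan efficiently, here is a toy comparison with an invented node count and placement (not the actual Modus structure): it totals the Manhattan wirelength of nearest-neighbor lattice connections versus routing every node to a single central hub.

    # Toy Manhattan-wirelength comparison: nearest-neighbor lattice vs. a central hub.
    # Node count and placement are invented for illustration only.
    def manhattan(p, q):
        return abs(p[0] - q[0]) + abs(p[1] - q[1])

    SIDE = 20                                    # 20 x 20 = 400 nodes on a unit grid
    nodes = [(x, y) for x in range(SIDE) for y in range(SIDE)]

    # Lattice: each node connects to its right and upper neighbor (every edge has length 1).
    lattice_wl = SIDE * (SIDE - 1) * 2

    # Hub: every node routes to the center of the die.
    center = (SIDE // 2, SIDE // 2)
    hub_wl = sum(manhattan(n, center) for n in nodes)

    print(f"lattice wirelength: {lattice_wl}")   # 760, grows linearly with node count
    print(f"hub wirelength:     {hub_wl}")       # 4000, grows roughly as N * sqrt(N)

The lattice reaches every region of the die with only short local wires, which is the intuition behind placing the XOR compression network as a physically aware 2D grid.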

Blyler: This seems like a marriage with place & route technology.

Cunningham: For a long time people did logic synthesis based only on the connectivity of the gates. Then we realized that we really had to do physical synthesis. Similarly, the industry has long realized that the way we connect up the scan chains needs to be physically aware. That’s been done. But nobody made the actual compression logic physically aware. That is a key innovation in our product offering.

And it is the compression logic that is filling the chip – all that red and blue nasty stuff. That is not scan chain but compression logic.

Blyler: It seems that you’ve addressed the wire length problem. How do you handle the mathematics of the fault coverage issue?

Cunningham: The industry got stuck on the idea that, as you shrink the chains, you get shorter patterns and a reduction in the amount of information that can be fed in. But why don’t we play the same game with the data we shift in? Most of the time, I do want really short scan chains, because that typically means I can pump data into the chip faster than before. But in doing so, there will be a few cases where I lose the ability to detect faults, because some faults really require precise control of values in the circuit. For those few cases, why don’t I shift in for more clock cycles than I shift out?

In those cases, I really need more bits of information coming in. That can be done by making the scan deeper, that is, by adding more clock cycles. In practice, that means we need to put sequential elements inside the decompressor portion of the XOR compression system. Thus, where necessary, I can read in more information. For example, I might scan in for 10 clock cycles but scan out (shift out) for only five clock cycles. I’m reading in more information than I’m reading out.

In every sense of the word, it is an elastic decompressor. When we need to, we can stretch the pattern to contain more information. That stretched pattern is then transposed by 90 degrees into a very wide pattern that we shove into those scan chains.
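A back-of-the-envelope sketch of that information balance, with assumed pin and chain counts (an illustration of the idea only, not the Modus decompressor itself):

    # Toy information balance for an elastic decompressor (assumed numbers).
    SCAN_PINS = 10
    COMPRESSION_RATIO = 400
    NUM_CHAINS = SCAN_PINS * COMPRESSION_RATIO   # 4,000 internal scan chains
    CHAIN_LENGTH = 5                             # shift-out cycles per pattern

    def tester_bits(shift_in_cycles: int) -> int:
        """Bits the tester can supply for one pattern."""
        return SCAN_PINS * shift_in_cycles

    # Purely combinational XOR decompressor: shift-in cycles equal shift-out cycles.
    combinational = tester_bits(shift_in_cycles=CHAIN_LENGTH)      # 50 bits
    # Elastic decompressor: registers let us shift in for longer than we shift out.
    elastic = tester_bits(shift_in_cycles=2 * CHAIN_LENGTH)        # 100 bits

    print(combinational, elastic)  # the stretched pattern carries twice the care-bit budget

The registers in the decompressor hold the extra cycles’ worth of data before it is spread across the many chains as the wide, transposed pattern described above.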

Blyler: So you’ve combined this elastic decompressor with the 2D concept.

Cunningham: Yes. And now you have changed the testing game, reaching 400X compression ratios and up to a 3X reduction in test time without impacting the wire length (chip size). We have several endorsements from key customers, too.

In summary:

  • 2D compression: Scan compression logic forms a physically aware two-dimensional grid across the chip floorplan, enabling higher compression ratios with reduced wirelength. At 100X compression ratios, wirelength for 2D compression can be up to 2.6X smaller than current industry scan compression architectures.
  • Elastic compression: Registers embedded in the decompression logic enable fault coverage to be maintained at compression ratios beyond 400X by controlling care bits sequentially across multiple shift cycles.

Blyler: Thank you.