Posts Tagged ‘chip’

EDA Tool Reduces Chip Test Time With Same Die Size

Thursday, February 4th, 2016

Cadence combines physically-aware scan logic with elastic decompression in new test solution. What does that really mean?

By John Blyler, Editorial Director

Cadence recently announced the Modus Test Solution suite that the company claims will enable up to 3X reduction in test time and up to 2.6X reduction in compression logic wirelength. This improvement is made possible, in part, by a patent-pending, physically aware 2D Elastic Compression architecture that enables compression ratios beyond 400X without impacting design size or routing. The press release can be found on the company’s website.

What does all the technical market-ese mean? My talk with Paul Cunningham, vice president of R&D at Cadence, helps clarify the engineering behind the announcement. What follows are portions of that conversation. – JB

 

Blyler:  Reducing test times saves companies a lot of money. What common methods are used today?

Cunningham: Test compression is a technique for reducing test data volume and test application time while retaining test coverage. XOR-based compression has been widely used to cut test time and cost. It works by partitioning the registers in a design into many more scan chains than there are scan pins; shorter scan chains mean fewer clock cycles are needed to shift in each test pattern, which reduces test time.
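
To make that arithmetic concrete, here is a toy calculation (the register count, pin count, and ratios below are invented for illustration, not Cadence figures): with a fixed number of scan pins, raising the compression ratio splits the flops into more, shorter chains, and the shift cycles per pattern drop accordingly.

    # Toy scan-compression arithmetic (illustrative numbers only)
    scan_flops = 1_000_000   # total scan registers in the design (assumed)
    scan_pins = 10           # scan-in pins at the chip boundary (assumed)

    def shift_cycles_per_pattern(compression_ratio):
        # chains = pins * ratio; chain length = flops / chains = shift cycles per pattern
        chains = scan_pins * compression_ratio
        return scan_flops // chains

    for ratio in (1, 100, 400):
        print(f"{ratio:>4}X -> {shift_cycles_per_pattern(ratio):>7,} shift cycles per pattern")

Note that each pattern also carries only scan_pins x chain_length bits of stimulus, which is exactly the information limit Cunningham describes next.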

But there is a limit to how far test time can be reduced. If the compression ratio is too high, test coverage is lost. Even when coverage holds, the test time savings eventually dry up. In other words, as you shrink the test time, you also shrink the amount of data you can put into the compression system to target faults.

As I raise the compression ratio, I'm making the scan chains shorter. But I've got more chains while the number of scan-in pins stays constant. So every time I shrink the chains, each pattern that I'm shifting in has fewer and fewer bits, because the width of the incoming pattern is the number of scan pins and its length is the length of the scan chain. If you keep shrinking the chains, the amount of information in each pattern decreases. At some point, there just isn't enough information in the pattern to control the circuit well enough to detect the faults.

Blyler: Where is the cross-over point?

Cunningham: The situation is analogous to special relativity. You know you can never go faster than the speed of light, but as you approach it, it takes exponentially more energy. The same thing is going on here. If the chains get too short, our coverage drops. And as we approach that cliff, the number of patterns it takes to achieve the coverage – even if we can maintain it – increases exponentially. So you can get into a situation where, for example, you halve the length of the chains but need twice as many patterns. At that point your test time hasn't actually dropped, because test time is the number of patterns times the length of the chain, and the product of the two cancels out. Beyond a certain point your coverage simply drops, and even before that you lose the benefit because you need more and more patterns to achieve the same result.
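
A hypothetical set of numbers (invented to show the trend, not measured data) illustrates how the product of pattern count and chain length stops shrinking near that cliff:

    # test time ~ patterns * shift cycles per pattern (numbers are invented)
    scenarios = [
        # (compression ratio, chain length, patterns needed for the same coverage)
        (100, 1000, 10_000),
        (200,  500, 14_000),   # chains are half as long, but more patterns are needed
        (400,  250, 38_000),   # near the cliff: pattern count explodes
    ]
    for ratio, chain_len, patterns in scenarios:
        print(f"{ratio:>3}X: {patterns * chain_len:>10,} total shift cycles")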

Blyler: What is the second limit to testing a chip with compression circuitry?

Cunningham: The other limit doesn't come from the mathematics of fault detection; it comes from physical implementation. In other words, the limit on chip size is set by physical implementation, not by mathematics like coverage.

Most of the test community has been focused on that test-time limit. But even a breakthrough there wouldn't address the physical implementation challenge. In the diagram below, the big blue spot in the middle is the XOR circuit wiring, and all the wiring in red runs to and from the scan chains. It is quite scary in size.

Blyler: So the second limit is related to the die size and wire length for the XOR circuit?

Cunningham: Yes. There are the algorithmic limits related to coverage and pattern count (mentioned earlier), and there are the physical limits related to wirelength. The industry has been stuck because of these two things. Now for the solution. Let's take them in reverse order, starting with the physical limits.

What is the most efficient way to span two dimensions (2D) with Manhattan routing? The answer is by using a grid or lattice. [Editor’s Note: The Manhattan Distance is the distance measured between two points by following a grid pattern instead of the straight line between the points.]
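
As a quick illustration of the editor's note, Manhattan distance on a routing grid is just the sum of the horizontal and vertical offsets:

    def manhattan_distance(p, q):
        # routing distance on a grid: |dx| + |dy| rather than the straight-line distance
        return abs(p[0] - q[0]) + abs(p[1] - q[1])

    print(manhattan_distance((0, 0), (3, 4)))  # 7 grid units, versus 5 for the straight line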

So a lattice is the best way to get across two dimensions while giving you the best possible control of circuit behavior at all points. We've come up with a special XOR circuit structure that unfolds beautifully into a 2D grid. So when Modus inserts compression it doesn't just create an XOR circuit; it actually places it, assigning X-Y coordinates to those XOR gates. As a result, 2D compression at 400X has roughly the same wirelength as 1D compression at 100X.
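
The wirelength argument can be sketched with a toy model (my own illustration of the idea, not Cadence's placement algorithm): connect scan-chain endpoints scattered across a die either to a single centralized compression block or to the nearest node of a lattice spread over the floorplan, and compare the total Manhattan wirelength.

    import random

    random.seed(0)
    DIE = 10_000  # die edge length in microns (assumed)
    endpoints = [(random.uniform(0, DIE), random.uniform(0, DIE)) for _ in range(2000)]

    def manhattan(p, q):
        return abs(p[0] - q[0]) + abs(p[1] - q[1])

    # (a) every chain endpoint routes to one centralized XOR block in the middle of the die
    center = (DIE / 2, DIE / 2)
    central_wl = sum(manhattan(p, center) for p in endpoints)

    # (b) endpoints route to the nearest node of an 8x8 lattice of compression logic
    nodes = [(i * DIE / 8 + DIE / 16, j * DIE / 8 + DIE / 16) for i in range(8) for j in range(8)]
    lattice_wl = sum(min(manhattan(p, n) for n in nodes) for p in endpoints)

    print(f"centralized: {central_wl / 1e6:.1f} m of wire, lattice: {lattice_wl / 1e6:.1f} m")

The model ignores the wiring inside the lattice itself; it only shows why distributing the compression logic shortens the long chain-to-compressor routes that dominate the big blue spot in a centralized layout.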

Blyler: This seems like a marriage with place & route technology.

Cunningham: For a long time people did logic synthesis based only on the connectivity of the gates. Then we realized that we really had to do physical synthesis. Similarly, the industry has long realized that the way we connect up the scan chains needs to be physically aware. That's been done. But nobody made the actual compression logic physically aware. That is a key innovation in our product offering.

And it is the compression logic that is filling the chip – all that nasty red and blue stuff in the diagram. That is not scan chain wiring but compression logic.

Blyler: It seems that you've addressed the wirelength problem. How do you handle the mathematics of the fault coverage issue?

Cunningham: The industry got stuck on the idea that as you shrink the chains you get shorter patterns, i.e., a reduction in the amount of information that can be input. But why don't we play the same game with the data we shift in? Most of the time I do want really short scan chains, because that typically means I can pump data into the chip faster than before. But in doing so, there will be a few cases where I lose the ability to detect faults, because some faults really require precise control of values in the circuit. For those few cases, why don't I shift in for more clock cycles than I shift out?

In those cases, I really need more bits of information coming in. That can be done by making the scan-in deeper, that is, by adding more clock cycles. In practice, that means we put sequential elements inside the decompressor portion of the XOR compression system. Thus, where necessary, I can read in more information. For example, I might scan in for 10 clock cycles but scan out (shift out) for only five clock cycles. I'm reading in more information than I'm reading out.
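
The information budget behind that trade can be sketched with assumed numbers (the pin and chain-length values are not Modus internals): once the decompressor contains registers, the stimulus bits available per pattern scale with the number of shift-in cycles, which no longer has to equal the chain length.

    # Toy information-budget view of elastic decompression (assumed numbers)
    scan_pins = 10
    chain_length = 250       # shift-out cycles per pattern, e.g. a 400X ratio on 10 pins

    def stimulus_bits(shift_in_cycles):
        # bits of controllability shifted into the decompressor for one pattern
        return scan_pins * shift_in_cycles

    print("rigid:  ", stimulus_bits(chain_length), "bits")      # shift-in equals shift-out
    print("elastic:", stimulus_bits(2 * chain_length), "bits")  # shift in twice as long for hard faults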

In every sense of the word, it is an elastic decompressor. When we need to, we can stretch the pattern to contain more information. That stretched pattern is then transposed 90 degrees into a very wide pattern that we shove into the scan chains.

Blyler: So you’ve combined this elastic decompressor with the 2D concept.

Cunningham: Yes – and now you have changed the testing game, reaching 400X compression ratios and up to a 3X reduction in test time without impacting the wirelength (chip size). We have several endorsements from key customers, too.

In summary:

  • 2D compression: Scan compression logic forms a physically aware two-dimensional grid across the chip floorplan, enabling higher compression ratios with reduced wirelength. At 100X compression ratios, wirelength for 2D compression can be up to 2.6X smaller than current industry scan compression architectures.
  • Elastic compression: Registers embedded in the decompression logic enable fault coverage to be maintained at compression ratios beyond 400X by controlling care bits sequentially across multiple shift cycles.

Blyler: Thank you.

Software-Hardware Integration of Automotive Electronics

Friday, October 11th, 2013

My SAE book collects and expands on expert papers on automotive hardware-software electronic integration at the chip, package, and vehicle network levels.

My latest book – more of a mini-book – is now available for pre-order from the Society of Automotive Engineers. This time, I explore the technical challenges in the hardware-software integration of automotive electronics. (Can you say “systems engineering?”) I selected this topic to serve as a series of case studies for my related course at Portland State University. This work includes quotes from Dassault Systemes and Mentor Graphics.

 

Software-Hardware Integration in Automotive Product Development

Coming Soon – Pre-order Now!

Software-Hardware Integration in Automotive Product Development brings together a must-read set of technical papers on one of the most talked-about subjects among industry experts.

The carefully selected content of this book demonstrates how leading companies, universities, and organizations have developed methodologies, tools, and technologies to integrate, verify, and validate hardware and software systems. The automotive industry is no different, with the future of its product development lying in the timely integration of these chiefly electronic and mechanical systems….

 

Blacker Boxes Lie Ahead

Thursday, June 30th, 2011

Few pundits have addressed the systems engineering implications of EDA and semiconductor companies’ recent move toward platforms that include chip hardware together with its associated firmware.

Niche industries are notoriously myopic. They have to be, since excelling in a highly specific market segment usually requires a sharp focus on low-level details. A good example of a niche market is the semiconductor Electronic Design Automation (EDA) tools industry, that fine group of highly educated professionals who create the tools that allow today’s nanoscale transistors to be designed and manufactured.

The EDA industry has long talked about the importance of software (mostly firmware) as a critical complement to the design of processor-intensive hardware ASICs. While acknowledging the importance of software is nothing new, only in the last few years have actual hardware-software platforms been forthcoming from the industry.

What does this trend – the move to include firmware (device drivers) with System-on-Chip (SoC) integrated circuits (ICs) – really mean? To date, the result is that companies offer a platform containing both the SoC hardware and the accompanying firmware. In some cases, as with Mentor, the platform also includes a Real-Time Operating System (RTOS) and embedded code optimization and analysis capabilities.

One could argue that this move to include software with the chips is an inevitable step in upward abstraction, driven by the commoditization of processor chips. Others argue that end users are demanding it, as time-to-market windows shrink in the consumer market.

But rather than follow the EDA viewpoint, let’s approach this trend from the standpoint of the end-user. I define the end-user as the Systems Engineer who is responsible for the integration of all the hardware and software into a workable end-system or final product (see figure). Note the big “S” in SE, meaning the system beyond the hardware or software subsystems.

The integration phase of the typical Systems Engineering V diagram is just as critical as the design phase for hardware-software systems.

What is the end-system or final product? It might be a digital camera or tablet; perhaps a heads-up display for commercial or military aircraft; or even a radiation detector for a homeland security device. Regardless of the end system or product, the role of the Systems Engineer is changing as he/she receives software-supported ICs from the chip supplier, courtesy of the EDA industry. In essence, the “black box” that traditionally consisted of a black-packaged chip just got a bit blacker.

Some might say that the systems engineer now has less to worry about. No longer will the SE have to manage the hardware and software co-design and co-verification of the chip. Traditionally, that meant long meetings and significant time spent in “discussions” with the chip designers and the firmware developers over interface issues. Today, that job has effectively been done by the EDA company and the chip supplier, as the latest generation of chips comes with the needed firmware, e.g., the first offering from Cadence’s multi-staged EDA360 strategy.

On the embedded side, the chip-firmware package might also include an RTOS and tools for software developers to optimize and analyze their code. Mentor is leading this area among EDA tool suppliers.

But how does this happy union of chip hardware and firmware affect the work of a module- or product-level SE? Does it make his/her job easier? That is certainly the goal: to greatly reduce co-design and co-verification issues between silicon development and the associated software while providing hooks into upper-level application development. Now, companies claim that many of these issues have been taken care of for a variety of processor systems.

One should note that these chip hardware-software platforms don’t yet really extend to the analog side of the business. This is hardly surprising, since the software requirement is far smaller than on the digital processor side. Still, software is needed for such things as communication protocol stacks (think PHY and MAC layers).

Yet, even on the digital side of the platform space, important considerations remain. How does hardware and software intellectual property (IP) fit into all of this? Has the new, more highly abstracted blacker box that SEs receive been fully verified? The answer might be partially addressed by the emergence of IP subsystems (“IP Subsystem – Milestone or Flashback”).

Other questions remain. How will open system code and tools benefit or hinder the hardware chip and firmware platforms? From an open systems angle, the black box may be less black but is still opaque to the System Engineer.

What will be the new roles and responsibilities for systems engineers during design and – perhaps more importantly – during the hardware and software integration phase? Will he/she have to re-verify the work of the chip hardware-software vendors, just to be sure that everything is working as required? Will the lower-level SE, formerly tasked with integrating chip hardware and firmware, now be out of a job?

If history is any indication, then we might look back to the early days of RTL synthesis for clues. With the move to include chip hardware and firmware, the industry might expect a shifting of job responsibilities. Also, look for a slew of new interface and process standards to deal with issues of integration and verification. New tool suites will probably emerge.

How the new chip hardware and firmware platforms will affect the integration Systems Engineer is not yet certain. But SEs are very adaptable. A black box – even a blacker one – still has inputs and outputs that must be managed between various teams throughout the product life cycle. At least that activity won’t change.

Systems Beyond EDA’s ESL

Wednesday, August 6th, 2008

The question often begins as follows: “Do you know of any ESL-like tools that would apply to a larger system that incorporates mechanical, electrical, HW, and SW?”

Great question! One that probably cannot be answered by an EDA person, or anyone else with a niche view. Likewise, it’s not a question that can be answered by the high-level, domain-independent tools, that is, systems engineering tools like Slate, Core, Doors, etc. It’s analogous to the challenge – in our EDA world – of linking the worlds of algorithmic and RTL development. It’s a problem of too many layers of abstraction.

In the case of high-level systems engineering tools like Slate, Core, and others, the problem is that all you can do at such a high level is establish an overall architecture and then drill down with point tools for requirements, hw-sw partitioning, etc. This was one of the problems I faced many years ago while working for the DoE Superfund clean-up site: how to go from very high-level problem discussions and perceptions from stakeholders, the general public, lawyers, etc., to a traceable/verifiable solution implemented in hardware, software, peopleware, mechanical systems, etc. I should really talk more about that experience, since it’s very applicable here. But I don’t want to stray too far afield for this blog entry; that is, I want to focus on implementations that result in HW and SW.

To that end, earlier this year I directed my iDesign editor (Clive Maxfield) to focus exclusively on chip-package-board issues, more from an electronics viewpoint than a hw-sw division. Max wrote a great piece to initiate the new chip-package-board direction for iDesign.

This issue of subsystem views and tools (i.e., chip vs. board vs. module vs. complete product) remains a challenge. EDA companies like Cadence, Synopsys, and Mentor are all trying to find ways to address chip-package-board level design, but this is already the domain of big companies like Autodesk and Dassault Systemes, among others:

“An acquisition of Cadence by Autodesk – makers of AutoCAD – does make sense. Cadence makes several good point tools that would complement Autodesk’s existing product engines, e.g., in the aircraft, automotive, and multimedia markets. Autodesk has all the 3D modeling, rendering, and packaging tools that are coveted by the major EDA companies. Autodesk is truly a big fish with around $4.5B in sales and a market cap of $9B. This makes Autodesk roughly four times the size of Cadence. So an acquisition of Cadence makes both technical and financial sense.”

The problem of chip-package-board design is a big one – bigger than ESL. But the need for a solution is just as pressing. Any thoughts?

Beauty in Chip Failures

Thursday, July 17th, 2008

“Beauty is in the eye of the beholder” — Greek, 3rd century BC

The pictures of the impurities that cause chip failures can be amazing. Below is a sample of an Al-based, flower-like crystalline impurity. For more, visit IEEE Online – The Art of Failure.