
Chip Design Magazine



EDA Tool Reduces Chip Test Time With Same Die Size

February 4th, 2016

Cadence combines physically-aware scan logic with elastic decompression in new test solution. What does that really mean?

By John Blyler, Editorial Director

Cadence recently announced the Modus Test Solution suite that the company claims will enable up to 3X reduction in test time and up to 2.6X reduction in compression logic wirelength. This improvement is made possible, in part, by a patent-pending, physically aware 2D Elastic Compression architecture that enables compression ratios beyond 400X without impacting design size or routing. The press release can be found on the company’s website.

What does all the technical market-ese mean? My talk with Paul Cunningham, vice president of R&D at Cadence, helps clarify the engineering behind the announcement. What follows are portions of that conversation. – JB


Blyler:  Reducing test times saves companies a lot of money. What common methods are used today?

Cunningham: Test compression is the technique of reducing test data volume and test application time while retaining test coverage. XOR-based compression has been widely used to reduce test time and cost. It works by partitioning the registers in a design into many more scan chains than there are scan pins. Shorter scan chains mean fewer clock cycles are needed to shift in each test pattern, which reduces test time.
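[Editor's Note: Cunningham's description can be sketched in a few lines of Python. This is an illustrative toy, not Cadence's actual architecture: an XOR "spreader" lets a handful of scan-in pins drive many short scan chains, with each chain's input formed as the XOR of a distinct subset of the pins.]

```python
# Illustrative toy, not Cadence's architecture: an XOR "spreader" fans a few
# scan-in pins out to many short scan chains. Each chain's input bit is the
# XOR of a fixed, distinct subset of the pins, so 3 pins can drive 7 chains.
from itertools import combinations

def build_xor_network(num_pins, num_chains):
    """Assign each scan chain a distinct subset of pins to XOR together."""
    subsets = []
    for size in range(1, num_pins + 1):
        for combo in combinations(range(num_pins), size):
            subsets.append(combo)
            if len(subsets) == num_chains:
                return subsets
    raise ValueError("not enough distinct pin subsets for that many chains")

def decompress(pin_bits, network):
    """One shift cycle: the bit entering each chain is an XOR of pin bits."""
    chain_bits = []
    for subset in network:
        bit = 0
        for pin in subset:
            bit ^= pin_bits[pin]
        chain_bits.append(bit)
    return chain_bits

network = build_xor_network(num_pins=3, num_chains=7)
chain_bits = decompress([1, 0, 1], network)
print(len(chain_bits))  # 7 chain inputs computed from only 3 pin values
```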

But there is a limit to how far test time can be reduced. If the compression ratio is too high, then test coverage is lost. Even if test coverage is not lost, the test time savings eventually dry up. In other words, as you shrink the test time you also shrink the data you can put into the compression system for fault coverage.

As I change the compression ratio, I’m making the scan chains shorter. But I’ve got more chains while the number of scan-in pins stays constant. So every time I shrink the chains, each pattern that I’m shifting in has fewer and fewer bits, because the width of the pattern coming in is the number of scan pins and the length of the pattern coming in is the length of the scan chain. So if you keep shrinking the chains, the amount of information in each pattern decreases. At some point, there just isn’t enough information in the pattern to allow us to control the circuits to detect the faults.
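[Editor's Note: The arithmetic is easy to sketch. The register count, pin count, and ratios below are hypothetical, but the relationship is the one Cunningham describes: pattern width is fixed by the pin count, so raising the compression ratio shrinks the bits each pattern can carry.]

```python
# Hypothetical numbers, real relationship: each pattern is (scan pins) wide
# and (chain length) long, so raising the compression ratio shrinks the
# number of bits each pattern can carry.

def pattern_bits(total_registers, scan_pins, compression_ratio):
    num_chains = scan_pins * compression_ratio        # more chains than pins
    chain_length = -(-total_registers // num_chains)  # ceiling division
    return scan_pins * chain_length                   # bits per pattern

registers, pins = 1_000_000, 25
for ratio in (50, 100, 400):
    print(ratio, pattern_bits(registers, pins, ratio))
# 50  -> 20000 bits per pattern
# 100 -> 10000
# 400 ->  2500: far less information available to set care bits
```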

Blyler: Where is the cross-over point?

Cunningham: The situation is analogous to general relativity. You know that you can never go faster than the speed of light, but as you approach the speed of light it takes exponentially more energy. The same thing is going on here. At some point, the length of the chain gets too short and our coverage drops. But as we approach that cliff moment, the number of patterns it takes to achieve the coverage – even if we can maintain it – increases exponentially. So you can get into the situation where, for example, you halve the length of the chain but you need twice as many patterns. At that point, your test time hasn’t actually dropped, because test time is the number of patterns times the length of the chain, so the product of those two starts to cancel out. At some point you’ll never go beyond a certain level without your coverage dropping. And as you get close to it, you start losing any benefit because you need more and more patterns to achieve the same result.

Blyler: What is the second limit to testing a chip with compression circuitry?

Cunningham: The other limit doesn’t come from the mathematics of fault detection; it is related to physical implementation. In other words, the chip-size limit is physical, not mathematical (like coverage).

Most of the test community has been focused on the upper limit of test time. But even a breakthrough there wouldn’t address the physical implementation challenge. In the diagram below, you can see that the big blue spot in the middle is the XOR circuit wiring. All that wiring in the red is wiring to and from the chains. It is quite scary in size.

Blyler: So the second limit is related to the die size and wire length for the XOR circuit?

Cunningham:  Yes – there are the algorithm limits related to coverage and pattern count (mentioned earlier), and then there are the physical limits related to wire length. The industry has been stuck because of these two things. Now for the solution. Let’s talk about them in reverse order, i.e., the physical limits first.

What is the most efficient way to span two dimensions (2D) with Manhattan routing? The answer is by using a grid or lattice. [Editor’s Note: The Manhattan Distance is the distance measured between two points by following a grid pattern instead of the straight line between the points.]
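[Editor's Note: For readers who like code, the Manhattan distance is a one-liner:]

```python
# The Manhattan distance from the note above: grid routing can only move
# along X or Y, so the distance is the sum of the horizontal and vertical legs.

def manhattan(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

print(manhattan((0, 0), (3, 4)))  # 7: three horizontal steps plus four vertical
```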

So the lattice is the best way to get across two dimensions while giving you the best possible way to control circuit behavior at all points. We’ve come up with a special XOR circuit structure that unfolds beautifully into a 2D grid. So when Modus inserts compression it doesn’t just create an XOR circuit; it actually places it, assigning X-Y coordinates to the XOR gates. Thus, 2D at 400X has the same wire length as 1D at 100X.

Blyler: This seems like a marriage with place & route technology.

Cunningham:  For a long time people did logic synthesis based only on the connectivity of the gates. Then we realized that we really had to do physical synthesis. Similarly, for a long time the industry has realized that the way we connect up the scan chains needs to be physically aware. That’s been done. But nobody made the actual compression logic physically aware. That is a key innovation in our product offering.

And it is the compression logic that is filling the chip – all that red and blue nasty stuff. That is not scan chain but compression logic.

Blyler: It seems that you’ve addressed the wire-length problem. How do you handle the mathematics of the fault coverage issue?

Cunningham: The industry got stuck on the idea that, as you shrink the chains, you have shorter patterns or a reduction in the amount of information that can be input. But why don’t we play the same game with the data we shift in? Most of the time, I do want really short scan chains, because that typically means I can pump data into the chip faster than before. But in so doing, there will be a few cases where I lose the capability to detect faults, because some faults really require precise control of values in the circuit. For those few cases, why don’t I shift in more clock cycles than I shift out?

In those cases, I really need more bits of information coming in. That can be done by making the scan deeper, that is, by adding more clock cycles. In practice, that means we need to put sequential elements inside the decompressor portion of the XOR compressor system. Thus, where necessary, I can read in more information. For example, I might scan in for 10 clock cycles but scan out (shift out) for only five clock cycles. I’m reading in more information than I’m reading out.

In every sense of the word, it is an elastic decompressor. When we need to, we can stretch the pattern to contain more information. That stretched pattern is then transposed by 90 degrees into a very wide pattern that we then shove into those scan chains.
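[Editor's Note: The "stretch then transpose" idea can be pictured with a toy reshape. The dimensions below are illustrative only, not taken from Modus: a long, narrow input (10 cycles on 2 pins) becomes a short, wide pattern (5 cycles across 4 chains) carrying the same bits.]

```python
# Illustrative dimensions only (not from Cadence): a long, narrow pattern
# scanned in over 10 cycles on 2 pins is "transposed" into a short, wide
# pattern driven across 4 chains for 5 cycles -- same 20 bits, new shape.

pins, chains, in_cycles, out_cycles = 2, 4, 10, 5

# 10 cycles of data arriving on 2 pins (arbitrary 0/1 fill pattern).
stretched = [[(cycle + pin) % 2 for pin in range(pins)]
             for cycle in range(in_cycles)]

# The same bits reshaped into 5 cycles across 4 chains.
flat = [bit for row in stretched for bit in row]
wide = [flat[c * chains:(c + 1) * chains] for c in range(out_cycles)]

print(len(wide), len(wide[0]))  # 5 4: five shift cycles, four chains wide
```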

Blyler: So you’ve combined this elastic decompressor with the 2D concept.

Cunningham: Yes – and now you have changed the testing game, with 400X compression ratios and up to 3X reduction in test time without impacting the wire length (chip size). We have several endorsements from key customers, too.

In summary:

  • 2D compression: Scan compression logic forms a physically aware two-dimensional grid across the chip floorplan, enabling higher compression ratios with reduced wirelength. At 100X compression ratios, wirelength for 2D compression can be up to 2.6X smaller than current industry scan compression architectures.
  • Elastic compression: Registers embedded in the decompression logic enable fault coverage to be maintained at compression ratios beyond 400X by controlling care bits sequentially across

Blyler: Thank you.

Managing Complex Hardware-Software Systems – Online Course

January 27th, 2016

Managing complex technical systems presents different challenges from typical project/program management. This course will explore those differences using good systems engineering principles, numerous case studies and modern tools. Some of the contemporary topics include:

  • Implementing a strategic, tailored middle-out approach to development that incorporates legacy systems.
  • Coordinating change across many engineering and business disciplines without affecting existing tool flows or development processes.
  • Managing integration and assessing challenges between hardware and software teams.
  • Exposure to Model-based systems engineering (MBSE) techniques.
  • Modern case studies to support up-to-date systems engineering principles and concepts, including decision modeling, trade-off analysis for hardware and software systems, and the effect of organizational structure on IoT product design.
  • Practical system engineering program and design review checklists to aid in the effective and efficient implementation of overall program requirements (partially available online).

This course is based on the latest 5th edition of Wiley’s “Systems Engineering Management,” by Blanchard and Blyler.

Online course students will receive 3 CEUs from the Florida Institute of Technology or a certificate of completion that may be applied as 30 Continuing Learning Points (CLPs).

Visit the RMS Partnership page (right-hand column) to register

Last Day to Register is Thursday, February 4th

Instructor: John Blyler

Soon after you register, a member of the RMS Partnership will contact you regarding online procedures and related information.

Originally posted on JB Systems

The Dangers of Code Cut-Paste Techniques

January 20th, 2016

A common coding practice can lead to product efficiency but also software maintenance nightmares and technical debt.

By John Blyler, Editorial Director

At a recent EDA event, Semi-IP Systems talked with Cristian Amitroaie, the CEO of AMIQ, about the good and bad side of developing software from code that is copied from another program. What follows is a paraphrased version of that video interview (AMIQ_DAC2015_Pt1). – JB

Cristian Amitroaie (left), the CEO of AMIQ, is interviewed by John Blyler (right), editorial director of JB Systems.

Blyler: I noticed that you recently released a copy-paste detection technology. Why is that important?

Amitroaie: We introduced these capabilities in Verissimo Testbench Linter. We’ve had it in mind for some time, both because it is very useful – especially when you accumulate large amounts of code – but also because it is fun to see how much copy-paste code exists in your program. There are all sorts of tools that detect copy-paste in the software development world, so why not include that in the chip design and verification space. Also, the topic of copy-paste code development is very interesting.
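[Editor's Note: Verissimo's algorithm is proprietary, but the general idea behind copy-paste detectors is simple enough to sketch. The snippet below is a hypothetical minimal version: hash sliding windows of normalized lines and report any window that appears more than once.]

```python
# Not Verissimo's algorithm (that is proprietary) -- just a minimal sketch of
# how copy-paste detectors commonly work: slide a window of normalized lines
# over the source and report any window text that appears more than once.
from collections import defaultdict

def find_duplicates(source, window=3):
    lines = [ln.strip() for ln in source.splitlines() if ln.strip()]
    seen = defaultdict(list)
    for i in range(len(lines) - window + 1):
        chunk = "\n".join(lines[i:i + window])
        seen[chunk].append(i)  # record every starting line of this window
    return {chunk: locs for chunk, locs in seen.items() if len(locs) > 1}

code = """
x = read()
y = x + 1
log(y)
z = read()
x = read()
y = x + 1
log(y)
"""
dups = find_duplicates(code)
print(len(dups))  # 1 duplicated three-line block found
```

Real tools add normalization of identifiers and tokens so that lightly tweaked copies are still caught; this sketch only catches verbatim duplicates.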

Blyler: Is the capability to copy and paste a good or a bad thing for developers?

Amitroaie: To answer that question, you first have to see how copying and pasting are being used. While it is not a fundamental pattern used by software engineers, it is a common technique. When engineers want to build something new or solve an existing problem, they usually start from something that is working or basically doing what they need. They take what exists to use directly or to enhance it for another application. Copying and pasting is a means to start something or enhance it. In software, it is very easy to do.

It may happen that you don’t know if what you need is already available. For example, you may work in a big company where similar activities are being done in parallel that you don’t know about. So you develop the same thing again. Now you have code duplication in the overall code base.

Another reason to use a copy-paste approach is that junior engineers lack the senior-level skills or experience to start from scratch. They copy and then build upon existing solutions.

Whatever the reason, in time most companies will have duplicate code. The fact that you use copy-paste to solve problems isn’t bad, because you take something that works. You don’t have to start from scratch, so you save time. You tweak the existing code and use it to solve a new problem. After all, engineering is about making things work. It’s not about finding the best, most ideal solution.

Blyler: We are talking about software programmers who prefer elegant solutions, aren’t we?

Amitroaie: Yes, but today you have lots of time pressures. Plus you often don’t have enough resources to get the best solution. But you need to get a solution within the market window. So elegance tends not to be the highest priority. The top thing is to make it work. In this sense, copying and pasting is a practice that makes sense. It is also unavoidable.

The bad thing is that, as time goes by, you accumulate and duplicate more code.  When a mistake is detected, you must now go to several places to fix it in the original code – if there is such a thing. It’s an interesting question: In this copy-paste world, is there such a thing as the original code? But that’s another matter.

Fixing or enhancing the code is problematic. For example, if you want to enhance an algorithm or functionality, you must remember where all the duplications are located. Many times, when you duplicate code, you don’t understand the intention of the original program. You think that you understand so you copy and paste it, adding a few tweaks. But maybe you really didn’t understand the intentions or implications of the original programmer and unknowingly insert a bug.

In this sense, copying and pasting is bad. As the code base grows, you can accumulate what is called “technical debt” from the copy-paste activity. Technical debt results from code that hasn’t been cleaned. We never have time to clean up the code. We say that we’ll do it later but never do. If you go to your manager with a request for code clean-up, he or she will say “no.” Who approves time for code clean-up? Very few. They all talk about it but I’ve never seen it happen. Even though we are in the EDA market, we are still software developers and have the same challenges when trying to improve our code. I know how hard it is for a team leader to approach code clean-up.

Technical debt is analogous to the interest that accumulates on a financial loan. You add more and more code, the clean-up debt accumulates, and that leads to higher maintenance costs over time. You can end up with huge piles of code where no one knows where it starts or ends, or how much is duplicated. That makes it tough to redesign the code or make it more compact. It will blow up in your face at the worst possible moment.

Blyler: The debt comes due when you are least able to afford it.

Amitroaie: Yes, it’s unavoidable. It is similar to software entropy in that it keeps accumulating. At some point, it will become more cost effective to rewrite the code from scratch than to maintain it.

The good side of copying and pasting is that it is a fundamental way of getting code developed quickly. It helps programmers advance in an efficient way, at least from a results-oriented perspective. The bad side is that you accumulate technical debt that can lead to maintenance nightmares.

Blyler: Thank you.

For more information: AMIQ EDA Introduces Duplicate Code Detection in Its Verissimo SystemVerilog Testbench Linter

Originally posted on “IP Insider”

Imagination Talks about RFIC, IP, Embedded IoT and Fabric-based SoCs

December 18th, 2015

Tony King-Smith of Imagination Technologies discusses the need for RF IC designers, IP platforms, IoT as the new embedded and fabric-based SoCs.

At this year’s Imagination Summit 2015 in Silicon Valley, John Blyler, editorial director for “IoT Embedded Systems” sat down with Tony King-Smith, the Executive VP of Marketing at Imagination, to talk about RF design, IP platforms, embedded IOT and fabric-based SoC trends. What follows are excerpts from that conversation. – JB

Read the complete story on the Chipestimate “IP Insider” blog.

Autonomous Car Patches, SoC Rebirth, IP IoT Platforms and Systems Engineering

December 9th, 2015

Highlights include autonomous car technology, patches, IoT Platforms, SoC hardware revitalization, IP trends and a new edition of a systems engineering classic.

By John Blyler, Editorial Director, IP and IoT Systems

In this month’s travelogue, publisher John Blyler talks with Chipestimate.TV director Sean O’Kane about the recent Renesas DevCon and trends in software security patches, hardware-software platforms, small to medium businesses creating System-on-Chips, intellectual property (IP) in the Internet-of-Things (IoT) and systems engineering management. Please note that what follows is not a verbatim transcription of the interview. Instead, it has been edited and expanded for readability. I hope you find it informative. Cheers — JB


ChipEstimate.TV — John Blyler Travelogue, November 2015

Read the transcribed, complete post on the “IP Insider” blog.



Is Hardware Really That Much Different From Software

November 30th, 2014

When can hardware be considered as software? Are software flows less complex? Why are hardware tools less up-to-date? Experts from ARM, Jama Software and Imec propose the answers.

By John Blyler, Editorial Director

The Internet-of-Things will bring hardware and software designers into closer collaboration than ever before. Understanding the working differences between the two technical domains in terms of design approaches and terminology is the first step in harmonizing the relationship between these occasionally contentious camps. What are the differences in hardware and software design approaches? To answer that question, I talked with technical experts including Harmke De Groot, Program Director Ultra-Low Power Technologies at Imec; Jonathan Austin, Senior Software Engineer at ARM; and Eric Nguyen, Director of Business Intelligence at Jama Software. What follows is a portion of their responses. — JB

Blyler: The Internet-of-Things (IoT) will bring a greater mix of both HW and SW IP issues to systems developers. But hardware and software developers use the same words to mean different things. What do you see as the real differences between hardware and software IP?

De Groot: Hardware IP (and with that I include very low-level software) is usually optimized for different categories of devices: devices on small batteries or harvesters, medium-size batteries like mobile phones and laptops, and devices connected to the mains. Software IP, especially the higher layers (middleware and up), can more easily be developed to scale and fit many platforms with less adaptation. However, practice shows that scaling software for the IoT also has its limitations; for very resource-limited devices, special measures have to be taken. For example, direct retrieval of data from the cloud, combined with local sensor data, by a very small sensor node is a partly unsolved challenge today. For mobiles, laptops and more capable devices there are reasonable solutions (though not perfect yet) to retrieve cloud data and combine it with the device's sensor information in real time. For sensor devices with tighter resource constraints running on smaller batteries this is not so easy, especially with heterogeneous networking challenges. Sending data to the cloud (potentially via a gateway device such as a mobile phone, laptop or special router) seems to work reasonably well, but retrieving the right data from the cloud to combine with the sensor data of the small sensor node itself, in real time, remains a challenge to be solved.

Austin: Personally, I see three significant differences between hardware and software design and tools:

  1. How hard is it to change something when you get it wrong? It is ‘really hard’ for hardware, and somewhere on a spectrum from ‘really hard’ to ‘completely trivial’ in software.
  2. The tradeoffs around adding abstraction to help deal with complexity. Software is typically able to ‘absorb’ more of this overhead than hardware. Also, in software it is far easier to only optimize the fast path. In fact, there usually isn’t as much impact to an unoptimised slow path (as would be the case in hardware.)
  3. There are differences in the tool sets. This was an interesting part of an ongoing debate with my colleagues. We couldn’t quite get to the bottom of why it is so common for hardware projects to stick with really old tools for so long. Some possible ideas included:
  • The (hardware) flow is more complex, so getting something that works well takes longer, requires more investment and results in a higher cost to switch tools.
  • There’s far less competition in the hardware design space so things aren’t pushed as much. This point is compounded by the one above, but the two sort of play together to slow things down.
  • The tools are harder to write and more complex to use. This was contentious, but I think on balance, some of the simplicity and elegance available in software comes because people solve some really tough physical issues in the hardware tools.

So, this sort of thinking led me to an analogy of considering hardware to be very low-level software. We could have a similar debate about JavaScript productivity versus C – and I think the arguments on either side would look quite similar to the software-versus-hardware arguments.

Finally, on tools, I think it might be significant that the tools for building hardware are *software* tools, and the tools for building software are *also* software tools. If a tool for building software (say, a compiler) is broken or poor in some way, the software engineer feels able to fix it. If a hardware tool is broken in some way, the hardware engineer is less likely to feel it is easy to just switch tasks quickly and fix it. That is to say, software tools are built for software engineers by software engineers, while hardware tools are built by software engineers to be sold to companies, to be given to hardware engineers!

Nguyen: One of the historical differences relates to the way integrated system companies organized their teams. As marketing requirements came in, the systems engineers in the hardware group would lay out the overall design. Most of the required features and functionality were very electrical and mechanical in nature, where software was limited to drivers and firmware for embedded electronics.

Today, software plays a much bigger role than hardware and many large companies have difficulties incorporating this new mindset. Software teams move at a much faster pace than hardware. On the other hand, software teams have a hard time integrating with the tool sets, processes and methodologies of the hardware teams. From a management perspective, the “hardware first” paradigm has been flipped. Now it is a more of software driven design process where the main question is how much of the initial requirements can be accomplished in software. The hardware is then seen as the enabler for the overall (end-user) experience. For example, consider Google’s Nest Thermostat. It was designed as a software experience with the hardware brought in later.

Blyler: Thank you.

Review of Jama, ARM Techcon and TSMC OIP Shows

November 14th, 2014

October issues of the “Silicon Valley High-Tech Traveler Log” – with Sean O’Kane and John Blyler

Three events from TSMC, ARM and JAMA Software highlight the breadth and depth of IP development that (hopefully) results in manual-less consumer apps.

A few weeks ago, I attended three shows: Jama’s Software Product Delivery Summit, TSMC’s Open Innovation Platform (OIP) and ARM’s TechCon. While each event was markedly different, there was an unintentional common thread, i.e., all three dealt with the interplay between hardware and software IP systems – albeit at different levels of the supply chain.

Each of these shows characterized that interplay in different ways. For TSMC, it was a focus on deep semiconductor manufacturing-related IP. Conversely, Jama Software dealt with product delivery issues for which embedded hardware and application software played a major role. Embedded software on boards running the company’s flagship processors and ecosystem IP hardware peripherals was the focus at the ARM Techcon. Why are these various instantiations of IP important?

Read the rest of the story at: IP-Based Technology without Manuals?







Soft (Hardware) and Software IP Rule the IoT

September 2nd, 2014

By John Blyler, JB Systems

Both soft (hardware) and software IP should dominate in the IoT market. But for which segments will that growth occur? See what the experts from IPExtreme, Atmel, GarySmithEDA, Semico Research and Jama Software are thinking.

The Internet-of-Things will significantly increase the diversity and amount of semiconductor IP. But what will be the specific trends among the hardware and software IP communities? Experts from both domains shared their perceptions, including Warren Savage, President and CEO of IPExtreme; Patrick Sullivan, VP of Marketing, MCU Business Unit for Atmel; Gary Smith, Founder and Chief Analyst for Gary Smith EDA; Richard Wawrzyniak, Senior Market Analyst for ASIC & SoC at Semico Research; and Eric Nguyen, Director of Business Intelligence at Jama Software. What follows is a portion of their responses. — JB

Blyler: Do you expect an accelerated growth of both hardware and software IP (maybe subsystem IP) due to the growth of the IoT? What are the growth trends for electronic hardware and software IP?

Savage: I don’t think that there is anything special about the Internet-of-Things (IoT) from an intellectual property (IP) perspective. The prospect of IoT simply means there is going to be a lot more silicon in the world as we start attaching networking to things that previously were not connected. As a natural evolution of the semiconductor market, hardware and software IP is going to keep growing and will outpace everything else for the foreseeable future. Subsystems are a natural artifact of that maturing, as well as of customers wanting to do more and more with fewer people, outsourcing whole functions of chips to be delivered by their IP supplier, who is likely an expert in that subject matter.

Sullivan: The largest growth will be in software IP for hardware IP that already exists, in order to connect devices to the Internet. Developers who are not familiar with wireless applications will find themselves making connected devices, and it will be crucial for suppliers to have context-aware stacks and other IP tailored for the different IoT usage models. For example, just having a ZigBee stack is not sufficient. You need a version for healthcare, a version for lighting, and so on.

Security is also going to be an important factor for both securing communication between IoT devices and the cloud (SSL/TLS technologies), and also to authenticate that firmware images running on connected devices have not been tampered with. Addressing these needs may require additional software development of IoT devices, and potentially specialized hardware components as well.

On the hardware side, the main focus will continue to be power consumption reduction as well as range and quality improvements.

Smith: Yes, growth in hardware and software IP will increase with the IoT expansion. However, the IoT market comprises multiple segments. To get accurate growth figures you would need to explore them all (see Table).

Table: Markets for the Internet-of-Things. (Courtesy of

Wawrzyniak: I do expect some acceleration of revenues derived from IP going into IoT applications. At this point it is hard to determine just how much acceleration there will be, since we are just at the very beginning of this trend. It also will depend upon which types of IP are chosen as the ones most favored by SoC designers. For example, if designers select one of the wireless IP types as the preeminent solution, then this might be more expensive (generate more IP revenue over time) than, say, ZigBee.

Given the sheer volume of IoT applications and silicon being projected, it is possible that once a specific process geometry is decided on as the optimum type to use, the IP characterized for that geometry might actually be less expensive than the same IP at another geometry. Volume will drive cost in this case. All these factors will go into figuring out how much additional IP revenue will be generated. I would say a safe estimate today would be on the order of 10%.

I also think it’s likely that IP Subsystems will be created for IoT applications. Again, this depends on how complex the silicon solution will need to be. If we are talking lightbulbs, then it is hard to imagine that an IP Subsystem will be needed. On the other hand, a relatively complex chip might require an IP subsystem, e.g., a Sensor Fusion Hub subsystem. Sensors will certainly be everywhere in the IoT, so why not create a subsystem that deals with this part of the solution and ties it all together for the designer?

Hard IP will probably be more expensive than Soft IP. I would say that Soft IP will be used more in these types of SoCs. I would estimate that it could be as high as a 70 – 30 split in favor of Soft IP.

Nguyen: Absolutely. The growth of IoT will not only open new markets such as wearable technologies and home automation but will also cause disruption in existing markets due to software-based services being delivered through connected devices. Technology products are evolving from competitive differentiation based on electro-mechanical IP to customer-experience differentiation powered by software applications running on optimized hardware.

The trends in hardware and software IP are accelerating the rate of innovation for customer-facing products, which in turn will have a direct impact throughout the supply chain. Software producers must manage the interdependencies not only across their product lines but also across the various technologies they’ll be deployed on (e.g., iOS, Android, Web, integration into 3rd-party technology) or various subsystems. The connected aspect of these technologies allows vendors to continually update their offerings and therefore evolve the customer experience throughout the life of the physical technology.

The performance demands of continuously evolving, software-heavy products are also driving accelerated innovation throughout the supply chain, specifically in hardware components such as systems-on-chip, systems-in-package, sensor technology, and battery/power management.

Final product producers are also accelerating release cycles and therefore driving the need to more easily integrate sub-components. This demand is driving the demand for System-in-Package (SiP) technologies, which incorporate the chips, drivers, and software within a physical sub-component package that can be easily integrated into the overall system. Semiconductor companies must now coordinate the growing complexity of silicon, software, and documentation development while accelerating their ability to incorporate market feedback into product roadmaps, R&D, and ultimate manufacturing and delivery to customers; all the while ensuring they can meet per-unit cost targets.

Blyler: Thank you.

Weekly Chip-Science Highlights – Aug. 15th

August 15th, 2014

By John Blyler

More on Moore’s Law; Thought-Controlled Cameras; Neurons on Chip; IP Bag; Quantum Dots and blogs.

Here’s a mix of semiconductor-related articles and blogs that caught my attention this week:

  • Can our computers continue to get smaller and more powerful? – Have we reached the limits to computation? In a review article in this week’s issue of the journal Nature, Igor Markov of the University of Michigan reviews limiting factors in the development of computing systems to help determine what is achievable, identifying “loose” limits and viable opportunities for advancements through the use of emerging technologies. His research for this project was funded in part by the National Science Foundation (NSF).
  • Thought-Controlled Camera Confirms IoT Trends – Software start-ups will dominate growth in the Internet-of-Things (IoT) as demonstrated by MindRDR’s application that combines Google Glass and Neurosky biosensor hardware.
  • Brain-inspired chip fits 1m ‘neurons’ on postage stamp – Scientists have produced a new computer chip that mimics the organization of the brain, and squeezed in one million computational units called “neurons”. They describe it as a supercomputer the size of a postage stamp. Each neuron on the chip connects to 256 others, and together they can pick out the key features in a visual scene in real time, using very little power.

Figure: TrueNorth is the first single, self-contained chip to achieve 256 million individually programmable synapses on chip which is a new paradigm. (TrueNorth Core Array, Courtesy of IBM)

  • (Potato) Chip Bags IP – The latest research from MIT, Microsoft and Adobe on recovering speech from the vibrations of ordinary objects confirms the growing importance of software IP.
  • With sharp focus, quantum dot makers scale up to meet demand – This Reuters article discusses the huge growth in demand for quantum dots, the semiconductor crystals that use less power and are cheaper than organic LEDs.


Industry Blogs:

  • Kathryn Kranen discusses Jasper, formal verification and the Cadence Acquisition.
  • Mentor’s Colin Walls shares his love of writing, especially embedded software articles on everything from assembly language to software IP.
  • Another interesting blog from Mentor. This time, Christopher Hallinan explores hardware and software complexity via the Yocto Project.
  • Summer is a time for big trips. Synopsys’s Tom De Schutter recently shared an adventure with his family through the western US states. That trip sparked comparisons with the task of starting a virtual prototyping project. Finding similarities between function trace information and the unexpected buffalo crossings at Yellowstone National Park is not an easy task, but Tom pulls it off.
  • Satellite Crowdfunding, Then and Now – Did you know that the first satellite sent up by the United States was originally planned to be crowdfunded? Did you also know that the newest amateur satellite, to be sent up in the third quarter of next year, will be crowdfunded as well? – by Hamilton Carter


Our Day at DAC – Day 2 (Tuesday)

June 2nd, 2014

Here are brief observations on noteworthy presentations, cool demonstrations, and hallway chats from the Chip Design editorial staff covering DAC 2014.

Report from Gabe Moretti:

EDA is Alive and Ready for Another Year of Growth

The second day of DAC was a very busy one for me. I met with Dassault Systèmes, which showed me an impressive approach to EDA based on a project management system that provides different views of the state of the project depending on the viewer’s role in it: project manager, individual engineer, verification engineer, and so on. I also met with Verific and Invionics, two different companies that have found a symbiotic way to expand the market they serve without competing with each other.
Synopsys described their approach to the automotive market. The presentation described almost perfectly my 2014 Lincoln MKZ hybrid. It is impressive to see technology becoming reality as I write.
Carbon is growing and diversifying; revenues were up 46% last year. The company was not quite ready for a big announcement at DAC, but I was told one would be made before the end of this month.

Much work is going on in formal verification, emboldened by Cadence’s acquisition of Jasper Design Automation.
More meetings are scheduled for tomorrow, and I promise a final impression of what DAC meant for me.


Report from Hamilton Carter:

DAC Meanderings, 51st DAC (6/3)


The day started early with the Accellera breakfast. The food was excellent: there were “fluffy scrambled eggs,” bacon, sausage, and a variety of pastries. For the first half hour or so, folks straggled in, slowly orienting themselves after the first night of DAC parties. The proceedings kicked off with the handling of a few business issues. Shishpal Rawat, the current Accellera chairman, outlined the achievements of the prior year and the goals and schedule for the coming one. The last order of business was the presentation of the Accellera Leadership Award for 2014 to Yatin Trivedi (pictured).

A few moments later, Doulos’ John Aynsley, ever spry, bounded onto the stage to introduce the members of his UVM roundtable.

John played devil’s advocate to keep the panel lively. He first asked what the members’ general feelings on SystemVerilog were. When all the panelists agreed that they were generally happy, John then prodded each of them to find out how happy they were, why, and what challenges they were still facing. The general consensus seemed to be as follows:

  • Asking designers to adopt an object-oriented, class-based solution was a hard sell.
  • Finally having a uniform standard offered by all the vendors was very, very nice.
  • There were hiccups and burps along the way as internal libraries needed to be converted to the new standard and IP vendors tended not to have adopted the standard yet.

From the Accellera breakfast, a brief walk brought me to the first-time exhibitors’ interviews.

Silicon Cloud
Marc Edwards, presenting for Silicon Cloud, described his vision of moving the engineering flow into the cloud, allowing startups and others to avert the expense of large hardware purchases. Silicon Cloud offers a solution that moves all design tools, licenses, and IP into a server space the company maintains and monitors. This places thousands of virtual machines at the disposal of design engineers, who access the cloud via Chromebooks that have been walled off from the rest of the internet. All transactions that touch the design, IP, or tools are recorded. In addition to providing valuable information on the process flow and the usage of tools and IP, Silicon Cloud also watches for nefarious and/or non-conformant behavior in the management of IP.

Larry Lapides presented Imperas’ services and product portfolio. The company is focusing on software verification in the embedded realm. Their portfolio of over 140 open-access processor and peripheral models allows customers to bring up their software ahead of design completion. The models run at millions of cycles per second, allowing very comprehensive software scenarios. Automotive and medical embedded applications, where software failure is not an option, are adopting Imperas’ testing and system-reliability tools and methodology.

Harnhua Ng presented Plunify’s FPGA-build optimization solution. Their tool watches FPGA builds, which can take days and still fail to converge, and provides early warning that non-convergence is imminent. The tool also points out the likely causes of the non-convergence within the design so that a successful build can be achieved next time. In addition to its dynamic build-watching features, the tool has a static facility that scans the design to be built and warns of known issues before the build begins.

Jason Png, OPTIC2connect’s founder and CEO, gave a brief presentation of his company’s optical interconnect prototyping services.  He said they don’t intend to replace design engineers, just make their jobs much simpler.   OPTIC2connect has helped their customers move their prototyping cycles for optically enabled bus infrastructures from six months to three weeks.

Synopsys is back in the Formal Verification Market
From a round of interviews with the new guard of EDA, I proceeded to an interview with one of the older names in EDA: Synopsys. Synopsys is announcing its new entry into the formal/static verification market at this year’s DAC. The all-new tool introduces capabilities for formal verification, clock-domain-crossing checks, and low-power static checking, with other features on the way soon. The tool can load chip-level, fully flattened RTL designs to facilitate proper low-power and interconnect checking. It also sports simplified and compressed error output. Gone are the days of day-long design checks followed by searching through gigabytes of data for the one error that matters. The tool bundles errors up to their root cause, which is reported along with the count of other errors attributed to that root. For those who still want to dig into the gory details themselves, an API is provided for teasing every last bit of available data out of a formal/static verification run.
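The root-cause bundling described above can be pictured as a simple grouping step: individual violations carry a root-cause label, and the report collapses them into one entry per root with a count. A minimal sketch in Python, purely illustrative — the function name, data shape, and error strings are hypothetical and not taken from the Synopsys tool:

```python
from collections import defaultdict

def bundle_errors(errors):
    """Group (root_cause, detail) violation records by root cause.

    Returns one summary entry per root cause with the count of
    attributed errors, mimicking the compressed report the article
    describes (one line per root instead of gigabytes of raw hits).
    """
    bundles = defaultdict(list)
    for root, detail in errors:
        bundles[root].append(detail)
    return {root: {"count": len(details), "details": details}
            for root, details in bundles.items()}

# Hypothetical violations from a low-power / CDC static check.
errors = [
    ("missing isolation cell on domain A", "pin U1/Z"),
    ("missing isolation cell on domain A", "pin U2/Z"),
    ("unsynchronized CDC path", "reg_a -> reg_b"),
]
summary = bundle_errors(errors)
# Two root causes; the first accounts for two attributed errors.
```

A real tool would of course infer the root causes itself from the design netlist; the sketch only shows the reporting side, where thousands of raw violations collapse into a handful of actionable summary lines.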

Jasper’s food truck party
Jasper brought three busloads of engineers and semiconductor industry aficionados to Treasure Island earlier today to partake of the delicious wares of five different food trucks.

Entertainment was provided by Rat-Pack styled musicians, a magician, a juggler, and a lawyer turned professional bubble maker.

A great time was had by all, and Jasper CEO Kathryn Kranen thanked the Jasper team for their excellent work.
