Jun 21 2012

Update on Physical Verification at DAC

June 2012 – At the DAC show this year, we took a look at the direction of Physical Verification for the IP development and custom SoC space. The industry is still plagued by proprietary programming languages for the tools, and this is driving the central problem of rule context and interpretation.

As the design rules have increased in complexity to include optical effects and now design application effects, there is ambiguity in what the rules are trying to cover and how they impact the design flows. These differences in interpretation are driving the issue of “it is clean with one verification tool, but not another”. This is a problem because the major foundries use multiple tools for signoff, while IP developers typically support only one. As SoCs grow more complex, IP that is acquired as “clean” from individual sources no longer shows “clean” when the blocks are placed together on the same die and checked with a single tool.
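To make the interpretation problem concrete, here is a small, purely illustrative Python sketch (it uses no vendor rule syntax, and the numbers are made up): the same minimum-spacing rule, read as “facing edges only” by one checker and “including corner-to-corner distance” by another, returns “clean” and “violation” on the same pair of shapes.

```python
from math import hypot

MIN_SPACE = 0.05  # hypothetical minimum spacing, in microns

def edge_gaps(a, b):
    """Axis-aligned gaps between two rectangles given as (x1, y1, x2, y2)."""
    dx = max(b[0] - a[2], a[0] - b[2], 0.0)
    dy = max(b[1] - a[3], a[1] - b[3], 0.0)
    return dx, dy

def facing_edges(a, b):
    """True if the rectangles overlap in x or in y, i.e. have edges that face each other."""
    return min(a[2], b[2]) > max(a[0], b[0]) or min(a[3], b[3]) > max(a[1], b[1])

def clean_reading_a(a, b):
    # Reading A: the spacing rule applies only to facing edges; pure
    # corner-to-corner situations are not checked at all.
    if not facing_edges(a, b):
        return True
    return max(edge_gaps(a, b)) >= MIN_SPACE

def clean_reading_b(a, b):
    # Reading B: the rule also applies corner-to-corner, measured as Euclidean distance.
    return hypot(*edge_gaps(a, b)) >= MIN_SPACE

r1 = (0.0, 0.0, 1.0, 1.0)
r2 = (1.02, 1.02, 2.0, 2.0)     # diagonally offset by 0.02um in x and y
print(clean_reading_a(r1, r2))  # True  -> "clean" under one interpretation
print(clean_reading_b(r1, r2))  # False -> a violation under the other
```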

One of the major issues is the “waiver” methodology. Because the interpretation of the design rules is a range rather than an absolute, a “clean” design can contain features and design data that “false flag” in the tools when standard runsets are used. The challenge with modifying the options on these runsets is that the runset no longer conforms to the “signoff” spec from the fabs, so the end result is not “clean”. The workaround is a “waiver” methodology, in which the known errors are marked in the tools so they are not flagged again. The difficulty is that each of the PV vendors uses a different flow, and some customers have their own legacy methods from before the tools formally supported this feature.
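As a rough illustration of how a waiver flow works in principle (the data model below is hypothetical, not any vendor's waiver format), previously reviewed violations can be stored with their rule name, location, and cell, and matching hits in a new run are suppressed rather than re-flagged:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Violation:
    rule: str    # design rule name, e.g. "M1.S.1" (invented for illustration)
    bbox: tuple  # (x1, y1, x2, y2) in microns
    cell: str    # cell in which the error was reported

def load_waivers():
    # In practice this would come from a reviewed, signed-off waiver database.
    return {
        Violation("M1.S.1", (10.00, 4.50, 10.04, 4.55), "pll_core"),
    }

def apply_waivers(results, waivers):
    """Split raw checker output into violations that still need review and waived ones."""
    open_errors = [v for v in results if v not in waivers]
    waived = [v for v in results if v in waivers]
    return open_errors, waived

raw = [
    Violation("M1.S.1", (10.00, 4.50, 10.04, 4.55), "pll_core"),    # known, waived
    Violation("M2.W.3", (22.10, 7.00, 22.20, 7.10), "serdes_top"),  # new, must be reviewed
]
open_errors, waived = apply_waivers(raw, load_waivers())
print(len(open_errors), "open,", len(waived), "waived")  # 1 open, 1 waived
```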

This issue is aggravated at the new 20nm-and-below nodes, which also have to deal with 3D devices, designed parasitic devices that must be extracted along with the primary device, and the shift from restricted design rules to prescriptive design rules. This change from “here are the minimums you cannot violate to get a good design” to “this is how you build a known good device” raises a major issue of application interpretation. Most of these prescriptive rules do not take into account the context or application of the devices. As a result, a whole level of functional rules, such as those targeted by the Mentor PERC tool, is needed to support these new designs.
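A simplified sketch of the kind of functional, context-aware check this implies (the netlist, device types, and rule below are invented for illustration and are not how PERC rules are actually written) asks whether every pad-connected net actually reaches an ESD clamp, rather than measuring any geometry:

```python
netlist = {
    # net name -> list of (device_type, device_name) connected to it
    "PAD_IN":  [("esd_clamp", "D1"), ("nmos", "M3")],
    "PAD_OUT": [("pmos", "M7")],                       # missing protection
    "VDD":     [("esd_clamp", "D1"), ("pmos", "M7")],
}
pad_nets = ["PAD_IN", "PAD_OUT"]

def check_esd_protection(netlist, pad_nets):
    """Flag pad nets with no ESD clamp attached -- a rule about device
    application and context, not about minimum spacing or width."""
    errors = []
    for net in pad_nets:
        devices = netlist.get(net, [])
        if not any(dtype == "esd_clamp" for dtype, _ in devices):
            errors.append(f"{net}: no ESD clamp found")
    return errors

print(check_esd_protection(netlist, pad_nets))  # ['PAD_OUT: no ESD clamp found']
```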

From a performance point of view, the capacity requirements of these sub-40nm designs are taxing the limits of the IT infrastructure and compute resources needed to return a “same shift” result from a run. The distributed computing engines that are used for the core verification work, however, are not employed in the analysis and data review cycle. The debug and error review are still single-core, single-task, single-memory operations. This is a major schedule impediment for these designs, as the resulting large data objects are not compatible with high-speed interactive operation when older versions of layout editors are used as the graphics platform. The need to review this large results data interactively is one of the few catalysts in recent years for migration away from the Virtuoso platform to layout editors from Synopsys, SpringSoft, Silvaco, and Mentor.
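The asymmetry can be pictured with a toy Python sketch (the rule names and check bodies are placeholders, not a real verification engine): the checking phase fans out across worker processes, while the review phase is a single serial pass over the merged results, which is exactly the part that does not scale:

```python
from multiprocessing import Pool

RULES = ["M1.S.1", "M1.W.2", "M2.S.1", "V1.EN.1"]  # placeholder rule names

def run_rule(rule):
    # Placeholder for a real geometric check; returns (rule, violation_count).
    return rule, hash(rule) % 7

if __name__ == "__main__":
    # Distributed phase: one worker per rule (or per layout partition).
    with Pool(processes=4) as pool:
        results = pool.map(run_rule, RULES)

    # Review phase: a single, serial pass over the merged result set --
    # on a large design this becomes the schedule bottleneck.
    for rule, count in sorted(results):
        print(f"{rule}: {count} violations to review")
```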

The main direction for these new technologies is to include PV earlier in the flow. PV engines have already been integrated into detailed routing tools with correction capabilities, and the vendors are now targeting moving up through placement to IP library selection and validation. This shift is consistent with the prescriptive design rule methodology, and it will require a major change in signoff methods for release of the designs.
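A minimal sketch of what “PV earlier in the flow” means in practice (the rule, data structures, and placer hook are hypothetical): each edit is checked incrementally against what has already been committed, so most violations are corrected long before the signoff run.

```python
MIN_SPACE = 0.05  # hypothetical minimum spacing, in microns
placed = []       # rectangles (x1, y1, x2, y2) already committed to the layout

def gap(a, b):
    """Minimum distance between two axis-aligned rectangles (0.0 if they touch or overlap)."""
    dx = max(b[0] - a[2], a[0] - b[2], 0.0)
    dy = max(b[1] - a[3], a[1] - b[3], 0.0)
    return (dx ** 2 + dy ** 2) ** 0.5

def try_place(rect):
    """Incremental check at edit time: commit the shape only if it is clean
    against everything already placed."""
    if all(gap(rect, other) >= MIN_SPACE for other in placed):
        placed.append(rect)
        return True
    return False  # the placer would choose another location instead

print(try_place((0.00, 0.0, 1.0, 1.0)))  # True  -> committed
print(try_place((1.02, 0.0, 2.0, 1.0)))  # False -> only 0.02um away, fixed at edit time
```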
