Part of the Chip Design Magazine Network

Archive for April, 2013

Supply Chains, Big Data, and Point-of-Sale for EDA and IP

Wednesday, April 24th, 2013

These issues were addressed by supply-chain, product-lifecycle-management, board-design, and chip-design services companies.

We live in a tumultuous world in terms of disruptive technologies, natural disasters, and global politics. Do chip designers need to worry about such seemingly external influences, as manifested by the global semiconductor manufacturing and supply chain? What help will come from “Big Data” analytics? Will EDA/IP (chip companies) ever be this tightly coupled with end-product manufacturers? I asked these questions of professionals in the manufacturing-supply-chain, product-lifecycle-management (PLM), and board- and chip-design services industries, respectively: Geoff Annesley, CTO at Serus; Brian Haacke, High Tech Industry Sales Director, Dassault Systemes; Michael Ford, Marketing Development Manager, Mentor Graphics–Valor; and Naveed Sherwani, President and CEO of Open Silicon. What follows is a portion of their remarks. –JB  

Blyler: Do chip designers really need to worry about the seemingly external influences of the global semiconductor manufacturing and supply chains?

Haacke: Designers do care about manufacturing, with a primary focus on the impact of the design rules provided by the foundries. The more sets of foundry design rules a design complies with, the more flexibility the team has when choosing a foundry, and the better it can mitigate the risk of a natural disaster affecting one foundry over another. Regarding supply-chain influence, there are many aspects to consider. Designers are generally insulated from material-supply disruptions because they typically do not “design in” any of the materials used in manufacturing. However, closed-loop feedback to designers on manufacturing test results can improve responsiveness to design-related issues impacting yield ramp-up, especially if that feedback is tied to requirements and design intelligence.

Sherwani: It doesn’t require an earthquake or other natural disaster. In the coming move from traditional single-die chips to the era of 2.5-dimensional (2.5D) stacked dies, everything changes. With 2.5D, bare dies have to be tested, placed on interposers, and then assembled into a single package. The industry has never tested or sold anything like this before. I think it will disrupt the normal supply chain and its well-understood chain of command.

Annesley: Design needs to be linked to execution in the global market. You need a feedback mechanism for companies to decide the best price and combination of packaging and manufacturing processes that result in the lowest-cost chip. That is a good example of tying back execution data to the design process and vice versa. For example, you have the material information for your design – be it chip or board. You may have alternates that you need to use (e.g., due to natural disasters). It’s important for companies to track what actual alternates were picked for every component build. Then they will have traceability and accountability with respect to the specifications.
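The build-level traceability Annesley describes can be sketched as a simple record of which approved part or alternate was actually placed in each build. This is an illustrative sketch only; the part numbers, build IDs, and helper functions are invented, not from the interview.

```python
# Record which alternate part was actually used in each build, so a
# recall or quality question can be traced back to specific builds.
builds = []

def record_build(build_id, bom):
    """bom: mapping of reference designator -> (approved_part, part_used)."""
    builds.append({"build": build_id, "bom": dict(bom)})

def builds_using(part):
    """Trace every build in which a given (possibly alternate) part shipped."""
    return [b["build"] for b in builds
            if any(part in choice for choice in b["bom"].values())]

record_build("B1001", {"U1": ("CAP-100", "CAP-100")})
record_build("B1002", {"U1": ("CAP-100", "CAP-100-ALT")})  # alternate sourced
print(builds_using("CAP-100-ALT"))  # → ['B1002']
```

With the actual pick logged per build, the accountability Annesley mentions becomes a query rather than an investigation.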

Ford: Designers are motivated to create a product that meets the criteria set in terms of technologies, materials, costs, quality, life expectancy, etc. There is significant influence on this from the manufacturing-production side, which – if not known by the designer – can result in product variations and the product not living up to expectations. Designing a product with some knowledge of the materials to be used and the actual production environment would allow the designer to design-in features that promoted better production quality, lower manufacturing cost, or reduced variation. Typically, though, this does not happen except in rare cases, as the technologies of material choice and manufacturing capability are not visible in a way that designers can understand. This is a clear opportunity for improvement.

Blyler: One supply-chain trend is the increased use of “Big Data” analytics to allow companies to connect between very different databases. In doing so, they can discover clues to improve supply-chain performance. Comments?

Haacke: “Big Data” analytics isn’t just a good idea. Nor is it just about connecting disparate data sources.  To be competitive, companies must be able to have visibility into their supply-chain data and make informed decisions based on the intelligent correlation of requirements, design, simulation, test results, and yield data. Connecting data sets is a start. Yet it is the marriage of operational and design intelligence that enables effective analytics to improve traceability, root-cause analysis, and time-to-yield ramp-up.

Ford: This can be useful. The real issue today is that the end-product distribution chain is shrinking, due to Internet sales and quickly changing fashionable technology products. This leads to many product variations – the changing demand profile of which comes closer to the factory than in the past. Factories are then asked to be agile, supplying different quantities of products with short notice of changes. This really puts pressure on their supply chain to source materials more quickly and effectively. Otherwise, there is a large increase of inventory at the factory, which cripples the operation on costs. Managing the changing demand from the customer and translating it into short-term raw-material availability is a growing issue today.

Annesley: Data mining and analytics are necessary to do predictive analysis (e.g., to foresee shortages in the supply chain). The resulting operational metrics include such things as yield, test, cycle-time, and on-time delivery trending – all the actuals on how you are performing. The real-time metrics and calculations can be used to do alert notifications (e.g., when you are drifting from your inventory targets). Then there is the longer term, where we collect the statistics on how the supply chain is used.
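The alert-on-drift idea Annesley raises can be shown with a minimal sketch: compare a rolling average of actual inventory against a target and flag excursions. The window size, tolerance, and data are assumptions chosen for illustration.

```python
from statistics import mean

def inventory_alerts(actuals, target, tolerance=0.10, window=3):
    """Flag periods where the rolling-average inventory drifts more than
    `tolerance` (as a fraction) from the target level."""
    alerts = []
    for i in range(window - 1, len(actuals)):
        rolling = mean(actuals[i - window + 1 : i + 1])
        drift = (rolling - target) / target
        if abs(drift) > tolerance:
            alerts.append((i, round(drift, 3)))
    return alerts

# Weekly on-hand counts against a target of 1,000 units:
print(inventory_alerts([980, 1010, 990, 1150, 1240, 1300], target=1000))
# → [(4, 0.127), (5, 0.23)]
```

In practice the same pattern extends to yield, test, cycle-time, and on-time-delivery metrics: compute the actuals, compare against a target, and notify when the trend drifts.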

Blyler: Another trend is the use of point-of-sale (POS) data from retailers to adjust supply chain and manufacturing. Will EDA/IP (chip companies) ever be this tightly coupled with the end-product manufacturers?

Haacke: This is a good question – one that I’ve gone back and forth on. Ultimately, I don’t see POS data [as related to direct business-to-consumer (B2C) chip sales] becoming a significant source of demand input for EDA/IP companies. The coordination required to track this data through every device using a given chip would be an enormous effort. So I don’t think there is any near-term future in which they are tightly coupled. However, I do see other possibilities for these companies to anticipate the demands in the marketplace by monitoring the end-consumer “experience” with products that contain their chips and/or IP. This data could be used to anticipate how consumers and competitors will act in the future.

Today, I think “social listening” may not be obvious to companies – especially the further down the supply chain they are from the end consumer. Still, with the right tools in place, EDA/IP companies can add the thoughts and ideas of their customers and competitors to their pool of “Big Data.” This data could then be part of their analytics and correlation of cause-and-effect events that drive effective decision-making and produce competitive advantages.

Ford: I am not sure about chips themselves. But ultimately, the answer would be “yes.” Still, the issue comes down to agility and the resistance to making changes. In printed-circuit-board (PCB) production – with good management tools – we can manage the changes to schedules and the allocation of operations to work orders as demands change. For chips, I think it will depend on how agile the processes are in adjusting volumes (moving to alternate machines or reassigning production cells).

Blyler: Thank you.



IP Smoke Testing, PSI5 Sensors, and Security Tagging

Friday, April 19th, 2013

The growth of semiconductor IP brings challenges for subsystem verification, integration, security, and the addition of sensor standards. Can Savage clear the smoke?

Warren Savage, marathon runner and President & CEO of IP-Extreme, talks about the trends, misconceptions, and dangers resulting from the increasing popularity of semiconductor intellectual property (IP). What follows is the first portion of a two-part story.

Blyler: Let’s start by talking about trends in IP for field-programmable-gate-array (FPGA) and application-specific-integrated-circuit (ASIC) systems-on-a-chip (SoCs). What’s new?

Savage: If you look at global macro trends, you’ll see an increased amount of customization. For example, the iPhone can be customized “six ways to Sunday.” I’m starting to see something similar in the semiconductor space, where companies are differentiating themselves through IP and putting it together in different ways. Some guys – the Broadcoms and Qualcomms of the world – can do huge quantities of SoCs. But many mid- and lower-tier guys are doing more customized types of products that appeal to a certain niche market. [Editor’s Note: Makimoto’s Wave remains in its customization cycle.]

If the volumes are low enough, an FPGA with the right IP could offer a big differentiation from an off-the-shelf (ASIC) SoC. I’m seeing more of that trend and it gets stronger every year.

Blyler: The numbers are showing that IP continues to be a larger share of the revenue. How about subsystem IP? Is it starting to take off – perhaps in the vertical integration of certain types of IP?

Savage: One of the artifacts of the downturn from several years ago is that fewer engineers must do more increasingly complex things. We are seeing people buy more of our subsystem IP that includes the processors, bus infrastructure, and peripherals needed to run a real-time operating system (RTOS). People want to buy the whole thing and start with a working platform, then add their IP around that platform.

Blyler: Won’t the move toward subsystem IP lead to more verification and integration issues? Does the entire subsystem then come with a suite of tests or do you need to test each IP block individually?

Savage: The expectation is that the subsystems are fully verified and come to the designer as a black box. Typically, people don’t re-verify things of that complexity; it is too much. Plus, it has already been verified at the subsystem level by the IP provider. What the IP provider supplies is some type of integration-level test so the designer can – for lack of a better word – run a “smoke test” to ensure that the subsystem is installed properly and fundamentally working. In processor-based designs, it means you can run software like a “hello world” test that verifies all of the memory, interface, and peripheral connections.

Not a lot of extra verification tests come along with the IP. After all, the IP is expected to be used as a black box.
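The integration-level “smoke test” Savage describes can be sketched in miniature: check that the subsystem identifies itself, then do a walking write/read-back over a small memory window to confirm the bus and memory connections. Everything here is a hypothetical stand-in; the register addresses, ID value, and `FakeSubsystem` class are invented for illustration, not any real IP deliverable.

```python
# Hypothetical register map for a subsystem smoke test.
EXPECTED_ID = 0xCAFE0001

class FakeSubsystem:
    """Stand-in for a memory-mapped subsystem under test."""
    def __init__(self):
        self.regs = {0x0000: EXPECTED_ID}   # ID register at offset 0
        self.sram = {}

    def read(self, addr):
        return self.regs.get(addr, self.sram.get(addr, 0))

    def write(self, addr, value):
        self.sram[addr] = value

def smoke_test(dev, sram_base=0x1000, words=4):
    # 1. Check that the subsystem identifies itself correctly.
    if dev.read(0x0000) != EXPECTED_ID:
        return False
    # 2. Write a known pattern, then read it back: a quick check of
    #    the memory, interface, and bus connections.
    for i in range(words):
        dev.write(sram_base + 4 * i, 0xA5A5_0000 + i)
    return all(dev.read(sram_base + 4 * i) == 0xA5A5_0000 + i
               for i in range(words))

print(smoke_test(FakeSubsystem()))  # → True
```

A real “hello world” test on a processor-based subsystem does essentially this from software: boot, touch each peripheral, and report success, without re-verifying the black-box internals.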

Blyler: With the rise of sensors in our increasingly connected world, I would expect to see more sensor-related IP. Is that the case? Or is it a microelectromechanical-systems (MEMS) technology and fabrication issue?

Savage: One of our major customers is a provider of automotive sensors. They use MEMS technology for sensors, accelerometers, etc. – as part of their products. But these MEMS chips and sensors still need to be connected into an SoC. That’s why we’re seeing interest in the Peripheral Sensor Interface 5 (PSI5) standard, which is a specific interface dedicated to automotive sensor applications. PSI5 is kind of an upgrade to the Local Interconnect Network (LIN) standard.

[Figure: an overview of automotive hardware interfaces, courtesy of Vector.]

For background, there’s a hierarchy of automotive interface standards. At the low end of complexity is LIN, which is typically used to control the mirrors on a car via a driver-side toggle switch. Next comes the controller-area-network (CAN) bus interface, which has a lot more bandwidth for moving data around. CAN is used for suspension, airbags, etc.

Lastly, FlexRay is a true vehicle network for real-time applications. Eventually, it will enable drive-by-wire and steer-by-wire implementations.

Blyler: What’s new on the security front for semiconductor IP?

Savage: On the commercial side, I’m seeing less and less concern about security. But there are some developments that will be important. For some time now, there has been an IEEE standard on hard IP tagging that allows you to track cores at the GDS level. (Editor’s Note: Hard IP is offered in a GDSII format and optimized for a specific foundry process.)

The thing that has been missing is what to do about soft IP. (Editor’s Note: Soft IP is delivered as synthesizable register-transfer-level code in a language like Verilog, VHDL, or C++.) Watermarking the code is a common approach for tracking soft IP and one that we use at IP Extreme. I’ve been working with Kathy Werner, who heads a committee on soft-IP tagging. She has worked with IP at Freescale and then Accellera. Her committee is incorporating many of the same conventions into soft IP that proved successful in hard IP. The goal is that these soft-IP security mechanisms will work throughout the EDA-tool design flow and be propagated downward into the GDSII. In other words, the high-level soft-IP tags could be detected at the GDS level.
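The watermarking idea Savage mentions can be sketched as a tag convention embedded in the source plus a scanner that recovers it. The tag format, vendor name, and core name below are invented for illustration; real tagging schemes (and the committee work described above) define their own conventions.

```python
import re

# Hypothetical watermark convention: a pragma-style comment embedded
# in the synthesizable source.
TAG_RE = re.compile(
    r"//\s*IP_TAG:\s*vendor=(\w+)\s+core=(\w+)\s+ver=([\d.]+)")

def find_ip_tags(source):
    """Scan soft-IP source text for embedded watermark tags."""
    return [m.groups() for m in TAG_RE.finditer(source)]

rtl = """
// IP_TAG: vendor=AcmeIP core=uart16550 ver=2.1
module uart(input clk);
endmodule
"""
print(find_ip_tags(rtl))  # → [('AcmeIP', 'uart16550', '2.1')]
```

A comment-based tag survives only until synthesis strips it, which is why the committee’s goal of propagating tags through the tool flow down to GDSII matters.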



ISEPP: The Hunt for Earth 2 – A Shower of Kepler Planets!

Friday, April 12th, 2013

Editorial Director John Blyler interviews Dr. Gibor Basri of the University of California – Berkeley about the hunt for Earth-like planets in our galaxy (interview conducted on March 8, 2013). This is part of the ongoing Institute for Science, Engineering, and Public Policy (ISEPP) lecture series. Terry Bristol is Director of the program.