Archive for July, 2012

Many Cores but Little Parallelism

Friday, July 27th, 2012

Has the move to thin and light mobile devices sidetracked the much-hyped rise of parallel programming for many-core chips?

Has the much-touted move to many-core systems resulted in an increase in parallel code creation? This question was recently posed by Andrew Binstock, editor-in-chief at Dr. Dobb’s: “Will Parallel Code Ever Be Embraced?”

Why should semiconductor intellectual property (IP) designers care about changes in the world of parallel code creation? Inductive reasoning would suggest that less parallel code means a decreased need for many-core systems and related integration circuitry, which in turn means that fewer processor and interface IP blocks are needed.

One of the biggest challenges in parallel code development is dealing with resource concurrency. The original goal of many parallel-coding tools was to make concurrency easier for programmers to manage, thus encouraging better use of multi-core processing hardware. Binstock believes these efforts have fallen short, except perhaps in the world of server systems. “No one is threading on the client side unless they are writing games, work for an ISV, or are doing scientific work,” he explained.
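
For readers who have not wrestled with resource concurrency directly, the sketch below shows the classic symptom in plain C with POSIX threads. It is purely illustrative and not tied to any particular vendor’s tooling: two threads update one shared counter, and without the mutex the interleaved updates make the final total unpredictable.

    /* Minimal, illustrative C/pthreads sketch of the resource-concurrency
     * problem: two threads increment one shared counter. Remove the lock
     * calls and the result becomes unpredictable; with them, access is
     * serialized at some cost to parallel efficiency. Compile with -pthread. */
    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;                       /* shared resource */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 1000000; i++) {
            pthread_mutex_lock(&lock);             /* protect the shared counter */
            counter++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld (expected 2000000)\n", counter);
        return 0;
    }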

Does this mean that future designs will revert to single-core processing architectures? This seems unlikely, since faster single-core processors at the smallest geometry nodes suffer from serious power leakage and parasitic challenges. That is but one reason why many-core processing architectures emerged in the first place.

While many-core chips are not going away, neither does parallel code seem to be increasing. What does the future hold?

Binstock suggests a trend in which stacks of low-voltage ARM-type chips run on tiny, inexpensive systems that use far less power than Intel’s Xeon chips. I don’t know why he compares the ARM chips to Intel’s server-grade Xeon rather than the more appropriate client-grade Atom chip.

He envisions a system where the PC (client) becomes a “server” of sorts to a group of smaller embedded machines, each hosting its own app. One example might be an ARM chip running a browser while another runs a multimedia application, and so on. Each application program would run on its own processor, eliminating any problems of concurrency. Thus, many low-power, low-performance cores can be used without the need for any parallel code development. Instead of scaling up (more threads in one process), developers have scaled out (more processes).
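
To make the scale-up versus scale-out contrast concrete, here is a minimal sketch in plain C on a POSIX system. It is illustrative only (the “app” names are invented): a parent process plays the “server” role and launches independent single-threaded processes, so the operating system spreads them across cores without any shared-state concurrency in the application code.

    /* Illustrative sketch of "scaling out": rather than threading one program,
     * a parent launches independent processes, loosely mirroring one app per
     * core. Each process has its own address space, so nothing is shared and
     * no locks are needed. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static void run_app(const char *name)
    {
        printf("%s running in pid %d\n", name, (int)getpid());
        exit(0);                                  /* each "app" exits on its own */
    }

    int main(void)
    {
        const char *apps[] = { "browser", "media-player", "mail-client" };
        const int n = sizeof(apps) / sizeof(apps[0]);

        for (int i = 0; i < n; i++) {
            pid_t pid = fork();                   /* one process per app */
            if (pid == 0)
                run_app(apps[i]);
        }
        for (int i = 0; i < n; i++)
            wait(NULL);                           /* the parent acts as the "server" */
        return 0;
    }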

Interestingly, this is exactly the model that emerged several years ago when Intel introduced its first embedded dual-core system. One core would run a Windows or Linux operating system while the other would operate a machine on an assembly line. (See “Dual Core Embedded Processors Bring Benefits And Challenges.”)

It would seem that embedded designers see little need to move beyond a basic client-server architecture for software design, as opposed to the much-hyped parallel coding model.

For a sanity check, I asked Intel’s Max Domeika for his thoughts on trends in embedded/client parallel coding. Here’s what he had to say:

“I understand what Andrew is saying. I’d characterize it as a bit disillusioned with the multicore hype cycle that the industry went through. I’d claim the industry shifted its focus from multicore to mobile and its focus on thin and light. (I work on mobile now and not multicore.) Thin and light doesn’t yet need and can’t have as many cores as a big desktop/server.  So one possible narrative is that the part of the industry that would want to scale up is sidetracked by mobile. These mobile devices do offer multiple cores and do support multithreading. So, I think it will happen over time, just slower than perhaps some of the folks would think/like.”

 

Originally published on Chipestimate.com “IP Insider.”

Semicon West 2012 Videos

Tuesday, July 24th, 2012

Show floor interviews with leaders from Semico Research, MEMS Industry Group, ASML, Soitec, Applied Materials, and IMEC.

Semicon West 2012 – Part 1
- Where cerulean skies shine on uncertain market trends and a MEMS director dreams of terminating the interviewer.
Interviews with:

  • Jim Feldhan, President and CEO of Semico Research
  • Karen Lightman, Managing Director for MEMS Industry Group (MIG)

Semicon West 2012 – Part 2
- Where materials matter and a French CEO talks about scaling.
Interviews with:

  • Lucas van Grinsven, Head of Communications, ASML
  • André-Jacques Auberton-Hervé, Co-founder and CEO of Soitec

Semicon West 2012 – Part 3
- Where 3D models float in the air, tall men talk about 450mm and the Flemish government, and Sean and John seek refreshment.
Interviews with:

  • Sree R. Kesapragada, PhD, Global Product Manager at Applied Materials
  • Ludo Deferm, Executive Vice President at IMEC

 

Software Entrepreneur Helps Guide EDA Giant

Friday, July 20th, 2012

Cadence’s Jim Ready, founder of MontaVista, tries to bridge the software understanding gap between EDA designers and embedded-application developers.

Many of us are still trying to grasp what it means for electronic design automation (EDA) tool vendors to expand into the world of non-RTL software development. It is an old story, namely, that of a predominantly hardware-oriented company trying to embrace the seemingly counter-cultural world of software development.

Regardless, Cadence seems to be serious about its intention to integrate the software mindset into its hardware culture. Recent evidence of this intention is found in the hiring of Jim Ready, founder of MontaVista, an embedded Linux company. According to Richard Goering’s recent interview, Ready was brought into Cadence to help advise CEO Lip-Bu Tan and others about software issues related to embedded systems.

Goering’s interview with Ready is an excellent read that helps clarify the role that EDA vendors will play in the embedded space, e.g., via virtual platforms, co-development, and even open systems support. [Q&A: Jim Ready Discusses EDA Connection to Embedded Software Development]

This discussion was reminiscent of another interview with Ready, one that I conducted years ago while he was still at MontaVista. At that time, Cadence’s EDA360 concept of SoC, software, and system realization had yet to be delineated in a manifesto. My main concern during that interview was to understand how an openly developed platform like Linux could support proprietary operating systems (OSs) like Intel’s Moblin (remember that one?) and others. What follows are the slightly edited portions of that interview that remain relevant to Ready’s current comments.

Embedded Linux Faces Low Power Demand and Open Source Commercialization – Embedded Linux magazine, 2009.

Blyler: Earlier, you said that a complete embedded development platform is analogous to an ASSP in the semiconductor world. By analogy, MontaVista’s “Linux 6” product becomes an application-specific OS management environment that includes Linux, Moblin RTOS, and customer-unique software programs.

Ready: Going back to the ASSP analogy, you might say that our (Linux 6) embedded Linux development environment is sort of the TSMC for software. Intel is now licensing the Atom core at the TSMC foundry so others can build their own system-on-chip (SoC) based on the Atom architecture. One reason they do this is that Intel cannot predict all the different configurations of Atom that people might use. We experience the same challenge with our embedded Linux platform. Users have the capability through the integration environment … to configure and maintain their own instance of Linux, Moblin, and/or open source software that is unique to their requirements, to their products.

Blyler: Is it like an Integrated Development Environment (IDE) but for the operating system?

Ready: It’s more around source code management and change and build management systems, but on steroids. If you go to openembedded.com and grab one of those instances, because of the churn of open source the probability of it actually working is very low. It can range from “working perfectly” to “oh my gosh” because there are dead links. It’s hard for volunteers to keep this going.

(A complete embedded development platform) is the configuration management and infrastructure for our assemblage of all the software that we supply in an open system, such that customers can insert their own selections from open source and/or their own stuff in an environment that keeps everything consistent and keeps builds repeatable. What we provide is fully tested. It’s under our control and works.

It’s getting this front end of very intriguing open source into a more regularized and commercialized – in a sense, more normal – software process that people would expect to have for their software. If one presumes that open source is just perfect software out there for the taking, it’s not true.

Originally published on “IP Insider.”

 





DAC 2012 Retrospective Video

Thursday, July 19th, 2012

Another excellent DAC 2012 video collage from the pros at Chipestimate.TV

 

SOI Parity with CMOS Good News for IP Designers

Friday, July 13th, 2012

Soitec panel at Semicon West challenges both the IDM model and the dominance of bulk CMOS as the material of choice for chips at the 20nm process node.

(Originally posted on “IP Insider”) During Semicon West 2012, Soitec hosted a panel and celebration event to mark its 20th anniversary. The company and the Silicon-on-Insulator (SOI) industry as a whole had a great deal to celebrate with the onset of 20nm process node technology for System-on-Chip (SoC) design.

At the 20nm node, companies are looking to fully depleted (FD) SOI to reach parity with, or even exceed, bulk CMOS in many areas, including power and physical effects. This parity, as applied to the future of mobile computing, was the subject of a lively panel that preceded Soitec’s celebration event. The panel consisted of well-known experts from UC Berkeley, IBM, ARM, GlobalFoundries, STMicroelectronics, the SOI Consortium, and Soitec. (See “SOI Becomes Essential At 20nm.”)

Why is SOI finally a real alternative to bulk CMOS at 20nm? A recent SOI Industry Consortium study benchmarked 28nm bulk against 28nm FD-SOI silicon containing similar processor (ARM) and memory-controller IP blocks. The comparison demonstrated that FD-SOI performance was comparable to the leakier “general purpose” bulk technology, with better dynamic power and dramatically lower leakage power. (STMicroelectronics has published a white paper.)

What would be the impact on semiconductor intellectual property (IP) designs that move to fully depleted two-dimensional (FD-2D) SOI from traditional CMOS? Negligible, explained Steve Longoria, SVP of global strategic business development at Soitec. The FD-2D process uses existing IP libraries and design tools, so no changes to any of the fabless companies’ IP are needed. Further, Longoria notes that FD-2D is compatible with planar CMOS production lines, which means no new equipment costs or retraining of staff.

Will the advantages of FD-SOI spell the end of bulk CMOS at lower process nodes? Probably not, as designers find new ways to work around power leakage and other challenges. But having an alternative should only encourage more innovation in the race to keep pace with Moore’s law.

Soitec’s 20th Anniversary Event in Pictures and Tweets

(Posted during the event by jblyler @Dark_Faust)

Semiconductor’s Sustainability Promise

Tuesday, July 10th, 2012

The Imec-SEMI keynote highlighted how the semiconductor collaboration model and technical advances are addressing critical world issues.

Luc Van Den Hove, CEO and president of Imec, opened the ITF keynote at Semicon West with both humor and caution about our technology-based world. A short BBC One film-clip parody of fruit mistaken for personal computing devices was followed by a collage of grim images depicting the world’s problems, from natural disasters to an aging population, global pollution, and even social unrest.

Luc Van Den Hove, IMEC CEO and President.

The common thread in both visuals was the effect of change wrought by technology upon our world. Is this rate of change sustainable by our planet? According to Van Den Hove, the answer lies in our ability to connect, collaborate and innovate.

Technology has contributed to unsustainable changes in our environment, healthcare and social life. Making these changes sustainable in the future is the goal and the promise of the business model and technical advances created by the semiconductor community.

Focusing first on the environment, Van Den Hove observed that the world continues to consume more and more energy. In 2010, consumption increased by 5 percent. This unsustainable rate shows no sign of slowing down. It has prompted calls for energy reduction, renewable energy sources, and a smarter energy grid.

“Semiconductor technology can contribute to implementing more sustainable changes,” noted Van Den Hove. For example, energy-generating photovoltaic (PV) solar cells are predicted to reach 1 terawatt of power output by 2020, meeting 5 percent of the world’s electricity production. But PV solar cells still face the challenges of efficiency (less than 21 percent) and cost: they must be made more efficient and at consumer-level prices. He noted that materials are a big driver in the cost of PV.

Today, silver is used in 7 percent of PVs, but silver is very expensive and scarce. One solution would be to replace silver with copper (Cu). In addition to being cheaper and providing greater energy efficiency, copper is a material the semiconductor industry is already well versed in using.

Other avenues of exploration by Imec and its partners are organic solar cells, which could be integrated into very flexible form factors.

This is an ongoing report … – JB


Wearable electronics; Semiconductor expertise destroys bacteria; Top IP scorer; Trolls don’t help inventors

Friday, July 6th, 2012

So much came to light this week that I can only offer a sampling from each topic.

You’d expect the July 4th holiday week to be a bit slow on the technology announcement side. That wasn’t the case this year for the world of semiconductor intellectual property (IP). Here are a few of the stories that caught my eye:

1. Simple Process Turns T-Shirt into a Super-Capacitor: The clever researchers at the University of South Carolina (USC), led by mechanical engineering professor Xiaodong Li, have developed an easy way to turn a cotton t-shirt into a super-capacitor. Such a device (shirt?) makes the textile itself into a battery for mobile electronics.

Concept model for wearable electronics circa 2005.

A shirt can act as the battery. We already know how to create low-power nanotechnology processor and memory devices from organic materials. Perhaps wearable electronics will now go mainstream. Such consumer products would open up a whole new market for semiconductor IP – especially since the main drivers remain low cost, high volume, and short time-to-market.

2. Many EDA-IP tool companies are banking on the idea that the methodologies and techniques created in the semiconductor industry can be applied to other markets, such as medical. I’ve covered some examples of this approach in the past. But here’s a more direct example!

IBM researchers have produced Staphylococcus-killing polymers that leave healthy cells alone. Staph bacteria are especially troublesome since they are not killed by ordinary antibiotics. IBM chemists have drawn from “years of expertise in semiconductor technology and material discovery to crack the code for safely destroying the bacteria.”

3. An Ocean Tomo market study confirms that the U.S. has transitioned to an innovation-based economy founded upon intellectual property (IP). The report states that eighty percent (80%) of company value consists of intangible assets.

A related study – using an Ocean Tomo index – lists the top inventor in the semiconductor community as Charles W.C. Lin, chairman and founder of Bridge Semiconductor. One of Lin’s many patents deals with a method for making a semiconductor chip assembly with a press-fit ground plane.

At first glance, it appears that a better path for inventors is through the corporate, rather than university, patent process. To see what I mean, contrast Lin’s standing with that of one of the discoverers of graphene (see earlier blog).

4. Another study shows that patent trolls cost the economy $29 billion yearly!

A while back, I wrote about how patent enforcers (trolls) argue that they “help ensure that inventors get paid for their creations, whether through the direct application of their inventions, by lawsuits to collect unpaid royalties or by licensing agreements.”

This report demonstrates the opposite, namely, that patent trolls don’t help inventors. The report examined financial results from 12 publicly traded non-practicing entity (NPE) firms and found that “the payments they make to inventors whose patents they acquire are far smaller than the costs they impose on defendant companies.”

I encourage everyone to read this excellent, if rather depressing, write-up. Money that could have been used for innovation is instead being redistributed to patent trolls and other “unknowns” at a staggering cost.

 

Originally posted on “IP Insider.”

End of Contributed Content is Near

Monday, July 2nd, 2012

Legal efforts by major technology companies are removing all incentives for reputable publishers to run contributed content. But will anyone care?

I suspect few readers will find content and copyright issues of much interest. Still, the changes taking place behind the scenes are so dramatic that they will markedly decrease the value of future contributed content. Here’s why.

Over the last couple of years, I have witnessed trends by many corporate legal departments that will force publishers to abandon the publication of contributed articles – except perhaps from selected advertisers.

Although these efforts vary slightly from company to company, the gist of the push by corporate legal departments is toward the following:

  1.  Deny copyright transfers. Some (not all) will replace transfers with a limited-time license agreement.
  2.  Deny any citations to publishers, i.e., any indication that a publisher actually published the content.
  3.  Deny authors the right to post author-prepared material.
  4.  Remove indemnification of the publisher.

Let me focus on the first two changes, since they have a direct impact on the value of contributed content to a publisher.

Provision #1: “Deny copyright transfers …”

Companies will argue that they shouldn’t have to transfer the copyright of any material that they create. Unfortunately, many companies have made deep cuts to their editorial budgets. Often, this means that the company’s editorial content is of such poor quality that it needs additional work by the publisher to prepare it for publication, e.g., technical and grammatical editing – not to mention creation, resolution issues, layout, etc.

Traditionally, publishers recoup some of these costs from reprints and the repurposing of copyrighted material. If copyright transfer is denied, then a publisher may have no way to justify the cost of running contributed stories.

Many companies are pushing to replace copyright transfers of contributed content with limited-time licensing agreements. What will it mean for a publisher to have exclusive access to the content for only three months? Will the publisher be able to recoup its costs in that limited amount of time? Does the move from copyright to licensing even make sense for editorial intellectual property (IP)?

Such licensing agreements will quickly decrease the value of contributed content. After the license period expires (say, three months), the company will merely strike up a similar agreement with a competing publisher to run the same content for another three months. Readers quickly catch on to such schemes and will punish the publisher for running already-published content.

Will publishers need to start indicating how much of a contributed piece contains new material? (“Read today’s latest article – Now with 35% new and (maybe) original content!”)

Provision #2: “Deny any citations …”

No acknowledgment will be given to the publisher of the content. There will be no link-back to the publisher’s site, which greatly diminishes any search engine optimization (SEO) return from cross-linking of content. Again, this will result in yet another loss for the publisher.

Why should any reputable publisher engage with companies for contributed pieces with no guarantee of copyright or citation? The only advantage might be if the company is also an advertiser with the publisher. In that case, the publisher is effectively acting as the editorial arm of the company. While not necessarily a bad thing, this does represent a departure from the business model of the past.