Has the move to thin and light mobile devices sidetracked the much-hyped rise of parallel coding for many-core chips?
Has the much-touted move to many-core systems resulted in an increase in parallel code creation? That question was recently posed by Andrew Binstock, editor-in-chief of Dr. Dobb's: “Will Parallel Code Ever Be Embraced?”
Why should semiconductor intellectual property (IP) designers care about changes in the world of parallel code creation? Inductive reasoning would suggest that less parallel code means a decreased need for many-core systems and related integration circuitry, which in turn means less processor and interface IP is needed.
One of the biggest challenges in parallel code development is dealing with resource concurrency. The original goal of many parallel-coding tools was to make this concurrency easier for programmers, thus encouraging better use of multi-core processing hardware. Binstock believes these efforts have fallen short, except perhaps in the world of server systems. “No one is threading on the client side unless they are writing games, work for an ISV, or are doing scientific work,” he explained.
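To make the concurrency problem concrete, here is a minimal Python sketch (my illustration, not from Binstock) of the classic shared-resource hazard that threading tools try to tame: two or more threads updating one counter. Without the lock, the read-modify-write can interleave across threads and silently lose updates.

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        # The lock serializes this read-modify-write. Remove it, and
        # concurrent threads can interleave the update and lose counts.
        with lock:
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000 with the lock; often less without it
```

Getting this right across an entire client application, rather than one toy counter, is exactly the burden that keeps most client-side developers away from threads.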
Does this mean that future designs will revert to single-core processing architectures? That seems unlikely, since faster single-core processors at the smallest geometry nodes suffer from serious leakage and parasitic challenges. That is but one reason why many-core processing architectures emerged in the first place.
While many-core chips are not going away, parallel processing code does not seem to be increasing either. What does the future hold?
Binstock suggests a trend in which stacks of low-voltage ARM-type chips run in tiny, inexpensive systems that use far less power than Intel’s Xeon chips. I don’t know why he compares the ARM chips to Intel’s server-grade Xeon rather than the more appropriate client-grade Atom chip.
He envisions a system where the PC (client) becomes a “server” of sorts to a bunch of smaller embedded machines, each hosting its own app. For example, one ARM chip might run a browser while another runs a multimedia application, and so on. Each application would run on its own processor, eliminating concurrency problems. Thus, many low-power, low-performance cores can be used without any parallel code development. Instead of scaling up (more threads in one process), developers have scaled out (more processes).
Interestingly, this is exactly the model that emerged several years ago when Intel introduced its first embedded dual-core system. One core would run a Windows or Linux operating system while the other would operate a machine on an assembly line. (See “Dual-Core Embedded Processors Bring Benefits And Challenges.”)
It would seem that embedded designers see little need to move beyond a basic client-server architecture for software design, as opposed to the much-hyped parallel coding model.
For a sanity check, I asked Intel’s Max Domeika for his thoughts on trends in embedded/client parallel coding. Here’s what he had to say:
“I understand what Andrew is saying. I’d characterize it as a bit disillusioned with the multicore hype cycle that the industry went through. I’d claim the industry shifted its focus from multicore to mobile and its focus on thin and light. (I work on mobile now and not multicore.) Thin and light doesn’t yet need and can’t have as many cores as a big desktop/server. So one possible narrative is that the part of the industry that would want to scale up is sidetracked by mobile. These mobile devices do offer multiple cores and do support multithreading. So, I think it will happen over time, just slower than perhaps some of the folks would think/like.”
Originally published on Chipestimate.com “IP Insider.”