Part of the Chip Design Magazine Network


Archive for October, 2012

Cloudwashing – A Rose by any Other Name

Monday, October 22nd, 2012

The white-washing of cloud computing reminds us of the evolution of decades-old rightsizing and client-server technologies.

Did you see the Dilbert cartoon in this weekend’s paper? In the strip, Dilbert’s pointy-haired boss tells his engineer employee to move some of the company’s functions to the Internet, but to call the Internet the “cloud.” When asked why, the boss simply notes that no one “will take us seriously unless we’re doing something on the cloud.”

Apparently, cloudwashing is nothing new. Last year, Appirio – a cloud IT services company – toasted the winners of “The Washies,” a tongue-in-cheek award given to the worst cloudwashing offenders. Past winners have included Oracle, Salesforce.com, and Microsoft.

But let’s take the observation of Dilbert’s boss one step further. To do this, we must step back in time to the last millennium, roughly around the mid-1990s. Back then, the big network buzzword was “rightsizing” – a term used to describe the balancing of functionality between the client (PC) and server systems connected via the Internet. Sound familiar?

At best, cloud computing is the next generation of client-server technology. At worst, it’s cloudwashing. For an interesting comparison of the two, check out this discussion thread on Stack Overflow: “Cloud computing over Client-server: differences, cons and pros?” I’ve mentioned the Stack Overflow site in an earlier blog.

Here, the story gets personal. Back in the 1990s – together with my friend and colleague, Dr. Gary Ray – I co-authored a book called, “What’s Size Got to Do with It?” (Can you recall what Tina Turner song was popular at that time?)

Our book explained the systems engineering of client-server systems, both hardware and software. Please don’t rush out to buy it, as the book is hopelessly outdated, with references and case studies based upon now-antiquated operating systems and network implementations. But the systems-engineering approach remains valid.

A younger writer at his first book-signing event at Barnes and Noble, 1998.

My point is that from an architectural standpoint, very little has changed in the last 20 years. It’s certainly true that processors and throughput have gotten faster, thanks largely to the relentless push of Moore’s Law. Software development has also improved to make better use of client-server environments. And new applications (applets) have emerged by the thousands. But little has changed in the actual workings of what we now call “the cloud.” Instead, the old client-server model has simply become more personalized and accessible to the average person. This is the general trend of everything on the Internet.

For those of you with copious amounts of free time, I’ve included a link – What’s Size-Ch.01 – to the original galley proofs of the first chapter of our “rightsizing” book. In this introductory chapter, you’ll notice that we used the technology of the day (20 years ago) to describe essentially what is happening in today’s cloud environments.

It truly seems like the more things change, the more they stay the same.

 

Software Developers Benefit from Windows 8 Hygiene

Friday, October 19th, 2012

A new level of hardware and software IP integration is needed for true power optimization.

Both Intel and Microsoft took the stage at the recent Intel Developer Forum in San Francisco, CA. Their goal was to highlight a new level of hardware and software integration aimed at optimizing energy usage.

This presentation covered the Windows 8 power-management enhancements targeting software developers, including new ultra-low-power states for mobile devices. Barnes Cooper, Senior PE at Intel, and Stephen Berard, Senior PM at Microsoft, gave the presentation, “Optimizing Battery Life on Intel Architecture Based PCs with Windows 8.”

Not surprisingly, battery life in today’s CPU-based systems depends upon both hardware and software factors. Hardware usage determines the extent of power consumption. Software dictates when the hardware is active and when it is idle, which in turn determines when and for how long the system can make use of low-power states.
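The hardware/software split can be made concrete with a back-of-the-envelope sketch. The power figures below are invented for illustration (they are not from the presentation), but the arithmetic shows why software-driven wakeups matter: average platform power is just the active and idle power levels weighted by how often software keeps the hardware awake.

```python
# Illustrative numbers only -- invented for this sketch, not from the IDF talk.
ACTIVE_MW = 1500.0  # platform power while the CPU is active (milliwatts)
IDLE_MW = 50.0      # platform power in a deep low-power idle state (milliwatts)

def average_power_mw(active_fraction: float) -> float:
    """Average platform power, given the fraction of time that software
    keeps the hardware active rather than idle."""
    return active_fraction * ACTIVE_MW + (1.0 - active_fraction) * IDLE_MW

# A periodic tick that does 0.5 ms of work every 15.6 ms keeps the CPU
# about 3.2% active even when the system is nominally idle.
ticking = average_power_mw(0.5 / 15.6)   # roughly 96 mW
quiet = average_power_mw(0.0)            # 50 mW
```

With these (hypothetical) numbers, a tiny periodic chore nearly doubles the idle-state power draw – which is exactly why the “hygiene” items below target periodic timers.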

The speakers talked about CPU-related “hygiene improvements” in Windows 8 that will improve system-level power management. The hygiene-related checklist included:

  • Removal of the periodic system timer tick. Interestingly, earlier Windows operating systems
    (prior to Windows 8) could be idle for at most 15.6 ms before the tick woke the CPU again.
  • Removal of component timers where possible and coalescence of periodic timers that could not be removed
  • Moving periodic maintenance work to periods defined by the OS
  • Utilization of a new application model that is optimized for an application’s actual usage
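The timer-coalescence item on that checklist can be illustrated with a toy sketch. This is not the Windows implementation – Windows 8 exposes tolerance-based coalescing to applications through the Win32 SetCoalescableTimer call – but the underlying idea is simple: batch timers whose deadlines fall within a tolerance window, so the CPU wakes once per batch instead of once per timer.

```python
def coalesce(deadlines_ms, tolerance_ms):
    """Batch timer deadlines so that each timer fires no earlier than its
    own deadline and no more than tolerance_ms late, with one CPU wakeup
    per batch instead of one per timer. Toy illustration only."""
    wakeups = []
    batch = []
    for d in sorted(deadlines_ms):
        if batch and d - batch[0] > tolerance_ms:
            wakeups.append(batch[-1])  # fire the batch at its latest deadline
            batch = []
        batch.append(d)
    if batch:
        wakeups.append(batch[-1])
    return wakeups

# Five timers collapse into two wakeups with a 5 ms tolerance:
print(coalesce([100, 102, 104, 130, 133], 5))  # -> [104, 133]
```

Note that every timer in the first batch fires at 104 ms – at most 4 ms past its deadline, inside the 5 ms tolerance – and the system pays for two wakeups instead of five.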

Similar hygienic OS changes have been introduced in the network, storage, and input/output hardware subsystems.

Software – in all of its myriad forms – is now one of the chief causes of increased power consumption. Paying attention to “hygiene” is but one small way to reduce this consumption. Still, every bit helps.

 

Virtual Reality for Chip Design?

Friday, October 12th, 2012

As chip design moves into the realm of three-dimensional transistor structures and even MEMS, virtual-reality simulators may prove a necessity for both architects and educators.

They say that travel broadens the mind. That has certainly been the case with my recent visits to some of the leading semiconductor and electronics tool companies and research organizations in Europe, including ASML, Dassault Systemes, and Imec. Each of these entities offers technologies that are pertinent to the IP community, which I’ll cover in the coming weeks.

For now, let me whet your interest with a short video clip from Dassault Systemes’s virtual-reality development and deployment system, which is called 3DVIA. Think of it as a super-fast and detailed simulation program expanded into three dimensions.

John Blyler explores the large-scale visualization of an industrial chemical facility, which was developed using Dassault Systemes’s 3DVIA Virtools technology. A handheld control enables the user to travel throughout the virtual world.

Such a system might seem like overkill for the world of chip design. After all, EDA-IP tool providers already offer sophisticated modeling and simulation tools for every aspect of chip design – from virtual software platforms through hardware-based prototyping. Still, as the chip community moves into an era of 3D structures, through-silicon vias (TSVs), stacked die, and microelectromechanical-systems (MEMS) devices, the potential benefits of virtual-reality (VR) simulation become more tangible. Such VR simulations could be used to visualize the effects of evolving transistor structures, such as FinFETs, or to model thermal flows around stacked die more accurately. Plus, 3D and virtual-reality models have proven invaluable as teaching aids to both novice and seasoned designers across a wide range of engineering disciplines.

Translating the complex interactions of today’s systems-on-a-chip (SoCs) into a virtual-reality program would require serious processing and graphics hardware. Each dimensional display in the 3DVIA system requires at least one server and GPU cluster. Fortunately, advances in server and GPU technology have made such systems commercially available.

Like the EDA industry, chip design must become part of a larger system-design process – both in terms of disciplines (EE-CS-ME) and domains (chips-boards-modules). The move toward a system view necessitates additional simulation and modeling tools. Virtual reality may have a strong play in this evolving world.