Archive for March, 2010

Mar 12 2010

PlayStation Move – Motion Controller at GDC

Published by under Uncategorized

At the Game Developers Conference, Sony introduced their new motion controller for the PlayStation, called “Move”.  It is similar in function to the remotes for the Nintendo Wii, but uses a 3-axis gyroscope and a 3-axis accelerometer with higher sensitivity than the Wii controller.  The Move controller works in partnership with a camera system (the PlayStation Eye) and a colored globe at the end of the controller.  The combination of the MEMS sensors and the optical system provides a perceived faster response (under one frame of latency) than the Wii MotionPlus, but with the same feedback and accuracy. This experience was based on using the controller with a number of pre-alpha software titles, so things may improve and change as the games are finalized to utilize the device.

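Sony has not published Move's actual fusion math, but the idea of combining a fast-but-drifting MEMS sensor with a slow-but-absolute reference is a textbook complementary filter. A minimal one-axis sketch (all names, and the 0.98 blend weight, are my own assumptions, not Sony's):

```python
# Illustrative only: a textbook complementary filter for one axis of
# orientation, fusing a gyroscope (fast, but drifts) with an absolute
# reference such as an accelerometer or an optical tracker.
def complementary_filter(angle, gyro_rate, ref_angle, dt, alpha=0.98):
    """Blend the integrated gyro rate with the reference's absolute angle."""
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * ref_angle

# One 60 Hz frame: start at 10 deg, gyro reads 30 deg/s, reference says 12 deg.
angle = complementary_filter(10.0, 30.0, 12.0, dt=1.0 / 60.0)
```

The high weight on the gyro term keeps latency low, while the small correction from the absolute reference prevents long-term drift.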
The Move controller, its associated “wireless sub-controller” and the PlayStation Eye are all powered via the USB interface on the PlayStation.  The Move and its sub-controller also charge through the same USB cable connections.  For some high-movement games, a single user can hold two Move controllers to map the game play, rather than a controller and a sub-controller.

PC

One response so far

Mar 08 2010

ISSCC 2010 – Low power designs back on track, by Anand Iyer

Published by under Uncategorized

The importance of low power has never been more pronounced than at this year’s ISSCC. Low power designs were highlighted in a number of sessions throughout the conference. Some of the memorable ones were Intel’s application of the DVFS technique to an 80-core processor, IBM’s clock power reduction using pulse latches in the Power7, and the low power techniques applied to AMD’s 32nm processor (Bulldozer?).

These processors highlight significant improvements in the application of low power techniques to real designs. Such real-world design practice should encourage more interest in this area and enable some of the more difficult techniques to be realized in silicon.

Low power techniques are always built on simple principles. For example, prudent architectural choices such as limiting bandwidth, not redoing tasks that are already completed, and working only when needed should reduce overall power consumption. Yet implementation is quite complex because of the integration demanded in modern processors and other design-imposed limitations. With every process migration, transistors become cheaper, and designers put those transistors to use either by performing more computation (multi-core) or by integrating more functions. If designers are not careful in defining their architectures, power inefficiencies can creep in.
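A software analogy (my own, not from the talks) may help: “not redoing completed tasks” is memoization, and “working only when needed” is demand-driven evaluation. A minimal sketch:

```python
# Analogy only: caching results so completed work is never redone,
# just as a hardware block should not recompute a value it already holds.
from functools import lru_cache

calls = 0  # counts how often the "expensive" work actually runs

@lru_cache(maxsize=None)
def expensive(n):
    global calls
    calls += 1
    return n * n

for _ in range(1000):
    expensive(7)  # the real work runs once; the other 999 are cache hits
```

In hardware the same principle shows up as operand isolation, clock gating of idle units, and result forwarding instead of recomputation.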

The first paper, from Intel (session 9.01), studied the effect of the DVFS technique on an 80-core 65nm NoC processor. The processor was designed so that each of the 80 cores can be voltage- and frequency-controlled individually, all the way down to shutting off completely. Measurements were conducted under varying traffic loads. Intel used proprietary scheduling techniques to select voltage and frequency settings for each core based on its load. The results were fairly intuitive: voltage variation had a bigger effect on power than frequency variation (voltage is the squared term in the power equation).
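The squared term is why DVFS beats frequency scaling alone. Dynamic CMOS power follows P = a·C·V²·f, so halving frequency alone is linear, while lowering voltage along with it compounds quadratically (the numbers below are illustrative, not Intel's):

```python
# Dynamic CMOS power model: P = activity * C * V^2 * f (normalized units).
def dynamic_power(v, f, c=1.0, activity=1.0):
    return activity * c * v**2 * f

base      = dynamic_power(v=1.0, f=1.0)  # nominal operating point
freq_only = dynamic_power(v=1.0, f=0.5)  # halve f alone: 50% of base
dvfs      = dynamic_power(v=0.7, f=0.5)  # also drop V to 0.7: ~24.5% of base
```

Leakage power, which DVFS also reduces, is not modeled here; per-core shutdown attacks that component instead.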

The second paper (session 9.03) highlighted some interesting low power concepts applied to the Power7 architecture. The Power7, designed in a 45nm SOI process, uses pulsed clocking to save power. The latches use a full master-slave configuration, but the master stage is kept on at all times during pulsed-mode operation: data flows through the master stage, and only the slave stage is used for storage. For testing purposes, full master-slave functionality is restored. IBM reported about 20% power savings from this technique.

AMD’s processor (session 5.06), designed in a 32nm SOI process, implemented several new low power techniques. AMD presented two power-saving techniques and one for on-chip power measurement. The first was to control clock power, which contributes roughly 30% of the total: AMD placed its clock gaters carefully and implemented fine-grain, multi-stage clock gating. Second, it used power gating throughout, shutting down the processor to control power; the SOI process allowed AMD to get away with footer devices alone. Finally, AMD designed a circuit that monitors power by watching a few critical signals for activity.
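Some back-of-the-envelope arithmetic (mine, not AMD's) shows what the ~30% clock power figure buys: if the clock network is 30% of chip power and fine-grain gating stops the clock to idle registers, say, 60% of the time, the chip-level saving is about 18%. The 60% idle fraction is purely an assumption:

```python
# Chip-level saving from clock gating: the fraction of total power in the
# clock network, times the fraction of the time that network can be gated off.
def clock_gating_saving(clock_fraction, gated_fraction):
    return clock_fraction * gated_fraction

saving = clock_gating_saving(0.30, 0.60)  # about 0.18 of total chip power
```

This is why clock gating is usually the first technique applied: a large, always-switching power component yields large savings from even partial gating.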

Overall, there was rich low power design content at this year’s ISSCC. Readers can check out the proceedings for other interesting papers. After a forgettable 2009, this year has started out much better, and it seems low power designs are back on track. This raises our hopes of seeing more and more low power designs at future conferences.

AI

No responses yet

Mar 05 2010

Big die, multi-die, thin die and patents

Published by under Uncategorized

At the MEPTEC Chip to System symposium there was a lot of discussion about the silicon that goes into the packages, in addition to the packages themselves.  The opening keynote from Tom Gregorich framed the conference with the premise that chip scaling is going to be limited by the reality that users sometimes need to put analog/MEMS and digital/memory functions together in a single package, and these cannot be built on the same chip due to technology differences and cost.  The costs are a combination of design, fab, test and assembly costs.

One of the drivers in the packaging arena is package size and pin pitch.  The form factor of many devices is driving the investigation of tight package pitches, down to 0.3mm per pin; however, these have the drawbacks of expensive PCBs and board-level assembly, as well as thermal dissipation issues.  On the scaling side, Cisco presented another design tradeoff issue, for networking ICs.  For higher data rates and throughput, they are pushing the process envelope to smaller geometries and more parallel data processing (more channels per chip).  This is creating a pin count explosion and a heat-dissipation-per-unit-area issue for the die.  Cisco indicated that they are currently building 20mm-per-side die, and would like to go larger to fit all the pins and spread the heat out, but they are running into litho field size and yield limits with such large die.  One area that could help is the development of low-loss power delivery methods between PCB and die, with the associated increase in thermal density handling.

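The yield limit Cisco mentioned can be illustrated with the standard Poisson yield model (the defect density below is an assumed figure, not Cisco data): yield falls exponentially with die area, so a 20mm-per-side die is already expensive, and growing it further compounds the loss even before hitting the reticle field limit.

```python
import math

# Standard Poisson yield model: Y = exp(-D0 * A), where D0 is the defect
# density (defects/cm^2, an assumed value here) and A is the die area.
def poisson_yield(die_side_mm, d0_per_cm2=0.25):
    area_cm2 = (die_side_mm / 10.0) ** 2
    return math.exp(-d0_per_cm2 * area_cm2)

y20 = poisson_yield(20.0)  # 4 cm^2 die -> exp(-1), roughly 37% yield
y25 = poisson_yield(25.0)  # 6.25 cm^2 die -> noticeably worse
```

More elaborate models (negative binomial, Murphy) temper the exponential, but the direction is the same: big die pay twice, in exposure field constraints and in yield.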
Additional speakers discussed 3D assembly techniques, focusing on the handling challenges of very thin (20um) 300mm wafers that have been mechanically backlapped for TSV assembly.  These issues include die separation, particle defect creation and post-thinning testing.  The discussion then carried forward to BIST, scan development and post-assembly testing for systems with TSVs, and the accessibility issues of embedded and buried functions in the internal die stacks of TSV modules.  The forecast for TSVs in production is 2012 for DRAM in server applications, 2013 for FPGA/memory systems, and 2014 for microprocessor/memory applications.  On the die stacking side, non-TSV die stacking was presented as a current high-volume technology, with over 2.5B stacked memory modules shipped containing 2 to 8 die.  The technology has been shown to support stacks of up to 17 die in R&D.

The conference ended with an open panel discussion (the panel session was open to the public, not just registered attendees) on patent and IP issues.  The lively discussion covered protection in the US and Japan versus the policies and realities in India/China, patent trolls, and a new vehicle: the patent aggregator.  This vehicle is typically a law firm or licensing organization that buys, or becomes a licensee of, key technology patents in bulk, and then re-licenses the entire portfolio in “insurance” mode to companies that may end up with infringement issues and are not in a position to individually negotiate and license hundreds to thousands of patents for a given project.

PC

No responses yet