Cloud computing poses many challenges, including how graphics-processing IP and rendering tasks should be divided between the mobile device and cloud-based servers.
Cloud computing has created new challenges for system designers in terms of the division of functionality between the mobile device and the cloud. This division will directly affect the design of system-on-a-chip (SoC) processors and graphics processing units (GPUs). Autodesk Media & Entertainment Tech Innovators asked the experts at Imagination Technologies and AMD to address how graphics processing and rendering should be partitioned between mobile devices and cloud-based servers. Here’s a portion of their responses. – JB
The answer, as is so often the case, depends on the use case.
By and large, we think the cost, silicon area, and power budget of our GPUs make them efficient enough to sit in the device rather than the cloud. This is especially true in the mobile space and for most consumer devices.
But of course, there are exceptions. Most creative professionals are, in our experience, already running demanding apps that push their hardware, and they are very intolerant of any additional lag or performance hit. Still, there are cases – say, a final render of a static image – where they may be more willing to send the job to the cloud.
In the more mainstream arena, the applications are less demanding. But still, gamers tend to be intolerant of lag too – especially when playing against each other online.
For office, presentation, and similar applications, there may well be a case for dumb terminals with rendering (and everything else) done in the cloud. However, the cost needs to be notably lower than that of conventional devices, which I’m not sure has so far proven to be the case.
Because our technologies are designed first and foremost for mobile applications, they are very efficient and low power, which also makes them well suited to cloud-rendering applications. Aggregate enough of them and there is little in the GPU or GPU-compute space that they cannot achieve. And application programming interfaces (APIs) such as OpenCL are making the interface between app and cloud relatively simple to program for.
Cloud rendering is, of course, not just rendering: you also need technologies to control the elements of the system, divide the work, and so on. It is really a CPU/GPU combination, for which we have well-suited technologies in our Meta CPU, PowerVR GPU, and RTU.
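To make the "divide the work" idea concrete, here is a minimal Python sketch of splitting a frame into tiles and assigning them round-robin across a pool of small, efficient GPUs. The tile size and device count are illustrative assumptions; a real scheduler (for instance one built on OpenCL) would dispatch kernels rather than just record assignments.

```python
def partition_frame(width, height, tile, num_devices):
    """Map device index -> list of (x, y) tile origins, round-robin."""
    assignments = {d: [] for d in range(num_devices)}
    tiles = [(x, y) for y in range(0, height, tile)
                    for x in range(0, width, tile)]
    for i, origin in enumerate(tiles):
        assignments[i % num_devices].append(origin)
    return assignments

# Hypothetical setup: a 1080p frame in 128x128 tiles across 8 devices.
work = partition_frame(1920, 1080, tile=128, num_devices=8)
total_tiles = sum(len(v) for v in work.values())
```

Round-robin keeps the per-device tile counts within one of each other, which is a reasonable first approximation when tiles cost roughly the same to render.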
For those willing to move ray tracing to the cloud, the power to access it is now available from an iPad. In the secretive world of high-stakes film and game production, though, many users aren’t ready or willing to send projects to the cloud. For these users, the challenge remains scaling down high-end ray-tracing technology to the laptop. This is a work in progress, but something we can enable with our ray-tracing-unit (RTU) technology.
It may well be that in the future, the ideal solution will be a combination: local rendering at your main workplace (or gaming space), where performance matters – but with documents stored in the cloud. A cloud-rendering solution will be available as a backup for when you’re in meetings or on the road.
We believe cloud-based services (including remote rendering) will begin to grow exponentially as technology and pricing/licensing models continue to be refined. Several approaches are being developed – including approaches for both interactive design review and final rendering.
The first method, which is expected to be used mostly for design-review purposes, is server-side rendering in conjunction with client-side user interaction and input on PC/mobile devices. In this scenario, a rendered image is generated in the “cloud,” compressed, and then sent over the network (or the Internet) to a mobile or other device, such as a PC. This scenario is similar to video streaming, except the user is able to interactively control camera and object parameters.
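The comparison to video streaming can be quantified with a back-of-the-envelope calculation: the bandwidth needed to ship raw rendered frames versus a compressed stream. The 8 Mbit/s compressed figure below is an assumed, typical streaming bitrate, not a number from the article.

```python
def raw_stream_mbits(width, height, bytes_per_pixel, fps):
    """Bandwidth (Mbit/s) needed to stream uncompressed frames."""
    return width * height * bytes_per_pixel * fps * 8 / 1e6

# Uncompressed 1080p RGB at 30 fps vs. an assumed 8 Mbit/s compressed stream.
raw = raw_stream_mbits(1920, 1080, bytes_per_pixel=3, fps=30)  # ~1493 Mbit/s
compressed = 8.0
ratio = raw / compressed  # compression buys roughly two orders of magnitude
```

This is why the scenario depends on video-style compression: raw frames would need a link hundreds of times faster than a typical home connection.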
A second cloud-enabled method for design-review visualization, which is being developed by several companies, will make use of hybrid server/client rendering. This user experience is similar to the previous method. However, in this scenario, the cloud server does the heavy lifting – handling the harder lighting and procedural calculations. The “solved” scene is then converted into lightweight data, which can be transmitted instantly to a PC or handheld device, where the image is quickly rendered in 3D. Powerful, low-power, and low-cost technology (such as AMD APUs) will almost certainly accelerate 3D rendering capabilities of mobile devices in this scenario.
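The hybrid split can be sketched as follows. This is a hypothetical stand-in, not any vendor's pipeline: the "expensive" server step is reduced to a per-vertex Lambertian lighting term, and the client step simply modulates locally stored surface colors by the lightweight light values it receives.

```python
def server_solve_lighting(normals, light_dir):
    """Server side (stand-in for the heavy lifting): Lambertian term
    per vertex normal, the 'solved' lightweight data to transmit."""
    return [max(0.0, nx * light_dir[0] + ny * light_dir[1] + nz * light_dir[2])
            for (nx, ny, nz) in normals]

def client_shade(albedo, solved_light):
    """Client side (cheap): modulate stored color by the received light."""
    return [(r * l, g * l, b * l)
            for (r, g, b), l in zip(albedo, solved_light)]

normals = [(0.0, 0.0, 1.0), (0.0, 1.0, 0.0)]
light = server_solve_lighting(normals, light_dir=(0.0, 0.0, 1.0))
pixels = client_shade([(1.0, 0.5, 0.25), (1.0, 1.0, 1.0)], light)
```

The design point is that only one float per vertex crosses the network, while the full-resolution color data never leaves the device.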
For final high-quality, “cinematic” rendering of HD-level stills and animations, cloud-based render farms have been around for a good many years. We see this type of service continuing to be relevant – and even expanding to include near-instantaneous (or at least very fast) GPU-based rendering at high resolution for a range of purposes.
In all of these scenarios (and any that we missed here), there are likely some limiting factors, such as network bandwidth, which will continue to be problematic. Images cannot be processed/rendered until all of the raw content is uploaded to the cloud server. In many cases, there will likely be a considerable delay as large datasets and high-resolution raw assets are transmitted to the “cloud” before any rendering can commence.
Examples include large texture and image maps, video content, and other “big” files that are sometimes required for “cinematic”-style rendering.
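The upload delay described above is easy to estimate: total raw-asset size divided by uplink bandwidth. The asset size and uplink rate below are illustrative assumptions, chosen only to show the order of magnitude involved.

```python
def upload_seconds(asset_gbytes, uplink_mbits):
    """Seconds to push raw assets to the cloud before rendering can start.
    Uses decimal units: 1 GB = 8000 Mbit."""
    return asset_gbytes * 8 * 1000 / uplink_mbits

# Assumed: 4 GB of textures, maps, and video over a 20 Mbit/s uplink.
t = upload_seconds(asset_gbytes=4, uplink_mbits=20)  # 1600 s, about 27 minutes
```

Even a modest asset set thus adds tens of minutes of dead time before the first cloud-rendered pixel, which is the bottleneck the paragraph above describes.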
Originally posted on “IP Insider.”