Jun 28 2013
Augmented reality appears in consumer applications and in virtual-prototyping designs for the early validation of everything from tablets to refrigerators.
Augmented reality (AR) continues to make the news. At the recent Electronic Entertainment Expo (E3) video-game show, Sony’s PlayStation division showcased its latest AR and motion-control camera technology for the PlayStation 4.
But augmented reality is more than just a game. Many consumers had their first interaction with AR through Google Maps and the direction arrows superimposed over the actual road. Other implementations include IBM’s technology that marries augmented reality with comparison shopping. Imec, the Flemish nanoelectronics R&D center, has created augmented-reality contact lenses for medical and cosmetic applications. Metaio’s Junaio application turns iOS- and Android-based cell phones into AR-enabled devices (see Figure 1).
These varied implementations for augmented-reality technology make it difficult to calculate the total market value for the semiconductor industry. One reason is that with just one software development kit (SDK), AR can be implemented on a number of devices. It’s a market that is decentralized, open source, and hardware independent.
In forecasting the augmented-reality market, analysts at Semico Research considered the varied types of products that implement the technology while weighing the popularity of, and other factors driving interest in, these products. The result: the AR market is expected to reach almost $620 billion by 2016 (see Figure 2). This analysis is part of a comprehensive report published in October 2012.
The need for specific hardware and software tools that create AR experiences is also growing. One such tool suite from Dassault Systemes was used by designers to create a consumer application that transforms a comic book into a 3D augmented-reality experience, while another application accurately reconstructs the creation of the city of Paris.
While these AR implementations are educational, the semiconductor and embedded electronic-design community may scoff at them as mere entertainment. But this same technology is finding use in early design-validation “virtual prototypes” of everything from cell phones and tablets to refrigerators.
Augmented Versus Virtual Reality
To understand the benefit that virtual-prototyping platforms can offer to designers, one must first appreciate the relationship between augmented and virtual reality.
“In a virtual-reality environment, you see virtual objects,” explains Vincent Merlino, High-Tech Industry Solutions Leader at Dassault Systemes. “But with augmented reality, you see augmentations in the real world.”
“Virtual reality defines a completely immersed digital environment in which reality is replicated for the user,” elaborates Trak Lord, marketing and media relations at Metaio. “Augmented reality instead utilizes reality itself as an anchor for virtual content that can be experienced as part of the real world.” Whereas virtual reality inserts the user into an entirely new and simulated reality, augmented reality instead inserts virtual elements into the user’s reality.
The electronic-design community is using augmented reality to both design end-user applications and improve the conceptual design of future products. One example of the former comes from the automotive market, where Audi uses image recognition and Metaio’s AR software to power its Interactive Manual application. Mobile users point their smartphones at different surfaces and objects in the car to get instant feedback on the identity, function, and quick-start instructions of that specific feature. For example, pointing a smartphone toward the windshield wipers yields a “Windshield Wiper” identification, with the ability to swipe through to a brief, animated tutorial on how to use the wipers.
Designers are also using augmented reality to improve the conceptual design and early validation of future products. Here, product teams use the technology to create a virtual (as opposed to physical) prototype to gain critical end-user feedback before the design is realized in hardware or software. Consider the design of a new mobile phone or tablet (see Figure 3). Augmented reality could be used to validate the usefulness of a new form factor – size, shape, and even “feel” – of the future product. This would result in significant cost and time savings over traditional in-person focus-group meetings.
The cost and time benefits of AR-based virtual prototypes depend upon the complexity and size of the end-user product or system. “The bigger (or more complex) the product, the bigger the savings over a physical prototype,” remarks Merlino. “If you want a physical prototype of a large refrigerator with one color and configuration, you may spend over a million yen (or about $10k). With a virtual-reality prototype, you only spend money on the software with the advantage of many configurations and global distribution.” Augmented reality allows you to compare the virtual refrigerator alongside a real (competitive) offering (see Figure 4).
Almost any product could be validated with augmented reality, including the electrical and mechanical subsystems on both chips and circuit boards. Naturally, cost will ultimately determine the use of augmented reality over an actual physical prototype. Once a designer decides to use augmented reality in the virtual prototyping of a design, however, what is needed in terms of hardware and software?
Designing For Augmented Reality
To accomplish the insertion of virtual elements into a user’s reality, camera technology is used to identify and recognize real-world images and objects. Digital and virtual content is then added to them in real time.
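That recognize-then-augment loop can be sketched in simplified form. The marker format and all function names below are illustrative assumptions, not any vendor’s API; a real system would use a computer-vision library for detection and a GPU for rendering.

```python
# Simplified AR frame loop: find a known marker in the camera frame,
# then composite virtual content at the marker's location.
# All names and the tiny integer "images" here are illustrative only.

def find_marker(frame, marker):
    """Naive template search: return (row, col) where `marker`
    exactly matches a region of `frame`, or None."""
    mh, mw = len(marker), len(marker[0])
    for r in range(len(frame) - mh + 1):
        for c in range(len(frame[0]) - mw + 1):
            if all(frame[r + i][c + j] == marker[i][j]
                   for i in range(mh) for j in range(mw)):
                return (r, c)
    return None

def composite(frame, overlay, pos):
    """Draw `overlay` onto a copy of `frame` at `pos` (row, col)."""
    out = [row[:] for row in frame]
    r0, c0 = pos
    for i, row in enumerate(overlay):
        for j, px in enumerate(row):
            out[r0 + i][c0 + j] = px
    return out

# A 4x4 "camera frame" containing a 2x2 marker of 9s at (1, 1).
frame  = [[0, 0, 0, 0],
          [0, 9, 9, 0],
          [0, 9, 9, 0],
          [0, 0, 0, 0]]
marker  = [[9, 9], [9, 9]]
overlay = [[5, 5], [5, 5]]

pos = find_marker(frame, marker)            # -> (1, 1)
augmented = composite(frame, overlay, pos)  # virtual content replaces marker
```

Production systems replace the exact-match search with robust feature detection and pose estimation, and run the loop once per video frame.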
Designers of end-user applications need to consider both the software and hardware aspects of their AR implementations. Most vendors provide augmented-reality software development kits (SDKs) that work on the majority of iOS and Android platforms. “Beyond the basic needs of a front-facing camera and reasonable performance, many of the newer platforms offer new compute resources, such as programmable image processors that promise improved computer-vision capabilities,” says Lord. “The ongoing improvement in graphic-processing-unit (GPU) and general-purpose-GPU (GP-GPU) processing also provides more opportunity to improve augmented-reality user experiences.”
In addition to improved performance, hardware must provide more power-efficient depth-of-field imaging sensors and greater ease of programming for synchronized, multi-sensor data streams.
High-performance, low-power GPUs and associated computing engines are a critical part of the design of AR systems. Companies like Metaio offer dedicated hardware image processors for accelerating augmented-reality experiences. Dubbed the “AREngine,” the acceleration chip works by offloading much of the processing required to run AR experiences from the general-purpose CPU. The company claims a drastic reduction in battery power consumption and an increase in initialization speed.
Many designers use the compute power of the existing mobile-device GPU to enhance performance and minimize power for their AR applications. This requires careful integration of the GPU, video, and camera-vision processing to ensure the best performance.
What is the difference between image and graphics processors? Image processing deals with the manipulation of images acquired through some device, such as a camera. The emphasis is on analysis and enhancement of the image. Today’s popular computer-vision systems rely on such image analysis.
Conversely, graphic processing deals with synthesizing images based upon geometry, lighting, materials, and textures.
“Augmented-reality applications usually blend live video with computer images, where 3D graphics rendering is performed using OpenGL software APIs,” explains David Harold, Senior Director of Marketing Communications at Imagination Technologies. “Powerful cores can provide high-quality 3D graphics rendering, which can then be blended into the real-world camera capture. Also, by implementing features like camera image texture streaming, GPUs are capable of processing camera images as textures to enable 3D and reality integration with minimal CPU loading.” Efficient integration of camera images into the 3D rendering flow is essential for good performance and efficiency in augmented-reality designs (see Figure 5).
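At its core, that blend step is per-pixel alpha compositing: each rendered pixel carries an alpha value, and the displayed pixel is a weighted mix of the rendered color and the live camera pixel behind it. A minimal sketch in plain Python, standing in for what a GPU blend stage or fragment shader would do:

```python
def alpha_blend(camera_px, render_px, alpha):
    """'Over' compositing of one rendered RGB pixel onto one camera
    pixel; alpha=0.0 shows only camera, alpha=1.0 only graphics."""
    return tuple(round(alpha * r + (1.0 - alpha) * c)
                 for c, r in zip(camera_px, render_px))

cam = (200, 100, 50)   # live camera pixel (RGB)
gfx = (0, 0, 255)      # rendered virtual-object pixel (RGB)

opaque = alpha_blend(cam, gfx, 1.0)   # graphics fully hide the camera
mixed  = alpha_blend(cam, gfx, 0.5)   # translucent overlay
```

On hardware this runs for every pixel of every frame, which is why streaming the camera image directly into the GPU as a texture, rather than round-tripping through the CPU, matters so much for performance.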
What Does The Future Hold For Designers?
One tantalizing future would be the complete simulation of both hardware and software in the virtual prototype. For example, while looking at a virtual prototype of a tablet, the designer or end user could be running an application on the tablet (like a game) or using the camera.
Another near-horizon goal is the greater incorporation of social-media input to the design process. Using AR-based prototypes, a company could include stakeholders or a larger social network to provide valuable feedback as part of a crowd-sourcing team. In many cases, this would be much easier than developing just a physical prototype.
Finally, companies are in the early stages of developing image-processing techniques for gesture-recognizing augmented reality. In one implementation, an application would superimpose imagery over the screen of a smartphone or tablet, allowing users to interact with it via hand gestures. Another implementation leverages Wi-Fi signals to detect specific hand gestures without the need for sensors on the human body or cameras.
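In its simplest form, such a gesture recognizer tracks a feature (say, the hand’s centroid) across video frames and classifies the trajectory. The sketch below classifies a horizontal swipe from per-frame centroid positions; the interface and the travel threshold are assumptions for illustration.

```python
def classify_swipe(centroids, min_travel=50):
    """Classify a tracked hand trajectory as a left/right swipe.
    `centroids` is a list of (x, y) pixel positions, one per frame."""
    if len(centroids) < 2:
        return "none"
    dx = centroids[-1][0] - centroids[0][0]
    dy = centroids[-1][1] - centroids[0][1]
    # Require enough horizontal travel, dominated by the x direction.
    if abs(dx) >= min_travel and abs(dx) > abs(dy):
        return "swipe-right" if dx > 0 else "swipe-left"
    return "none"

classify_swipe([(10, 100), (40, 102), (90, 98)])   # -> "swipe-right"
classify_swipe([(90, 100), (60, 101), (20, 99)])   # -> "swipe-left"
```

Real systems add temporal smoothing and more gesture classes, but the pipeline shape is the same: segment the hand, track it, then match the motion against gesture templates.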
Augmented reality has quickly moved beyond gaming to the wider consumer market. The technology has created a new set of hardware and software applications that allow designers to create high-performance and low-power augmented-reality experiences. That’s the reality.
“Reality… what a concept!” – Robin Williams