Control with hand, eye and head movements
Right now, depth-perception technology exists only at a basic level, allowing users to interact with their phones through gestures. A waving gesture can answer a call or scroll through Web pages while browsing. Eye movements can start and stop a video. And a nod of the head can move a Web page up or down. But this is just the beginning.
Intel, for instance, has made major strides in the depth-perception game. Devices with Intel’s RealSense 3D camera have three lenses: a standard camera, an infrared camera and an infrared laser projector. Used together, these three lenses let the device determine depth by detecting infrared light that has bounced back from objects in front of it. This data is then filtered through Intel’s motion-tracking software, creating a completely hands-free interface that reacts to hand, arm and head motions and even facial expressions. While the technology currently exists only in larger devices like laptops, the company recently announced that it would be bringing RealSense to smaller devices like tablets and smartphones.
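To make that pipeline a little more concrete, here is a minimal sketch using Intel’s open-source librealsense Python bindings (pyrealsense2). It assumes a RealSense camera is attached and simply reads how far away the object at the center of one depth frame is; the stream settings are illustrative defaults, not the only option.

```python
import pyrealsense2 as rs

# Configure a depth stream: 640x480 pixels, 16-bit depth values, 30 fps.
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)

try:
    # Block until a coherent set of frames arrives, then pull the depth frame.
    frames = pipeline.wait_for_frames()
    depth = frames.get_depth_frame()
    if depth:
        # Distance, in meters, from the camera to whatever sits at the image center.
        print(f"Center of the scene is {depth.get_distance(320, 240):.2f} m away")
finally:
    pipeline.stop()
```

Everything the motion-tracking layer does, from following a hand to reading a facial expression, starts from per-pixel distances like this one.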
The possibilities of gesture control
As cameras become better able to understand and respond to natural movement, gesture controls will extend well beyond swiping and scrolling, especially when paired with subtle movements from accelerometer- and gyroscope-equipped wearable devices like smartwatches. Imagine taking out your phone, opening an app, drawing letters, numbers or even sketches in midair and watching as every stroke appears on the screen. Or picture using intuitive hand controls to play a game on your phone, making movements that the game’s central character mirrors. The camera will literally be able to “see” what you are doing, understand the movements and respond accordingly. And in time, depth-perception cameras and gesture controls will extend into entirely new realms of interaction.
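As a rough illustration of how midair drawing could work on top of such a camera, the sketch below assumes depth frames arrive as NumPy arrays of millimeter values and treats the point closest to the camera as the user’s fingertip, a common simplification in depth-based hand tracking; the function names and thresholds are hypothetical.

```python
import numpy as np

def find_fingertip(depth_mm: np.ndarray, near: int = 200, far: int = 600):
    """Return (x, y) of the nearest valid pixel, assumed to be the fingertip.

    near/far bound the working range in millimeters; pixels outside it
    (including zero-depth "holes") are ignored.
    """
    sentinel = np.iinfo(depth_mm.dtype).max
    masked = np.where((depth_mm > near) & (depth_mm < far), depth_mm, sentinel)
    y, x = np.unravel_index(np.argmin(masked), masked.shape)
    return None if masked[y, x] == sentinel else (int(x), int(y))

def trace_stroke(frames):
    """Accumulate fingertip positions across frames into one midair stroke."""
    stroke = []
    for frame in frames:
        tip = find_fingertip(frame)
        if tip is not None:
            stroke.append(tip)  # each point becomes part of the drawn line
    return stroke
```

A real system would add smoothing and proper hand segmentation, but the core idea is the same: each depth frame yields a point, and the sequence of points becomes the stroke on screen.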
Digital learning, or “edutainment,” could change significantly with this new technology, as gesture controls make e-books, learning games and learning apps more interactive. The technology would also affect collaboration and creation, as apps enable two people to share an experience online or help users create original content with a digital green screen. And there’s no telling where the ripple effect ends, as the technology gives users new ways to layer digital interaction onto the real world.
What does this mean for mobile advertising?
Depth-perception cameras will go well beyond mere functionality; they will present an entirely new way of connecting with a mobile device. The interface of the future will be more intuitive and more natural: users will simply use their hands, face, speech and surrounding environment to communicate with their smartphones. And this, of course, presents a tremendous opportunity for advertisers.
Advances in depth-perception cameras on mobile devices mean that advertisers will be able to build fun, rich media units that let users engage on a more intuitive level. What if an advertisement could physically interact with the target consumer? Imagine a Star Wars ad that lets the user use “the force” to move objects on the screen, or an ad for hands-free audio controls, like those in Ford cars, where the ad itself is hands-free. The user would see the advertisement as something enjoyable, rather than a nuisance to endure while browsing a mobile Web site or using an app. This could also open up a new way of understanding the target audience: What do users respond to the most? Which gestures appeal to them most?
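To picture how such an ad unit might consume gestures, here is a purely hypothetical sketch: it classifies a horizontal swipe from a stroke of tracked fingertip points (like the ones produced above) and routes it to an ad object. The thresholds and the ad_unit methods are invented for illustration, not part of any real ad SDK.

```python
def classify_swipe(points, min_dx=150, max_dy=60):
    """Label a stroke of (x, y) points as a left or right swipe, or neither.

    min_dx / max_dy are illustrative pixel thresholds, not values from any SDK.
    """
    if len(points) < 2:
        return None
    dx = points[-1][0] - points[0][0]
    dy = abs(points[-1][1] - points[0][1])
    if abs(dx) >= min_dx and dy <= max_dy:
        return "swipe_right" if dx > 0 else "swipe_left"
    return None

def handle_gesture(points, ad_unit):
    """Forward a recognized gesture to a hypothetical rich-media ad unit."""
    gesture = classify_swipe(points)
    if gesture:
        ad_unit.trigger(gesture)         # e.g. "the force" pushes an on-screen object
        ad_unit.log_engagement(gesture)  # which gestures do users respond to most?
```

The logging hook hints at the measurement angle: every gesture an ad recognizes is also a data point about what the audience finds engaging.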
Motion gestures themselves will become a new dimension of the advertising toolkit, one that can be used to communicate ideas, messaging and product benefits, opening the door to a new realm of creativity and ingenuity that is sure to shake things up.