Mobile phones that can sense 3D motion and geometry from a touchscreen app: that's one of Google's next big projects. It all sounds pretty cool, but will the project tie into advertising? That's not yet clear, but Google is giving developers an opportunity to build apps on the technology, and the company is looking for developers interested in building more than a mobile touchscreen app.
The project's goal of giving mobile devices a "human scale understanding of space and motion" could support retail in several ways. One possibility is an indoor mapping system for physical stores. Another would let online consumers map the inside of a room at home and then place individual items from the store into that model.
Consumers could use something similar when shopping for clothes by uploading a 3D image of themselves and matching garments against it to see what might look best before making a purchase. Google also mentions apps for the visually impaired, as well as gaming.
The prototype phone contains a 4MP camera, two computer vision processors, integrated depth-sensing technology, and a motion-tracking camera. Together these let the phone track its motion in full 3D in real time as the consumer holds it. The sensors make more than 250,000 measurements every second, which are combined into a single 3D model of the environment.
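To get a feel for how a quarter-million depth readings per second become a 3D model, here is a minimal sketch of the underlying idea: each depth-sensor pixel is back-projected through a standard pinhole camera model into a 3D point, and the points accumulate into a point cloud. The function names and camera intrinsics below are illustrative assumptions, not Tango's actual API.

```python
def depth_to_point(u, v, depth, fx, fy, cx, cy):
    """Back-project one depth reading at pixel (u, v) into a 3D point
    in the camera frame, using the standard pinhole camera model.
    fx/fy are focal lengths in pixels; (cx, cy) is the principal point."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

def accumulate_point_cloud(readings, fx=520.0, fy=520.0, cx=320.0, cy=240.0):
    """Fold a stream of (u, v, depth) readings into a point cloud --
    the kind of model that builds up quickly at ~250,000 readings per
    second. Intrinsics here are made-up placeholder values."""
    return [depth_to_point(u, v, d, fx, fy, cx, cy) for (u, v, d) in readings]

# Hypothetical readings: pixel coordinates plus depth in meters.
cloud = accumulate_point_cloud([(320, 240, 2.0), (100, 50, 1.5)])
```

A real pipeline would also fuse the motion-tracking camera's pose estimate so that points captured from different positions land in one shared world frame; this sketch keeps everything in a single camera frame for clarity.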
Johnny Lee explains how Google's ATAP Project Tango team works with universities, research labs and industrial partners to harvest the past 10 years of research in robotics and computer vision for use in a mobile phone.
Aside from George Washington University, Google partnered with 16 companies to work on the project. Participants include 3D Robotics, Bosch, Open Source Robotics Foundation, Movidius, OLogic, Paracosm, hiDOF, and MMSolutions, among others. During the next few months, Google and its partners will release a development kit that lets developers build applications and algorithms for the platform.
Lee works in Google's special projects team, Google X. He joined the company in 2011; before that, he worked as a researcher in Microsoft's Applied Sciences Group. His roots at Microsoft go back to 2005, when he explored the "feasibility of low-cost brain computer interfaces as input for able bodied individuals," per his LinkedIn profile.