Google is developing software that can determine where a photo was taken even when the image file contains no identifying location data, according to one report. The move could add to privacy concerns. The Mountain View, California, company can already identify animals, buildings, landmarks, plants, food and other objects in images, and the new technology could become another piece of data used for ad targeting.
PlaNet determines a photo's location by comparing it to a database of geotagged images from across the Web. The technology -- created by Tobias Weyand, a computer vision specialist at Google, and colleagues proficient in deep learning -- sets out to determine a photo's location using only the pixels it contains. The team divided the world into a grid of more than 26,000 squares whose size varies with the number of images taken in each location, ignoring areas such as oceans and polar regions where few photographs have been taken, Weyand told MIT Technology Review.
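The adaptive grid idea -- small cells where photos are dense, large cells where they are sparse, none at all where there are no photos -- can be illustrated with a simple recursive subdivision. This is only a sketch: the function, thresholds, and toy data below are invented for illustration, and PlaNet's real partition is far more sophisticated than a plain quadtree over latitude and longitude.

```python
# Hypothetical sketch of adaptive gridding: split a cell into quarters
# until it holds few photos, and drop cells that hold none.
def subdivide(cell, photos, max_photos=2, min_size=0.5):
    """Recursively split a (lat_min, lat_max, lon_min, lon_max) cell."""
    lat_min, lat_max, lon_min, lon_max = cell
    inside = [(lat, lon) for lat, lon in photos
              if lat_min <= lat < lat_max and lon_min <= lon < lon_max]
    if not inside:
        return []                      # ignore empty regions (oceans, poles)
    if len(inside) <= max_photos or (lat_max - lat_min) <= min_size:
        return [cell]                  # dense areas end up as small cells
    lat_mid = (lat_min + lat_max) / 2
    lon_mid = (lon_min + lon_max) / 2
    cells = []
    for quad in [(lat_min, lat_mid, lon_min, lon_mid),
                 (lat_min, lat_mid, lon_mid, lon_max),
                 (lat_mid, lat_max, lon_min, lon_mid),
                 (lat_mid, lat_max, lon_mid, lon_max)]:
        cells += subdivide(quad, inside, max_photos, min_size)
    return cells

# Toy data: three clustered photos near (40, -74) and one isolated photo.
photos = [(40.1, -74.0), (40.2, -74.1), (40.15, -74.05), (10.0, 20.0)]
grid = subdivide((-90.0, 90.0, -180.0, 180.0), photos)
```

Run on the toy data, the isolated photo ends up alone in a large cell while the cluster is subdivided into much smaller ones, which is the effect the varying square sizes achieve.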
Each photo's grid square is determined from a database of geolocated images from the Web: 126 million files with accompanying Exif location data. The team used 91 million of these images to teach a neural network to work out the grid location from the image alone, then validated the network on the remaining 34 million before testing it.
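In other words, geolocation is treated as a classification problem: the label for each training image is the grid cell its Exif coordinates fall into, and the data is partitioned into training and held-out validation sets. A minimal sketch of that setup, using invented toy records in place of the 126 million images (the feature extraction and the network itself are omitted):

```python
import random

random.seed(0)

# Each record: (image_id, grid_cell_label) -- the label would come from
# mapping the photo's Exif coordinates onto the world grid.
dataset = [(f"img_{i}", f"cell_{i % 5}") for i in range(126)]

random.shuffle(dataset)
train, validation = dataset[:91], dataset[91:]  # mirrors the 91M/34M split

# A classifier would then be trained on `train` to predict the grid-cell
# label from image pixels, and tuned against `validation`.
```

The key design choice this reflects is that PlaNet predicts a discrete cell rather than exact coordinates, which turns a hard regression problem into a tractable classification one.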
Tested on 2.3 million geotagged images from Flickr, PlaNet identified the country where a photo was taken with 28.4% accuracy and the continent of origin in 48% of instances. While those numbers may not sound impressive, MIT Technology Review suggests the technology already performs better than humans -- and a game was created to prove it. With further training, PlaNet has the potential to get even better.
PlaNet localized 3.6% of the images at street-level accuracy and 10.1% at city-level accuracy.
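In absolute terms, those percentages imply sizable counts over the 2.3 million test images. A quick back-of-the-envelope calculation (these derived counts are my arithmetic, not figures from the report):

```python
# Approximate number of test photos localized correctly at each level,
# implied by the reported percentages over the 2.3M Flickr test set.
test_images = 2_300_000

accuracy = {
    "street": 0.036,
    "city": 0.101,
    "country": 0.284,
    "continent": 0.48,
}

counts = {level: round(test_images * frac) for level, frac in accuracy.items()}

for level, n in counts.items():
    print(f"{level:>9}: ~{n:,} images")
```

So even the 3.6% street-level figure corresponds to tens of thousands of photos pinned down to a street from pixels alone.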