You never know with kids. At a local university last week I was doing my "gadget guy" routine for a class of Comp Sci 101 students. This is a little dog and pony show I perform each term that walks
students through the editorial side of the tech and digital media industries. It also gives me the opportunity to cart out the latest toys manufacturers send me for review. Amidst all the wizardry --
the Kindle, the palm-sized HD camcorder, the laptop HDTV, etc. -- one bit of coolness got a satisfying "ooh." They really perked up when I whipped out my Android-fueled G1, pointed it at the UPC code
of a student's Coke bottle and pulled down Web sites, pricing, and alternative online retail sources. "Kewl," they uttered, though barely awake.
UPC codes are fine, and the long-promised rise of QR codes, EZCodes, et al. in the U.S. is all well and good. I am still waiting for all of this to standardize a bit more. I have downloaded several Android and iPhone apps that scan objects
and return information with mixed results. I suggest some of these companies work a little more on the return path of their systems and not just the basic WOW factor of physical world searching. The
ShopSavvy app I used in class is very good, if only because the information it returns is nicely categorized into Web and local resources, reviews and even a price alert when the cost hits a certain
point. The visual clarity of the results, rather than the wizardry of the device, is the real value-add here.
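To make that "return path" point concrete, here is a minimal Python sketch of the kind of categorized result a scanning app could hand back. The ScanResult structure and the sample values are my own invention, not ShopSavvy's actual data format; the point is simply that grouping results into online, local, review and price-alert buckets is what makes the scan useful.

```python
# A sketch of a categorized scan "return path." All names and values are
# illustrative placeholders, not any real app's API.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class ScanResult:
    upc: str
    web_offers: List[str] = field(default_factory=list)    # online retailers and prices
    local_offers: List[str] = field(default_factory=list)  # nearby stores carrying the item
    reviews: List[str] = field(default_factory=list)       # user or editorial reviews
    price_alert: Optional[float] = None                     # notify when the price drops below this


def present(result: ScanResult) -> None:
    """Group the results by where they are useful, the way a good scanning app does."""
    print(f"UPC {result.upc}")
    print("  Online:", ", ".join(result.web_offers) or "none found")
    print("  Nearby:", ", ".join(result.local_offers) or "none found")
    print("  Reviews:", len(result.reviews))
    if result.price_alert is not None:
        print(f"  Alert me when the price drops below ${result.price_alert:.2f}")


# Example scan of a bottle of soda (values are made up for illustration).
present(ScanResult(
    upc="012345678905",
    web_offers=["Online store A: $1.49", "Online store B: $1.79"],
    local_offers=["Drugstore three blocks away"],
    reviews=["Refreshing", "Too sweet"],
    price_alert=1.25,
))
```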
Amazon's iPhone application has an Amazon Remembers tool that sends a
snapshot of any object and returns a related item from its catalog along with user reviews and a click-to-buy opportunity. The neat thing about the Amazon system is that it is reportedly human-powered via Amazon's Mechanical Turk pennies-per-task online workforce. This leads to some unique results. I tried a random snap of a roomful of cats and got back an offer for cat treats. That is where I say "Kewl!"
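For readers who like to see the plumbing, here is a rough Python sketch of how a human-in-the-loop snapshot service of this sort could be wired up. This is not Amazon's code or API; match_catalog() and post_human_task() are hypothetical stubs that only illustrate the flow of trying an automatic match and falling back to a pennies-per-task worker.

```python
# Conceptual sketch only: try a machine match first, then hand the photo to a
# human labeler when the machine is unsure. Every function here is a stand-in.
import uuid
from typing import Optional


def match_catalog(image_bytes: bytes) -> Optional[str]:
    """Stub for an automatic image-to-product matcher; returns None when unsure."""
    return None  # pretend the machine could not identify the object


def post_human_task(task_id: str, image_bytes: bytes) -> None:
    """Stub for queuing the photo as a small paid task for a human worker."""
    print(f"Queued photo ({len(image_bytes)} bytes) as task {task_id}")


def identify_snapshot(image_bytes: bytes) -> dict:
    task_id = str(uuid.uuid4())
    product = match_catalog(image_bytes)
    if product is None:
        # No confident machine match: hand the photo to a person and answer later.
        post_human_task(task_id, image_bytes)
        return {"task_id": task_id, "status": "pending human review"}
    return {"task_id": task_id, "status": "matched", "product": product}


print(identify_snapshot(b"a roomful of cats"))
```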
The general concept of using the mobile phone as a bridge between physical reality and the great
cloud of Internet information is staggeringly powerful. At the first OMMA Mobile conference years ago we engaged this idea in a panel I called "A Clickable World." The prospect of using your phone on
the real world in much the way we use the desktop mouse on Web links excites me beyond measure.
Move away from the UPC codes for a second and just ponder the informational possibilities. "Who
the hell is Horace Greeley?" a visitor to Greeley Square might ask in front of the weather-worn statue. Why not take a shot of it and get more information than can fit on a brass plaque? Why is the
tower at Pisa leaning? Take a picture, get an answer. What is the crash safety record of that Mazda Miata I am eyeing on the showroom floor? Why not take a snap and get a search box beneath it that
lets me run a specific query against the image? Better still, combine voice search with image search and let me take a picture and ask a question about the object.
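Here is a toy Python sketch of that last idea: an image plus a spoken question treated as a single query. The visual_search() function is imaginary; it just shows the interface I am wishing for, where the question narrows the results the photo alone would return.

```python
# Imaginary multimodal search interface: the query is the pair (image, question),
# not the image by itself. Nothing here reflects a real service.
from typing import Optional


def visual_search(image_bytes: bytes, question: Optional[str] = None) -> str:
    """Stub: an image query, optionally refined by a spoken question."""
    if question is None:
        return "General results for whatever the photo shows"
    return f"Results for the photo, filtered by the question: {question!r}"


# The Miata example: photograph the car, then narrow the search by voice.
photo = b"showroom snapshot"
print(visual_search(photo))                                       # broad, image-only results
print(visual_search(photo, "what is its crash safety record?"))   # image plus question
```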
We talk about mobile
users accessing the Web from their phones, but the real revolution is when the phone makes the world itself as interactive as the Web. When objects in the physical realm essentially become clickable
items that can deliver back all kinds of information, then we are changing the game in the way we work and think in the world. Imagine then taking the simple Google AdSense model of targeting and
layering that onto this system. Marketing offers could be fully contextualized, not only to place and time but to points of interest. What if that snap of Horace Greeley's statue could also point me
to the nearest bookstore with publishing or New York history tomes in stock? Or what if, with my visual search history in mind, the results themselves were shaped by my known interests?
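A back-of-the-envelope Python sketch of that layering might look like the following. The Offer type, the weights and the sample data are all invented for illustration; the idea is only that what the camera just recognized, where the photo was taken, and the user's past visual searches each nudge which offer rises to the top.

```python
# Toy ranking of marketing offers against a visual search result. Weights and
# data are arbitrary; this is a sketch of the targeting idea, not a real system.
from dataclasses import dataclass
from typing import List


@dataclass
class Offer:
    advertiser: str
    topics: set        # what the offer is about
    near: bool         # is the advertiser close to where the photo was taken?


def score_offer(offer: Offer, recognized: set, history: set) -> float:
    score = 0.0
    score += 2.0 * len(offer.topics & recognized)  # match what the camera just saw
    score += 1.0 * len(offer.topics & history)     # match the user's known interests
    score += 1.5 if offer.near else 0.0            # reward proximity to the snap
    return score


def rank(offers: List[Offer], recognized: set, history: set) -> List[Offer]:
    return sorted(offers, key=lambda o: score_offer(o, recognized, history), reverse=True)


# The Greeley Square example: the statue is recognized, the user reads history books.
recognized = {"horace greeley", "new york history", "publishing"}
history = {"new york history", "books"}
offers = [
    Offer("Bookstore two blocks away", {"books", "new york history"}, near=True),
    Offer("Online shoe retailer", {"shoes"}, near=False),
]
for offer in rank(offers, recognized, history):
    print(offer.advertiser, score_offer(offer, recognized, history))
```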
Getting there will be half the fun, of course. As I play with the different systems struggling to make the physical world interactive, I am not convinced that the dedicated scan code is preferable to
a visual database. Both approaches have inherent weaknesses. The scan code is just difficult to apply everywhere without making the world look like a grocery store shelf. A visual database is hard to
build richly enough to encompass the range of places and interests we want it to recognize. It is hard to hotlink the world.
Ideally I would want my mobile search tool to recognize most
major locations so it can tell me something about objects within them. Knitting together GPS with visual search might be one solution. It would be a shame to lock this system down into paths
that only marketers imagine and activate with scan codes. I would love to see a real-world-mobilized search engine that is prioritized from the bottom up, from users somehow telling us what they want
to click on and know about the world.
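To show how GPS and visual search might knit together, here is a small Python sketch. The landmark list, coordinates and match_image() stub are placeholders; the design choice it illustrates is using the phone's location fix to shrink the recognition problem to a handful of nearby candidates before any image matching runs.

```python
# Sketch: narrow the candidate landmarks by GPS distance first, then let a
# (stubbed) visual matcher pick among the few that remain.
import math
from typing import List, Optional, Tuple

# Toy landmark database with approximate coordinates.
LANDMARKS = [
    ("Horace Greeley statue", (40.7484, -73.9883)),
    ("Leaning Tower of Pisa", (43.7230, 10.3966)),
]


def distance_km(a: Tuple[float, float], b: Tuple[float, float]) -> float:
    """Rough great-circle distance via the haversine formula."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = math.sin((lat2 - lat1) / 2) ** 2 + \
        math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * math.asin(math.sqrt(h))


def match_image(image_bytes: bytes, candidates: List[str]) -> Optional[str]:
    """Stub visual matcher: with only a few candidates, even a crude matcher has a chance."""
    return candidates[0] if candidates else None


def identify(image_bytes: bytes, gps_fix: Tuple[float, float]) -> Optional[str]:
    nearby = [name for name, coords in LANDMARKS if distance_km(gps_fix, coords) < 1.0]
    return match_image(image_bytes, nearby)


# A snapshot taken in Greeley Square is only ever compared against landmarks
# within a kilometer of the phone.
print(identify(b"photo of a weather-worn statue", (40.7485, -73.9884)))
```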
But let me click on you, dear readers, and ask which mobile scanning engines you are using. There are so many, and my iPhone and G1 are already filling up. Tell me what options I should click on next.