In case you haven't been paying attention, Macy's is pretty much all-in with mobile this holiday. The company is already well ahead of most retailers, with in-store video activations, a QR code built into its star logo, and a tight cycle of TV spots pushing its branded app on consumers. Clearly, the brand sees mobile as a direct route to its customers.
Adding to this portfolio of mobile initiatives, the department store is embedding image recognition in its Macy's Star Gifts app so shoppers can scan an item they see advertised in Macy's upcoming gift catalog as well as in its print advertising and out-of-home placements. Rather than scanning a code, the shopper lets the image recognition technology identify the item itself, and the app delivers product information along with gift-giving help. The technology, from NantMobile, adds an augmented reality layer that overlays red gift boxes onto the physical image. The idea is to give the retailer a simple way of making a wide range of other advertising assets interactive. It is also a way of using the products themselves to promote an app experience where Macy's can engage the shopper more deeply. The Star Gifts app also offers gift ideas, advice from style expert Clinton Kelly, product videos and gift lists, with content varying by category.
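To make the mechanics a little more concrete, here is a minimal, purely hypothetical sketch of how this kind of print-ad recognition is commonly done: extract local visual features from the camera frame, match them against a pre-indexed set of catalog page images, and look up the product tied to the best match. It uses OpenCV's ORB features and brute-force matching; the file names, product records, match threshold and the whole structure are illustrative assumptions, not details of NantMobile's actual system.

    # Illustrative sketch only -- not NantMobile's actual pipeline.
    # Requires OpenCV (pip install opencv-python); all file names and
    # product data below are hypothetical.
    import cv2

    # Pretend index: each known catalog/ad page image maps to a product record.
    CATALOG = {
        "star_gifts_page1.jpg": {"sku": "12345", "name": "Cashmere Scarf"},
        "star_gifts_page2.jpg": {"sku": "67890", "name": "Stand Mixer"},
    }

    orb = cv2.ORB_create(nfeatures=1000)                 # local feature detector
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    def descriptors(path):
        # Compute ORB descriptors for one image file.
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, des = orb.detectAndCompute(img, None)
        return des

    # Descriptors for every indexed ad image, computed once up front.
    index = {path: descriptors(path) for path in CATALOG}

    def recognize(photo_path, min_good=40):
        # Return the product whose ad image best matches the shopper's photo.
        query = descriptors(photo_path)
        best, best_count = None, 0
        for path, des in index.items():
            matches = matcher.match(query, des)
            # Keep only close descriptor matches; the cutoff is a tunable guess.
            good = [m for m in matches if m.distance < 50]
            if len(good) > best_count:
                best, best_count = path, len(good)
        return CATALOG[best] if best_count >= min_good else None

    print(recognize("shopper_snapshot.jpg"))  # product record, or None if no match

In a production system the matching would run server-side against thousands of indexed pages, but the basic flow -- features in, product record out -- is the same.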
Clearly, Macy's wants to get people into the app itself. It is offering a $10 gift code to those who scan the first page of the Star Gifts print catalog that arrives early this month. The app also leads to direct shopping opportunities for making a purchase on the spot.
It is interesting that Macy's seems to be signaling how effective apps must be at engaging its shoppers: it is driving people to them this season with the same sort of doggedness you once would have expected for driving people to Web sites.
But let's face it -- this gets really cool when the image recognition software is adroit enough to activate the physical good itself. Forget the QR or AR codes and page image recognition. Let me aim my camera at an object in a store so that the phone recognizes it from most angles and can deliver more information and alternative paths to purchase. Intermediary technologies like AR markers and code-scanning apps work at a level of abstraction that requires extra steps, and those steps still remove the shopper from the real-world experience.
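For flavor, here is a tiny, hypothetical sketch of that idea using an off-the-shelf pretrained classifier (torchvision's ResNet-50, purely as a stand-in): point it at a photo of the object and get back ranked guesses about what it is. A generic model like this only names broad categories; a real retail deployment would swap in a model matched to its own product catalog, and the photo name below is made up.

    # Hypothetical sketch: generic object recognition with a pretrained model.
    # Requires torch and torchvision (>= 0.13); the photo name is invented.
    import torch
    from PIL import Image
    from torchvision import models

    weights = models.ResNet50_Weights.DEFAULT           # pretrained ImageNet weights
    model = models.resnet50(weights=weights).eval()
    preprocess = weights.transforms()                   # matching input pipeline

    img = Image.open("store_shelf_photo.jpg")           # shopper's camera frame
    batch = preprocess(img).unsqueeze(0)                # add a batch dimension

    with torch.no_grad():
        probs = model(batch).softmax(dim=1)

    # Print the three most likely generic categories with their confidence.
    top = probs.topk(3)
    labels = weights.meta["categories"]
    for p, i in zip(top.values[0], top.indices[0]):
        print(f"{labels[i]}: {p:.2f}")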
This is what Google was starting to envision with its Google Goggles project years ago, long before Google Glass became the tech dweeb obsession. If you recall, Google was using image recognition to create a kind of visual index of stuff. Goggles can recognize great works of art, landmarks and a lot of product packages. It can even read and solve a Sudoku puzzle, translate text and scan business cards into your contact list.
It always seemed to me this was a product
with admirable Google-scale ambitions -- to index the object world. Ultimately that is closer to the ideal frictionless experience.