In-Store Screens Read Faces To Deliver Targeted Content

Imagine if — based solely on what the camera on the screen you’re looking at sees in your face — the first sentence of a story like this is different for you than it is for that frowning non-Millennial male in the corner office.

Our smartphones, PCs and TVs can't yet read our genders, ages and emotions to deliver targeted content, but that time is closer than you might think, as you know if you happened by the Samsung booth at the National Retail Federation's BIG Show in Manhattan this week.

Fully Aware and Responsive In-Store Technology (FARIS), embodied in a system called eyeQinsights, was on display there in a demonstration that enticed passersby with four different lifestyle images depending on whether the target was male or female and older or younger than about 35.
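The demo's four-way branching amounts to a lookup keyed on the estimated gender and an age bucket. A minimal sketch of that routing, with illustrative creative names and an assumed classifier output format (this is not eyeQ's actual API):

```python
# Illustrative sketch of the Samsung demo's four-way content routing.
# Attribute names, file names and the exact threshold are assumptions.
AGE_SPLIT = 35  # approximate age split used in the demo

CREATIVES = {
    ("male", "younger"): "adventurous_outdoor_m.jpg",
    ("male", "older"): "outdoor_m.jpg",
    ("female", "younger"): "playful_f.jpg",
    ("female", "older"): "lifestyle_f.jpg",
}

def pick_creative(gender: str, estimated_age: float) -> str:
    """Map a face-analytics estimate to one of four attention-grab images."""
    bucket = "younger" if estimated_age < AGE_SPLIT else "older"
    return CREATIVES[(gender, bucket)]
```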

The product is capable, however, of much finer distinctions. “We get age pretty accurately plus or minus five years,” says Doug Bain, the chief revenue officer for eyeQ, the Austin, Tex.-based company developing FARIS. The newest version of the system is able to discern “seven different dimensions of emotion as measured by expression” and adjust the content it delivers accordingly.
“If you see a change from confused to happy, then something good is going on with that content,” Bain explains. Or vice versa.
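Bain's "confused to happy" signal boils down to scoring content by the emotion transitions observed while it plays. A minimal sketch, using an invented valence scale (the emotion labels and their ordering are assumptions, not eyeQ's published model):

```python
# Hypothetical valence ranking for expression-derived emotions;
# the labels and scores are illustrative, not eyeQ's actual model.
VALENCE = {"angry": -3, "disgusted": -2, "sad": -2, "confused": -1,
           "neutral": 0, "surprised": 1, "happy": 2}

def content_signal(before: str, after: str) -> int:
    """Positive if the viewer's expression improved while content played."""
    return VALENCE[after] - VALENCE[before]
```

A positive score ("confused" to "happy") suggests the content is working; a negative one is the cue to swap it out.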
The “attention grab” imagery for the younger male in the Samsung demo “was slightly more adventurous than that for the older male, but both were outdoors doing guy stuff,” says Bain. “The younger female was a little more playful” than the older woman.

After it attracts you, FARIS uses WiFi and touch-screen analytics not only to understand and anticipate your interest in what’s being offered but also to deliver appropriate words, video, options or recommendations. It can also use the IBM Watson Personality Insights API to identify the dominant of the Big Five personality traits.

For example, users who got to the “attention hold” stage at the Samsung booth were asked to type in their Twitter handles. Watson then analyzed the language of their last 200 tweets and, based on the dominant personality trait that was returned, eyeQ recommended one of three Samsung mobile phone models.
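In outline, that flow is: Twitter handle in, dominant trait out, phone model recommended. A sketch under stated assumptions — `analyze_personality` here is a stand-in for the call to IBM's Watson Personality Insights service, and the trait-to-model mapping is invented for illustration:

```python
# Sketch of the handle -> dominant trait -> recommendation flow.
# analyze_personality stands in for a Watson Personality Insights call
# over the user's last 200 tweets; the trait->model mapping is invented.
RECOMMENDATION = {
    "openness": "Galaxy S edge",
    "conscientiousness": "Galaxy Note",
    "extraversion": "Galaxy S",
}
DEFAULT_MODEL = "Galaxy S"  # fallback for the remaining Big Five traits

def recommend_phone(handle: str, analyze_personality) -> str:
    """Return a phone model for the dominant personality trait of @handle."""
    dominant_trait = analyze_personality(handle, tweet_count=200)
    return RECOMMENDATION.get(dominant_trait, DEFAULT_MODEL)
```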

“That whole thing takes about half a second,” says Bain.

What people often emphasize with this sort of technology is the analytics: the capability to slice the data retrospectively, ever so finely, to determine who’s attracted, who’s not, and perhaps even why. And that’s exciting of course, says Bain. But the highest value of FARIS, he says, is being able to personalize the content on the fly. You can do continual A/B testing of messaging, placements, offers and other in-store variables that can improve store and brand performance. In other words, if something’s working, increase its frequency. If it’s not, get rid of it.
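That "increase its frequency, otherwise get rid of it" loop is essentially a multi-armed bandit over in-store content. A minimal epsilon-greedy sketch (names and data shape are illustrative, not eyeQ's implementation):

```python
import random

def epsilon_greedy(stats, epsilon=0.1):
    """Pick which creative to show next: usually the best performer so far,
    occasionally a random one so new variants still get tried.

    stats maps creative id -> (successes, trials)."""
    if random.random() < epsilon:
        return random.choice(list(stats))
    return max(stats, key=lambda c: stats[c][0] / max(stats[c][1], 1))
```

Here a "success" could be any signal the system already collects, such as a positive emotion change or a touch-screen interaction.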

“Go ahead and use the tried and true, but test it against something just crazy, nutty, those things you thought about a year ago but couldn’t do because nobody wanted to commit eight weeks with that on the wall,” Bain exhorts. “Try it out. Prove it. And those learnings can go on to inform your ad campaigns.”

A beacon can also deliver targeted information to a shopper within its purview. But, as Bain points out, it’s dependent on a user having downloaded the appropriate app to communicate with that beacon. “You’re actually doing really, really well if you have about 2% of your shoppers having the app and actively using it,” he says. “We’re solving the problem for the other 98%.”

In its pitch to merchants, eyeQ emphasizes that FARIS can be an antidote to showrooming. If a retailer can provide the product information a customer is looking for on screens in the store, the thinking goes, she won’t whip out that Galaxy and wind up buying the product from a competitor for less money, with free delivery.

If all of this sounds a bit Big Brotherish to you, “100% of what we do is anonymous,” Bain says. “Even though we’re using cameras to do the facial analytics, it’s just analyzing the livestream. We’re not capturing images, not capturing video, not saving or storing anything.”

Where’s all this leading?

“Our vision is that within five to 10 years, anywhere you’d want to put a sign, you’re going to want to put a digital sign,” Bain says. “And anywhere you put a digital sign, you’re going to want that content to be responsive to the audience in front of it.”
