What If A Search Engine Could Recognize An Image Without Title Tags Or Keywords?
Sometimes my brain works like a "what if" statement in a line of application code. What if Google, Bing and Facebook embedded video in search ads? What if ad agencies worked more closely with universities to find emerging technology similar to the video technology coming out of the Massachusetts Institute of Technology (MIT)?
George Pappachen, chief privacy officer at the Kantar Group, which generates a majority of revenue for the WPP Group, admits agencies don't do as much as they should to work with universities.
A couple of years ago Google and WPP Group launched a market research fund to support projects in new media. Pappachen points to a platform emerging from Affectiva, a company spun out from the MIT Media Lab. The technology measures the emotional response people have to ads and brands. The company recently secured a $500,000 grant from the National Science Foundation.
When asked about search, Pappachen said Kantar looked at several privacy-related technologies coming out of Stanford University. University researchers are working on several projects that would reinvent search, making results more relevant and personal based on a user's past searches without that history ever being available to the search engine.
Search engines already work much more closely with universities than agencies do, and agencies would benefit from cozying up as well. The machine learning work, in particular, looks likely to pay off. Google researchers and Stanford scientists have discovered that, with correctly programmed software, large computer systems can learn to make sense of unlabeled data within days.
Google does well searching for labeled images based on keywords. Controlled experiments now show the possibility of training a platform to identify high-level concepts using entirely unlabeled data. The experiment trained on random frames from YouTube videos, and the system learned on its own to recognize human faces, human bodies, and cat faces. It means a search engine could view an image and identify its content without relying on keywords or title tags.
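The core idea behind that experiment is unsupervised feature learning: a network trained only to reconstruct its unlabeled input ends up building internal features that correspond to recurring patterns in the data. The sketch below is a deliberately tiny illustration of that principle using a one-layer autoencoder on synthetic "patches" (it is not the architecture or scale the Google and Stanford researchers used; all names and numbers here are made up for the demo):

```python
import numpy as np

# Hypothetical demo: a tiny tied-weight autoencoder learns to reconstruct
# unlabeled data. No labels, keywords, or tags are ever shown to the model;
# it improves purely by minimizing reconstruction error.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Synthetic unlabeled data: 200 "patches" of 64 values, each a mix of a few
# hidden patterns plus noise -- a stand-in for random video frames.
patterns = rng.normal(size=(4, 64))
codes = rng.random(size=(200, 4))
X = codes @ patterns + 0.05 * rng.normal(size=(200, 64))

# One weight matrix, used to encode (X @ W) and decode (H @ W.T).
W = 0.1 * rng.normal(size=(64, 16))
lr = 0.01

def reconstruction_error(X, W):
    H = sigmoid(X @ W)   # encode into 16 hidden features
    X_hat = H @ W.T      # decode back to 64 values
    return np.mean((X - X_hat) ** 2)

err_before = reconstruction_error(X, W)

for _ in range(300):
    H = sigmoid(X @ W)
    X_hat = H @ W.T
    diff = X_hat - X
    # Gradient of squared error w.r.t. W, through both decoder and encoder.
    grad_W = X.T @ (diff @ W * H * (1 - H)) + diff.T @ H
    W -= lr * grad_W / len(X)

err_after = reconstruction_error(X, W)
print(err_before, err_after)  # error drops: the model found structure unsupervised
```

At the scale the article describes, the same principle applies with vastly larger networks and data: hidden units that activate on faces or cats emerge from reconstruction alone, which is what would let a search engine recognize an untagged image.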