The nanopossibilities of measurement
It's integral to human nature that we ask questions - and it's pretty much always the case that once an answer is supplied, it spawns yet more questions.
The advent of new technologies and media capabilities has extended this craving for information to the point that ever-more intricate media research processes are required to keep up with what an outsider might consider an almost pathological pursuit of data.
How, then, will we manage to gather data consistently and to a quality standard that meets the demands of an ever-voracious industry in a future that will inevitably be characterized by even more media and audience fragmentation, a wider range of devices and functionalities, increased media mobility and interoperability, and so on?
Just as technology has increasingly driven innovations in content distribution, platform development, and functional enhancements, so it will continue to drive much of the development in media measurement. The result will be technologies capable of capturing information that crosses the device-defined silos of the media industry - kind of like the PPM (Portable People Meter) on steroids.
We can expect to see many of today's measurement technologies and techniques turbocharged to the point where what we do now will seem as arcane and antiquated as the Telex machine (for younger readers, the Telex was a once-exciting innovation that pre-dated the fax).
But as the scale and scope of what we measure continually gets bigger and more layered, the means by which we measure much of it will get exponentially smaller. Obviously, this has already begun to happen. But when I say small, I don't mean tiny. I mean nanoscale: the kind of engineering that is quite literally out of sight; the sort of fantastical futuristic advances that, if combined with cloning, really would enable you to drive a herd of camels through the eye of a needle, with room to spare.
Fantastical though this may seem, the reality is that nanotechnology is already with us in various forms. The Project on Emerging Nanotechnologies has identified more than 800 consumer products worldwide that incorporate some sort of nanotech - and apparently the list is not complete. These range from LG's F2300 Antibacterial Cell Phone (featuring an antibacterial coating for those concerned about germs) to a host of screen display applications to Samsung's 16GB memory chip. Beyond media and electronics, there is nanotech to be found in cosmetics, automotive products, toys and games, sporting goods, and even food.
Much of what has been developed to date has been in the area of coatings and polymers, transistors and amplifiers, processors and batteries. However, combine the developing science behind these advances with the growing demand for real-time cross-media measurement, and the application of nanotechnology to the research business begins to take shape.
Imagine, if you will, what a Nielsen panelist may be called upon to do in 2034 (or a Goolsen panelist, as by then the company will have been sold to Google so the latter can combine resources into a data-gathering-and-mining powerhouse). No more push-button meter by the TV or anything so primitive as "active participation."
Instead, these panelists wear the means of monitoring their media exposure in a way so discreet it will make a mobile phone seem about as unobtrusive as a filing cabinet attached to your hip. They sport sensors in the form of tiny polymer-based temporary tattoos. Barely visible to the eye and placed discreetly on the body, these sensors run off built-in batteries, with body heat as a back-up power source. That same body heat and the rhythm of the pulse tell the sensor it is still attached to the panelist even when no data is being sent back via the cloud for analysis.
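To make that attachment check concrete, here is a minimal sketch in Python. Everything in it is hypothetical - the thresholds, the readings, the function name - since no such sensor exists yet; the point is simply that warmth plus a plausible pulse rhythm is enough to infer the tattoo is still being worn.

```python
# Hypothetical sketch: how a sensor tattoo might confirm it is still
# being worn, from skin temperature plus pulse rhythm. All thresholds
# here are invented for illustration.

from statistics import mean

SKIN_TEMP_RANGE_C = (30.0, 38.0)   # plausible skin-surface temperatures
PULSE_BPM_RANGE = (35.0, 200.0)    # plausible resting-to-exertion heart rates

def appears_attached(temps_c, beat_intervals_s):
    """Return True if recent readings look like a living wearer."""
    if not temps_c or len(beat_intervals_s) < 3:
        return False                        # too little data to judge
    avg_temp = mean(temps_c)
    bpm = 60.0 / mean(beat_intervals_s)     # beats per minute
    return (SKIN_TEMP_RANGE_C[0] <= avg_temp <= SKIN_TEMP_RANGE_C[1]
            and PULSE_BPM_RANGE[0] <= bpm <= PULSE_BPM_RANGE[1])

# Warm skin and a ~60 BPM pulse: still attached.
print(appears_attached([33.1, 33.4, 33.2], [1.01, 0.98, 1.02]))  # True
```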
The sensors themselves detect inaudible audio signals encoded in TV programming and ads, radio content, and anything on the Web, much as now but with more sophistication. By this time, all screens are produced to emit unique audio codes for content as it appears, whether or not the speakers are on.
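Today's watermark-based meters already work roughly this way, so a toy version is easy to sketch. In the Python below, a one-byte content ID is hidden as faint near-ultrasonic tones and recovered with a Fourier transform; the carrier frequencies, amplitudes, and detection threshold are all invented for illustration, not any meter vendor's actual scheme.

```python
# Toy illustration of inaudible audio watermarking: hide one byte of a
# content ID as faint near-ultrasonic tones, then recover it with an FFT.
# The scheme (19 kHz carriers, one bit per tone) is invented here.

import numpy as np

RATE = 48_000                                    # sample rate (Hz)
CARRIERS = [19_000 + 125 * b for b in range(8)]  # one carrier per bit

def embed(audio, content_id):
    """Mix a faint tone into `audio` for each set bit of an 8-bit ID."""
    t = np.arange(len(audio)) / RATE
    marked = audio.copy()
    for bit, freq in enumerate(CARRIERS):
        if content_id >> bit & 1:
            marked += 0.01 * np.sin(2 * np.pi * freq * t)  # quiet tone
    return marked

def detect(audio):
    """Recover the 8-bit ID from spectral energy at each carrier."""
    spectrum = np.abs(np.fft.rfft(audio))
    freqs = np.fft.rfftfreq(len(audio), d=1 / RATE)
    content_id = 0
    for bit, freq in enumerate(CARRIERS):
        idx = np.argmin(np.abs(freqs - freq))
        if spectrum[idx] > 10 * np.median(spectrum):  # crude threshold
            content_id |= 1 << bit
    return content_id

noise = 0.001 * np.random.randn(RATE)   # one second of faint background
print(detect(embed(noise, 0xA7)))       # -> 167 (0xA7)
```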
Print media (yes, it still exists) has also gotten in on the act, using ink compatible with the sensors to produce encoded signals for each page of a publication. The few billboards that have not transitioned to some sort of video or interactive interface will use similar inks for part of their surface area.
In this way, the sensors will capture granular detail that shows which media and which content panelists are exposed to for how long, and in which combinations at different times of the day.
Through a built-in GPS 4.0 capability, panelists' locations will be tracked, and motion sensors will record whether someone is lying down, sitting, standing, or walking. Similarly, artificial intelligence capable of analyzing sound will determine whether voices are part of panelist conversations or extraneous noise. Such a capacity will also indicate when media consumption is happening alone or in a more social context. It will recognize sounds consistent with car journeys, train journeys, time spent in airports, etc.
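Put together, every measured moment might be reduced to a record along these lines. The schema below is invented - field names, labels, and all - but it shows the kind of granularity the scenario implies.

```python
# Hypothetical exposure record: one invented schema for the kind of
# granular, multi-signal observation described above.

from dataclasses import dataclass
from datetime import datetime

@dataclass
class ExposureEvent:
    panelist_id: str     # anonymized panel member
    media_code: int      # ID recovered from the audio watermark
    channel: str         # "tv", "radio", "web", "print", "billboard"
    start: datetime
    duration_s: float
    lat: float           # from the GPS 4.0 fix
    lon: float
    posture: str         # "lying", "sitting", "standing", "walking"
    social: str          # "alone" or "group", from voice analysis
    setting: str         # "home", "car", "train", "airport", ...

event = ExposureEvent(
    panelist_id="P-1138", media_code=0xA7, channel="tv",
    start=datetime(2034, 5, 9, 21, 15), duration_s=412.0,
    lat=51.5072, lon=-0.1276, posture="sitting",
    social="group", setting="home",
)
print(event.channel, event.duration_s)
```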
So much for what panelists are exposed to. How do we identify the content they actually look at and engage with?
This is where visual coding of content comes in. Each panelist also wears a minute corneal coating equipped to recognize codes - pixel formations concealed within images on screens and printed surfaces. Undetectable to the naked eye, these formations deliver similar data concerning content, duration, and so on, which is then correlated to location, time of day, and the audio codes picked up by the panelist's tattoo.
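Correlating the two streams need not be exotic: match each visual-code sighting against an audio-code detection of the same content that overlaps it in time. A rough sketch, with entirely invented event shapes:

```python
# Rough sketch: match corneal (visual) code sightings to audio-code
# detections that overlap in time, confirming the panelist actually
# looked at what was playing. Event shapes are invented.

def correlate(audio_events, visual_events, tolerance_s=2.0):
    """Pair each visual sighting with an audio detection of the same
    content code whose time window overlaps it, within a tolerance."""
    confirmed = []
    for v in visual_events:
        for a in audio_events:
            same_code = v["code"] == a["code"]
            overlaps = (a["start"] - tolerance_s
                        <= v["t"]
                        <= a["end"] + tolerance_s)
            if same_code and overlaps:
                confirmed.append((v["t"], v["code"]))
                break
    return confirmed

audio_events = [{"code": 0xA7, "start": 100.0, "end": 512.0}]
visual_events = [{"t": 130.5, "code": 0xA7},   # looked at the screen
                 {"t": 140.0, "code": 0x3C}]   # stray code, no audio match
print(correlate(audio_events, visual_events))  # [(130.5, 167)]
```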
All data is transmitted in real time for dynamic analysis by bots, some programmed for specific ongoing analyses, others looking for deeper insights and emerging trends of potential interest.
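In software terms, those bots could be as simple as handlers subscribed to the live event stream - some fixed-purpose, some exploratory. A bare-bones sketch, with all names invented:

```python
# Bare-bones sketch of the real-time analysis layer: "bots" are just
# handlers subscribed to the incoming event stream.

class EventStream:
    def __init__(self):
        self.bots = []                 # subscribed analysis routines

    def subscribe(self, bot):
        self.bots.append(bot)

    def publish(self, event):
        for bot in self.bots:          # fan each event out in real time
            bot(event)

def ratings_bot(event):                # a specific ongoing analysis
    if event.get("channel") == "tv":
        print("ratings:", event["code"])

def trends_bot(event):                 # open-ended pattern hunting
    print("trend scan:", event)

stream = EventStream()
stream.subscribe(ratings_bot)
stream.subscribe(trends_bot)
stream.publish({"channel": "tv", "code": 0xA7})
```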
Much of the groundwork for this scenario is already done; the devices described are either possible to engineer now or will be soon. They simply haven't been applied in this area yet. The cost of manufacture will need to come down first, and - just as important - nothing will happen until the public and the research industry get used to the concept, which is only likely after other applications of the technology become more familiar.
In short, the science, the economics, and the panelists will ultimately get there. Perhaps the biggest hurdle to be surmounted will be the approval and endorsement of the industry itself - the research buyers and methodologists who will ultimately pay for it.