New parental rule: You know your kid is really ticked off at you when a heated SMS exchange gets kicked over to email.
"I'm done being nice about this," my daughter started in after deciding that the laundry list of my transgressions had exceeded the limits of heated bursts in 140 characters or less. I will omit the embarrassingly petty details, but the argument is a running spat over familial obligations that has gone nuclear and multimedia. SMS, email, mobile photos and videos, Facebook postings, and even the occasional face-to-face exchange of volleys are involved. I think she has turned the "I'm 18 now!" line into a macro that works across platforms and gets laid into every other missive. She is her father's daughter, to be sure: resourcefully hot-headed.
In fact, poking through my girl's teen force field and into an honest conversation has been an object lesson in the unsettled state of new-media narrative. As I discussed in my last post, we are in that Neanderthal period of mobile media where we don't yet know which narrative techniques will take hold and become conventions of the next great computing and communications platform. Just in bickering with my daughter I find myself playing with a number of unexpected tools, like the Talking Tom animated cat that records, synthesizes, and animates what you say. I have been sending those over to my steaming-mad daughter to lighten the exchanges.
If we are in those silent film days of mobile, then indeed we are just starting to learn the language, or find the languages, that make the most sense here. Sometimes the interfaces are themselves a kind of rudimentary language. How, for instance, do we make news information flexible enough to conform to a consumer's different modes of use when accessing it on the go?
What does "on-the-go" mean anyway? That really isn't a context, is it? Certainly not one you can program against in the same way radio conforms to drive time and office use? Untethered from the desk, there is no telling where your digital information is hitting a user: at home in their chair, in a crowded bus, surreptitiously under the classroom desk, or on a city street where glare obscures the screen. For content providers of all sorts, mobility introduces a new element: unpredictable and highly varied modes of consumption. TV, print, radio, even the Web generally could program against context. Yes, mobile has geo-location, and certain apps are obviously context-specific. But for most branded apps and information publishing, there is no clear use case, or use place.
The video news aggregator Newsy is an interesting example of multi-modal design. The app presents the viewer with a set of video thumbnails and headlines to swipe across, with an option to play a clip immediately or open the tile to see a story synopsis. The app tells you upfront the exact length of the news clip, but also lets you drop into a transcript of the video report rather than the video itself. The design is smart in that it doesn't lock users into a specific media type and allows for multiple use cases.
How mobile and touch interfaces influence the book is going to be one of the more interesting areas to watch. The excellent Pedlar Lady of Gushing Cross iPad and iPhone app is among the best examples of text, image, animation and touch interactivity I have seen. This simple folk tale unfurls in illustrated pages where audio background, in-frame movement and even animated text combine to make something unique. It is not film, cartoon, book, magazine, radio or anything quite the same as another form. The maker, Moving Tales, uses the large screen of the iPad especially well, filling the frame with the character and juxtaposing the spoken narrative with the images. Classic folk-tale storytelling is being grafted onto the medium, but imagine when these presentation techniques free up a creator's narrative imagination to craft new ways of leveraging all of these tools.
Among all of the mobile narrative tools we will have at our disposal in coming years, augmented reality is among the most fascinating and unformed right now. Apps like junaio and Layar are still a little hard to use, with a clutter of choices for layering digital data onto a live scene. I have been impressed with the Acrossair browser, which also layers its interface effectively onto the live view. You can choose on the fly whether to superimpose on your view of the world any number of resources or search results from multiple engines. We are still a long way from knowing how best to tame this monstrously powerful device. Ultimately, AR has the potential to turn any object in the world, every place, into a storytelling moment. Unlike a 2D code or other connective tissue, AR could embellish the world as seen -- and without the kludgy intermediary of a code.
How and when some of these narrative tools transform how we talk to audiences and to one another is anyone's guess right now. We do know from media history that fearless experimentation is part of the process. As brands look for ways of engaging consumers through mobile, I don't think they should be shy about offering customers multiple entry points and variable modes of communication, and letting the user be part of the discovery process as everyone lurches towards the next language.
Consider your customer a stubborn, hot-headed, fiercely independent young adult with whom you are trying to locate that next stage in communication.
Then tell me what you find. This Dad could use a few lessons in teen-speak.