Turing 2.0: I Am Not A Robot

Last week, Saya the robot made her teaching debut in Tokyo. She is "multilingual, can organise set tasks for pupils, call the roll and get angry when the kids misbehave."

I'm sure Saya would fail it, but it seems pretty clear that the Turing test is either obsolete or close enough to obsolete that it hardly matters. These days, nobody cares whether a machine can pass for a human. Instead, Turing has been superseded by the far more pressing Turing 2.0: can a human pass for a human?

Consider the last time you sent an email to someone you didn't know. How did you introduce yourself? What did you say to prove that you were you -- and, therefore, worth listening to? Consider the last time you met someone new online -- maybe someone who sold you that used exercycle on Craigslist. How could you tell it was a person? More important, how could you tell it was a trustworthy person?

As we shift more and more of our activity online, this new Turing test becomes more and more important. Turing 2.0 is at the heart of modern reputation management, at the heart of our ability to make online connections that develop into genuine relationships. In this era of data transparency, the data has to stack up in our favor.

Our efforts to prove that we are real people are both helped and hampered by technology. Our Digg reputations and eBay reputations help us, assuming we've been good little Diggers and eBayers. Mail merges hurt us, not because mass personalization is inherently evil, but because they raise the standard of proof.

Our online activities -- our blogs, our Facebook profiles, our Twitter feeds -- all support our claim that we are real, that we are alive, that we matter. Sometimes I think that's the source of our collective social media addiction (yes, my own as well): we are all screaming to prove we exist, and the louder we all scream, the louder we all have to scream in order to be heard.

Google's new profiles play right into our fears about Turing 2.0. What if that person to whom I'm selling my exercycle Googles me, and nothing comes up? She'll think I'm a scammer! She'll think "Kaila Colbin" is just a made-up name! But I'm real... I'm real!

So we cast aside our privacy concerns and upload yet another profile pic and fill in yet another online form field with our blog's URL. We do this because we hope that this time, our efforts will be enough to help us rise up out of the crowd, to prove that unlike the indistinguishable morass of spammy pseudo-identities, we are unique.

We want to be legitimate in the eyes of other people. But being legitimate in the eyes of the Internet offers benefits as well: link juice benefits. AdSense benefits. Ranking benefits.

Once again, the people who inevitably seek to game the system rise up, just as they did with machine-generated blogs, phony Facebook accounts and fake Twitterers. Now they are generating algorithms to produce human-like Google profiles, looking to cash in on the benefits the Internet bestows on those it believes to be human.

We can all be spoofed and aliased. No one is immune. All we can do is try to stay one step ahead of the machines.

Writing program terminated.

2 comments about "Turing 2.0: I Am Not A Robot".
  1. Scott Brinker from ion interactive, inc., May 19, 2009 at 10:58 a.m.

    That's a great twist on the Turing Test -- and an excellent point about the challenges of authenticity in a world being overrun with automated and semi-automated marketing programs.

    Given that people are also using software to fight this clutter, by filtering computer-generated marketing with anti-spam algorithms, there's also the Inverse Turing Test: can a computer distinguish a human from another computer?

  2. John Jainschigg from World2Worlds, Inc., May 19, 2009 at 1:37 p.m.

    Kaila - this is a great post. Very thought-provoking.

    I actually disagree -- I _think_. I mean, journalists have historically encountered this kind of problem a lot, because interview subjects have handlers and filters -- and good ones have no trouble establishing their authenticity and getting past these barriers to obtain a story. All you do is write a personal letter of appropriate tone and format, identifying your media affiliation, explaining your project, and making as many references to relevant/associated connections as you figure is warranted/tasteful, and they write or call you back. The scenario has analogues in many/most other contexts where you need attention from a stranger.

    I guess my point is: it's very hard (absent great skill and a high level of malicious intent) to fake an authentic, one-off, deliberate, socially-intelligent personal communication, when that communication has a legitimate reason to exist. And people tend to reply positively to communications that self-certify successfully in these simple ways, normally without performing additional steps to validate their correspondent. If I get a nice letter from someone with a company URL, I _will_ usually check out their website before responding - but that's because I want to know more about their business so as to frame a more-informed reply.

    Likewise, "authentication" fails when these conditions are violated: when the communication fails to be (for example) socially-intelligent, or has no legitimate reason to exist. So people won't reply to a rude, cold, impersonal, sloppy or thoughtless note; or to one that exists outside normal boundaries of relevance and legitimate personal intent.

    The only folks troubled by this are spammers (including friend-connection spammers) and people seeking to automate transactions while preserving anonymity. And (to me at least) that seems fine. Spam is inauthentic and of low priority to its recipients, so the only concern here is how to blackhole the stuff efficiently. And there are technical fixes (involving trusted third parties, etc.) to enable anonymous transactions with high degrees of assurance.

    So I guess I don't see the problem. Real reputation (or to put it more bluntly, the question of 'Why should I care about what you just said?') is something that humans (and only humans) can generally negotiate with one another pretty rapidly and easily -- the cost of attending to this validation step is rightly perceived as negligible by comparison with the potential value of the interaction, should it be permitted to continue.

    This exists in a completely separate universe from machine-mediated 'reputation,' which is about Google ranking, spam-filtration, and other issues arising in the management of internet services at large scale -- e.g., the people who run Facebook are probably interested in ways of preventing impostures and other forms of identity- and service-theft. Everyone who uses the internet should be concerned, at least in principle, about imposture and reputational abuse.

    But these are all issues arising outside what might be called normal human scope and scale. And they only really arise if you develop highly-automated systems that overtly or covertly determine the fortunes of individuals with respect to this class of information - the way credit scores do. The cure for that is not to obsess over your internet reputation wherever it happens to be represented across diverse, unregulated systems, but to sternly regulate the use of those systems and make them correctable.

    Confusing real reputation with 'net reputation is, I think, a bad mistake, and will tend, over the long term, to lead in exactly the direction we fear: to a point where 'net reputation (for individuals) really matters, and can trump the real.
