One of the more interesting movies I’ve watched in the past year has been "Ex Machina." Unlike the abysmally
disappointing "Transcendence" (based loosely on the ideas of futurist Ray Kurzweil -- how can you screw up
Kurzweil, for God’s sake?), "Ex Machina" is a tightly directed, frighteningly claustrophobic sci-fi thriller that peels back the moral layers of artificial intelligence one by one.
If
you haven’t seen it, do so.
But until you do, here’s the basic set-up. Caleb Smith (Domhnall Gleeson) is a programmer at a huge Internet search company called Bluebook (think
Google). He wins a contest where the prize is a week spent with the CEO, Nathan Bateman (Oscar Isaac) at his private retreat. Bateman’s character is best described as Larry Page meets Steve Jobs
meets Larry Ellison meets Charlie Sheen -- brilliant as hell, but one messed-up dude. It soon becomes apparent that the contest is a ruse and Smith is there to play the human in an elaborate
Turing Test to determine if the robot Ava (Alicia Vikander) is capable of consciousness.
About halfway through the movie, Bateman reveals to Smith the source of Ava’s intelligence
“software”: it came from Bluebook’s own search data. "It was the weird thing about search engines. They were like striking oil in a world that hadn’t invented internal
combustion. They gave too much raw material. No one knew what to do with it. My competitors were fixated on sucking it up, and trying to monetize via shopping and social media. They
thought engines were a map of what people were thinking. But actually, they were a map of how people were thinking. Impulse, response. Fluid, imperfect. Patterned, chaotic.”
As a
search behavior guy, I must say, that sounded more like fact than fiction. I’ve always thought search data could reveal much about how we think. That’s why John Motavalli’s recent column,
"Google Looks Into Your Brain And Figures You Out," caught my
eye. Here, it seemed, fiction was indeed becoming fact.
And that fact is, when we use one source for a significant chunk of our online lives, we give that source the ability to
capture a representative view of our related thinking. Google and our search behavior, or Facebook and our social behavior, come immediately to mind.
Motavalli’s reference to Dan
Ariely’s post about micro-moments is just one example
of how Google can peek under the hood of our noggins and start to suss out what’s happening in there. What makes this either interesting or scary as hell, depending on your philosophic
bent, is that Ariely’s area of study is not our logical, carefully processed thoughts -- but our subconscious, irrational behaviors. And when we’re talking artificial intelligence,
it’s that murky underbelly of cognition that is the toughest nut to crack.
I think Alex Garland, the writer/director of "Ex Machina," may have tapped something fundamental in the little
bit of dialogue quoted above. If the data we willingly give up in return for online functionality provides a blueprint for understanding human thought, that’s a big deal -- a very big
deal.
Ariely’s blog post talks about how a better understanding of micro-moments can lead to better ad targeting. To me, that’s kind of like using your new Maserati to drive
across the street and visit your neighbor -- it seems a total waste of horsepower. I’m sure there are higher things we can aspire to than figuring out a better way to deliver a hotels.com
ad.
Both Google and Facebook are full of really smart people. I’m pretty sure someone there is capable of connecting the dots between true artificial intelligence and their own brand of
world domination.
At the very least, they could probably whip up a really sexy robot.