"Sometimes, in order to see the light, you have to risk the dark." -- Dr. Iris Hineman, a character in the movie “Minority Report”
I don’t usually look to Hollywood for deep philosophical reflection, but today I’m making an exception. Steven Spielberg’s 2002 film “Minority Report” is balanced on some fascinating ground, ethically speaking. For me, it raised a rather interesting question: Could you get a clear enough picture of someone’s mental state from their social media feed to predict pathological behavior? And, even if you could -- should you?
If you’re not familiar with the movie, here's the background: In the year 2054, there are three individuals who possess a psychic ability to see events in the future, primarily premeditated murders. These individuals are known as Precognitives, or Precogs. Their predictions are used to set up a PreCrime Division in Washington, D.C., where suspects are arrested before they can commit the crime.
Our Social Media Persona
A persona is a social façade -- a mask we don that portrays a role we play in our lives. For many of us, that now includes the digital stage of social media. Here too we have created a persona, where we share the aspects of ourselves that we feel we need to put out there on our social media platform of choice.
What may surprise us, however, is that even though we supposedly control what we share, that content reveals a great deal about who we are -- both intentionally and unintentionally. And, if those clues are troubling, does our society have a responsibility -- or the right -- to proactively reach out?
In a commentary published in the American Journal of Psychiatry, Dr. Shawn McNeil said of social media, “Scientists should be able to harness the predictive potential of these technologies in identifying those most vulnerable. We should seek to understand the significance of a patient’s interaction with social media when taking a thorough history. Future research should focus on the development of advanced algorithms that can efficiently identify the highest-risk individuals.”
Along this theme, a 2017 study (Liu & Campbell) found that where we fall in the so-called “Big Five” personality traits -- neuroticism, extraversion, openness, agreeableness and conscientiousness -- as well as the “Big Two” metatraits -- plasticity and stability -- can fairly accurately predict how we use social media.
But what if we flip this around? If we just look at a person’s social media feed, could we tell what their personality traits and metatraits are with a reasonable degree of accuracy? Could we, for instance, assess their mental stability and pick up the warning signs that they might be on the verge of doing something destructive, either to themselves or to someone else? Following this logic, could we spot a potential crime before it happens?
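To make the "flip it around" idea concrete, here is a deliberately toy sketch of what trait inference from a feed might look like at its crudest: counting trait-associated words in a user's posts. The trait lexicons and posts below are invented for illustration; real research uses validated psychometric models and machine learning, not hand-picked keyword lists.

```python
# Illustrative sketch only: a toy keyword-based trait scorer, NOT a
# validated psychometric instrument. The lexicons below are hypothetical.
TRAIT_LEXICONS = {
    "extraversion": {"party", "friends", "excited", "fun"},
    "neuroticism": {"worried", "anxious", "stressed", "afraid"},
}

def score_traits(posts):
    """Count how often each trait's (hypothetical) keywords appear
    across a user's posts, normalized by the total word count."""
    words = [w.strip(".,!?").lower() for post in posts for w in post.split()]
    total = max(len(words), 1)  # avoid division by zero on an empty feed
    return {
        trait: sum(w in lexicon for w in words) / total
        for trait, lexicon in TRAIT_LEXICONS.items()
    }

posts = [
    "So excited for the party with friends tonight!",
    "Honestly pretty stressed and worried this week.",
]
scores = score_traits(posts)  # e.g. {"extraversion": 0.2, "neuroticism": ~0.13}
```

Even this caricature makes the ethical point: the signal is sitting in plain sight in the text we voluntarily post, and more sophisticated models only sharpen the picture.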
Police are already using social media to track suspects and find criminals. But this step is typically taken after the crime has occurred. For instance, police departments regularly scan social media using facial recognition technology to track down suspects. They comb a suspect’s social media feeds to establish whereabouts and gather evidence.
Of course, you can only scan content that people are willing to share. But with these platforms as ubiquitous as they are, it’s astounding how much people do share, even when they’re on the run from the law.
There are certainly ethical questions about mining social media content for law enforcement purposes. For example, facial recognition algorithms tend to produce more false positives for people with darker skin tones, raising concerns about racial profiling. But at least this activity tries to stick with the spirit of the tenet our justice system is built on: You are innocent until proven guilty.
There must be a temptation, however, to go down the same path as “Minority Report” and try to preempt crime by identifying a “precrime.”
Take a school shooting, for example. In the May 31 issue of Fortune, senior technology journalist Jeremy Kahn asked this question: “Could A.I. prevent another school shooting?” In the article, Kahn referenced a study in which a team at the Cincinnati Children’s Hospital Medical Center used artificial intelligence software to analyze transcripts of teens who went through a preliminary interview with psychiatrists. The goal was to see how well the algorithm’s judgment of whether a subject had a propensity for violence matched more extensive assessments by trained psychiatrists. They found that the assessments matched about 91% of the time.
I’ll restate that so the point hits home: An AI algorithm that scanned a preliminary assessment could match much more extensive assessments done by expert professionals 9 out of 10 times -- even without access to the extensive records and patient histories that the psychiatrists had at their disposal.
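That 91% figure is an agreement rate: the fraction of cases where two raters reach the same binary judgment. A minimal sketch of how such a rate is computed (the labels below are invented, not data from the Cincinnati study):

```python
# Toy illustration of an agreement rate between two raters' binary
# judgments (1 = "at risk", 0 = "not at risk"). These labels are
# invented for illustration, not data from the study discussed above.
ai_labels     = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
expert_labels = [1, 0, 1, 0, 0, 0, 1, 0, 1, 0]

agreement = sum(a == e for a, e in zip(ai_labels, expert_labels)) / len(ai_labels)
# Here the raters differ on one case out of ten, so agreement is 0.9
```

Worth noting: raw agreement is a coarse metric, since two raters can agree a certain amount purely by chance; studies often also report chance-corrected measures such as Cohen’s kappa.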
Let’s go one step further and connect those two dots: If social media content could be used to identify potentially pathological behaviors, and if an AI could then scan that content to predict whether those behaviors could lead to criminal activities, what do we do with that?
It puts us squarely on a very slippery slope, but we have to acknowledge that we are getting very close to a point where technology forces us to ask a question we’ve never been able to ask before: “If we -- with a reasonable degree of success -- could prevent violent crimes that haven’t happened yet, should we?”