Commentary

How Artificial Intelligence Can Help Save Troubled Lives

As an investor in AI, I’m sensitive to the narrative that technology is subtracting humanity from our interactions. As the story goes, we’ve become so efficient at delivering exactly the content that appeals to people’s predisposed biases that everyone burrows more deeply into their digital worlds and connects less with each other in person. The result is deeper relationships with devices than with fellow human beings, which can lead to increased isolation and loneliness.

There’s some truth to this idea. But a new use of technology is now serving as a lifeline to catch people before they slip beneath the surface. AI is being developed to flag behavior that cries out for intervention, using natural language understanding, facial recognition, body signal monitoring, and mapping of patterns that indicate distress.  

This new frontier in mental health vaults media platforms and algorithms into critical societal roles. Those of us in the media and marketing industry (myself included) who have been known to say “We aren’t saving lives here” should perhaps think again. The very same practices employed to understand behavior, motivations, and emotions for the sake of brand appeal are now being applied to a much higher purpose.

Steven Vannoy, an associate professor at UMass Boston, is dedicated to addressing mental health issues using technology. Currently, he says, the most common way of identifying people at risk of suicide is self-reporting, an understandably unreliable method. Hints of distress are more likely to be detected through behavior, much the way buying patterns are identified.

Vannoy notes there’s a “new territory in the psychology field, mapping emotional experiences in real time to look for situations that have elevated risk.” One technology being tested by Vannoy and his team is Affectiva, the same facial recognition platform used by marketers to better understand emotional responses to advertising (which I wrote about in an earlier post).

The idea is to track the faces of at-risk individuals, so the technology is being trained on expressions of extreme anxiety, sorrow, or pain. Vannoy and his team are in the early phases of applying facial coding as well as body and voice monitoring, using a smartphone app that checks in and asks questions such as, “How are you feeling about your day?”

As reported by TechCrunch, Facebook recently began employing AI to identify suicidal behavior so that fast action can be taken to intervene. Facebook is in a unique position to put people immediately in touch with loved ones if they exhibit what the algorithm identifies as risky or urgent behavior. The algorithm can identify words, phrases, and facial expressions that mirror those previously reported as suicidal. It can also prioritize the most urgent situations, so first responders can make in-person “wellness checks,” working with partners such as Save.org, the National Suicide Prevention Lifeline, and approximately 80 other organizations.

The program generated one hundred interventions in November 2017, with first responders in some cases arriving before the person finished broadcasting on Facebook Live. This is a significant innovation in suicide prevention, where time is of the essence and minutes make a difference.

According to Mental Health America, 43 million adults in the U.S. have a mental health condition, and 56% of that group lack access to care. Andrew Ng, a prominent figure in AI development at Google and Baidu, is behind Woebot, a chatbot designed to bring therapy to millions who might otherwise go untreated for depression. According to the MIT Technology Review, Woebot has impressive natural language capability and a conversational approach to problem-solving that can actually succeed in developing a relationship with the user.

Another AI company, Triggr, employs a buddy system meant to keep addicts from relapsing. Both the person in recovery and a friend or relative must agree up front to the baseline concept: notification if there are signs of relapse. The algorithm then keeps watch for erratic behavior and notifies the partner when there is cause for concern. It’s something like an AI/human hybrid of an AA sponsor.

These are just a few examples, and just the beginning. The combination of media-related skills and sophisticated new tools is inspiring. In mental health, AI can help connect with people at their most personal and critical moments, and actually affect lives.