COMMENTARY

Can Facebook Help Prevent Suicide?

Drew Ramsey, MD

February 25, 2019

This transcript has been edited for clarity.

I'm Dr Drew Ramsey, an assistant clinical professor of psychiatry at Columbia University in New York City. Welcome back to the Brain Food blog.

I wanted to tell you about an experience I had a few weeks back at the Facebook headquarters here in New York City. Daniel J. Reidenberg, PsyD, executive director of Suicide Awareness Voices of Education, brought together a group of mental health and suicide prevention advocates, along with Facebook's global head of safety, Antigone Davis, to discuss how the company has been handling what is a relatively new issue for it.

After launching features like Facebook Live, the company found that these platforms were increasingly being used by people to post suicide notes or to engage in self-harm on live video. This is obviously the type of content that Facebook has to monitor very closely. I was impressed by all of the efforts they've put in place to counter this, and it made me wonder: Can Facebook prevent suicides?

Obviously, the company is a part of a larger debate and controversy right now, which I'll leave to other experts to discuss. But when it comes to mental health, it's really interesting to see the different tools that Facebook is developing.

This began when several of Facebook's engineers in Palo Alto, California, noted that a real suicide cluster was happening and became interested in how their technology could help. A number of researchers are also interested in this area, including Randy Auerbach, PhD, of Columbia University; they are developing apps and tools that allow a very granular analysis of what happens right before individuals attempt or complete suicide.

Facebook is using artificial intelligence (AI) to monitor messages and live broadcasts in order to identify users at risk. They've also implemented a new set of tools, which you may have noticed, that allows users to flag messages if they are concerned about an individual or friend with whom they are connected. Flagged messages are then sent to human review teams that speak approximately 60 languages, who can rapidly assess them and potentially reach out to individuals at risk.

In the past year, Facebook has initiated 3500 wellness checks. This means that, by using AI, human monitors, and/or message-flagging systems, Facebook was able to contact emergency medical services when it couldn't ascertain whether one of its users was safe. This seems like a potentially very powerful technology.

Facebook has also developed a set of mental health and suicide prevention tools for at-risk populations, such as military veterans, firefighters, and police officers. These are valuable additional services to offer its users.

Of course, there are always concerns about Big Brother watching. But that concern should be set in proper context if Big Brother is watching but is also preventing some suicides and providing valuable mental health tools.

So, can Facebook prevent suicide? Who knows, but I'm sure thankful that they're trying.
