As of the end of 2018, the social media website Facebook had 2.32 billion monthly active users worldwide. Many of Facebook’s basic functions are familiar to the public: once you start “friending” other users, Facebook can recommend other users to friend, show friends’ updates in your newsfeed, and suggest relevant groups to join, among other functions. At least since 2017, however, the website’s capabilities have moved beyond building friend and group networks and into a complicated public health role: monitoring users for suicide risk.

In September 2018, Facebook shared a description of how its text and comment classifiers work and how the company uses them to judge whether local authorities should be notified of an emergency. The succinct infographic may itself lend weight to critics’ concerns over Facebook’s secrecy about its methods, as the second-to-last step in the description simply reads “[Post] Reviewed by Community Operations.”
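The published description stops short of specifics, but a flagging pipeline of this general shape could be sketched roughly as follows. Everything below is an illustrative assumption made for concreteness: the threshold, the placeholder classifier, and the function names are invented here and do not reflect Facebook’s actual code, models, or criteria.

```python
# Purely illustrative sketch (not Facebook's actual system): combine a text
# classifier's risk score with user reports, then escalate flagged posts to
# human reviewers, who decide whether emergency responders should be notified.

from dataclasses import dataclass

# Hypothetical threshold for sending a post to human review.
REVIEW_THRESHOLD = 0.8


@dataclass
class Post:
    text: str
    user_reports: int  # number of friends/viewers who reported the post


def classifier_score(text: str) -> float:
    """Stand-in for a trained text classifier returning a risk score in [0, 1].

    A real system would use a model trained on labeled posts and comments;
    this placeholder simply lets the pipeline run end to end.
    """
    return 0.0  # placeholder value


def flag_for_review(post: Post) -> bool:
    """Flag a post if the model score crosses the threshold or a user reported it."""
    score = classifier_score(post.text)
    return score >= REVIEW_THRESHOLD or post.user_reports > 0


def handle_post(post: Post) -> str:
    """Route a post: most receive no action; flagged posts are queued for
    human reviewers (the 'Reviewed by Community Operations' step)."""
    if flag_for_review(post):
        return "queued_for_human_review"
    return "no_action"
```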

Image Source: Janine Schmitz

Facebook stated that it works with suicide prevention experts to create an efficient, comprehensive program to connect distressed users with friends and helpline contact information. The company also mentioned specialized Facebook teams that review the most urgent cases.

After several users live-streamed their suicides on Facebook Live in early 2017, Facebook worked to expand its AI, using both algorithms and user reports to flag threats of suicide.

Image Source: praetorianphoto

Some mental health experts and police officials have noted that Facebook has had some success in helping law enforcement locate users and stop suicide attempts. Other, more critical mental health experts warn that the company’s calls to police could have harmful consequences, such as unintentionally precipitating a suicide or subjecting nonsuicidal people to unnecessary psychiatric evaluations. Facebook’s lack of transparency has also drawn criticism, since the company has not shared the exact process its reviewers use to decide whether to notify emergency responders. Remarking on a program whose consequences are visible to the public even though its inner workings are not, Dr. John Torous, director of the digital psychiatry division at Boston’s Beth Israel Deaconess Medical Center, says, “It’s hard to know what Facebook is actually picking up on, what they are actually acting on, and are they giving the appropriate response to the appropriate risk. It’s black box medicine.”

Health law scholar Mason Marks, a fellow at Yale Law School and New York University School of Law, pairs his criticism with a call to action: government regulation of Facebook. He argues that regulators should require Facebook to release evidence of its suicide prevention program’s safety and effectiveness, because the company’s suicide risk scoring software essentially constitutes the practice of medicine. “In this climate in which trust in Facebook is really eroding, it concerns me that Facebook is just saying, ‘Trust us here,’” he states.

Feature Image Source: abdullah – stock.adobe.com.

Author Cath Ashley

Cath is a UC Berkeley alumnus with a Molecular and Cell Biology degree and a Music minor. She is interested in healthcare, public health, health equity, youth/student empowerment, and cats. Her hobbies include chess, social dancing, and soundtrack analysis.
