
Facebook's suicide prevention algorithm raises ethical concerns

Facebook reports 3,500 self-harm cases have been flagged by its algorithm and resulted in local authorities being called on to intervene
karkozphoto/Depositphotos

In a new commentary published in the journal Annals of Internal Medicine, two researchers are questioning the ethics and transparency of Facebook's suicide prevention system. The algorithm reportedly flags users deemed to be at high risk of self-harm, triggering a process in which the company can notify local authorities to intervene.

Back in 2017, Facebook began testing a machine learning algorithm designed to track a user's activity on the platform and flag the person if it identifies an imminent risk of self-harm. Once flagged, the case is moved to a human team for evaluation, and if deemed urgent, Facebook contacts local authorities to intervene.
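Facebook has not published the internals of this system, but the workflow described – automated scoring of activity, human review of flagged cases, then escalation to emergency services – can be sketched roughly as follows. This is a minimal, purely illustrative Python sketch; every name, phrase list and threshold here is an assumption, not Facebook's actual implementation.

# Hypothetical sketch of a flag-then-review triage pipeline.
# All names, thresholds and scoring logic are illustrative assumptions;
# Facebook's real system is proprietary and has not been published.
from dataclasses import dataclass

@dataclass
class Post:
    user_id: str
    text: str

def risk_score(post: Post) -> float:
    # Stand-in for the machine-learning classifier (returns a 0-1 score).
    # A real system would use a trained model over many signals; this
    # placeholder keys on a few phrases purely to keep the example runnable.
    phrases = ("i want to end it", "can't go on", "goodbye everyone")
    return 0.9 if any(p in post.text.lower() for p in phrases) else 0.1

def human_review_confirms_imminent_risk(post: Post) -> bool:
    # Placeholder for the human-in-the-loop evaluation step.
    return True

def triage(post: Post, flag_threshold: float = 0.8) -> str:
    # Route a post: no action, human review, or escalation to authorities.
    if risk_score(post) < flag_threshold:
        return "no_action"
    if human_review_confirms_imminent_risk(post):
        return "notify_emergency_services"
    return "reviewed_no_action"

print(triage(Post("user123", "Goodbye everyone, I can't go on")))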

By late 2018, Facebook was calling the experiment a great success, deploying it in many countries around the world – but not in Europe, where the new GDPR rules deem the practice a privacy violation. After a year, the company reported around 3,500 cases in which emergency services had been notified of a potential self-harm risk. Specifics of those 3,500 cases were not made clear, including what percentage resulted in Facebook actually preventing a fatal act of self-harm.

A recent New York Times report, which reviewed four specific police reports on cases instigated by Facebook's algorithm, suggests the system is far from reliably successful. In one of the four cases, Facebook helped police identify the location of an individual live streaming a suicide attempt and intervene in time. In two other cases, responders arrived too late, and the fourth turned out to be a false alarm, with police arriving at the doorstep of a woman who said she had no suicidal intent. The police, not believing her, demanded she come to a local hospital for a mental health evaluation.

Dan Muriello, one of the engineers on the Facebook team that developed the algorithm, says the system is not meant to imply Facebook is making any kind of health diagnosis; rather, it simply works to connect those in need with relevant help. "We're not doctors, and we're not trying to make a mental health diagnosis," says Muriello. "We're trying to get information to the right people quickly."

Ian Barnett, a researcher at the University of Pennsylvania, and John Torous, a psychiatrist at Harvard Medical School, penned the new commentary, which suggests Facebook's suicide prevention tools constitute the equivalent of medical research and should be subject to the same ethical requirements and transparency of process.

The authors cite a variety of concerns around Facebook's suicide prevention effort, from a lack of informed consent from users regarding real-world interventions to the potential for the system to target vulnerable people without clear protections. Underpinning all of this is a profound lack of transparency. Neither the general public nor the medical community actually knows how successful the system is, or whether social harms are being generated by police being called on unwitting citizens. Facebook says it doesn't even track the outcomes of its calls to emergency services, citing privacy issues, so the real-world effects of these interventions remain essentially unknown.

"Considering the amount of personal medical and mental health information Facebook accumulates in determining whether a person is at risk for suicide, the public health system it actives through calling emergency services, and the need to ensure equal access and efficacy if the system does actually work as hoped, the scope seems more fitting for public health departments than a publicly traded company whose mandate is to return value to shareholders," the pair conclude in their commentary. "What happens when Google offers such a service based on search history, Amazon on purchase history, and Microsoft on browsing history?"

Mason Marks, a visiting fellow at Yale Law School, is another expert who has been raising concerns over Facebook's suicide prevention algorithms. Alongside the potential privacy issues of a private company generating this kind of mental health profile on a person, Marks presents some frightening possibilities for this kind of predictive algorithmic tool.

"For instance, in Singapore, where Facebook maintains its Asia-Pacific headquarters, suicide attempts are punishable by imprisonment for up to one year," Marks wrote in an editorial on the subject last year. "In these countries, Facebook-initiated wellness checks could result in criminal prosecution and incarceration."

Ultimately, all of this leaves Facebook in a tricky situation. The social networking giant may be trying to take responsibility for any negative social effects of its platform; however, it seems to be caught in a no-win scenario. As researchers call for greater transparency, Antigone Davis, Facebook's Global Head of Safety, has suggested that releasing too much information about the algorithm's process could be counterproductive.

"That information could could allow people to play games with the system," Davis said to NPR last year. "So I think what we are very focused on is working very closely with people who are experts in mental health, people who are experts in suicide prevention to ensure that we do this in a responsible, ethical, sensitive and thoughtful way."

It is all well and good for Facebook to state the goal of working with experts in ethical and sensitive ways, but the response so far from experts in the field is that no one knows what the technology is doing, how it generates its results, who is reviewing those results, or whether it is causing more harm than good. All we know for sure is that roughly 10 people a day around the world are having police or emergency services show up on their doorstep after a call from Facebook.

The new article was published in the journal Annals of Internal Medicine.

6 comments
ChairmanLMAO
We should stop it. Personally, I was enjoying watching Facebook kill itself.
fb36
IMHO, Facebook is doing a valuable service, for the common good of the general public, by trying to detect suicidal people!
I also absolutely agree that making the detection algorithm/process public would enable lots of people to try to defeat it! (& later say "Look people! Facebook's algorithm keeps making mistakes! It cannot be trusted! It must be stopped!")
IMHO, the people criticizing Facebook for this (& asking for transparency etc.) are NOT really sincere! IMHO, they are NOT really trying to help/guide Facebook, but just trying to force Facebook to stop (by making the process practically impossible)!
Eli Willner
I'm astounded that anyone would have any concerns about this at all. Ok, so there are some false positives. Big deal! Lives are being saved!
Daishi
@Eli Willner But that's the point. We don't know if lives are being saved. If you were experiencing depression, can we conclude that having police called on you would solve the problem? Maybe people anonymously talking about their problems online is beneficial without the police showing up. Sometimes people need someone to talk to. Maybe them posting online is a sign of them reaching out to people in their lives, or to people with similar experiences and struggles, for communication and dialogue. If two people were sitting inside their home talking about depression you wouldn't condone bugging their home, intercepting the conversation, and sending the police, so why should online communication be different?
This could potentially just make matters worse as people feel the helplessness of having nobody to turn to, with their online communications being monitored by the thought police. People may have valid reasons they don't want the people around them to know what they are struggling with, and police presence may only breach their privacy, humiliating them and making the issue worse. How sure can we be that doxing people who are depressed or on the verge of suicide is a good thing? If people know talking about something like this will cause the police to come rolling up, does it create a sense of urgency to complete the task before the police can get there rather than taking time to think about it? How clear does someone have to be in their language before it warrants action by Facebook? It seems clear that they are acting on things other than openly and clearly stated intent, without the need for transparency.
It's not clear at all that this is saving lives, and it could be doing more harm. The technology behind it may even be a "think of the children" front for expanding into further privacy violations that do even less good. What stops them from using AI to assess political and personal ideologies through private conversations on Messenger and targeting you with highly specific political advertising by catering ads exactly to the issues you are most concerned with? Then getting elected or not is just a matter of raising enough money to pay Facebook enough to deliver the results you need. What could possibly go wrong?
christopher
@Daishi - read the article "...helped... police identify the location of an individual live streaming a suicide attempt and intervene in time. "
How many does it need to save to be worthwhile? Is "just one" enough? There it is. What about 25%? That's the success rate we know about, based on the four cases we were told of. What about 75%? That's also the opportunity in those stats (two dead, one saved, one mistake).
Facebook aren't making this public for the exact same reason this became an article – privacy bigots prefer that other people die rather than have their activity tracked, and so "get off" on writing inflammatory articles about it all.
Stick it in the "I agree" rules nobody reads (it's probably already there anyhow - has anyone ever read them?), and there you go: informed consent sorted.
christopher
I just skimmed their policies - every Facebook user has agreed to this stuff - it is mentioned in multiple different places that are easy to find and understand.