In a new commentary published in the journal Annals of Internal Medicine, two researchers question the ethics and transparency of Facebook's suicide prevention system. The algorithm reportedly flags users deemed to be at high risk of self-harm, triggering a process in which the company notifies local authorities to intervene.
Back in 2017, Facebook began testing a machine learning algorithm designed to track a user's activity on the platform and flag the person if it identifies an imminent risk of self-harm. Once flagged, the case is passed to a human team for evaluation, and if deemed urgent, Facebook contacts local authorities to intervene.
By late 2018, Facebook was calling the experiment a great success and had deployed it in many countries around the world – but not in Europe, where the new GDPR rules deem it a privacy violation. After a year, the company reported around 3,500 cases in which emergency services had been notified of a potential self-harm risk. The specifics of those 3,500 cases were unclear: what percentage resulted in Facebook actually preventing a fatal act of self-harm?
A recent New York Times report, reviewing four specific police reports of cases initiated through Facebook's algorithm, suggests the system is far from successful. Only one of the four cases studied resulted in Facebook helping police identify the location of an individual live streaming a suicide attempt and intervene in time. Two other interventions came too late, and the fourth case turned out to be a false alarm, with police arriving at the doorstep of a woman who said she had no suicidal intent. The police, not believing the woman's statements, required her to go to a local hospital for a mental health evaluation.
Dan Muriello, one of the engineers on the Facebook team that developed the algorithm, says the system doesn't mean Facebook is making any kind of health diagnosis; rather, it simply works to connect those in need with relevant help. "We're not doctors, and we're not trying to make a mental health diagnosis," says Muriello. "We're trying to get information to the right people quickly."
Ian Barnett, a researcher from the University of Pennsylvania, and John Torous, a psychiatrist at Harvard Medical School, recently penned a commentary arguing that Facebook's suicide prevention tools constitute the equivalent of medical research, and should be subject to the same ethical requirements and transparency of process.
The authors cite a variety of concerns around Facebook's suicide prevention effort, from a lack of informed consent from users regarding real-world interventions, to the potential for the system to target vulnerable people without clear protections. Underpinning all of this is a profound lack of transparency. Neither the general public nor the medical community actually knows how successful the system is, or whether social harms are being generated by police being called on unwitting citizens. Facebook claims it doesn't even track the outcomes of calls to emergency services, citing privacy issues, leaving outsiders with little idea of what is actually going on.
"Considering the amount of personal medical and mental health information Facebook accumulates in determining whether a person is at risk for suicide, the public health system it activates through calling emergency services, and the need to ensure equal access and efficacy if the system does actually work as hoped, the scope seems more fitting for public health departments than a publicly traded company whose mandate is to return value to shareholders," the pair conclude in their commentary. "What happens when Google offers such a service based on search history, Amazon on purchase history, and Microsoft on browsing history?"
Mason Marks, a visiting fellow at Yale Law School, is another expert who has been raising concerns over Facebook's suicide prevention algorithms. Alongside the privacy issues of a private company generating this kind of mental health profile on a person, Marks points to some frightening possibilities for this kind of predictive algorithmic tool.
"For instance, in Singapore, where Facebook maintains its Asia-Pacific headquarters, suicide attempts are punishable by imprisonment for up to one year," Marks wrote in an editorial on the subject last year. "In these countries, Facebook-initiated wellness checks could result in criminal prosecution and incarceration."
Ultimately, all of this leaves Facebook in a tricky situation. The social networking giant may be trying to take responsibility for the negative social effects of its platform, but it seems to be caught in a no-win scenario. As researchers call for greater transparency, Antigone Davis, Facebook's Global Head of Safety, has suggested that releasing too much information on the algorithm's process could be counterproductive.
"That information could allow people to play games with the system," Davis told NPR last year. "So I think what we are very focused on is working very closely with people who are experts in mental health, people who are experts in suicide prevention to ensure that we do this in a responsible, ethical, sensitive and thoughtful way."
It is all well and good for Facebook to state a goal of working with experts in ethical and sensitive ways, but the response so far from experts in the field is that no one has any idea what the technology is doing, how it is generating its results, who is reviewing those results, or whether it is actually causing more harm than good. All we know for sure is that at least 10 people a day around the world are having police or emergency services show up on their doorstep after being called by Facebook.
The new article was published in the journal Annals of Internal Medicine.