
Facebook’s A.I. To Scan Posts For Signs Of Suicide, Terrorist Content

Mark Zuckerberg is hailing A.I.'s ability to "save lives," but not everyone is giving this move a "like."
Facebook Founder and CEO Mark Zuckerberg speaks on stage during the annual Facebook F8 developers conference in San Jose, California, U.S., April 18, 2017.
Stephen Lam / Reuters

Facebook is launching artificial intelligence that will scan posts to predict and prevent suicide among users, and to take down posts linked to terrorism.

And while the company's founder, Mark Zuckerberg, is hailing A.I.'s ability to "save lives," others are raising privacy concerns about the initiatives.

In a Facebook post this week, Zuckerberg announced that the company is deploying artificial intelligence software, which it says was built in consultation with suicide prevention groups, to detect possible suicidal intent among users.

The software will use pattern recognition to identify potentially at-risk individuals, and then alert human Facebook staffers, who will assess the situation and contact emergency responders if needed.
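Facebook hasn't published details of how the system works. As a rough illustration of the two-stage flow described above, pattern recognition followed by human review, here is a minimal Python sketch; the patterns, threshold, and function names are all invented for illustration and are not Facebook's.

```python
# Hypothetical illustration only: the model, patterns, and threshold below
# are invented, not Facebook's. Stage 1 scores a post with simple pattern
# recognition; stage 2 routes anything above a threshold to a human reviewer.

from dataclasses import dataclass

# A real system would use a trained classifier, not a keyword list.
RISK_PATTERNS = ["can't go on", "end it all", "say goodbye"]
REVIEW_THRESHOLD = 0.5  # invented value

@dataclass
class Post:
    user_id: int
    text: str

def risk_score(post: Post) -> float:
    """Toy pattern recognition: fraction of risk patterns that match."""
    text = post.text.lower()
    return sum(p in text for p in RISK_PATTERNS) / len(RISK_PATTERNS)

def triage(post: Post, review_queue: list) -> None:
    """Escalate to humans; the software itself never contacts responders."""
    if risk_score(post) >= REVIEW_THRESHOLD:
        review_queue.append(post)  # a staffer assesses and decides next steps

queue: list[Post] = []
triage(Post(user_id=1, text="I can't go on, time to say goodbye."), queue)
print(f"{len(queue)} post(s) escalated for human review")
```

The design point the article emphasizes is that the model only triages: the decision to contact emergency responders always rests with a human reviewer.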

"In the last month alone, these A.I. tools have helped us connect with first responders quickly more than 100 times," Zuckerberg wrote.

"With all the fear about how A.I. may be harmful in the future, it's good to remind ourselves how A.I. is actually helping save people's lives today."

99% success rate flagging terrorist content

In a separate announcement, Facebook's head of global policy management and its head of counter-terrorism policy said the company's A.I. has been strikingly successful at identifying terrorist content.

The company says it has been able to identify and remove 99 per cent of posts related to ISIS and Al Qaeda even before any users have flagged the content, "and in some cases, before it goes live on the site."
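Facebook hasn't spelled out the mechanics in this announcement, but one technique the company has publicly described for stopping re-uploads before they go live is hash matching: fingerprinting known terrorist images and videos and checking new uploads against that list. The Python sketch below, with invented data, shows the idea; production systems typically use perceptual hashes that survive re-encoding, not the exact-match hash used here.

```python
# Hypothetical illustration only: the hashes and pipeline are invented.
# Idea: fingerprint content that was previously removed, then check each
# new upload against the blocklist before it is published.

import hashlib

# Fingerprints of previously removed content (invented example).
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"previously-removed-propaganda-video").hexdigest(),
}

def fingerprint(payload: bytes) -> str:
    """Exact-match fingerprint. Real systems favour perceptual hashes,
    which still match after re-encoding or cropping."""
    return hashlib.sha256(payload).hexdigest()

def allow_upload(payload: bytes) -> bool:
    """Return False to block a known re-upload before it goes live."""
    return fingerprint(payload) not in KNOWN_BAD_HASHES

print(allow_upload(b"previously-removed-propaganda-video"))  # False: blocked
print(allow_upload(b"holiday photos"))                       # True: published
```

An exact hash can only catch content that is already known, which fits the company's framing of removing material "in some cases, before it goes live"; novel posts still require classification and human review.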

As with suicide detection, Facebook relies on humans to make the call on whether there is a "credible threat," and when there is, the company says it reaches out to law enforcement. Facebook says it has provided support in terrorism investigations, including cases where attacks were prevented.

For Facebook, both initiatives are direct responses to criticism that it hasn't done enough to prevent suicides, and sometimes even homicides, broadcast on its platform, or to combat terrorism.

Numerous cases have been recorded of individuals live-streaming their suicides on Facebook Live.

In one infamous incident, a Cleveland man posted video to Facebook of himself killing an elderly man, and later killed himself in a confrontation with police.

Incidents like that have put pressure on Facebook to do more to prevent violence on its platform — no small feat, considering the volume of content that is published to the platform daily.

A portrait of Facebook founder Mark Zuckerberg is seen on an iPhone in this photo taken on August 28, 2017.
NurPhoto via Getty Images

The company has also been repeatedly accused of allowing itself to be used as a platform to incite terrorism. In 2016, an Israeli rights group sued Facebook for US$1 billion in a New York court, alleging the company provided militants with a platform to spread violence. It's just one of several such lawsuits that Facebook faces.

But the use of artificial intelligence technology is raising privacy concerns among some observers, who wonder whether such technology could end up being used for other purposes, such as identifying dissenters in politically repressive countries.

In a clear sign there could be privacy concerns with its artificial intelligence, Facebook has said it won't be rolling out the suicide detection A.I. in Europe. The company hasn't said why, but experts largely agree it has to do with the General Data Protection Regulation, the European Union's tough online privacy regime.

Tech news site TechCrunch reports it was unable to get answers from Facebook about how the company would prevent abuse of the technology, for instance to identify petty crime or political dissent.

U.K.-based tech site The Register said Facebook was unable or unwilling to answer questions about six areas of concern, including how the software was developed, how it was trained to recognize signs of suicide, and how effective it has been so far.

Facebook's push into A.I. comes at a time when a number of prominent voices are raising concerns about the technology.

Elon Musk, the billionaire behind Tesla and SpaceX, has repeatedly said artificial intelligence could soon threaten humanity's very existence.

In a tweet earlier this week, Musk called for government regulation of artificial intelligence.
