Artificial intelligence software is going a long way toward eradicating hate speech – roughly 95% of it – from Facebook Inc.’s digital platforms.
The company made the disclosure in a press conference and blog post Thursday, citing a significant improvement from a year ago (80.5%) and from 2017 (24%), according to Mike Schroepfer, Facebook’s chief technology officer.
“A central focus of Facebook’s AI efforts is deploying cutting-edge machine learning technology to protect people from harmful content,” Schroepfer said.
Facebook has taken pains to get up to speed on machine learning, in which algorithms improve automatically through experience, to complement the thousands of content moderators it employs worldwide to police the posts, photos and videos shared on its platforms.
But that hasn’t stopped withering criticism over the ability of Facebook, Twitter Inc.
and TikTok to keep a lid on racial slurs and religious attacks, especially in an overheated election year.
In its briefing Thursday, Facebook said it has deployed two new AI technologies: “Reinforced Integrity Optimizer,” which learns from real online examples and metrics instead of an offline dataset, and “Linformer,” which lets the company use complex language understanding models that were previously too large and unwieldy to work at scale.
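Linformer’s efficiency gain comes from replacing standard self-attention, whose cost grows quadratically with sequence length, with a low-rank approximation: the key and value matrices are projected down along the sequence dimension before attention is computed. The NumPy sketch below is purely illustrative of that idea, not Facebook’s production code; all dimensions, names, and the random projection matrices are invented for the example.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def linformer_attention(Q, K, V, E, F):
    """Linformer-style attention for one head.

    Q, K, V: (n, d) query/key/value matrices (n = sequence length).
    E, F:    (k, n) learned projection matrices with k << n, so the
             score matrix is (n, k) instead of the usual (n, n).
    """
    d = Q.shape[-1]
    K_proj = E @ K                        # (k, d) compressed keys
    V_proj = F @ V                        # (k, d) compressed values
    scores = Q @ K_proj.T / np.sqrt(d)    # (n, k): linear in n
    return softmax(scores) @ V_proj       # (n, d) attention output

# Toy example: a length-512 sequence compressed to 64 projected positions.
rng = np.random.default_rng(0)
n, d, k = 512, 32, 64
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
E, F = (rng.standard_normal((k, n)) / np.sqrt(n) for _ in range(2))
out = linformer_attention(Q, K, V, E, F)
print(out.shape)  # (512, 32)
```

Because the attention scores are computed against only k projected positions rather than all n, memory and compute scale linearly with sequence length, which is what makes such models practical to run over content at Facebook’s volume.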
Facebook said it has also developed a new tool to detect deepfakes, computer-generated videos that appear to be real.
“Taken together, all these innovations mean our AI systems have a deeper, broader understanding of content,” Schroepfer said.