In the wake of news that propaganda on Facebook helped drive anti-refugee attacks in Germany, the social networking giant is taking further steps to fight back against fake news, suspending hundreds of accounts from Russia and Iran, and also starting a program that rates the trustworthiness of users.
In all, 652 pages and accounts from Iran, as well as “an unspecified number” from Russia, were pulled from the site, according to The Wall Street Journal.
The report said that the activity from Iran appeared intended to push the agenda of the country’s political regime. Meanwhile, the misinformation campaigns stemming from Russia were said to be focused on Syria and Ukraine.
In statements to reporters, Facebook Chief Executive Mark Zuckerberg said the account shutdowns were a sign of the company’s shift “from reactive to proactive detection.” He said its new, more aggressive defensive measures are going to make the social network “safer for everyone over time.”
Separately, new research by Karsten Muller and Carlo Schwarz of the University of Warwick found that coordinated misinformation campaigns in Germany stoked anti-refugee sentiment. Specifically, in areas where Facebook use was above the national average, attacks on refugees increased by roughly 50 percent.
The New York Times noted that increased violence against refugees did not correlate with general web use, but was specific to Facebook activity.
As part of its efforts to stop fake news and misinformation, Facebook is not only actively shutting down fraudulent accounts, but is also assessing the users themselves. As users report items shared on the site, the social network rates them with a “reputation score” ranging from zero to one.
The system was developed, Facebook officials told The Washington Post, after some users began falsely reporting legitimate news items as fake news. The problem: Facebook users have a tendency to report stories they disagree with, even if the information in the story is factual.
Facebook says the trustworthiness score is part of a comprehensive set of attributes the company assigns to users based on their activity. Using that data, the company attempts to assess the risk associated with user accounts, helping it to identify problems and stop the spread of fake news — while also allowing legitimate news to be shared.
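Facebook has not disclosed how the score is computed, but one can imagine a minimal version of the idea the reports describe: rate a user between zero and one by how often their fake-news flags are later confirmed by fact-checkers, so that users who flag stories merely because they disagree with them carry less weight. The sketch below is purely hypothetical, assuming a simple confirmed-versus-rejected tally with smoothing so new users start near a neutral 0.5.

```python
# Hypothetical sketch only: Facebook's actual scoring is not public.
# Idea: a reporter's score reflects how often their fake-news reports
# are confirmed by fact-checkers. Laplace smoothing (+1 / +2) keeps
# brand-new users near a neutral 0.5 instead of an extreme value.

def reputation_score(confirmed_reports: int, rejected_reports: int) -> float:
    """Return a score in (0, 1); higher means the user's reports
    have historically been more reliable."""
    return (confirmed_reports + 1) / (confirmed_reports + rejected_reports + 2)

# A user whose reports are usually confirmed scores high,
# while one who mostly flags accurate stories scores low.
print(round(reputation_score(confirmed_reports=9, rejected_reports=1), 2))  # 0.83
print(round(reputation_score(confirmed_reports=1, rejected_reports=9), 2))  # 0.17
```

In practice, as the article notes, any real system would fold this into many other account attributes rather than rely on report history alone.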
The company has been taking a proactive approach since Zuckerberg faced questioning from legislators in the U.S. and abroad, some of whom have threatened regulation to address the dangers of fake news.
OWI Insight: The changes being made by Facebook are encouraging, particularly as data such as the research in Germany shows real-world effects from digital information. However, with respect to the trustworthiness score, there is irony in Facebook assessing the trust of users, when the site itself has seen a significant drop in trust regarding handling of sensitive personal data. Still, the sheer size of Facebook gives it invaluable insight into user behavior. As OWI Principal Joe Stuntz asked on Twitter: “How long before financial institutions or others want to use this to make decisions? Are they already?”