Section 230: Publisher or Distributor?

July 7, 2020

Ahead of our KNOW Identity Summer Government Digital Forum, the OWI team focused on Section 230, a hot-button topic in government right now.

Section 230 isn’t the new kid on the block. This piece of the Communications Decency Act was passed in 1996, but given the ongoing challenges of moderating user-generated content, it still provides a critical framework for online content today. The legislation has stood the test of time despite tensions in how the law is interpreted and diverging approaches among content platforms. And we have digital identity to thank for that, as trust and safety practices enable effective administration of this framework.

Section 230 101 

As with any piece of legislation, it’s critical to understand what Section 230 actually says. It consists of two key clauses: the first shields websites from liability for user content, and the second allows them to moderate user content in “good faith,” whether by removing posts, banning users, or otherwise. The latter is often referred to as the ‘Good Samaritan’ clause.

Under the framework Section 230 provides, many online platforms are built on the premise of user-generated content, which is now central to how many people experience the internet, social media, and content consumption more broadly. However, a debate has arisen over the extent of platforms’ rights, responsibilities, and protections.

To explore this debate, we reviewed a few recent events related to Section 230 and the distinct questions each brought to the forefront.

r/Neutrality&Bias

Reddit, the forum-based platform known for the breadth of topics it hosts (132,000 active “subreddits,” or message boards, out of 1.2 million in total) and for its lenient moderation, recently banned several subreddits for violating its rules against hate speech and harassment. The ban included a pro-Trump subreddit as well as some left-wing subreddits.

While some may view this as a violation of freedom of speech or freedom of the press, the moderation can be justified under Section 230’s second clause, which allows “good faith” moderation, as explained above. Reddit applied its measures to both ends of the political spectrum to maintain objectivity in enforcing its terms of service, and perhaps to preempt allegations of bias toward any particular stakeholder or ideology. The company’s CEO stated that “views across the political spectrum are allowed on Reddit—but all communities must work within our policies” to foster a safe community for all. Of course, critics argue that moderation aimed at particular individuals or ideologies, in this case political leaders, potentially violates the “good faith” provision of Section 230.

Free Speech & Good Faith

In its application of Section 230, Twitter moderates content through clear terms of service and recently applied those tactics to some of Donald Trump’s tweets, placing a false-information alert on his tweets about voting by mail and hiding a tweet that outwardly promoted violence amid the Black Lives Matter protests.

As in the example above, some platform users believe such actions could inhibit free speech, and critics of Section 230 and of moderation more broadly have warned that platform bias could amount to a violation of “good faith.” On the flip side, others counter that platforms’ own free speech rights should allow them to editorialize content on their services, even when that content is written by an individual contributor or a company’s page.

Commentators also contend that extending Section 230’s existing limitations, which already carve out areas such as criminal conduct, could infringe on platforms’ rights to free speech.

On Facebook & Politics

Facebook has historically been much more lenient than competing platforms in allowing skewed political content, false information, and violent content. In particular, there has been great controversy around content from President Trump, ranging from misinformation around the 2016 election to his recent posts, some of which other platforms have deemed unfit for their audiences.

Facebook’s application of Section 230 centers on clause 1, arguing that as a distributor rather than a publisher, it is not liable for content posted by users, including Donald Trump. Facebook has positioned its moderation approach as more hands-off than other social media platforms’, advocating for transparency in political content on the grounds that such transparency can provide valuable information for the public domain. Compared to how other platforms leverage Section 230, this is a more ‘conservative’ interpretation. But many interpret this inaction or leniency as an enabler of illicit content, including both illegal material and disinformation.

Streaming Hate Speech

Twitch, the video streaming platform and Amazon subsidiary, suspended President Trump’s channel this June for violating the company’s policy against “hateful conduct,” specifically for rebroadcasting racist comments and misinformation from his recent rally in Tulsa, Oklahoma. The move came shortly after Twitch announced a crackdown on harassment within its community.

This case brings us back to Section 230’s second clause, which allows “good faith” moderation to keep users out of harm’s way and to limit the spread of potentially harmful information, real or fake. A spokesperson for the company noted, “We do not make exceptions for political or newsworthy content, and will take action on content reported to us that violates our rules.”

While Section 230’s protections carve out illegal content, leaving platforms exposed rather than shielded when it comes to criminal material, the law does not specify rights or obligations regarding hate speech. This open-endedness sparks debate over how far content platforms can take moderation, and how far is too far when it comes to shutting down content.

The OWI Analyst Take

The tensions inherent in some of Section 230’s provisions—clause 1 removes editorial liability, while clause 2 affords editorial rights—leave ample space for interpretation by both platforms and users. As seen in the cases explored above, platforms have responded to that interpretability with a host of varying approaches.

While content moderation is nothing new, platforms have taken interpretation into their own hands without full clarity from the original text of Section 230. What’s more, they’ve responded with new trust and safety practices underpinned by digital identity to actively and seamlessly moderate users and content. As experts have commented, the internet’s ‘original sin’ is its lack of a digital identity layer, which is one factor that makes moderation so tricky. On the downside, however, adopting more robust digital identity protocols could add friction and reduce usership. There are competing incentives at play: anonymity has been seen as a ‘feature,’ not a bug, since the early days of the internet.
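To make that identity-backed trust and safety idea concrete, here is a minimal, purely hypothetical sketch of how a platform might combine content signals with account-level identity signals to pick a “good faith” moderation action. The signal names, thresholds, and actions are illustrative assumptions of ours, not any platform’s actual policy, system, or API.

```python
from dataclasses import dataclass
from enum import Enum


class Action(Enum):
    ALLOW = "allow"
    LABEL = "label"          # e.g., attach a warning or fact-check notice
    RESTRICT = "restrict"    # e.g., hide behind a click-through interstitial
    REMOVE = "remove"
    SUSPEND_ACCOUNT = "suspend_account"


@dataclass
class ContentSignal:
    # Hypothetical classifier scores in [0, 1]; real systems use far more signals.
    misinformation_score: float
    violence_score: float
    hate_speech_score: float


@dataclass
class IdentitySignal:
    # Hypothetical account-level trust inputs a digital identity layer might supply.
    verified: bool
    prior_violations: int
    account_age_days: int


def moderate(content: ContentSignal, identity: IdentitySignal) -> Action:
    """Pick a moderation action from content and identity signals.

    Thresholds are arbitrary placeholders chosen for illustration only.
    """
    # Clear-cut harmful content is removed regardless of who posted it;
    # repeat offenders escalate to an account-level action.
    if content.violence_score > 0.9 or content.hate_speech_score > 0.9:
        return Action.SUSPEND_ACCOUNT if identity.prior_violations >= 3 else Action.REMOVE

    # Borderline content gets friction (a label or interstitial) rather than removal,
    # with identity history nudging the decision toward the stricter option.
    if content.misinformation_score > 0.7:
        risky_account = identity.prior_violations > 0 or (
            not identity.verified and identity.account_age_days < 30
        )
        return Action.RESTRICT if risky_account else Action.LABEL

    return Action.ALLOW


if __name__ == "__main__":
    post = ContentSignal(misinformation_score=0.8, violence_score=0.1, hate_speech_score=0.05)
    author = IdentitySignal(verified=True, prior_violations=0, account_age_days=4000)
    print(moderate(post, author))  # -> Action.LABEL
```

The point of the sketch is simply that account-level identity signals (verification, history, tenure) let a platform scale decisions between leniency and removal, which is exactly the discretionary space Section 230’s “good faith” clause leaves open.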

Platforms’ diverging approaches to moderation could confuse the general public about what companies can and cannot do, as well as what they should and should not do. These concerns raise the question of whether standardization and regulation are necessary. Section 230 is broad, and potentially too open-ended, given the uncertainty about the framework and the extent of its provisions.

This led us to ask: what is the correct balance among these tensions? Is it platforms’ responsibility to allow content, prioritizing users’ free speech? Is it their obligation to remove some content, prioritizing users’ safety from illegal material? Or is it their right to moderate content, prioritizing the platforms’ own free speech? Without regulation or standardization, it is up to those behind each platform to impose the company’s interpretations and standards.

Amid this dizzying array of stakeholders and considerations, ranging from free speech to hate speech to political bias, an expansive debate continues around Section 230 and the government’s role in regulating online content platforms. Our team looks forward to seeing what the future holds for standardization, and how these platforms may leverage digital identity to address the problem more actively.

We’ll be unpacking Section 230 further at our KNOW Government Digital Forum. And as always, we’ll continue to unite the industry and drive the most important conversations on digital identity, trust and safety, and MUCH more in each of our weekly digital events. See you there!