I am increasingly hearing the term “Section 230” in discussions about online safety. It was a phrase I wasn’t familiar with, so here’s an explanation for those who would like to know.
Section 230 of the Communications Decency Act (CDA) is a key piece of U.S. legislation that protects online platforms, including social media companies, from being held liable for most content posted by their users. It also allows these platforms to moderate and remove content in good faith without losing that liability protection. Its key features and implications are:
Key Provisions of Section 230
- Protection Against Liability for User-Generated Content:
- Platforms like Facebook, Twitter, and YouTube are not considered publishers or speakers of content posted by their users.
- This means that if someone posts illegal or harmful content, the platform itself generally cannot be held legally responsible.
- Ability to Moderate Content:
- Section 230 allows platforms to remove or restrict content they deem objectionable, whether it is obscene, violent, or otherwise harmful, even if it’s constitutionally protected speech.
- These actions must be taken in “good faith,” giving platforms discretion over what to allow or remove.
Purpose of Section 230
When it was enacted in 1996, the internet was in its infancy. Section 230 was designed to:
- Encourage the growth of online platforms by shielding them from crippling lawsuits.
- Strike a balance between fostering free speech and allowing companies to manage harmful content without fear of losing legal protections.
Implications of Section 230
- Promotes Innovation:
- Without fear of lawsuits for user actions, platforms have been able to grow and innovate.
- Startups and small platforms benefit from the same protections as major corporations.
- Criticism and Controversy:
- Supporters argue that Section 230 is essential for free expression and the open internet.
- Critics claim it allows platforms to avoid accountability for harmful or illegal content, such as misinformation, harassment, child abuse/grooming, pornography/prostitution, or hate speech.
- Content Moderation Debate:
- Some claim platforms over-censor, while others argue they don’t do enough to remove harmful content.
- This debate has intensified with discussions of political bias and misinformation.
Calls for Reform
Section 230 has faced scrutiny, with proposals for reform or repeal gaining traction:
- Arguments for Reform: Advocates suggest holding platforms accountable for promoting harmful content or enabling illegal activity (e.g., child abuse, sex trafficking, incitement to violence).
- Arguments Against Reform: Opponents warn that changes could stifle free speech, innovation, and the ability of platforms to moderate harmful content effectively.
International Relevance
While Section 230 is U.S.-specific, its principles have influenced debates about online liability worldwide. Other countries are developing their own frameworks for balancing platform accountability and freedom of expression.
In summary, Section 230 provides critical protections to online platforms for user-generated content while allowing them to moderate in good faith. It has enabled the internet to thrive but remains a point of contention in debates over free speech, accountability, and content moderation.
Time for Review?
I remember “Swiped”, the Channel 4 programme in which Emma and Matt Willis joined social media posing as underage users and were horrified to see how quickly inappropriate accounts were suggested to them…
If social media platforms decide to promote content by suggesting posts or accounts to other people, then these platforms are making an editorial decision and should therefore be held liable for it. “Suggested For You” is a great idea, but it has been open to abuse; sadly, algorithms are not capable of moral decisions.
Or can we even get to the stage where content that is not accepted in traditional media such as newspapers, magazines, and TV is never allowed on the internet? Yes, the reality is that such content will always be available on the dark web (not somewhere I have ever been, so this comment is based only on what I hear on podcasts etc.), but this is something that should be controllable by internet service providers.
I don’t see this as conversations or topics being controlled, because we already see articles written about everything from LGBT issues to bulimia.
What are your thoughts?