I’ve just seen another post on a social media platform about people setting up filters on their accounts to reduce their risk of viewing offensive content... sorry guys, but I don’t think you are willing to address the real issue!
To effectively push social media platforms like Instagram, Facebook, and others to prioritize the removal of harmful content (bullying, content promoting suicide, prostitution, animal abuse, paedophilia, and other illegal or exploitative activity), our efforts should focus on systemic change, policy enforcement, and collaboration with key stakeholders. Here are some suggestions:
1. Demand Stronger Content Moderation Policies
- Push for stricter enforcement of existing policies against sexually exploitative or predatory content. Most platforms already prohibit such content but often fail to enforce those rules.
- Advocate for transparency reports that disclose:
- How many accounts or posts related to harmful activities are flagged and removed.
- Average response times for addressing such reports.
2. Hold Platforms Legally Accountable
- Use existing laws like FOSTA-SESTA (in the U.S.), which targets platforms that enable sex trafficking, to hold companies accountable.
- Advocate for tougher legislation similar to the EU’s Digital Services Act (DSA), which requires platforms to remove illegal content swiftly.
- File complaints with government regulators, such as the FTC (U.S.) or Ofcom (UK), about platforms failing to act on harmful content.
3. Leverage Public Campaigns
- Launch awareness campaigns to increase public pressure on platforms. Highlight instances where platforms ignored harmful activity.
- Use hashtags, media outreach, or viral campaigns to expose how user-side filters fall short and to demand concrete action.
4. Partner with Organizations and Law Enforcement
- Collaborate with groups like:
- National Center for Missing & Exploited Children (NCMEC).
- Thorn, which combats child exploitation.
- Encourage better coordination between platforms and law enforcement for identifying and removing illegal accounts or networks.
5. Advocate for Better AI Moderation
- Push for investments in AI tools that detect predatory or harmful content more effectively.
- Insist on improvements in reporting systems so users can flag inappropriate content easily.
6. Ensure Platforms Hire More Human Moderators
Automated filters miss nuanced or hidden content. Encourage platforms to:
- Expand teams of human moderators to review flagged content.
- Create specialized teams for sensitive content like predatory behavior or exploitation.
7. Refocus Filtering Solutions
Rather than relying on user-side filters that merely hide harmful content from the individual viewer:
- Advocate for content removal at the source.
- Insist on systemic changes, such as using stronger verification systems to block known bad actors from registering new accounts.
8. Build Coalitions
- Unite with parent groups, advocacy organizations, and others concerned about platform safety.
- Use these coalitions to lobby policymakers and directly engage with social media executives.
Final Thought
While filters might limit exposure to harmful content for individuals, they do not solve the systemic issue.
Shifting the burden away from individual users and holding platforms accountable for addressing the root problem is the key to creating a safer online environment, and that is exactly what is needed. Time to clean up our social media platforms!