There is a familiar rhythm to modern public policy. A problem becomes visible, concern grows, a report is commissioned, an event is sponsored, and a panel convenes to discuss it. Then, just as momentum builds, another report arrives to confirm what the last one said. Meanwhile, the underlying issue continues much as before.
Online safeguarding sits squarely in this pattern. Whether the focus is misogyny, grooming, abuse, or platform accountability, the cycle repeats with striking consistency. The question is not whether these reports contain useful information. Many do. The question is why so much resource is directed at documenting problems that are already well evidenced, instead of acting decisively to address them.
Six real-world examples illustrate how this plays out.
The first is the Online Safety Act 2023. Years of consultation papers, white papers and draft proposals preceded its passage. By the time it became law, the harms it seeks to address, including online abuse, grooming, and exposure to harmful content, had been extensively documented. The legislation is significant, but it demonstrates how long the system can remain in “analysis mode” before implementation.
Second, Ofcom continues to generate detailed reports on platform risk assessments and harms. These are thorough and necessary for regulatory oversight. However, they also highlight a tension. The more complex the reporting requirements, the more time is spent measuring harm rather than preventing it in real time.
Third, the NSPCC regularly publishes research into child sexual abuse material and online grooming. These reports are vital in keeping the issue visible and politically urgent. Yet the core findings have remained consistent for years. Children are at risk, platforms are uneven in their responses, and enforcement is fragmented.
Fourth, the Internet Watch Foundation produces annual data on the scale of child sexual abuse imagery online. Its datasets are among the most concrete in the field. They show growth, shifts in hosting, and evolving patterns. Still, the existence of this consistent evidence raises the question of whether additional reporting adds proportionate value, or whether resources could be redirected into faster removal and prevention.
Fifth, international bodies such as UN Women have produced extensive research on online misogyny and abuse. These reports shape global narratives and influence policy direction. They are widely cited, yet the behaviours they describe remain persistent across platforms and jurisdictions.
Sixth, multi-stakeholder initiatives such as the Global Partnership to End Violence Against Children convene governments, industry and civil society, publish frameworks and share best practice. These collaborations are valuable for alignment, but they often produce guidance that echoes existing knowledge rather than driving enforcement.
So why does this keep happening?
One reason is accountability. Governments and organisations need documented evidence to justify decisions, allocate funding and withstand scrutiny. A report provides a defensible foundation. It shows that action is informed, even if that action is delayed.
Another reason is risk management. Acting decisively, especially in areas involving free speech, privacy, and technology, carries legal and political risk. Commissioning research is comparatively safe. It signals engagement without committing to potentially controversial measures.
There is also the issue of fragmentation. Online safeguarding spans governments, regulators, charities, technology companies and law enforcement. No single entity owns the problem end to end. Reports become a way of coordinating understanding across this fragmented landscape, even if they do not translate into unified action.
Economic incentives play a role as well. Research, events and reports create an ecosystem. Consultants, academics, NGOs and think tanks are funded to produce them. Conferences attract sponsorship. There is a professional infrastructure built around analysis, and it sustains itself.
This leads directly to the question of benefit. Do these reports deliver value?
They do, in specific ways. They maintain visibility of issues that might otherwise be deprioritised. They standardise language and definitions. They provide evidence for legislation and enforcement. They allow benchmarking over time.
But their marginal benefit diminishes when findings are repetitive. When successive reports confirm the same risks without materially changing outcomes, their value becomes harder to justify.
Who actually uses them?
Policymakers use them to shape legislation and regulatory frameworks. Regulators use them to design compliance regimes. Charities use them to support advocacy and funding bids. Technology companies use them selectively, often to demonstrate engagement or to inform internal policy adjustments.
Notably absent from this list are frontline practitioners who need immediate tools, faster processes, and clearer authority to act. For them, another 80-page report rarely changes day-to-day reality.
Cost is the final piece. Large-scale research projects, international conferences and multi-agency consultations require significant funding. Precise figures vary, but collectively they represent substantial public and philanthropic expenditure. When similar findings are produced repeatedly, the opportunity cost becomes significant.
Could this money be spent better elsewhere?
I have spoken to many people about this, and their answers tend to suggest “yes”. Investment could be redirected towards enforcement capacity, faster follow-up on content moderation systems, victim support services, and mechanisms for sharing emerging problems with organisations in other countries. I doubt we will ever see cross-platform data-sharing mechanisms that operate in real time where harm is concerned; the thought that social media platforms might lose users or be held liable is enough to give their lawyers even more sleepless nights. However, it is clear that these are operational responses rather than analytical ones.
However, eliminating reports entirely would not be realistic or desirable. The issue is not their existence but their volume, scope and repetition. A smaller number of targeted, action-oriented reports, directly linked to implementation, would likely provide greater value.
The underlying problem is not a lack of knowledge. It is a gap between knowledge and action. Until that gap is addressed, the cycle will continue. Another report will be published next week. Another panel will discuss it. And the people affected, many of whom know someone who has been hurt, will still be waiting for something to actually change.