There is something quietly fascinating about a political moment where one chamber says no and the other calmly replies, we will try that again. That is exactly what happened when the Children’s Wellbeing and Schools Bill returned to the House of Lords last week.
Peers voted to reinstate an amendment from Lord Nash that would introduce a social media ban for children. The Commons had already rejected it. Yet here it is again, back on the table and refusing to disappear. Alongside it sat a more measured proposal from Baroness Kidron. Her amendments were widely seen as a compromise. Many Peers supported Nash’s amendment in principle while signalling that Kidron’s version might be the one the Government could realistically adopt.
That split tells you something important. Agreement on the problem does not mean agreement on the solution.
Before the debate, a letter organised by Ellen Roome and signed by 19 other bereaved parents was sent to all Peers. This was not policy language; it was the testimony of grieving families. It carries a different kind of weight, and is hopefully harder to set aside.
At the same time, legal pressure is building elsewhere. Lawsuits against major tech platforms are no longer theoretical. Claims around addictive design, harmful content exposure, and platform responsibility have been tested in court. The question being examined is simple: are these companies passive hosts, or are they active participants in harm?
That distinction matters because once courts begin leaning toward liability, the tone of regulation changes very quickly.
So why did the Commons reject the amendment? There are three potential reasons.
First, enforceability. A social media ban sounds clear until you try to apply it. Age verification is inconsistent and often easy to bypass. Legislating something that cannot be reliably enforced creates a false sense of control. But is this any different from restricting alcohol and tobacco to teenagers? Take a walk around your local park at the weekend if you aren’t sure.
Second, proportionality. A full ban is a blunt measure. The Commons has generally leaned towards regulation rather than prohibition, particularly in areas where technology is already embedded in everyday life.
Third, timing. The Online Safety Act 2023 is still being implemented. Many MPs are reluctant to introduce additional major measures before seeing how the current framework performs in practice.
Then there is the issue that sits just beneath the surface: lobbying. Large tech companies engage heavily with policymakers; that much is established. Whether that engagement directly determines outcomes is less clear. What can be said is that when proposals threaten commercial models, responses from those sectors tend to be organised, consistent, and well resourced. That does not automatically explain the Commons vote, but it does form part of the environment in which decisions are made.
It does appear that, thanks to the tireless campaigning done by Ellen and the other parents, the Lords are signalling that current protections are not sufficient. The Commons is signalling that a full ban may not be workable. Campaigners are signalling that delay carries real-world consequences, and the courts are beginning to test where responsibility actually sits. That combination does not resolve neatly.
Children are still using platforms designed for maximum engagement, with content that is not properly moderated. Policymakers are still deciding how far intervention should go. Platforms are still defending their financial positions and the status quo, because everyone involved is aware that once harm moves from discussion to legal finding, the space for inaction narrows.
Which leaves a straightforward question sitting in the middle of all this: if not a ban, then what actually works?


