Who’s Guarding Our Data Airspace…?

September 4, 2025

I was reminded today of Joanna Bryson’s brilliant analogy about personal data privacy being like airspace, thanks to a thoughtful post shared by a contact on LinkedIn.

Before the invention of airplanes, no one thought about defending the skies. It simply did not occur to us that anything needed protecting up there. Then came the Wright brothers, commercial aviation, military planes, and now drones, and suddenly our airspace became a matter of national security, commerce, and safety. The analogy is sharp because it reminds us that technology creates entirely new kinds of vulnerabilities, and that law, regulation, and society are always playing catch-up.

In the United States the Federal Aviation Agency (now the Federal Aviation Administration) was only created in 1958, more than fifty years after the Wright brothers first wobbled their way into the sky. It took the UK another fourteen years to establish the Civil Aviation Authority in 1972. By then we had airports, passenger flights, and a few too many close calls. It shows just how slowly governments can move when confronted with brand new risks, especially those they don’t understand… Lessons were learned in a piecemeal, reactive way, at the cost of lives.

It raises a sobering question: are we prepared to repeat that same long, slow, painful trajectory with AI, waiting until the damage is undeniable before acting?

Because the harm is already here. Families are being targeted and torn apart by algorithmic errors and opaque systems. Women are being stalked with the help of predictive tools and tracking software. Veterans are being scammed by automated frauds that can mimic real people with chilling accuracy. Children are prime targets… These are not abstract risks or science fiction scenarios. They are happening today, and they hit the most vulnerable first.

And here is the part that makes it harder to laugh off as teething problems. None of the principles needed to protect people are new. We have been talking about data privacy since the 1970s, when fair information practices were first articulated, and the OECD codified them in its 1980 privacy guidelines. We have been discussing transparency, accountability, and fairness in technology for decades. Even in AI, robust and explainable systems are not some exotic futuristic dream. Researchers have been building explainable models for years. The gap is not intellectual or technical, it is political and economic.

Back to that image of the sky: the ai-rplanes are already flying overhead, some of them carrying passengers safely, some of them experimental, and some of them decidedly dangerous. Most of us are standing around squinting upward, murmuring about how perhaps someone should do something, and then congratulating ourselves for setting up another working group. Meanwhile, ordinary people on the ground are already getting hit by falling debris.

Here is where humour might help cut through. Imagine if the FAA had said in 1958, “Let us first create a voluntary code of conduct for birds.” Or if the safety regulations for pilots were described as “nice to have guidelines, to be applied only if it does not slow down take-off.” We would laugh, if the cost had not been so high. The same is true now. It is absurd to treat AI as though it is simply too complex to regulate, when in reality the foundations for responsible use are already sitting on the shelf gathering dust.

The serious point is that we do not need to reinvent the wheel, nor should we resign ourselves to another half century of damage before decisive regulation. Clear, enforceable frameworks can and should be applied now. Businesses should not be cajoled into compliance as though it is a chore. Responsible AI is a competitive advantage, a way to build trust and credibility in a marketplace that is already jittery about black box systems. Sustainable innovation is the smarter business choice, and it also happens to be the right one for people and communities.

We may not yet have either an FAA or a CAA for AI, but we are at a crossroads. Either we keep waiting until the human cost is undeniable, or we choose to act now with courage and clarity. The choice is stark. Defend our data airspace now, or look back in fifty years and explain to our children why we did not bother to keep them safe when we had the chance.

What do you think we should do?
