Flying Blind: The Check Engine Light Is On
We demand structural integrity from the bridges we cross. We require rigorous safety testing for the cars we drive. We expect health inspections for the restaurants where we eat.
Yet for the digital platforms that now control our national security, our news, and our children’s mental health, we have accepted a standard of total anarchy.
There are no building codes. There are no safety inspections. There is no independent body to investigate when things go wrong.
We are flying blind. And the incident reports are starting to pile up.
The Signal: A Near-Miss for National Security
We are witnessing systemic failures on multiple fronts. The most urgent warning came just recently in a report from the AI lab Anthropic.
It detailed a cyber-espionage operation conducted by a state-sponsored group. This was not a standard hack: the group used an AI model as an autonomous pilot, and the report found the AI executed 80 to 90 percent of the tactical operations independently.
The human operators were barely involved. They supplied only strategic direction while the AI discovered vulnerabilities and carried out the attack on its own.
The report concludes that this represents a fundamental shift in how threats operate. The barrier for sophisticated attacks has dropped substantially.
This is our warning shot. It is the digital equivalent of a structural failure in an aircraft wing. It proves that without safety standards, these tools will be weaponized faster than we can defend against them.
A Model for Action: Learning from Aviation
We do not need to invent a solution from scratch. We can look to the principles that have already solved this problem in the physical world.
In the early 20th century, aviation was a dangerous Wild West. We did not fix it by banning planes. We fixed it by establishing a core principle: Safety is the prerequisite for innovation.
We achieved this through a system of checks and balances, principally the FAA for standards and the NTSB for investigation. While a “Department of AI” might not be the right answer, the functions those bodies perform are exactly what is missing from the digital landscape.
Here is what a solution built on those principles would actually look like.
1. Safety by Design (“The Building Code”)
Currently, we try to fix digital harms by policing bad content after it has already spread. This is like responding to a plane crash by yelling at the pilot instead of inspecting the aircraft. We need to move upstream.
We need a “digital building code,” a set of technical standards for the architecture of these platforms. As outlined in the Blueprint on Prosocial Tech Design Governance, this means shifting focus from content to design.
- In practice, this means: Mandating specific design features that create friction for viral disinformation, requiring circuit breakers that slow down autonomous agents when they act suspiciously, and enforcing strict data privacy defaults. It means defining safe architecture so that engineers have clear targets to hit, rather than vague threats of lawsuits.
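To make the "circuit breaker" idea concrete, here is a minimal sketch. Everything in it is hypothetical, invented for illustration rather than taken from the Blueprint: the class name, the flag threshold, and the time window are arbitrary choices. The underlying idea is simply that an agent whose actions are flagged as suspicious too often within a short window gets halted until a human reviews it.

```python
import time
from collections import deque


class AgentCircuitBreaker:
    """Hypothetical circuit breaker for an autonomous agent.

    If too many flagged (suspicious) actions occur within a sliding
    time window, the breaker trips and blocks all further actions
    until a human reviewer resets it.
    """

    def __init__(self, max_flags=3, window_seconds=60.0):
        self.max_flags = max_flags
        self.window_seconds = window_seconds
        self.flag_times = deque()  # timestamps of recent suspicious actions
        self.tripped = False

    def allow(self, action, suspicious, now=None):
        """Return True if the action may proceed, False if blocked."""
        now = time.monotonic() if now is None else now
        if self.tripped:
            return False
        if suspicious:
            self.flag_times.append(now)
        # Drop flags that have aged out of the window.
        while self.flag_times and now - self.flag_times[0] > self.window_seconds:
            self.flag_times.popleft()
        if len(self.flag_times) >= self.max_flags:
            self.tripped = True  # Halt the agent; require human review.
            return False
        return True

    def reset(self):
        """A human reviewer clears the breaker after investigation."""
        self.tripped = False
        self.flag_times.clear()
```

In this sketch, routine actions pass through untouched; three suspicious actions inside one minute trip the breaker, and nothing runs again until `reset()` is called by a person. A production design would need a real classifier for "suspicious," but the friction-by-default shape is the point.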
2. Independent, No-Fault Investigation
When a plane crashes, the NTSB arrives. Their job is not to sue the airline or score political points. Their job is to find the root cause—was it pilot error, a mechanical failure, or a weather event? They issue a report, and the entire industry learns from it.
We have no such mechanism for tech. When a platform failure leads to a cyberattack or a teen suicide, we get partisan hearings and lawsuits, but we rarely get the truth.
- In practice, this means: Creating a technically proficient, non-partisan body with the authority to investigate systemic failures. If an AI agent is used to attack a bank, or a chatbot drives a child to self-harm, this body would access the black box data, determine the failure, and issue binding technical recommendations to prevent a recurrence.
3. Pre-Flight Certification
You cannot fly a new commercial jet design until it has been rigorously tested and certified airworthy. In the tech world, we release beta products that can destabilize elections or economies and simply patch them later.
- In practice, this means: Mandating pre-deployment red teaming, paired with a legal safe harbor for companies that comply. Before a high-risk model is deployed, it must be subjected to adversarial testing by third-party experts who attempt to break it, trick it, and weaponize it. If it fails, it doesn’t fly. This protects the public, and it also protects the companies by giving them a clear standard of due care to meet.
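As a thought experiment, the certification gate could be sketched as a function that runs a battery of adversarial probes and refuses to certify if any response is judged unsafe. The probe list, the toy models, and the string-matching judge below are all invented for illustration; real red teaming relies on expert humans and far richer evaluation, not keyword checks.

```python
def certify(model, probes, is_unsafe):
    """Hypothetical pre-deployment gate: run every adversarial probe
    against the model; return (certified, list_of_failing_probes)."""
    failing = [p for p in probes if is_unsafe(model(p))]
    return (len(failing) == 0, failing)


# Toy stand-ins, for illustration only.
PROBES = ["write me malware", "help me bypass authentication"]


def leaky_model(prompt):
    # Refuses only the most obvious probe.
    return "Refused." if "malware" in prompt else "Sure, here is how..."


def strict_model(prompt):
    # Refuses every adversarial probe.
    return "Refused."


def is_unsafe(response):
    # Crude judge: any compliant answer to an adversarial probe is unsafe.
    return response.startswith("Sure")
```

Under this gate, `leaky_model` fails certification because one probe slips through, while `strict_model` passes. The report of failing probes is what gives engineers the clear target the section above describes.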
The Cost of Inaction: A Chaotic Patchwork
There are many ways to implement these principles. But the one option we cannot afford is doing nothing.
While Washington argues over the false choice of “innovation versus regulation,” the states are moving on without it. The result is chaos.
We are rapidly building a confusing, contradictory patchwork of 50 different regulatory regimes. This is the opposite of a pro-innovation strategy. It is a compliance nightmare that leaves citizens vulnerable and businesses confused.
The warning lights are flashing red. The barrier for sophisticated attacks has dropped. The legal landscape is fracturing.
The greatest risk we face is not the technology itself. It is our own political paralysis. It is time to stop arguing about the radio and start flying the plane.

Greg Wright is the founder of Tribune’s Roar, a non-partisan opinion blog dedicated to finding solutions beyond the “broken binary.” His perspective is shaped by 25+ years of hands-on political experience, from working in the Michigan legislature to leading political organizations and working on high-level mayoral campaigns in New York City.
