The Ouroboros of Intelligence and Security
Two Infinite Resources
There are two resources that have no obvious ceiling: intelligence and insecurity.
Intelligence, because we do not know its upper bound. The AI scaling laws suggest that throwing more compute at larger models produces more capable systems, and we have not yet found the wall. Whether the wall exists is an open empirical question, but the trajectory has held long enough that serious people are planning as though it will continue. For practical purposes, intelligence is being treated as an expandable resource - something you can buy more of by building bigger machines.
Insecurity, because it operates on two levels and neither has a ceiling. The first is actual vulnerability - real attack surfaces, real exploits, real points of failure in systems that people depend on. This is finite at any given moment but expands with every new system deployed. The second is the perception of vulnerability - the feeling that things could go wrong, that the threat you have not found yet is the one that will get you. This is effectively infinite, because you do not need to find a real threat to produce it. You only need to point at the future and say: this could go wrong. And you will always be correct, because the future is uncertain by definition. Security is the only product where the sales pitch is unfalsifiable - not because the threats are imaginary, but because the space of possible threats is unbounded. Both senses of insecurity feed the loop. But they feed it differently, and conflating them is one of the mechanisms that makes the loop so hard to interrupt.
These two facts have coexisted for a long time. What is new is that they have collapsed into the same technology.
The Loop
Here is the structure:
1. Insecurity exists - real or perceived, it does not matter for the market dynamics.
2. Intelligence is deployed to address it. Better threat detection. Better prediction. Better defence.
3. The same intelligence creates new vectors of insecurity — through both its misuse and its successful use. The adversarial vectors are obvious: deeper fakes, more convincing social engineering, novel exploits that did not exist before the tool existed. But the non-adversarial vectors may matter more. Every time AI is successfully integrated into critical infrastructure — power grids, financial systems, medical diagnostics, supply chains — it creates a new dependency. The system works better, so the system becomes essential, so the system becomes a point of failure. The attack surface grows not because anyone is attacking but because the thing worth attacking now runs on a tool whose behaviour cannot be fully enumerated. Integration is insecurity. The better the tool works, the more of the world depends on it, and dependence is vulnerability by another name.
4. The new insecurity generates demand for more intelligence. Return to step 1.
This is not a cycle that winds down. Each rotation escalates. The intelligence deployed at step 2 is more capable than the previous round, which means the insecurity generated at step 3 is more sophisticated than the previous round, which means the demand at step 4 is more urgent than the previous round. The loop does not converge. It diverges.
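The divergence claim can be made concrete with a toy model. Everything in the sketch below is invented for illustration - the names (`mitigation`, `spillover`, `demand_to_capability`) and the numbers exist only to show the shape: constant coefficients, no acceleration anywhere, and the curves still diverge whenever each unit of capability creates more insecurity than it retires.

```python
# Toy model of the loop. All parameters are invented; only the shape matters.

def run_loop(rotations=15,
             mitigation=0.3,             # step 2: insecurity retired per unit capability
             spillover=0.5,              # step 3: insecurity created per unit capability
             demand_to_capability=0.4):  # step 4: demand converted into new capability
    insecurity = 1.0                     # step 1: some insecurity exists
    capability = 1.0
    for t in range(1, rotations + 1):
        capability += demand_to_capability * insecurity   # last round's demand funds this round
        retired = mitigation * capability                 # what the new intelligence addresses
        created = spillover * capability                  # what the same intelligence opens up
        insecurity = max(insecurity - retired, 0.0) + created
        print(f"rotation {t:2d}: capability={capability:8.2f}  insecurity={insecurity:8.2f}")

run_loop()
```

Nothing in the model accelerates; the feedback between the two stocks does all the work. Whenever `spillover` exceeds `mitigation`, the net term compounds and both curves grow without bound.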
Previous arms races had natural brakes. Nuclear weapons required enriched uranium - scarce, detectable, subject to physical constraints. Conventional military power required manufacturing capacity, supply chains, geography. You could not simply decide to have more of it. The material world imposed limits.
The intelligence arms race may not have this constraint. And the reason is not that capability is growing fast - it is that the loop is structurally different from any arms race that preceded it.
Growth
The important claim is not that AI capability is accelerating. It may be. But you do not need acceleration for the loop to be dangerous. Constant, steady growth is sufficient, because of three properties that distinguish this arms race from all previous ones.
First: the tool is general-purpose. A nuclear weapon has a known capability set. You can enumerate what it does. An AI system does not have a fixed capability set - it has a capability surface that expands into every domain it touches. A system built to detect fraud can be repurposed for surveillance. A system built to write code can be repurposed to find exploits. Each unit of capability growth does not add one new threat vector. It multiplies across every domain the system can reach. Even at a constant rate of improvement, the breadth of the threat surface expands combinatorially with each new capability, because general-purpose tools compose.
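A back-of-envelope count shows what composition buys the threat surface, under the deliberately crude assumption that any non-empty subset of the system's capabilities can be chained into a workflow:

```latex
% Crude counting assumption: any non-empty subset of the n capabilities
% can be chained into a workflow.
W(n) = 2^{n} - 1, \qquad \frac{W(n+1) + 1}{W(n) + 1} = 2
% Each single added capability doubles the workflow space:
% linear growth in capabilities, exponential growth in the surface.
```

The assumption overstates it - not every subset composes usefully - but any constant fraction of an exponential is still an exponential.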
Second: the tool is applied to its own development. AI is already being used to design chips, optimise training runs, discover more efficient architectures, and accelerate materials science research. This is recursive improvement - the tool improves the process that builds the tool. In a conventional arms race, the weapon and the factory that builds it are separate systems. Here, they are the same system. Even if each generation delivers only a constant-sized improvement, the recursive structure means that every constraint on growth becomes a target for the tool itself. Physical limits - chips, energy, data centres - are real. But they are treated not as natural limits to be respected but as engineering problems to be solved by the very intelligence those limits are supposed to constrain.
Third: the funding has no natural ceiling. When the argument is that falling behind in AI capability means strategic vulnerability - that the nation that controls the most intelligence controls the future - then governments do what governments always do with existential framing: they pour in everything they can. Energy policy becomes AI policy. Industrial policy becomes AI policy. Defence budgets become AI budgets. The constraints that should slow this down are treated as obstacles to be overcome, because the stakes have been framed as civilisational. The economic brake exists in theory. In practice, the entire weight of the national security apparatus, private capital, and the intelligence itself is being deployed to dissolve it.
These three properties mean the loop does not need to accelerate to be unmanageable. Constant growth of a general-purpose, self-improving tool backed by unlimited funding is already a system with no equilibrium.
Acceleration
There is a stronger claim available: that capability growth is not just ongoing but accelerating - that the recursive structure described above produces a positive second derivative, where the rate of improvement is itself increasing.
This is plausible. The mechanism is clear: if each generation of AI contributes meaningfully to building the next generation, then the improvement cycle shortens with each rotation. Compound returns on intelligence applied to the problem of generating intelligence would, in principle, produce exactly this kind of acceleration.
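One idealised way to state the cycle-shortening mechanism, where the shrink factor is a pure assumption rather than a measurement:

```latex
% Idealisation: generation k takes development time \tau_k, and recursive
% improvement shrinks each cycle by a constant factor \rho < 1.
\tau_k = \tau_0 \, \rho^{k}, \qquad \sum_{k=0}^{\infty} \tau_0 \, \rho^{k} = \frac{\tau_0}{1 - \rho}
% Unboundedly many generations fit into a finite horizon: acceleration
% by construction. The open empirical question is whether \rho < 1 at all.
```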
But plausible is not established. The evidence that exists - AI designing better chips, optimising training pipelines, discovering efficient architectures - demonstrates recursive improvement. It does not, by itself, demonstrate that the rate of improvement is increasing rather than holding steady. To establish acceleration, you would need to show that the time between equivalent-magnitude breakthroughs is compressing, or that capability benchmarks are curving upward rather than following a constant exponential. That data may exist in fragments, but it has not been assembled into a rigorous case.
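Here is what that test looks like in miniature. On a log scale a constant exponential is a straight line, so curvature in log-capability over time is the signature of acceleration. The benchmark series below is invented - assembling a defensible real one is exactly the unfinished work described above.

```python
import numpy as np

# Hypothetical yearly capability scores, invented for illustration.
# A real analysis would have to defend what "capability" even measures.
years = np.arange(2017, 2025)
capability = np.array([1.0, 1.9, 3.6, 7.1, 13.8, 28.0, 60.5, 135.0])

t = years - years[0]
log_cap = np.log(capability)

# Constant exponential  => log-capability is linear in time.
# Accelerating growth   => log-capability curves upward.
slope = np.polyfit(t, log_cap, 1)[0]
curvature = np.polyfit(t, log_cap, 2)[0]

print(f"average growth rate (log units/year): {slope:.3f}")
print(f"quadratic coefficient: {curvature:.4f}  (persistently > 0 suggests acceleration)")
```

Even then a positive quadratic coefficient is weak evidence on its own: benchmark saturation, shifting measurement, and selection effects can all mimic or mask curvature.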
This matters because the policy implications are different. If growth is constant, the problem is serious but the timeline for institutional response is predictable. If growth is accelerating, the window for response is shrinking in a way that compounds the difficulty. The acceleration conjecture, if true, transforms a hard problem into a potentially intractable one.
The honest position is this: the loop is dangerous at any positive growth rate, for the structural reasons described above. Acceleration would make it worse. Whether acceleration is occurring is an empirical question that deserves rigorous investigation rather than assertion - precisely because the answer determines how much time we have to build the institutions that might help us navigate it.
Complexity
The deeper problem is not the loop itself. Arms races and collective action failures are familiar. The deeper problem is that comprehension and capability are coupled to the same engine — and capability is winning.
In every previous arms race, the tools for understanding the threat were independent of the threat itself. You could study a warhead without building a bigger one. You could model a missile’s trajectory without advancing missile technology. The analytical framework and the object of analysis were separate systems, which meant understanding could, in principle, keep pace with capability or even get ahead of it.
AI breaks this independence. The best tools for understanding AI systems are AI systems. Interpretability research uses neural networks to probe neural networks. Automated red-teaming uses language models to find the failure modes of language models. Every advance in the science of understanding these systems is simultaneously an advance in the systems themselves. The instrument and the object of study are the same thing, and every time you sharpen the instrument, you also extend the object.
This creates a structural asymmetry that no amount of funding corrects. Capability research produces understanding as a byproduct — you learn things about how models work in the course of making them more powerful. Understanding research produces capability as a byproduct — you make models more capable in the course of figuring out how they work. But the incentives are not symmetric. Capability has customers, revenue, competitive advantage, and national security urgency behind it. Understanding has grant committees and a handful of research labs. The byproduct of capability research accumulates faster than the byproduct of understanding research, and the gap compounds.
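The compounding can be written in two lines. Take $g_c$ and $g_u$ as placeholder growth rates for capability and understanding, with $g_c > g_u$ standing in for the incentive asymmetry:

```latex
% Placeholder rates, not measurements: g_c > g_u encodes the asymmetry.
C_{t+1} = (1 + g_c)\, C_t, \qquad U_{t+1} = (1 + g_u)\, U_t
% The legibility ratio then decays geometrically:
\frac{U_t}{C_t} = \frac{U_0}{C_0} \left( \frac{1 + g_u}{1 + g_c} \right)^{t} \longrightarrow 0
% No initial head start in understanding survives a persistent gap in rates.
```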
The result is that each rotation of the loop makes the loop itself harder to see clearly. The systems generating insecurity become less legible with each generation, which means the insecurity they generate is increasingly opaque — not just larger in degree but harder to characterise, harder to measure, and harder to distinguish from manufactured fear. You cannot manage what you cannot model. And the modelling tools are structurally behind the thing they are trying to model, because they share a development pipeline where capability has priority.
The Tractability Question
So: how do you make this manageable?
The honest answer is that you probably cannot solve it. You can only navigate it. And the distinction matters, because the framing shapes the response.
If you treat this as a problem to be solved, you reach for solutions: regulation, alignment, international treaties, kill switches. These are not useless - some of them will help at the margins. But they all share a structural weakness: they assume the problem is static enough to be bounded by a fixed intervention. Regulation written today addresses the capabilities that exist today. The capabilities that exist next year will route around it, not because anyone is trying to evade the rules, but because the design space is larger than any regulatory framework can anticipate. And the general-purpose nature of AI means “next year” brings not just better versions of today’s capabilities but entirely new categories of capability that no regulation anticipated.
If you treat this as a landscape to be navigated, different strategies emerge:
Transparency as structural intervention. Not transparency about intentions - those are cheap and unverifiable. Transparency about capabilities. Open evaluation frameworks that map what systems can actually do, updated continuously, available to everyone. The goal is not to prevent capability growth but to keep the legibility of systems growing at roughly the same rate as their capability. When legibility falls behind capability, you get opacity, and opacity is the substrate on which unmanageable insecurity grows. This is the most actionable intervention - but it requires that legibility efforts receive the same investment as capability efforts, and right now they do not. Not even close.
Accepting irreducible uncertainty. Some of the insecurity is real and cannot be eliminated. The question is whether you respond to that fact by trying to eliminate it anyway - which feeds the loop - or by building systems and institutions that function well under uncertainty. The difference between a society that is resilient to AI risk and a society that is trying to be immune to AI risk is the difference between navigation and futility.
Distinguishing manufactured insecurity from real insecurity. The loop is powered partly by genuine risk and partly by manufactured fear. These require different responses. Genuine risk warrants investment in defence. Manufactured fear warrants scepticism and institutional resistance to FUD-driven policy. Conflating the two is how you get runaway security spending that does not make anyone safer - it just feeds the next rotation of the loop. But here is the problem: when capability growth is ongoing and the threat surface is opaque, even legitimate risk assessment looks like fear-mongering, because the honest projection is alarming. The difficulty of telling real risk from manufactured risk increases in proportion to the actual risk.
Recognising the collective action problem. The loop persists because defection is rational. For any individual actor - nation, corporation, research lab - the correct move is to keep building, because not building means falling behind an adversary who will not stop. This is a textbook collective action problem, and collective action problems are not solved by intelligence. They are solved by coordination, which is precisely what competitive dynamics erode. More intelligence does not fix this. Better institutions might. Whether we build them in time is not a technical question.
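The textbook structure is worth writing down. With illustrative payoffs (the numbers are arbitrary; only their ordering matters), each of two actors chooses Pause or Build:

```latex
% Illustrative payoffs (row player, column player); only the ordering matters.
\begin{array}{c|cc}
             & \text{Pause} & \text{Build} \\ \hline
\text{Pause} & (3,\,3)      & (0,\,4)      \\
\text{Build} & (4,\,0)      & (1,\,1)
\end{array}
```

Build strictly dominates Pause for each actor (4 > 3 against a pausing rival, 1 > 0 against a building one), so (Build, Build) is the only equilibrium, even though both actors prefer (Pause, Pause). More intelligence changes none of those orderings. Only a mechanism that changes the payoffs - verification, enforcement, shared monitoring - does, which is why the question is institutional rather than technical.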
Conclusion
This is not a problem with a solution. It is a condition. Regulation will lag. Alignment research will chase a moving target. International coordination will be undermined by the competitive pressures it needs to address. Every intervention that requires intelligence to implement adds fuel to the system it is trying to constrain.
The question is not “how do we stop the loop?” The question is “how do we live inside it?” - distinguishing real threats from manufactured ones, building institutions that adapt as fast as the technology does, and resisting the narrative that more intelligence always makes things better.