As the Paris AI Action Summit 2025 descended into a cacophony of competing national interests and corporate agendas, it offered a perfect illustration of entropy in action: a system of global governance dissolving into disorder without sufficient organising energy to maintain coherence. This collapse of international cooperation on artificial intelligence (AI) standards reveals a fundamental truth applicable far beyond the diplomatic sphere: without intentional intervention, our AI ecosystems will naturally trend toward maximum chaos.
In thermodynamics, entropy represents an inexorable march toward disorder. Left to its own devices, an isolated system inevitably slides toward maximum disorder: its energy dispersing, its useful work diminishing, its structure dissolving into chaos. This fundamental principle may offer us a powerful lens through which to view our emerging AI governance challenges.
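For readers who want the principle stated formally, the sketch below gives the standard textbook formulation of the second law and Boltzmann's statistical definition of entropy; it is general physics background, not specific to this article's argument:

```latex
% Second law: the entropy S of an isolated system never decreases over time
\frac{dS}{dt} \geq 0

% Boltzmann's statistical definition: entropy grows with the number of
% microstates \Omega consistent with a given macrostate (k_B is Boltzmann's constant)
S = k_B \ln \Omega
```

The statistical reading is the one the article leans on: disordered configurations vastly outnumber ordered ones, so without energy input a system drifts toward disorder simply because there are more ways to be disordered.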
Entropy In Action
Consider our current AI ecosystem as a system subject to entropy. Without external intervention, without the application of energy, structure, and order, it naturally tends toward a state of maximum disorder. We’re witnessing this entropy in real-time: unregulated data harvesting, model releases without safety audits, widening capability gaps between open and closed systems, proliferating synthetic content without provenance, and market consolidation that threatens to concentrate AI power in fewer hands with each passing quarter.
The “spontaneous evolution” of AI systems is already revealing concerning patterns. Large models released without adequate safeguards are quickly exploited for harassment or misinformation. Effective techniques for jailbreaking guardrails spread rapidly across internet forums. Competitive pressures push companies to release capabilities before they can be properly secured. Each represents entropy in action: the natural tendency toward disorder playing out in predictable ways.
The Gemini Controversy
Take the controversy surrounding Google’s Gemini Advanced, which generated historically inaccurate images, including depictions of Black Nazi soldiers and Asian Vikings, when prompted for historical figures. This wasn’t malicious design but entropy at work: the natural tendency towards disorder as complex systems interact with human inputs in unpredictable ways. Google’s subsequent retraining of the model and restriction of its image generation capabilities represented energy expended to restore order, but only after public trust had eroded and a public backlash had demanded significant corporate energy to address. Had proper governance frameworks been in place beforehand, had ordered energy been applied proactively rather than reactively, this entropy spike might have been avoided.
Similarly, consider the recent controversy over OpenAI’s temporary removal of election-related content policies, followed by their rapid reinstatement after public backlash. The brief policy vacuum created a disorder spike that required significant organisational energy to correct. Had proper governance frameworks been consistently maintained, this entropy surge might have been avoided altogether.
DeepSeek Disruption
We saw entropy principles at work again when DeepSeek released their powerful open-source models in January 2025, challenging the market dominance of closed AI systems. While democratising access, the release unleashed new potential for misuse without corresponding safety mechanisms, sending ripples of disorder through the AI ecosystem. The AI community has since scrambled to develop post-hoc safeguards: a classic example of reactive rather than proactive entropy management.
Just as we cannot defy thermodynamics, we cannot expect the AI ecosystem to self-organise into a state that maximises human welfare. The second law of policy, if you will, suggests that beneficial order in complex systems requires intentional energy input.
Course-Correction Should Be Intentional
Consider the European Union’s AI Act, finally implemented in its first phase in early 2025. It injected massive amounts of ordering energy into the system to stave off chaos: classifying AI applications by risk level, mandating transparency for high-risk systems, and requiring human oversight. Despite industry complaints about compliance costs and other burdens, this framework constitutes precisely the kind of structured intervention needed to counteract technological entropy.
China’s approach offers another example of entropy management: real-name verification requirements for generative AI users, implemented in late 2024, and mandatory content filtering systems. These measures raise legitimate concerns about surveillance and privacy, even as they aim to curb disorder within the ecosystem.
In the United States, the Biden administration’s executive orders on AI safety were largely rescinded by President Trump in February 2025, removing the requirement that companies share safety test results with the government before model releases. Some stakeholders had deemed the regulations too onerous, but the rollback effectively removed an entropy-fighting mechanism from the system. The market’s response has been telling: more models released more quickly, but with less coordination on safety standards, creating exactly the kind of disorder that governance is meant to prevent. The recently concluded Paris AI Action Summit also portends a recalibration of global AI governance towards lighter regulation, which could generate more chaos than would be tolerable.
Let Order Prevail
The most promising entropy-fighting initiatives may be coming from multi-stakeholder coalitions. The Frontier Model Forum, involving OpenAI, Anthropic, Google, and Microsoft, announced expanded safety collaboration protocols, creating shared standards for model evaluation and risk assessment. By voluntarily imposing order on themselves, these companies acknowledge that even competitive markets benefit from certain entropy-reducing guardrails. The invisible hand of the market, as espoused by free-market economists, works through chaos, but it often benefits from rules and regulations that enforce governance principles and create a level playing field for all. Such rules set boundaries, establish responsibilities, require documentation, and attempt to align market incentives with broader societal values.
Yet not all energy inputs are equal. Poorly designed regulations can create their own forms of disorder: bureaucratic inefficiencies, innovation bottlenecks, or perverse incentives. The art of governance lies in applying the right kind of ordering energy at the right points in the system.