Malte Wagenback, The Oslo Project – July 2025
We stand at the threshold of Artificial Super Intelligence – a moment when machine cognition will surpass human intelligence across all domains. But this technological leap is also an institutional crisis. The governance systems that have ordered our world for centuries are breaking down under the weight of intelligence that thinks faster than we can regulate, operates beyond borders we can patrol, and concentrates power in ways our democracies cannot contain.
This is not another essay about AI safety or the singularity. It is about the urgent need to redesign sovereignty itself – to move from institutions built for scarcity and control toward commons built for abundance and relationship. The question is not whether super-intelligence will arrive, but whether we will be ready to govern it together.
⸻
1 | A second repricing of violence
Davidson & Rees-Mogg framed the Information Revolution as a moment when the returns to territorial force collapse and sovereignty decouples from land.
Half a generation on, Artificial Super Intelligence (ASI) doesn't just re-price violence – it liquefies cognition itself. When thought can be summoned as cheaply as electricity, the leverage that once accrued to armies or financiers flows to whoever owns the pipes: compute, data, and energy.
The traditional monopoly on legitimate violence that defined the Westphalian state now faces a cognitive coup. ASI doesn't storm the palace gates – it simply renders them irrelevant by making intelligence abundant and territorial control obsolete.
⸻
2 | The material hinterland of "immaterial" intelligence
Every token of compute is anchored in copper, water, rare-earth magnets, and a supply chain that already feels colonial. Our ASI discourse skates elegantly over the Congo, Xinjiang and the Atacama. A commons politics that ignores minerals is a mirage built on extraction.
Training frontier models is already straining grid capacity; the IEA projects data-centre demand could top 1,700 TWh by 2035 – bigger than India's entire grid today. But the material footprint extends far beyond electricity. Each GPU cluster requires tonnes of refined metals, millions of litres of cooling water, and semiconductor fabrication processes that consume more energy than small nations.
IMF modelling shows energy prices diverging sharply between jurisdictions that accelerate renewables and those that double down on legacy fuels. The United States' recent "One Big Beautiful Bill Act" illustrates the risk: by tilting subsidies back to fossil and nuclear, it raises national power costs just as AI hunger spikes.
We are witnessing the emergence of two worlds: energy-abundant territories where ASI flourishes, and energy-scarce regions locked into computational vassalage. The new geopolitics won't be fought over oil reserves but over gigawatts, grid resilience, and who controls the mineral substrate of machine intelligence.
The missing institutional move: a Global Extractive Ledger that tags every gram and litre in a model's bill-of-materials and prices ecological externalities in real time. Without this, our visions of beneficial ASI remain complicit in the same extractive logic that destabilised the climate.
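To make the Extractive Ledger idea concrete, here is a minimal sketch of what a single ledger entry might look like. Everything here is illustrative: the class names, field names, quantities, and externality prices are hypothetical stand-ins, not a proposed schema.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a Global Extractive Ledger entry: every material
# input in a model's bill-of-materials is tagged with its quantity and a
# priced ecological externality. All names and prices are illustrative.

@dataclass
class MaterialInput:
    name: str                 # e.g. "refined copper"
    quantity: float           # grams, or litres for water
    unit: str                 # "g" or "L"
    externality_price: float  # externality cost per unit, in a chosen currency

@dataclass
class ExtractiveLedgerEntry:
    model_id: str
    inputs: list = field(default_factory=list)

    def total_externality_cost(self) -> float:
        # Sum priced externalities across the full bill-of-materials.
        return sum(i.quantity * i.externality_price for i in self.inputs)

entry = ExtractiveLedgerEntry(
    model_id="frontier-model-v1",
    inputs=[
        MaterialInput("refined copper", 2_500_000, "g", 0.0001),
        MaterialInput("cooling water", 4_000_000, "L", 0.00005),
    ],
)
print(entry.total_externality_cost())  # 250.0 + 200.0 = 450.0
```

The real-time pricing the essay calls for would replace the static `externality_price` with a live oracle feed; the data structure, however, stays this simple: tag every gram and litre, then aggregate.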
⸻
3 | Data sovereignty and the new enclosures
A January 2025 survey of 800 public and corporate directors found only 12% trust existing treaties to handle a cross-border model leak or autonomous cyber-strike. But the governance gap runs deeper than regulatory lag – it touches the foundational question of who owns intelligence itself.
Without data-sovereignty compacts, ASI entrenches extraction from communities who cannot spell "eigenvector." Indigenous scholars warn of a fresh wave of digital colonialism unless ownership becomes relational, not transactional. When training datasets scrape entire cultural archives without consent, machine intelligence becomes another form of enclosure – privatising collective knowledge that took generations to develop.
Technical alignment debates mask the governance question: who certifies a model's safety, and how do we verify it hasn't been swapped at runtime? We need a protocol layer for continuous attestation – think secure boot for civilisation – plus liability law that actually bites when things go wrong.
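The "secure boot for civilisation" idea can be illustrated with a toy attestation check. This assumes a trusted registry of certified weight digests exists somewhere; the function names and registry are hypothetical, and a real protocol would use hardware roots of trust rather than a Python dict.

```python
import hashlib

# Toy sketch of continuous model attestation: hash the weights certified at
# audit time, then re-hash the deployed artefact at runtime and compare, so
# a silently swapped model fails loudly. Names are hypothetical.

TRUSTED_DIGESTS = {}  # model_id -> hex digest recorded at certification

def certify(model_id: str, weights: bytes) -> str:
    digest = hashlib.sha256(weights).hexdigest()
    TRUSTED_DIGESTS[model_id] = digest
    return digest

def attest(model_id: str, weights: bytes) -> bool:
    # Runtime check: does the deployed artefact match what was certified?
    return hashlib.sha256(weights).hexdigest() == TRUSTED_DIGESTS.get(model_id)

certify("aligned-model", b"certified weight bytes")
print(attest("aligned-model", b"certified weight bytes"))  # True
print(attest("aligned-model", b"swapped weight bytes"))    # False
```

"Continuous" attestation simply means running the check on a schedule or per-request rather than once at deployment; the liability question is what happens institutionally when it returns `False`.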
Early coalitions such as the Artificial Super-Intelligence Alliance experiment with token-weighted, federated governance, but they remain voluntary clubs of capital. The institutional architecture of the 20th century – built for human-speed deliberation and nation-state containers – buckles under the velocity and borderlessness of machine intelligence.
We therefore face an institutional zero day. The "dark matter" – property law, audit trails, liability frameworks – must be refactored as deliberately as the code of the models themselves.
⸻
4 | Exit, voice, or mutualise?
Network-state pilots (Zuzalu, CityDAO, Balaji's Island School) prove it is now trivial to exit a jurisdiction with your wallet and your work. The cloud-native elite can forum-shop for governance as easily as they select a DNS provider.
Yet exit on its own risks producing archipelagos of gated abundance – floating techno-utopias tethered to extraction zones below. The Sovereign Individual's dream of personal secession becomes a nightmare of systemic abandonment.
The alternative is a mutualised sovereignty where rights to energy, compute and ecological integrity are encoded as open, programmable commons. Not escape from the collective, but the collective reimagined as code.
I have called this shift "structural re-commoning" – a movement from owning the thing to stewarding the relation.
⸻
5 | Design moves for the post-ASI horizon
Compute Commons: Cooperative super-clusters governed by zero-knowledge attestations. Query power without exfiltrating data or ceding model control. We see first signals in open hardware co-ops emerging in Barcelona and South Korea's experimental "sovereign compute enclaves" – attempts to democratise access to frontier intelligence without surrendering data sovereignty.
But alignment funding pours into large labs while 80% of future harm may come from mid-tier actors fine-tuning open-weight models on edge devices. Commons logic has to cover "street-corner compute", not just hyperscalers.
Energy Mutuals: Citizens hold tokenised claims on neighbourhood-scale renewables, with yields funding local basic services. Denmark's Middelgrunden wind cooperative and Kenya's emerging mini-grid DAOs point toward a future where energy abundance becomes the foundation for universal basic infrastructure rather than private wealth accumulation.
Yet local ownership of renewables tackles kilowatt-hour scarcity while ignoring grid stability and seasonal storage. The commons architecture needs federated balancing markets so neighbourhood batteries cooperate across latitudes and time zones.
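The mechanics of an energy mutual's tokenised claims are simple enough to sketch. The split below is hypothetical: holder names, the 20% services share, and the revenue figure are all illustrative assumptions, not a description of how Middelgrunden or any mini-grid DAO actually operates.

```python
# Hypothetical sketch of an energy mutual's payout logic: citizens hold
# tokenised claims on a neighbourhood renewable asset, revenue is split
# pro-rata by token holdings, and a fixed share funds local basic services.

def distribute_yield(holdings: dict, revenue: float, services_share: float = 0.2):
    """Return per-holder payouts plus the amount reserved for services."""
    services_fund = revenue * services_share
    distributable = revenue - services_fund
    total_tokens = sum(holdings.values())
    payouts = {holder: distributable * tokens / total_tokens
               for holder, tokens in holdings.items()}
    return payouts, services_fund

payouts, fund = distribute_yield({"ana": 60, "ben": 40}, revenue=1000.0)
print(fund)     # 200.0 routed to local basic services
print(payouts)  # {'ana': 480.0, 'ben': 320.0}
```

The federated balancing markets mentioned above would sit one layer up: the same pro-rata logic, but settling energy transfers between neighbourhood pools rather than cash within one.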
Civic Ledger: A public, immutable balance-sheet of planetary assets and liabilities – biodiversity, carbon, health – priced in real-time and updated by distributed sensing networks. The EU's "Data for Common Good" pilot and Regen Ledger's biodiversity credits offer glimpses of what planetary accounting might look like when freed from quarterly reporting cycles.
But measurement is only half the story; we must reward positive ecological accrual. World Bank pilots on regenerative finance hint at investable restoration pipelines that could make ecosystem repair more profitable than extraction.
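A "public, immutable balance-sheet" is, structurally, an append-only hash-chained log. The sketch below shows the core mechanism under stated assumptions: the class, its fields, and the recorded values are illustrative, and a real Civic Ledger would be fed by distributed sensors and replicated across many parties.

```python
import hashlib
import json

# Illustrative sketch of a Civic Ledger: an append-only, hash-chained log of
# planetary asset observations, where each record commits to the previous
# one so history cannot be silently rewritten. All field names hypothetical.

class CivicLedger:
    def __init__(self):
        self.entries = []

    def record(self, asset: str, value: float, unit: str) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else "genesis"
        body = {"asset": asset, "value": value, "unit": unit, "prev": prev_hash}
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append(body)
        return body

    def verify(self) -> bool:
        # Recompute every hash; any tampered entry breaks the chain.
        prev = "genesis"
        for e in self.entries:
            core = {k: e[k] for k in ("asset", "value", "unit", "prev")}
            recomputed = hashlib.sha256(
                json.dumps(core, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != recomputed:
                return False
            prev = e["hash"]
        return True

ledger = CivicLedger()
ledger.record("atmospheric_co2", 421.5, "ppm")
ledger.record("forest_cover", 4.06e9, "ha")
print(ledger.verify())              # True
ledger.entries[0]["value"] = 350.0  # attempt to rewrite history
print(ledger.verify())              # False
```

Freeing such a ledger from quarterly reporting cycles means `record` runs continuously from sensor feeds; the verification logic is what makes the balance-sheet auditable by anyone.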
Rights of Nature: If ASI optimises for anthropocentric KPIs, we fail the biosphere. Recognising rivers, forests and aquifers as legal persons is moving from fringe activism to established jurisprudence. Our governance stack must let non-human stakeholders veto runaway extraction before irreversible tipping points.
These are not policy proposals but institutional prototypes – experiments in encoding collective intelligence into the substrate of governance itself. Each move attempts to solve the same puzzle: how do we preserve human agency in a world where machines think faster than we can vote?
⸻
Sidebar: Polycentric governance and plural epistemics
GPAI's early frameworks remain stubbornly West-centric, encoding liberal democratic assumptions into global AI governance. Commons sovereignty must be multilateral and ontologically diverse – able to hold Indigenous cosmologies, queer techno-cultures and post-growth economics in the same protocol without homogenising them.
The Māori concept of whakapapa (relational genealogy) offers a model for data governance radically different from Western property law. Similarly, Ubuntu philosophy's "I am because we are" suggests collective ownership models that could reshape how we think about AI training datasets and model governance.
Early experiments in polycentric AI governance – from Taiwanese vTaiwan digital democracy platforms to Indigenous data sovereignty initiatives in Canada – point toward institutional architectures that can handle multiple value systems without forcing convergence on a single ontology.
⸻
6 | On heliogenesis and the care infrastructure
Some propose that the next civilisational project after ASI is "heliogenesis": large-scale stellar engineering or Dyson-swarm mining of solar output. The temptation is to sprint there – to treat the cosmos as the next frontier for extraction.
But super-abundant cognition will liquefy not just jobs but sense-making itself. We have no institutional muscle for mass existential care. The commons must include mental-health infrastructures and rituals for meaning, or we court nihilism at scale.
I argue for a pause-and-align ethic: prove we can steward a single biogeochemical cycle before drafting the star's bill of materials. This prudence is not technophobic; it is systemic risk management at Kardashev scale.
If we cannot govern the carbon cycle without triggering civilisational collapse, what hubris suggests we are ready to re-engineer stellar physics? The same extractive logic that destabilised Earth's climate will not suddenly become wise when applied to solar fusion.
⸻
7 | The Commons Transition Fund
If the first modernity was about mastering matter, the second about mastering information, the third – the one ASI cracks open – must be about mastering relationship: between machine cognition and human purpose, between kilowatt and biosphere, between individual and collective.
Yet shrinking tax bases will hollow out welfare systems before new commons dividends mature. We need a bridge institution – call it the Commons Transition Fund – financed by compute royalties and climate windfalls to avert a legitimacy crash as nation-states mutate.
Re-sovereignising a planetary commons after ASI is a tri-layer project: Matter (minerals + energy) → Sense (data + meaning) → Relationship (law + care). Miss one layer and the whole edifice tilts into extractive relapse.
The Sovereign Individual imagined freedom as flight from the state. Our task, instead, is to render sovereignty relational – to encode dignity, ecological viability and shared intelligence into the very substrate of our institutions.
This is not about slowing down technological development, but about ensuring it develops in service of planetary flourishing rather than extractive accumulation. ASI could be the operating system for a true planetary commons – or the final mechanism of enclosure.
Anything less, and super-intelligence becomes super-extractive; anything more, and it becomes the catalyst for a planetary mutualism worthy of the name civilisation.
The choice is not whether ASI will arrive – it is whether we will be ready to govern it together.
—M.W.