Malte Wagenbach — February 2026
A team at Sandia National Laboratories has just published something that should unsettle every assumption we hold about what computers are for. Brad Theilman and Brad Aimone, working under funding from the Department of Energy, have shown that neuromorphic computers — machines whose architecture mirrors the spiking, asynchronous, deeply parallel structure of the human brain — can solve partial differential equations. Not toy problems. The real ones. The equations that model weather systems, fluid dynamics, electromagnetic fields, structural mechanics. The mathematics that currently demands warehouse-scale supercomputers burning through megawatts of electricity.
The neuromorphic chips did it while sipping power.
This is not an incremental improvement. This is a category error being corrected.
⸻
The Wrong Metaphor for Seventy Years
Since Turing and von Neumann, we have built computers as logic machines. Sequential processors executing instructions one after another, billions of times per second, brute-forcing their way through problems by sheer clock speed. The metaphor was the ledger, the filing cabinet, the assembly line — take a problem, break it into steps, execute the steps in order, return the result.
This worked extraordinarily well for the problems it was designed for: accounting, cryptography, database queries, deterministic simulation. But it was never how biological intelligence operates. Not even close.
Your brain does not solve differential equations by reducing them to sequential arithmetic. When you catch a ball thrown at your head, your visual cortex, motor cortex, and cerebellum are performing sophisticated real-time physics — computing trajectories, adjusting muscle tension, predicting impact — through massively parallel networks of neurons that spike asynchronously, run on a total power budget of roughly twenty watts for the entire brain, and arrive at solutions a conventional computer would need far more time and energy to approximate.
As Theilman put it: "We're just starting to have computational systems that can exhibit intelligent-like behavior. But they look nothing like the brain, and the amount of resources that they require is ridiculous, frankly."
He is being polite. The situation is worse than ridiculous. Training a single frontier AI model now consumes more electricity than some small nations use in a year. The IEA projects data-centre demand could reach 1,700 TWh by 2035. We have built the most powerful thinking machines in history, and they think nothing like the organ they are trying to surpass.
The Sandia result suggests there might be another way entirely.
⸻
What Neuromorphic Computing Actually Is
A conventional processor is a clock. It ticks at a fixed rate, and on every tick the clock signal fans out across the entire chip, latching registers and burning power whether or not the data they hold has changed. This is spectacularly wasteful. Imagine a city where every traffic light cycled on a fixed schedule regardless of whether any cars were present. That is, roughly speaking, how your laptop works.
A neuromorphic processor is an ecosystem. Its artificial neurons sit silent until they receive enough input to cross a threshold — then they fire, sending a spike to connected neurons, which may or may not fire in turn depending on their own accumulated inputs. No clock. No synchronisation. No wasted cycles. Activity flows through the network like ripples across water, concentrating compute exactly where and when it is needed.
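To make the contrast concrete, here is a minimal sketch of the event-driven idea in Python. It simulates a single leaky integrate-and-fire neuron, a standard textbook spiking model with arbitrary parameters, not the model running on Sandia's hardware: the neuron's state is touched only when an input spike arrives, and it produces output only when its accumulated potential crosses a threshold.

```python
import math

class LIFNeuron:
    """Leaky integrate-and-fire neuron, updated only when input arrives."""

    def __init__(self, tau=20.0, threshold=1.0):
        self.tau = tau              # membrane time constant (ms)
        self.threshold = threshold  # firing threshold
        self.potential = 0.0        # membrane potential
        self.last_event = 0.0       # time of the most recent input (ms)

    def receive(self, t, weight):
        """Handle an incoming spike at time t; return True if the neuron fires."""
        # Between events the potential simply decays; silence costs nothing.
        self.potential *= math.exp(-(t - self.last_event) / self.tau)
        self.last_event = t
        self.potential += weight
        if self.potential >= self.threshold:
            self.potential = 0.0    # reset after firing
            return True
        return False

# Sparse, irregular input: the neuron is only touched at these event times.
neuron = LIFNeuron()
for t, w in [(1.0, 0.4), (3.0, 0.4), (4.0, 0.4), (90.0, 0.4)]:
    if neuron.receive(t, w):
        print(f"spike at t = {t} ms")
```

Nothing in that loop advances on a global clock. Work happens only at the four event times, however far apart they fall, which is the property that lets neuromorphic hardware spend energy only where the activity is.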
This is not a new idea. Carver Mead at Caltech coined the term "neuromorphic" in the late 1980s. Intel's Loihi chip, IBM's TrueNorth, and a handful of academic projects have been exploring the space for years. But the field has been haunted by a fundamental question: what are these things actually good for?
Pattern recognition, yes. Sensory processing, certainly. But serious mathematics? The kind of heavy-duty PDE solving that keeps nuclear arsenals simulated and weather forecasts accurate? The intuition went entirely the other way. Brains are creative and adaptive; supercomputers are precise and mathematical. Division of labour. Stay in your lane.
Theilman and Aimone just demolished that partition wall.
⸻
The Deeper Implication: Intelligence Is Physics
Here is what makes this result genuinely unsettling, in the best possible way.
The researchers did not design a neuromorphic algorithm from scratch to solve PDEs. They took an established computational neuroscience model — one that had existed for twelve years — and discovered that it contained a previously unrecognised mathematical relationship to partial differential equations. The connection had been hiding in plain sight. No one had thought to look because no one believed brains did that kind of math.
But of course they do. Every biological organism that navigates a physical environment is solving differential equations continuously. A hawk diving at a mouse is integrating equations of motion under wind resistance and gravitational acceleration. A tree growing toward light is solving diffusion equations for nutrient transport. An amoeba navigating a chemical gradient is estimating the spatial derivative of a concentration field and climbing it.
Life has been solving PDEs for four billion years. It just never bothered to write them down.
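One of the simplest versions of that kinship can be written down directly. The sketch below is an illustration of the general principle, not a description of the Sandia team's method: a population of independent random walkers, each a crude stand-in for stochastic spiking activity, collectively traces out the point-source solution of the one-dimensional diffusion equation ∂u/∂t = D ∂²u/∂x².

```python
import numpy as np

# Illustration only: a swarm of independent random walkers approximates the
# point-source solution of the 1D diffusion equation du/dt = D * d2u/dx2.
rng = np.random.default_rng(0)
n_walkers = 200_000
dx, dt, steps = 0.1, 0.01, 200
D = dx**2 / (2 * dt)        # diffusion constant implied by the walk
t = steps * dt              # elapsed time after all steps

# Each walker takes `steps` hops of +/- dx; its final position is determined
# by how many of those hops went to the right (a binomial draw).
rightward = rng.binomial(steps, 0.5, size=n_walkers)
positions = dx * (2 * rightward - steps)

# Compare the empirical walker density with the analytic heat kernel.
edges = np.arange(-5.1, 5.2, 0.2)   # bins aligned with the walk's lattice
density, _ = np.histogram(positions, bins=edges, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
heat_kernel = np.exp(-centers**2 / (4 * D * t)) / np.sqrt(4 * np.pi * D * t)

print("largest gap between walkers and the PDE solution:",
      np.abs(density - heat_kernel).max())
```

No individual walker knows any calculus; the equation emerges from the statistics of the population. The relationship Theilman and Aimone uncovered is, at this level of description, of the same general flavour: a mathematical bridge hiding inside a model that was built to describe neurons, not to do physics.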
What the Sandia team has found is not merely that neuromorphic chips can be repurposed for physics simulations. They have found evidence that brain-like computation and physical mathematics share a deep structural kinship — that the architecture evolution converged on for navigating the physical world mirrors, in some fundamental sense, the mathematics we invented to describe that world.
This should not surprise us. But it does, because we have spent seventy years assuming that intelligence and physics are separate magisteria — that thinking is software running on wetware, and that the particular hardware is incidental. The neuromorphic result suggests the opposite: that the hardware is the thinking, and that the shape of biological neural networks encodes physical law in their very structure.
⸻
What This Means for the Energy Crisis of Intelligence
The practical implications are immediate and enormous.
Nuclear weapons simulations at the Department of Energy currently require some of the largest supercomputers on Earth. Weather forecasting consumes vast computational resources. Climate modelling, drug discovery, materials science, fluid dynamics — all of these domains are bottlenecked not by algorithmic sophistication but by raw energy consumption. We know how to solve the equations. We just cannot afford to run the machines that solve them as often as we need.
If neuromorphic systems can handle even a fraction of this workload at a fraction of the energy cost, the implications cascade outward. Scientific computing becomes democratised. Real-time physics simulation becomes portable. The energy budget currently allocated to brute-force computation gets freed up — or, more realistically, gets redirected toward problems we currently cannot afford to tackle at all.
Aimone frames this in terms of national security, and he is right. But the frame extends further. We are entering an era where the limiting factor on civilisational intelligence is not algorithmic capability but thermodynamic cost. The models exist. The mathematics works. What fails is the electricity bill.
Neuromorphic computing does not just offer a cheaper way to do the same thing. It offers a fundamentally different relationship between computation and energy — one modelled on the only system we know of that sustains complex intelligence on twenty watts.
⸻
The Convergence
Stand back far enough and a pattern emerges across several seemingly unrelated developments.
Event-based cameras that see like biological retinas, processing only change rather than wasting bandwidth on static pixels. Spiking neural networks that compute only when triggered, rather than burning cycles on a fixed clock. And now, neuromorphic systems that solve the fundamental equations of physics not by brute force but by structural resonance with the physical world itself.
We are not building better machines. We are converging on biology.
This is not biomimicry in the superficial sense — slapping a neural network label on a matrix multiplication engine and calling it brain-like. This is something deeper. We are discovering that the computational strategies evolution arrived at through four billion years of optimisation under energy constraints are not merely clever heuristics. They are, in important cases, optimal solutions. The brain does not approximate physics. It computes physics, natively, in its architecture.
The seventy-year detour through von Neumann computing was not wrong. It gave us everything from the internet to genomics. But it was a detour — a powerful but energetically unsustainable approach to intelligence that is now hitting its thermodynamic ceiling just as the demand for computation explodes.
The path forward may not be faster clocks or denser transistors. It may be silicon that finally learns to think like flesh.
⸻
What Comes Next
Aimone's team is already exploring what this means for understanding neurological disorders — Alzheimer's, Parkinson's, conditions where the brain's computational architecture degrades. If we understand the mathematical principles that make neural circuits capable of solving PDEs, we may gain new insight into what exactly breaks when those circuits fail.
But I suspect the largest consequence is philosophical. For decades, the AI discourse has been dominated by a question of software: can we write the right algorithm to produce intelligence? The neuromorphic result suggests we have been asking the wrong question. Perhaps intelligence is not a program that runs on hardware. Perhaps intelligence is what certain hardware does when it is shaped by the same physical laws it needs to navigate.
If that is true, then the future of computing is not about making machines think like humans. It is about making machines that are shaped, like humans, by the physics of the world they inhabit.
The brain was never just a computer. It was always a piece of the universe trying to understand itself. We are only now building machines humble enough to learn from that.