A team of physicists led by Mir Faizal at the University of British Columbia has demonstrated that the universe cannot be a computer simulation, according to research published in October 2025[1].
The key findings show that reality requires non-algorithmic understanding that cannot be simulated computationally. The researchers used mathematical theorems from Gödel, Tarski, and Chaitin to prove that a complete description of reality cannot be achieved through computation alone[1].
The team proposes that physics needs a “Meta Theory of Everything” (MToE): a non-algorithmic layer above the algorithmic one that determines truth from outside the mathematical system[1]. This would help investigate phenomena like the black hole information paradox without violating mathematical rules.
“Any simulation is inherently algorithmic – it must follow programmed rules,” said Faizal. “But since the fundamental level of reality is based on non-algorithmic understanding, the universe cannot be, and could never be, a simulation”[1].
Lawrence Krauss, a co-author of the study, explained: “The fundamental laws of physics cannot exist inside space and time; they create it. This signifies that any simulation, which must be utilized within a computational framework, would never fully express the true universe”[2].
The research was published in the Journal of Holography Applications in Physics[1].



Bro our hyperfixations are slightly aligned, I was thrown into this rabbit hole because I was once again trying to build a formal symbolic language to describe conscious experience using qualia as the atomic formulae for the system. It’s also been giving me lots of fascinating insight and questions about the nature of thought and experience and philosophy in general.
Anyway to answer your question: yes and no.
If you require that the AGI be built on current architectures, which are algorithmic, then yes, I think the implication holds.
However, I think neuromorphic hardware is able to bypass this limitation. Continuous, simultaneous processes interacting with each other are likely non-algorithmic. This is how our brains work. You can get some pretty discrete waves of thought through spiking neurons, but the complexity arising from recurrence and the lack of discrete time steps make me think systems built on complex neuromorphic hardware would not be algorithmic and could therefore also achieve AGI.
Good news: spiking neural nets are a bitch to prototype and we can’t train them fast like we can with ANNs, so most “AI” is built on ANNs, since we can easily do matrix math.
Tbf, I personally don’t think consciousness is necessarily non-algorithmic but that’s a different debate.
Edit: Oh wait, that means the research only proves that you just can’t simulate the universe on a Turing-machine-esque computer yeah?
As long as there are non-algorithmic parts to it, I think a system of some kind could still be producing our universe. I suppose this does mean that you probably can’t intentionally plan or predict the exact course of the “program” so it’s not really a “simulation” but still that does make me feel slightly disappointed in this research.
This is fun, I appreciate it. I’ve only made it as far down this rabbit hole to the part of building AGI on current architecture. Had no idea how much deeper this thing goes. This is the reason I was engaged in the first place, thanks for leading me down here.
I’m looking forward to that one when it comes up!
I love how in depth you went into this. And I agree with everything, except I’m not sure about neuromorphic computing.
I worked in neuromorphic computing for a while as a student. I don’t claim to be an expert, though; I was just a tiny screw in a big research machine. But at least our lab never aimed for continuous computation, even if the underlying physics is continuous. Instead, the long-term goal was to have like five distinguishable states (instead of just two binary states). Enough for learning and potentially enough to make AI much faster, but still discrete. That’s my first point: I don’t think anyone else is doing anything different.
My second point is that no one could do something truly continuous even in principle. Our brains don’t really, either. Even if changes in a memory cell (or neuron) were induced atom by atom, those would still be discrete steps. And even if truly continuous changes were possible, you still couldn’t read out that information because of thermal noise. The tiny changes in current, or whatever your observable is, would just drown in noise. Instead you would have to define discrete ranges for read-out.
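To put numbers on that (invented numbers, purely for illustration, not our lab’s actual read-out scheme): a stored analog value can only be recovered up to the noise floor, so in practice you snap it to one of a handful of levels, like the ~5 states I mentioned.

```python
# Toy sketch of noisy analog read-out. All constants are made up.
import random

N_LEVELS = 5          # distinguishable states per cell
NOISE_SIGMA = 0.03    # thermal noise, as a fraction of full scale

def read_cell(true_value: float) -> int:
    """Read a noisy analog value in [0, 1] and snap it to a discrete level."""
    noisy = true_value + random.gauss(0.0, NOISE_SIGMA)
    noisy = min(max(noisy, 0.0), 1.0)        # clamp to full scale
    return round(noisy * (N_LEVELS - 1))     # nearest of the 5 levels

# Two stored values closer together than the noise floor are indistinguishable:
print(read_cell(0.50), read_cell(0.51))  # usually the same level
print(read_cell(0.50), read_cell(0.75))  # reliably different levels
```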
Thirdly, could you explain what exactly that non-algorithmic component is and how exactly it would be different from just noise and randomness? Because we work hard to avoid those. If it’s just randomness, our computers have that now. Every once in a while, a bit gets flipped by thermal noise or because it got hit by a cosmic particle. It happens so often that astronomers have to account for it when taking pictures and correct all the bad pixels.
I’m definitely not an expert on the topic, but I recently messed around with creating a spiking neural net made of “leaky integrate and fire” (LIF) neurons. I had to do the integration numerically, which was slow and imprecise. However, hardware exists that does run every neuron continuously and in parallel.
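For anyone curious, here’s roughly the kind of numerical integration I mean: a minimal forward-Euler LIF sketch. The constants are illustrative, not from any particular paper or library.

```python
# Forward Euler on dv/dt = (v_rest - v)/tau + I, with fire-and-reset.
import numpy as np

TAU, V_REST, V_THRESH, V_RESET = 20.0, -65.0, -50.0, -70.0  # ms, mV
DT = 0.1  # ms; smaller steps approximate the continuous dynamics better

def simulate_lif(input_current, v0=V_REST):
    """Integrate one LIF neuron over a sampled input current trace."""
    v, spikes = v0, []
    for t, i_in in enumerate(input_current):
        dv = (V_REST - v) / TAU + i_in   # leak toward rest + input drive
        v += DT * dv                     # discrete Euler step
        if v >= V_THRESH:                # threshold crossing
            spikes.append(t * DT)
            v = V_RESET                  # hard reset after a spike
    return spikes

spikes = simulate_lif(np.full(10_000, 0.9))  # constant drive for 1 s
print(f"{len(spikes)} spikes, first at t = {spikes[0]:.1f} ms")
```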
LIF neurons don’t technically have a finite number of states because their membrane potential is continuous. Similarly, despite the fact that they either fire or don’t fire, the synapses between the neurons also work with integration and a decay constant and hence are continuous.
This continuity means that neurons don’t fire at discrete time intervals, and, coupled with the fact that inputs are typically coded into spike trains with randomness, you get different behavior basically every time you turn the network on.
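Here’s a toy version of that rate coding, with invented parameters: each input intensity becomes the firing probability of a random spike train, so the net never sees exactly the same input twice.

```python
# Map values in [0, 1] to random binary spike trains (Poisson-style).
import numpy as np

rng = np.random.default_rng()

def poisson_encode(intensities, n_steps=100, max_rate=0.2):
    """Return a (neurons x steps) boolean spike array from input intensities."""
    p_spike = np.asarray(intensities)[:, None] * max_rate
    return rng.random((len(intensities), n_steps)) < p_spike

pixel_values = [0.0, 0.3, 0.9]      # e.g. three input pixels
trains = poisson_encode(pixel_values)
print(trains.sum(axis=1))  # spike counts scale with intensity, with jitter
```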
The curious part is that such a net can still reliably categorize inputs, and the noise added to the inputs actually leads to robust functionality. One paper I read used a small, 2-layer net to recognize MNIST digits, and the authors were able to remove 50% of the neurons after training and still get a 60% success rate on identifying the digits.
Anyway, as for your second point: analog computing, including neuromorphic hardware, is continuous, since electric current is necessarily continuous (electricity is a wave, unfortunately). You are right that other things will add noise to such a network, but changes in electric conductivity from heat and/or voltage fluctuations from electromagnetic interference are also both continuous.
Most importantly, these networks, when not hardcoded, are constantly adapting their weights.
Spike Timing Dependent Plasticity (STDP) is, as it sounds, dependent on spike timing. The weights of synapses are incredibly sensitive to timing, so if there is enough noise that one neuron fires before another, even by a very tiny amount, that change in timing changes which neuron is strengthened most. Those tiny changes add up as the signals propagate through the net. Even for a small network, a tiny amount of noise is likely to change its behavior significantly over enough time. And if you have any recurrence in the net, those tiny fluctuations might continually compound forever.
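A minimal pair-based STDP rule, just to show the timing sensitivity I mean (the constants are illustrative, not from real hardware): a 0.1 ms shift in which neuron fires first flips the sign of the weight update.

```python
# Pair-based STDP: pre-before-post potentiates, post-before-pre depresses.
import math

A_PLUS, A_MINUS = 0.01, 0.012      # learning rates (potentiate / depress)
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # ms; decay of the timing window

def stdp_dw(t_pre: float, t_post: float) -> float:
    """Weight change for one pre/post spike pair, times in ms."""
    dt = t_post - t_pre
    if dt > 0:   # pre fired first -> causal -> strengthen
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    else:        # post fired first -> anti-causal -> weaken
        return -A_MINUS * math.exp(dt / TAU_MINUS)

print(stdp_dw(10.0, 10.1))   # small potentiation
print(stdp_dw(10.1, 10.0))   # small depression: sign flipped by 0.1 ms
```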
That is also the best answer I have for your third question. The non-algorithmic part comes from the fact that no state of the machine can really be used to predict a future state of the machine, because the machine is continuous and its behavior is heavily dependent on both external and inherent noise. Both the noise and the tiniest of changes allowed by the continuity can create novel chaotic behavior.
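A neuromorphic chip isn’t a logistic map, of course, but the logistic map is the simplest stand-in I know for that kind of sensitivity: two states differing by 10^-12 stop agreeing after a few dozen iterations, so no finite-precision measurement predicts the far future.

```python
# Sensitive dependence on initial conditions in the chaotic logistic map.
def logistic(x: float, r: float = 4.0) -> float:
    return r * x * (1.0 - x)

a, b = 0.400000000000, 0.400000000001   # nearly identical initial states
for step in range(1, 61):
    a, b = logistic(a), logistic(b)
    if abs(a - b) > 0.1:                 # trajectories have fully diverged
        print(f"diverged after {step} steps")
        break
```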
You are right in saying that we can minimize the effects of noise; people have used SNNs to accurately mimic ANNs on neuromorphic hardware for faster compute times, but those networks do not have volatile weights and are built to not be chaotic. If they were non-algorithmic you wouldn’t be able to do gradient descent. The only way to train a truly non-algorithmic net would be to run it.
Anyway, the main point of “non-algorithmic” is that you can’t compute the system’s behavior in discrete steps. You couldn’t build a typical computer that fully simulates the behavior of the system, because you’ll lose information if you try to encode a continuous signal discretely. Though I should note, continuity isn’t the only thing that makes something non-computable: the busy beaver function is uncomputable even though it’s defined over entirely discrete and very simple machines.
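Here’s a quick sketch of the information loss I mean, with arbitrary parameters: once you quantize a continuous signal, any structure smaller than the quantization step is gone for good.

```python
# Quantizing a continuous signal destroys sub-step detail.
import numpy as np

t = np.linspace(0.0, 1.0, 1_000)
signal = np.sin(2 * np.pi * 3 * t) + 1e-4 * np.sin(2 * np.pi * 97 * t)

bits = 8                                  # 256 discrete levels
step = 2.0 / (2**bits)                    # signal spans roughly [-1, 1]
quantized = np.round(signal / step) * step

# The faint high-frequency component (amplitude 1e-4) is smaller than the
# quantization step (~0.0078), so it cannot be recovered from `quantized`.
print(f"max error: {np.max(np.abs(signal - quantized)):.4f}")
```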
Theoretically, if a continuous extension of the busy beaver function existed, then it should be possible for a Liquid State Machine neural net to approximate it. Meaning we could technically build an analog computer capable of solving an uncomputable/undecidable problem.