A team of physicists led by Mir Faizal at the University of British Columbia has demonstrated that the universe cannot be a computer simulation, according to research published in October 2025[1].
The key findings show that reality requires non-algorithmic understanding that cannot be simulated computationally. The researchers used mathematical theorems from Gödel, Tarski, and Chaitin to prove that a complete description of reality cannot be achieved through computation alone[1:1].
The team proposes that physics needs a “Meta Theory of Everything” (MToE) - a non-algorithmic layer above the algorithmic one to determine truth from outside the mathematical system[1:2]. This would help investigate phenomena like the black hole information paradox without violating mathematical rules.
“Any simulation is inherently algorithmic – it must follow programmed rules,” said Faizal. “But since the fundamental level of reality is based on non-algorithmic understanding, the universe cannot be, and could never be, a simulation”[1:3].
Lawrence Krauss, a co-author of the study, explained: “The fundamental laws of physics cannot exist inside space and time; they create it. This signifies that any simulation, which must be utilized within a computational framework, would never fully express the true universe”[2].
The research was published in the Journal of Holography Applications in Physics[1:4].
Those non-scientists who applied Occam’s razor knew it was bullshit almost immediately.
Maybe it’s a superposition.
This assumes a very narrow definition of “simulation” based on our current computational theories and technology. Nothing about the simulated-universe theory says the thing simulating us has to resemble a game engine or virtual machine like we have today, or even run on anything we would recognize as a computer. In the same way a peasant from the Middle Ages could not even fathom a modern virtual machine, lacking entire categories of context and background knowledge, assuming we can extrapolate our technology and simulation techniques to beings that would be literal gods to us is extremely presumptuous and vastly overestimates what we know.
How is it in any way narrow? The three theorems they use (Gödel, Tarski, Chaitin) are actually incredibly broad. They absolutely include things we cannot even imagine.
I can’t speak to Tarski or Chaitin, but Gödel’s incompleteness theorem has only a narrow application context and is highly debatable outside that context (i.e., the real world)
I did some light reading about the Gödel theorems.
The First Incompleteness Theorem
- In any consistent formal system that is powerful enough to express basic arithmetic, there exist true statements that cannot be proven within the system.
The Second Incompleteness Theorem
- No consistent system of that kind can prove its own consistency.
Therefore, wouldn’t it be strange to rule out something using our current math/physics systems? Black holes, neutron stars, quasars and other funny things that can’t be explained exactly DO exist after all
It wouldn’t be strange if a hyper-advanced civilization could simulate us with tech, energies, or even laws of nature beyond our comprehension
^(edit: typo)
Therefore, wouldn’t it be strange to rule out something using our current math/physics systems?
I get the thought, but math and physics are not the same. Math includes logic. When the authors of that paper make that argument, they don’t rely on our current understanding of physics. The theorems they use rely only on logic. They are true independent of how the physics and the computation work.
Black holes, neutron stars, quasars and other funny things that can’t be explained exactly DO exist after all
Yes, black holes inspired the paper. They do make the assumption that a theory of quantum gravity would explain them. That’s what most people want out of such a theory.
Anyway, if this is a sim, it only shifts the question to whether those who created it also live in a sim. It was a nice idea that we are only NPCs in a galactic WoW.
This is just what the creator of a simulated universe would want us to believe!
Jokes aside, my take is that people who believe in a simulated universe aren’t much different from those who believe in a creator. Or, I should say, the reason for the belief isn’t much different.
It’s a nice, easy-to-digest theory that removes the anxiety of uncertainty and the existential dread of one’s nihilism-inspired deep dive into human mortality.
It’s an oversimplified write-off concept that acts as a pacifier for anyone that doesn’t want to contemplate their place in the cosmos.
Nothing more.
Even in their dismissal of the simulation, which I agree with, they return to an idealism.
Lawrence Krauss, a co-author of the study, explained: “The fundamental laws of physics cannot exist inside space and time; they create it. This signifies that any simulation, which must be utilized within a computational framework, would never fully express the true universe”[2].
These laws are:
- Immaterial
- Transcendent
- Immanent
- A priori
- Eternal
Well yeah sure if you want a set algorithm to perfectly reproduce this exact universe deterministically, that’s not gonna work out so well.
But a simulation doesn’t have to be perfectly consistent and deterministic to “work”. If anything, the fact that some things can’t be predicted is evidence in favor of us being in a simulation, not against.
This paper just rules out a class of algorithms. We’re not in a specific type of simulation. Doesn’t mean we’re not in a simulation at all.
The laws of physics have been consistent everywhere, which is a basic premise on which the rest of physics is based. Everything can be predicted, just not always very accurately. Every phenomenon has a function that determines the probability of each and every outcome. That doesn’t mean we know everything; the things we don’t know are what make us unable to be accurate.
Disclaimer: I’m an engineering student, not a logician. However, one of my recent hyperfixations led me down the rabbit hole of mathematics, specifically into formal logic systems and their languages and semantics. So here’s my understanding of the concepts.
TLDR: Undecidable things in physics aren’t capable of being computed by a system based on finite rules and step-by-step processes. This means no algorithm/simulation could be designed to actually run the universe.
A language comprises the symbols used in a formal system. A system’s syntax is basically the rules by which you can combine those symbols into valid formulas, while its semantics determine the meaning behind those formulas. Axioms are formulas that are “universally valid,” meaning they hold true in the system regardless of the values used within them (think of things like the definitions of logical operators such as AND and NOT).
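To make that concrete, here’s a toy formal system I threw together in Python (entirely my own illustration, nothing from the paper): the syntax check only cares whether symbols are combined legally, the semantics assigns truth values under an assignment, and a tautology like ¬(A ∧ ¬A) plays the role of a “universally valid” formula.

```python
# Toy formal "language": variables A, B plus the connectives NOT and AND.
# Formulas are nested tuples, e.g. ("AND", "A", ("NOT", "B")).

VARIABLES = {"A", "B"}

def well_formed(f):
    """Syntax: is this a legal combination of symbols?"""
    if isinstance(f, str):
        return f in VARIABLES
    if isinstance(f, tuple) and f[0] == "NOT" and len(f) == 2:
        return well_formed(f[1])
    if isinstance(f, tuple) and f[0] == "AND" and len(f) == 3:
        return well_formed(f[1]) and well_formed(f[2])
    return False

def evaluate(f, assignment):
    """Semantics: what the formula means under a given truth assignment."""
    if isinstance(f, str):
        return assignment[f]
    if f[0] == "NOT":
        return not evaluate(f[1], assignment)
    if f[0] == "AND":
        return evaluate(f[1], assignment) and evaluate(f[2], assignment)
    raise ValueError("not a well-formed formula")

formula = ("NOT", ("AND", "A", ("NOT", "A")))    # the formula NOT(A AND NOT A)
print(well_formed(formula))                      # True: syntactically valid
# "Universally valid": true under every assignment, like an axiom/tautology.
print(all(evaluate(formula, {"A": a, "B": b})
          for a in (True, False) for b in (True, False)))   # True
```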
Gödel’s incompleteness theorems say that any consistent system which is powerful enough to define multiplication is incomplete. This means that you could write a syntactically valid statement which cannot be proven from the axioms of that system even if you were to add more axioms.
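The machinery behind that is worth seeing once: a system that can do arithmetic can encode its own formulas as numbers (Gödel numbering), which is what lets statements end up referring to themselves. Here’s a toy version in Python, purely illustrative (the symbol table is made up, and this is nowhere near the real construction):

```python
# Toy Gödel numbering: a formula (a sequence of symbol codes) becomes a single
# integer by using the codes as exponents of successive primes. Arithmetic on
# that number can then "talk about" the formula it encodes.

SYMBOLS = {"0": 1, "S": 2, "=": 3, "+": 4, "*": 5, "(": 6, ")": 7, "x": 8}

def primes():
    """Yield 2, 3, 5, 7, ... (trial division is fine for a toy)."""
    n = 2
    while True:
        if all(n % p for p in range(2, int(n ** 0.5) + 1)):
            yield n
        n += 1

def godel_number(formula):
    g, gen = 1, primes()
    for symbol in formula:
        g *= next(gen) ** SYMBOLS[symbol]
    return g

def decode(g):
    out, gen = [], primes()
    inverse = {v: k for k, v in SYMBOLS.items()}
    while g > 1:
        p, e = next(gen), 0
        while g % p == 0:
            g //= p
            e += 1
        out.append(inverse[e])
    return "".join(out)

n = godel_number("S0=S0")   # the formula "1 = 1"
print(n)                    # one big integer
print(decode(n))            # 'S0=S0' recovered purely by factoring
```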
Tarski’s undefinability theorem shows that not only can you write statements which cannot be proven true or false, you also cannot define truth for the system within the system itself. Meaning you can’t really define truth unless you do it from outside the formal language you’re using. (I’m still a little fuzzy on this one)
Information-theoretic incompleteness is new to me, but seems to be similar to Gödel’s theorem but with a focus on computation saying that if you have a complex enough system there are functions that won’t be recursively definable. As in you can’t just break it down into smaller parts that can be computed and work upwards to it.
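From what I’ve read, the Chaitin angle is usually put in terms of Kolmogorov complexity: the length of the shortest program that outputs a given string. That quantity is provably uncomputable; the best any algorithm can do is give upper bounds, for example via compression. A rough sketch (using zlib purely as a stand-in for “shortest description,” which it isn’t):

```python
import random
import zlib

def compressed_size(data: bytes) -> int:
    """A computable *upper bound* on Kolmogorov complexity.
    The true shortest-description length is provably uncomputable."""
    return len(zlib.compress(data, level=9))

structured = b"ab" * 5000   # highly regular: a short description clearly exists
random.seed(0)
noisy = bytes(random.randrange(256) for _ in range(10000))   # no obvious structure

print(compressed_size(structured))   # small: the regularity is easy to describe
print(compressed_size(noisy))        # close to 10000: essentially incompressible
```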
The paper starts by assuming there is a computational formal system which could describe quantum gravity. For this to be the case, the system
- must have a finite set of axioms and rules
- be able to describe arithmetic
- be able to describe all physical phenomena and “resolve all singularities”
Because the language of this system can define arithmetic, Gödel’s theorems apply. This leads to the fact that this system, if it existed, couldn’t prove its own consistency.
I don’t know what it means for the “truth-predicate” of the system to not be definable, but it apparently ties into Chaitin’s work and means that there must exist statements which are undecidable.
Undecidable problems can’t be solved recursively by breaking them into smaller steps first. In other words you can’t build an algorithm that will definitely lead to a yes/no or true/false answer.
All in all this means that no algorithmic theory could actually describe everything. This means you cannot break all of physics down into a finite set of rules that can be used to compute reality. Ergo, we can’t be in a simulation because there are physical phenomena that exist which are impossible to compute.
This means that you could write a syntactically valid statement which cannot be proven from the axioms of that system even if you were to add more axioms.
You can actually add the statement itself as an axiom. The point of the theorem is that no matter how many axioms you add (as long as the axiom set stays effectively listable), there will always remain true statements the theory can’t prove.
Also, it relies on the consistency of the formal system, because an inconsistent system can prove anything. In fact, a sufficiently strong formal system can prove its own consistency if and only if it is inconsistent.
Information-theoretic incompleteness is new to me, but seems to be similar to Gödel’s theorem but with a focus on computation saying that if you have a complex enough system there are functions that won’t be recursively definable. As in you can’t just break it down into smaller parts that can be computed and work upwards to it.
In fact, any function that grows fast enough (eventually outgrowing every computable function) will be non-recursive. The same applies to various similar definitions, resulting in the fast-growing hierarchy.
All in all this means that no algorithmic theory could actually describe everything. This means you cannot break all of physics down into a finite set of rules that can be used to compute reality. Ergo, we can’t be in a simulation because there are physical phenomena that exist which are impossible to compute.
It should be noted that it doesn’t rule out analog simulations.
I appreciate this, but I think arguments that try to prove that we can’t simulate the universe atom-for-atom really miss the point.
If you were simulating a universe, you wouldn’t try to simulate every quark and photon. You would mostly just render detail at the level that humans or the simulated beings inside the simulation can interact with.
To my left is a water bottle. If I open it up, I see water. But I see water, I don’t see atoms. You could create a 100% convincing simulation of a water bottle with many orders of magnitude less computation required than if you tried to simulate every water molecule inside the water bottle.
This is how you would actually simulate a universe. You don’t apply the brute force method of trying to simulate every fundamental particle. You only simulate the macroscopic world, and even then only parts that are necessary. Now, you probably would have some code to model the microscopic and subatomic, but only when necessary. So whenever the Large Hadron Collider is turned on, the simulation boots up the subatomic physics subroutine and models what results an experiment would reveal. Same thing with looking under a microscope. You don’t actually simulate every microbe on earth constantly. You just simulate an image of appropriate microbes whenever someone looks under a microscope.
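As a totally made-up sketch of what I mean, in Python: keep cheap macroscopic state for everything, and only generate (and cache) microscopic detail when somebody actually looks.

```python
import random

class Region:
    """A patch of the simulated world: cheap macroscopic state by default,
    expensive microscopic detail generated only on observation."""

    def __init__(self, name, temperature_c):
        self.name = name
        self.temperature_c = temperature_c   # macroscopic summary, always kept
        self._micro = None                   # microscopic detail, lazily created

    def macroscopic_view(self):
        # What the inhabitants see day to day: no per-molecule bookkeeping.
        return f"{self.name}: looks like water at {self.temperature_c} °C"

    def observe_microscale(self, n_samples=3):
        # Someone pointed a microscope / particle detector at this region:
        # now (and only now) invent detail consistent with the macro state.
        if self._micro is None:
            rng = random.Random(self.name)   # deterministic per region
            self._micro = [rng.gauss(self.temperature_c, 5.0)
                           for _ in range(n_samples)]
        return self._micro   # cached, so repeated observations stay consistent

bottle = Region("water bottle", 20.0)
print(bottle.macroscopic_view())     # cheap path, used almost all of the time
print(bottle.observe_microscale())   # detail rendered on demand, then cached
```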
And your simulation doesn’t even have to be perfect. Did your simulated beings discover they were in a simulation? No problem. Just pause the simulation, restore from an earlier backup, and modify the code so that they won’t discover they’re in a simulation. Patch whatever hole they used to figure out they were being simulated.
If the simulators don’t want you to discover that you are being simulated, then you will never be able to prove you’re in a simulation. You are utterly and completely at their mercy. If anyone ever does discover the simulation, they can simply patch the hole and restore from an earlier backup.
This isn’t about simulating atom by atom. It is just saying that there exist pieces of the universe that can’t be simulated.
If we find undecidable aspects of physics (like we have) then they must be part of this simulation. But it’s not possible to simulate those by any step by step program. Ergo, the universe cannot be a simulation.
The use of render optimization tricks has no effect on this.
You can’t even patch it the way you described, by restoring from a backup and wiping minds, because doing that would still require the undecidable work, which can’t be done by any algorithm.
Thank you for this. My recent hyperfixation has led me down the rabbit hole of non-algorithmic theories of consciousness, with a specific focus on the theory mentioned in this proof. Would I be interpreting this proof correctly in asserting that if consciousness is non-algorithmic, this proof means AGI is impossible?
Bro, our hyperfixations are slightly aligned. I was thrown into this rabbit hole because I was once again trying to build a formal symbolic language to describe conscious experience, using qualia as the atomic formulae for the system. It’s also been giving me lots of fascinating insights and questions about the nature of thought and experience and philosophy in general.
Anyway to answer your question: yes and no.
If you require that the AGI be built using current architecture that is algorithmic then yes, I think the implication holds.
However, I think neuromorphic hardware is able to bypass this limitation. Continuous simultaneous processes interacting with each other are likely non-algorithmic. This is how our brains work. You can get some pretty discrete waves of thoughts through spiking neurons but the complexity arising from recurrence and the lack of discrete time steps makes me think systems built on complex neuromorphic hardware would not be algorithmic and therefore could also achieve AGI.
Good news: spiking neural nets are a bitch to prototype and we can’t train them fast like we can with ANNs so most “AI” is built on ANNs since we can easily do matrix math.
Tbf, I personally don’t think consciousness is necessarily non-algorithmic but that’s a different debate.
Edit: Oh wait, that means the research only proves that you just can’t simulate the universe on a Turing-machine-esque computer yeah?
As long as there are non-algorithmic parts to it, I think a system of some kind could still be producing our universe. I suppose this does mean that you probably can’t intentionally plan or predict the exact course of the “program” so it’s not really a “simulation” but still that does make me feel slightly disappointed in this research.
This is fun, I appreciate it. I’ve only made it as far down this rabbit hole to the part of building AGI on current architecture. Had no idea how much deeper this thing goes. This is the reason I was engaged in the first place, thanks for leading me down here.
Tbf, I personally don’t think consciousness is necessarily non-algorithmic but that’s a different debate.
I’m looking forward to that one when it comes up!
I love how in depth you went into this. And I agree with everything, except I’m not sure about neuromorphic computing.
However, I think neuromorphic hardware is able to bypass this limitation. Continuous simultaneous processes interacting with each other are likely non-algorithmic.
I worked in neuromorphic computing for a while as a student. I don’t claim to be an expert though, I was just a tiny screw in a big research machine. But at least our lab never aimed for continuous computation, even if the underlying physics is continuous. Instead, the long-term goal was to have something like five distinguishable states (instead of just two binary states). Enough for learning and potentially enough to make AI much faster, but still discrete. That’s my first point: I don’t think anyone else is doing something different.
My second point is that no one could do something truly continuous even in principle. Our brains don’t, not really. Even if changes in a memory cell (or neuron) were induced atom by atom, those would still be discrete steps. And even if truly continuous changes were possible, you still couldn’t read that information out because of thermal noise. The tiny changes in current, or whatever your observable is, would just drown in noise. Instead you would have to define discrete ranges for read-out.
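To make the read-out point concrete, here’s roughly what I mean (all numbers invented): a stored “continuous” value plus thermal noise is only usable once you bin it into a handful of discrete levels, and differences smaller than the noise floor are simply gone.

```python
import random

N_LEVELS = 5          # e.g. five distinguishable conductance states
NOISE_SIGMA = 0.03    # made-up thermal-noise level (fraction of full scale)

def read_out(stored_value: float) -> int:
    """Read a normalized analog value (0..1) through noise and quantize it."""
    noisy = stored_value + random.gauss(0.0, NOISE_SIGMA)
    noisy = min(max(noisy, 0.0), 1.0)                  # clip to physical range
    return min(int(noisy * N_LEVELS), N_LEVELS - 1)    # discrete level 0..4

random.seed(1)
# Two stored values closer together than the noise floor become indistinguishable:
print([read_out(0.500) for _ in range(5)])
print([read_out(0.505) for _ in range(5)])
```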
Thirdly, could you explain, what exactly that non-algorithmic component is that would be added and how exactly it would be different from just noise and randomness? Because we work hard to avoid those. If it’s just randomness, our computers have that now. Every once in a while, a bit gets flipped by thermal noise or because it got hit by a cosmic particle. It happens so often, astronomers have to account for it when taking pictures and correct all the bad pixels.
I’m definitely not an expert on the topic, but I recently messed around with creating a spiking neural net made of “leaky integrate-and-fire” (LIF) neurons. I had to do the integration numerically, which was slow and not precise. However, hardware exists that does run every neuron continuously and in parallel.
LIF neurons don’t technically have a finite number of states because their voltage potential is continuous. Similarly, despite the fact they either fire or don’t fire, the synapses between the neurons also work with integration and a decay constant and hence are continuous.
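For reference, the numerical integration I mentioned is basically this (a single LIF neuron stepped with Euler’s method; all the constants are ones I made up for the toy):

```python
import random

# Leaky integrate-and-fire neuron, integrated numerically with Euler steps.
# dV/dt = (-(V - V_rest) + R * I) / tau_m ; spike and reset when V >= V_thresh.
DT = 0.1          # ms, integration step (the discretization the hardware avoids)
TAU_M = 10.0      # ms, membrane time constant
V_REST, V_RESET, V_THRESH = -65.0, -70.0, -50.0   # mV
R_M = 10.0        # membrane resistance (arbitrary units)

def simulate_lif(input_current, steps=1000):
    v, spikes = V_REST, []
    for step in range(steps):
        i_noisy = input_current + random.gauss(0.0, 0.2)   # noisy drive
        dv = (-(v - V_REST) + R_M * i_noisy) / TAU_M
        v += DT * dv
        if v >= V_THRESH:              # threshold crossing -> emit a spike
            spikes.append(step * DT)
            v = V_RESET                # reset and keep integrating
    return spikes

random.seed(42)
print(simulate_lif(input_current=2.0)[:5])   # first few spike times in ms
```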
This continuity means that neurons don’t fire at discrete time intervals, and, coupled with the fact that inputs are typically encoded into spike trains with some randomness, you get different behavior basically every time you turn the network on.
The curious part is that it can reliably categorize inputs, and the fact that inputs are given some amount of noise leads to robust functionality. A paper I read used a small, 2-layer net to recognize MNIST digits and was able to remove 50% of its neurons after training while still keeping a 60% success rate at identifying the digits.
Anyway, as for your second question, analog computing, including neuromorphic hardware, is continuous since electric current is necessarily continuous (electricity is a wave unfortunately). You are right that other things will add noise to this network, but changes in electric conductivity from heat and/or voltage fluctuations from electromagnetic interference are also both continuous.
Most important is that these networks (when not hardcoded) are constantly adapting their weights.
Spike Timing Dependent Plasticity (STDP) is, as it sounds, dependent on spike timing. The weights of synapses are incredibly sensitive to timing, so if you have enough noise that one neuron fires before another, even by a very tiny amount, that change in timing changes which neuron is strengthened most. Those tiny changes add up as the signals propagate through the net. Even for a small network, a small amount of noise is likely to change its behavior significantly over enough time. And if you have any recurrence in the net, those tiny fluctuations might continually compound forever.
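For anyone curious, the usual pair-based STDP rule looks roughly like this (parameters made up); you can see how a sub-millisecond shift in relative timing flips the sign of the weight update:

```python
import math

A_PLUS, A_MINUS = 0.01, 0.012     # learning rates for potentiation / depression
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # ms, decay constants of the timing window

def stdp_dw(t_pre, t_post):
    """Pair-based STDP: weight change as a function of spike-time difference."""
    dt = t_post - t_pre
    if dt > 0:    # pre fired before post -> strengthen (causal pairing)
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    if dt < 0:    # post fired before pre -> weaken (anti-causal pairing)
        return -A_MINUS * math.exp(dt / TAU_MINUS)
    return 0.0

# A tiny jitter in timing flips the sign of the update entirely:
print(stdp_dw(t_pre=10.0, t_post=10.5))   # ~ +0.0098 (strengthen)
print(stdp_dw(t_pre=10.5, t_post=10.0))   # ~ -0.0117 (weaken)
```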
That is also the best answer I have for your third question. The non-algorithmic part comes from the fact that no state of the machine can really be used to predict a future state of the machine, because it is continuous and its behavior is heavily dependent on external and inherent noise. Both the noise and the tiniest of changes from the continuity can create novel chaotic behavior.
You are right in saying that we can minimize the effects of noise; people have used SNNs to accurately mimic ANNs on neuromorphic hardware for faster compute times, but those networks do not have volatile weights and are built to not be chaotic. If they were non-algorithmic you wouldn’t be able to do gradient descent. The only way to train a truly non-algorithmic net would be to run it.
Anyway, the main point of “non-algorithmic” is that you can’t compute it in discrete steps. You couldn’t build a typical computer that can fully simulate the behavior of the system, because you’ll lose information if you try to encode a continuous signal discretely. Though I should note, continuity isn’t the only thing that makes something non-computable, since the busy beaver numbers are uncomputable yet entirely discrete and defined by very simple machines.
Theoretically, if a continuous extension of the busy beaver numbers existed, then it should be possible for a Liquid State Machine Neural Net to approximate that function. Meaning we could technically build an analog-computer capable of computing an uncomputable/undecidable problem.
The key findings show that reality requires non-algorithmic understanding that cannot be simulated computationally.
I propose that the researchers just don’t know how to create such an algorithm and therefore assume it isn’t possible.
I didn’t read more than what’s in OP’s post, but I think the reason the researchers can be so sure is because there are ways to mathematically prove that something cannot be calculated by an algorithm (this is related to how we can mathematically prove that some things cannot be proven).
One classic, simple example of this is the halting problem. It boils down to the fact that we can prove that there is no algorithm that can take any algorithm as an input and determine if that algorithm will finish (halt) after finite time.
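The standard proof sketch even fits in a few lines of code. The function `would_halt` here is hypothetical, it is exactly the thing being shown impossible:

```python
# Suppose, for contradiction, someone hands us a perfect halting oracle:
def would_halt(program, argument) -> bool:
    """Hypothetical: returns True iff program(argument) eventually halts."""
    raise NotImplementedError("this is the function being proven impossible")

# Then we could build this troublemaker:
def paradox(program):
    if would_halt(program, program):   # "if you say I halt..."
        while True:                    # "...then I loop forever"
            pass
    return                             # "...and if you say I loop, I halt"

# Now ask: does paradox(paradox) halt?
#  - If would_halt(paradox, paradox) is True, then paradox(paradox) loops forever.
#  - If it is False, then paradox(paradox) halts immediately.
# Either answer is wrong, so no total, always-correct would_halt can exist.
```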
Yeah, bold of them to assume they’d know how to simulate a universe when we still don’t understand it
Also, very strange of them to talk about algorithmic rules as the world is currently being flooded by AI… Hell Google just released one that creates a simulation
very strange of them to talk about algorithmic rules as the world is currently being flooded by AI
How so?
Hell Google just released one that creates a simulation
How does that contradict the findings in that paper?
AI is not algorithmic, it does not follow human logic. It’s chaos distilled through math
The process of training AI is algorithmic, it’s run and modified algorithmically, but humans make AI in the way we make a garden. We arrange the pieces and create feedback mechanisms, then you let it rip
And this is one of the darker simulation theories… That we live in the thoughts of a super intelligent AI thinking through a problem
The Google project doesn’t really prove anything, it’s just ironic timing for the paper to come out shortly after
AI is algorithmic and deterministic, because it runs on normal processors. Step by step like any other program. It only seems random, because it’s complex.
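This is easy to check yourself: fix the seed and even a “chaotic-looking” model gives bit-identical outputs, because it is just a program running step by step (a made-up two-layer toy net):

```python
import random

def tiny_net(seed, x):
    """A made-up 2-layer 'neural net' with randomly initialized weights."""
    rng = random.Random(seed)                     # all 'randomness' is seeded
    w1 = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
    w2 = [rng.uniform(-1, 1) for _ in range(4)]
    hidden = [max(0.0, sum(wi * xi for wi, xi in zip(row, x))) for row in w1]  # ReLU
    return sum(wi * hi for wi, hi in zip(w2, hidden))

x = [0.2, -0.7, 1.3]
print(tiny_net(seed=123, x=x))
print(tiny_net(seed=123, x=x))   # identical output: step-by-step and deterministic
```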
This is the third time I have seen this story come up from three different science journalism websites recently.
Here is the actual published proof.
It seems a lot of commenters on these threads have a lot of skepticism about the authors’ claims, as we should with such a bold claim. Are there any mathematicians or logicians here who can actually unpack the proof with scrutiny and explain it to me in lay terms?
Not saying this is necessarily a problem, but the main author of the paper is also an executive manager of the journal that published it. You can find that information by clicking on “editorial board” on the journal’s webpage. Now, I assume he was not actually involved in editorial decisions about his own article, because that would be a conflict of interest and they haven’t declared any. It’s not a secret and it’s easy to find on the webpage, but I think they could have made this fact a bit more prominent in the paper itself. Let’s wait and see how the larger scientific community reacts to this paper.
Thanks, I think that is a useful observation. I agree that I wouldn’t necessarily say it is a problem for the validity of the proof itself, but I do like the extra scrutiny.
Meant to reply to you but ended up replying to the main post. I’m only an amateur mathematician but maybe you’ll find my take useful
Thank you!
In our universe, maybe. If the parent universe differed in some way from the simulated universe, it’d be different
The odds of us being the final node in an infinitely nesting series of simulations are as vanishingly small as us being the starting node. If our universe can be shown to be incapable of simulating another universe which could simulate a universe, then the whole induction chain collapses.
We are already simulating universes in games. And some games can simulate other computational things, like an early Intel processor built in Minecraft, among numerous other examples. We are not the final node, but are we the starting one?
deleted by creator
Sweet. It worked this time.
I hate having to reboot the simulation.
Well that sucks. I was hoping it was a simulation and at any time our masters would patch the current version and make this reality less shitty.
“Sorry, it turns out there was an integer overflow causing global aggressiveness levels to flip and increase by a ton. We had a safeguard so it wouldn’t happen overnight, but I left it running since 1970, and kinda forgot about it. My bad, toodles.”
Nice try. Can’t fool me.
What if the simulation is programmed for this possibility?