> Maybe there’s some hidden line of reasoning we haven’t yet discovered that shows that this universe isn’t a simulation!
I find it difficult to imagine how such an argument could even be constructed. “Our universe isn’t a simulation because it has property X” doesn’t explain why the simulator could not simulate X. The usual argument is “because of quantum stuff, the simulation would require insane amounts of computing power”, which is true, but we have no idea what the simulating universe looks like, or what kind of physics it has… maybe what’s an insane amount for us is peanuts for them.
But maybe there is some argument why computing power in principle (for some mathematical reason) cannot exceed a certain value, ever. And that value may turn out to be insufficient to simulate our universe. And we can somehow make sure that our entire universe is simulated at sufficient resolution (not something like: the Earth or perhaps the entire Solar system is simulated in full quantum physics, but everything else is just a credible approximation). Etc. Well, if such a thing happens, then I would accept the argument.
> This sounds like the kind of thing someone might say who is already relatively confident they won’t suffer eternal damnation. Imagine believing with probability at least 1/1000 that, if you act incorrectly during your life, then...
Yeah, I just… stopped worrying about these kinds of things. (In my case, “these kinds of things” refers e.g. to very unlikely Everett branches, which I still consider more likely than gods.) You just can’t win this game. There are a million possible horror scenarios, each of them extremely unlikely, but each of them extremely horrifying, so you would just spend all your life thinking about them; and there is often nothing you can do about them. In some cases you would be required to do contradictory things (you spend your entire life trying to appease the bloodthirsty Jehovah, but it turns out the true master of the universe is the goddess Kali, and she is very displeased with your Christianity...), or it could be some god you don’t even know because it is a god of the Aztecs, or some future god that will only be revealed to humanity in the year 3000. Maybe humans are just a precursor to an intelligent species that will exist a million years in the future, and from their god’s perspective humans are even less relevant than monkeys are for Christianity. Maybe we are just meant to be food for the space locusts from the Andromeda galaxy. Maybe our entire universe is a simulation on a computer in some alien universe with insane computing power, but they don’t care about intelligent beings or life in general; they just use the flashing galaxies as a screen saver when they are bored. If you put things into perspective, assigning probability 1/1000 to any specific religion is way too much; all kinds of religions, existing and fictional, put together don’t deserve that much.
And by the way, torturing people forever, because they did not believe in your illogical incoherent statements unsupported by evidence, that is 100% compatible with being an omnipotent, omniscient, and omnibenevolent god, right? Yet another theological mystery...
On the other hand, if you assume an evil god, then… maybe the holy texts and promises of heaven are just a sadistic way he is toying with us, and then he will torture all of us forever regardless.
So… you can’t really win this game. Better to focus on things where you actually can gather evidence, and improve your actual outcomes in life.
Psychologically, if you can’t get rid of the idea of the supernatural, maybe it would be better to believe in an actually good god. Someone who is at least as reasonable and good as any of us, which should not be an impossibly high standard. Such a god certainly wouldn’t spend an eternity torturing random people for “crimes” such as being generally decent people but believing in the wrong religion or no religion at all, or having extramarital sex, etc. (Some theologians would say that this is actually their god. I don’t think so, but whatever.) I don’t really believe in such a god either, honestly, but it is a good fallback plan when you can’t get rid of the idea of gods completely.
> Let’s say general relativity is being compared against Theory T, and the programming language is Python. Doesn’t it make a huge difference whether you’re allowed to “pip install general-relativity” before you begin?
That would be cheating, obviously. Unless by the length of code you also mean the length of all used libraries, in which case it is okay. It is assumed that the original programming language does not favor any specific theory, but just provides very basic capabilities for expressing things like “2+2” or “if A then B else C”. (Abstracting away from details such as how many “+” operations are worth one “if”.)
> If I’m allowed to invoke my sense of reasonableness to choose a good programming language to generate my priors, why don’t I instead just invoke my sense of reasonableness to choose good priors?
Yeah, okay. The point of all this is: how can we compare the “complexity” of two things where neither is a strict subset of the other? The important part is that “fewer words” does not necessarily mean “smaller complexity”, because that allows obvious cheating (invent a new word that means exactly what you are arguing for, and then insist that your theory is really simple because it can be described by one word; the same trick as importing a library). But even if it is not your intention to cheat, your theory can still benefit from some concepts having shorter words for historical reasons, or even because words related to humans (or life on Earth in general) are already shorter, so you should account for the complexity that is already baked into them. Furthermore, it should be obvious to you that “1000 different laws of physics” is more complex than “1 law of physics, applied in the same way to 1000 particles”. If all this is obvious to you, then yes, the analogy to a programming language does not add any extra value.
But historical evidence shows that humans are quite bad at this. They will insist that stars are just shining dots in the sky instead of distant solar systems, because “one solar system + thousands of shining dots” seems to them less complex than “thousands of solar systems”. They will insist that the Milky Way or Andromeda cannot be composed of stars, because “a thousand stars + one Milky Way + one Andromeda” seems to them less complex than “millions of stars”. More recently (and more controversially), they will insist that “quantum physics + collapse + classical physics” is less complex than “quantum physics all the way up”. The programming analogy helps to express this in a simple way: “Complexity is about the size of the code, not about how large the values stored in the variables are.”
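To make the slogan concrete, here is a minimal Python sketch (my own toy example, with made-up masses and positions, not anything from the discussion itself): one gravity-like rule applied to a thousand particles. The code stays tiny no matter how many particles there are; only the data grows.

```python
# One law of physics, reused for every particle pair: the program's
# complexity does not grow with the number of particles.

G = 6.674e-11  # gravitational constant

def gravity(m1, m2, r):
    """The single rule; adding more particles adds data, not code."""
    return G * m1 * m2 / (r * r)

# A thousand particles as (mass, position): lots of values, still one rule.
particles = [(1.0 + i, float(i)) for i in range(1000)]

# Forces between neighboring particles, all computed by the same law.
forces = [
    gravity(particles[i][0], particles[i + 1][0],
            particles[i + 1][1] - particles[i][1])
    for i in range(len(particles) - 1)
]
```

A hypothesis with “1000 different laws” would instead need a thousand separate `gravity`-like functions, and its source code, i.e. its complexity, would balloon accordingly.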
> Can I ask which related [to Kolmogorov complexity] concepts you mean?
Compression (lossless), as in “zip” files. Specifically, the fact that an average random file cannot be compressed, no matter how smart the method used. For any given value N, the number of all files of size N is larger than the number of all files of size smaller than N. So whatever your compression method is, if it is lossless, it needs to be injective (no two different files may compress to the same output), so there must be at least one file of size N that it will not compress into a smaller file. Even worse, for each file of size N compressed to a smaller size, there must be a file of size smaller than N that gets “compressed” to a file of size N or more.
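The counting argument is easy to check mechanically. A small Python sketch (toy sizes, treating files as bitstrings):

```python
# Pigeonhole check: there are 2**n files of exactly n bits, but only
# 2**0 + 2**1 + ... + 2**(n-1) = 2**n - 1 files of fewer than n bits,
# so no lossless compressor can shrink every n-bit file.

def files_of_size(n):
    return 2 ** n

def files_smaller_than(n):
    return sum(2 ** k for k in range(n))  # equals 2**n - 1

for n in range(1, 21):
    assert files_of_size(n) > files_smaller_than(n)

print(files_of_size(10), files_smaller_than(10))  # 1024 1023
```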
So how does compression work in the real world? (Because experience shows that it does.) It ultimately exploits the fact that most of the files we want to compress are not random, so it is designed to compress non-random files into smaller ones, and random files (containing only noise) into slightly larger ones. You can try it at home: generate a one-megabyte file full of randomly generated bits, then try all the compression programs you have installed, and see what happens.
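The home experiment fits in a few lines of Python, using the standard library’s `zlib` in place of an installed archiver (exact sizes will vary by run and compression level):

```python
import os
import zlib

# Random bytes don't shrink; repetitive bytes shrink dramatically.
random_data = os.urandom(1_000_000)           # ~1 MB of pure noise
structured_data = b"hello world\n" * 80_000   # ~1 MB of repetition

compressed_random = zlib.compress(random_data, level=9)
compressed_structured = zlib.compress(structured_data, level=9)

print(len(compressed_random))      # typically slightly *above* 1,000,000
print(len(compressed_structured))  # a tiny fraction of the original size
```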
Now, each specific compression algorithm recognizes and exploits some kinds of regularity, and is blind to others. This is why people sometimes invent better compression algorithms that exploit more regularities. The question is: what is the asymptote of this progress? If you tried to invent the best compression algorithm ever, one that could exploit literally all kinds of non-randomness, what would it look like?
(You may want to think about this for a moment, before reading more.)
The answer is that the hypothetical best compression algorithm ever would transform each file into the shortest possible program that generates this file. There is only one problem with that: finding the shortest possible program for every input is algorithmically impossible; this is related to the halting problem. Regardless, if a file were compressed by this hypothetical ultimate compression algorithm, the size of the compressed file would be the Kolmogorov complexity of the original file.
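For intuition, here is a deliberately crude Python sketch of such an “ultimate compressor” over a toy space of programs: Python expressions built from a five-symbol alphabet. The hard length cap is doing the work that the halting problem makes impossible in general, and the alphabet is cherry-picked so the search finishes instantly; none of this is a real Kolmogorov-complexity computation.

```python
import itertools

# Brute-force search for the shortest "program" (a Python expression over a
# tiny alphabet) that evaluates to the target string. The max_len cap is a
# crude stand-in for undecidability: a real search over all programs could
# never safely terminate.

ALPHABET = "'ab*3"  # cherry-picked so the toy search stays fast

def toy_shortest_program(target, max_len=8):
    for length in range(1, max_len + 1):
        for symbols in itertools.product(ALPHABET, repeat=length):
            source = "".join(symbols)
            try:
                if eval(source) == target:  # eval is fine for this toy only
                    return source
            except Exception:
                continue  # most symbol strings aren't valid expressions
    return None

shortest = toy_shortest_program("ababab")
print(shortest)  # a 6-character program such as 'ab'*3,
                 # shorter than the 8-character literal 'ababab'
```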
> But then my concern just transforms into “what if there’s a powerful entity living in this universe (rather than outside of it) who will punish me if I do X, etc”.
Then we are no longer talking about gods in the modern sense, but about powerful aliens.
> Yeah, I just… stopped worrying about these kinds of things. (In my case, “these kinds of things” refers e.g. to very unlikely Everett branches, which I still consider more likely than gods.) You just can’t win this game. There are a million possible horror scenarios, each of them extremely unlikely, but each of them extremely horrifying, so you would just spend all your life thinking about them; [...]
I see. In that case, I think we’re reacting differently to our situations due to being in different epistemic states. The uncertainty involved in Everett branches is much less Knightian—you can often say things like “if I drive to the supermarket today, then approximately 0.001% of my future Everett branches will die in a car crash, and I’ll just eat that cost; I need groceries!”. My state of uncertainty is that I’ve barely put five minutes of thought into the question “I wonder if there are any tremendously important things I should be doing right now, and particularly if any of the things might have infinite importance due to my future being infinitely long.”
> And by the way, torturing people forever, because they did not believe in your illogical incoherent statements unsupported by evidence, that is 100% compatible with being an omnipotent, omniscient, and omnibenevolent god, right? Yet another theological mystery...
Well, that’s another reference to “popular” theism. Popular theism is a subset of theism in general, which itself is a subset of “worlds in which there’s something I should be doing that has infinite importance”.
> On the other hand, if you assume an evil god, then… maybe the holy texts and promises of heaven are just a sadistic way he is toying with us, and then he will torture all of us forever regardless.
Yikes!! I wish LessWrong had emojis so I could react to this possibility properly :O
> So… you can’t really win this game. Better to focus on things where you actually can gather evidence, and improve your actual outcomes in life.
This advice makes sense, though given the state of uncertainty described above, I would say I’m already on it.
> Psychologically, if you can’t get rid of the idea of the supernatural, maybe it would be better to believe in an actually good god. [...]
This is a good fallback plan for the contingency in which I can’t figure out the truth and then subsequently fail to acknowledge my ignorance. Fingers crossed that I can at least prevent the latter!
> [...] your theory can still benefit from some concepts having shorter words for historical reasons [...]
Well, I would have said that an exactly analogous problem is present in normal Kolmogorov Complexity, but...
> But historical evidence shows that humans are quite bad at this.
...but this, to me, explains the mystery. Being told to think in terms of computer programs generating different priors (or more accurately, computer programs generating different universes that entail different sets of perfect priors) really does influence my sense of what constitutes a “reasonable” set of priors.
I would still hesitate to call it a “formalism”, though IIRC you haven’t used that word. In my re-listen of the sequences, I’ve just gotten to the part where Eliezer uses that word. Well, I guess I’ll take it up with somebody who calls it that.
By the way, it’s just popped into my head that I might benefit from doing an adversarial collaboration with somebody about Occam’s razor. I’m nowhere near ready to commit to anything, but just as an offhand question, does that sound like the sort of thing you might be interested in?
> [...] The answer is that the hypothetical best compression algorithm ever would transform each file into the shortest possible program that generates this file.
Insightful comments! I see the connection: really, every compression of a file is a compression into the shortest program that will output that file, where the programming language is the decompression algorithm and the search algorithm that finds the shortest program isn’t guaranteed to be perfect. So the best compression algorithm ever would simply be one with a really, really apt decompression routine (one that captures very well the nuanced nonrandomness found in files humans care about) and an oracle for computing shortest programs (rather than a decent but imperfect search algorithm).
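That framing can be made concrete with `zlib` standing in for the “programming language” (a toy sketch; zlib’s search is of course the imperfect kind, not the oracle):

```python
import zlib

# A compressed file viewed as a "program": the decompression routine is the
# interpreter, and the compressor is an (imperfect) search for a short program.
original = b"the quick brown fox " * 500

program = zlib.compress(original, level=9)  # the short "program" zlib found
interpreter = zlib.decompress               # the fixed "language"

assert interpreter(program) == original     # running the program yields the file
print(len(program), len(original))          # the program is far shorter
```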
> But then my concern just transforms into “what if there’s a powerful entity living in this universe (rather than outside of it) who will punish me if I do X, etc”.
> Then we are no longer talking about gods in the modern sense, but about powerful aliens.
Well, if the “inside/outside the universe” distinction is going to mean “is/isn’t causally connected to the universe at all” and a god is required to be outside the universe, then sure. But I think if I discovered that the universe was a simulation and there was a being constantly watching it and supplying a fresh bit of input every hundred Planck intervals in such a way that prayers were occasionally answered, I would say that being is closer to a god than an alien.
But in any case, the distinction isn’t too relevant. If I found out that there was a vessel with intelligent life headed for Earth right now, I’d be just as concerned about that life (actual aliens) as I would be about god-like creatures that should debatably also be called aliens.
> By the way, it’s just popped into my head that I might benefit from doing an adversarial collaboration with somebody about Occam’s razor. I’m nowhere near ready to commit to anything, but just as an offhand question, does that sound like the sort of thing you might be interested in?
Definitely not interested. My understanding of these things is kinda intuitive (with intuition based on decent knowledge of math and computer science, but still), so I believe that “I’ll know it when I see it” (give me two options, and I’ll probably tell you whether one of them seems “simpler” than the other), but I wouldn’t try to put it into exact words.
Kk! Thanks for the discussion :)