I wouldn’t call the dead chieftain a god—that would just be a word game.
Some people in history did, specifically the ancient Romans. But we don’t anymore. I’m just making that explicit.
You say a god-like being created by evolution cannot be a creator of the universe. But that’s only true if you constrain that particular instance of evolution to have occurred in *this* universe. Maybe this universe is a simulation designed by a powerful “alien” in another universe, who itself came about from an evolutionary process in its own universe.
And this is similar to the chieftain thing. You can have a religion that defines “god” as “an alien from another universe”, but many religions insist that god is not created but eternal.
The real thing I’m worried about isn’t whether there exists an omniscient-omnipotent-benevolent creature, but rather whether there exists *some* very powerful creature who I might need to understand to avoid getting horrible outcomes.
Yes, this is a practical approach. Well, you cannot disprove such a thing, because it is logically possible. (Obviously, “possible” does not automatically imply “it happened”.) But unless you assume it is “simulations all the way up”, there must be a universe that is not created by an external alien lifeform. Therefore, it is also logically possible that our universe is like that.
There is no reason to assume that the existing religions have something in common with the simulating alien. When I play Civilization, the “people” in my simulation have a religion, but it doesn’t mean that I believe in it, or even approve of it, or that I am somehow going to reward them for it.
It’s just a cosmic horror that you need to learn to live with. There are more.
the only definition I’ve ever found for that term is “the Kolmogorov Complexity of X is the length of the shortest computer program that would output X”. But this definition is itself in need of formalization. Which programming language?
Any programming language; for large enough values it doesn’t matter. If you believe that e.g. Python is much better in this regard than Java, then for sufficiently complicated things the most efficient way to implement them in Java is to implement a Python emulator (a constant-sized piece of code) and then implement the algorithm in Python. So if you choose the wrong language, you pay at most a constant-sized penalty, which is usually irrelevant, because these concepts are usually applied to debating what happens in general as the data grow.
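In textbook notation (my phrasing, as I remember it), this is the invariance theorem: for any two universal languages \(U\) and \(V\),

\[
K_U(x) \le K_V(x) + c_{U,V} \quad \text{for all } x,
\]

where \(K_U(x)\) is the length of the shortest \(U\)-program that outputs \(x\), and the constant \(c_{U,V}\) is roughly the size of a \(V\)-interpreter written in \(U\), independent of \(x\).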
But I agree that if you talk about small data, it is underspecified. I don’t really know what it could mean to “have a universe defined by the following three bits: 0, 0, 1”, and maybe no one has a meaningful answer to this. But there are cases where you can have an intuition that for any reasonable definition of a programming language, X should be simpler than Y.
I’m not quite sure how to arrive at the conclusion that Kolmogorov-ly simpler things are more likely to be encountered.
Just an intuition pump: Imagine that there is a multiverse A containing all universes that can be run by programs having exactly 100 lines of code; and there is a multiverse B containing all universes that can be run by programs having exactly 105 lines of code. The universes from A are more likely, because they also appear in B.
For each program A1 describing a universe from A, you have a set of programs describing exactly the same universe in B, simply by adding an “exit” instruction on line 101, and arbitrary instructions on lines 102-105. If we take a program B1 describing a universe from B in such a way that each of the 105 lines is used meaningfully, then… in multiverse A we have one A1 vs zero B1, and in multiverse B we have many A1-equivalents vs one B1; in either case A1 wins.
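A toy version of this counting in Python (my illustration; here “programs” are bit strings, and free padding after a halt instruction plays the role of lines 102-105):

```python
# Toy version of the counting argument. Assumption (mine): "programs"
# are bit strings, and anything after an explicit halt instruction is
# ignored, so padding bits never change the simulated universe.
short_len = 100  # program length in multiverse A
long_len = 105   # program length in multiverse B

# Every halting A-program reappears as this many distinct B-programs:
copies_per_short_program = 2 ** (long_len - short_len)
print(copies_per_short_program)  # 32

# A program that genuinely needs all 105 bits appears exactly once,
# so a behavior with a shorter description occupies many more "slots".
```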
Do you know of any good learning resources for someone who has my confusions about these topics?
Part of this is in the standard computer science curriculum, and another part is a philosophical extrapolation. I do not have a recommendation for a specific textbook, I just vaguely remember things like this from university. (Actually, I probably never heard explicitly about Kolmogorov complexity at university, but I learned some related concepts that allowed me to recognize what it means and what it implies, when I found it on Less Wrong.)
Imagine a Life-like simulation ruleset where the state of the array of cells at time T+1 depended on (1) the state of the array at time T and (2) the on/off state of a light switch in my attic at time T. I could listen to the prayers of the simulated creatures and use the light switch to influence their universe such that they are answered.
Then I would say the Kolmogorov complexity of the “Life-only” universe is the complexity of the rules of Life; but the complexity of the “Life+inputs” universe is the complexity of the rules plus the complexity of whoever generates the inputs, and everything they depend on, so it is the complexity of the outside universe.
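To make the hypothetical concrete, a minimal sketch of such a rule (my code; the “miracle” update and the forced cell are arbitrary stand-ins for whatever the outside input actually controls):

```python
import numpy as np

def life_step(grid: np.ndarray, switch_on: bool) -> np.ndarray:
    """One step of Conway's Life, plus a dependence on one external bit."""
    # Count the 8 neighbors of every cell (the grid wraps at the edges).
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    new_grid = (neighbors == 3) | (grid & (neighbors == 2))  # standard rules
    if switch_on:               # the light switch in the attic
        new_grid[0, 0] = True   # arbitrary "miracle": force one cell alive
    return new_grid

grid = np.random.rand(32, 32) < 0.3    # random initial soup
grid = life_step(grid, switch_on=True)
```

The “Life-only” universe is everything above except the `if` branch; the “Life+inputs” universe also has to include whatever process decides `switch_on`.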
The Civ analogy makes sense, and I certainly wouldn’t stop at disproving all actually-practiced religions (though at the moment I don’t even feel equipped to do that).
Well, you cannot disprove such a thing, because it is logically possible. (Obviously, “possible” does not automatically imply “it happened”.) But unless you assume it is “simulations all the way up”, there must be a universe that is not created by an external alien lifeform. Therefore, it is also logically possible that our universe is like that.
Are you sure it’s logically possible in the strict sense? Maybe there’s some hidden line of reasoning we haven’t yet discovered that shows that this universe isn’t a simulation! (Of course, there’s a lot of question-untangling that has to happen first, like whether “is this a simulation?” is even an appropriate question to ask. See also: Greg Egan’s book Permutation City, a fascinating work of fiction that gives a unique take on what it means for a universe to be a simulation.)
It’s just a cosmic horror that you need to learn to live with. There are more.
This sounds like the kind of thing someone might say who is already relatively confident they won’t suffer eternal damnation. Imagine believing with probability at least 1/1000 that, if you act incorrectly during your life, then...
(WARNING: graphic imagery) …upon your bodily death, your consciousness will be embedded in an indestructible body and put in a 15K degree oven for 100 centuries. (END).
Would you still say it was just another cosmic horror you have to learn to live with? If you wouldn’t still say that, but you say it now because your probability estimate is less than 1/1000, how did you come to have that estimate?
Any programming language; for large enough values it doesn’t matter. If you believe that e.g. Python is much better in this regard than Java, then for sufficiently complicated things the most efficient way to implement them in Java is to implement a Python emulator (a constant-sized piece of code) and then implement the algorithm in Python. So if you choose the wrong language, you pay at most a constant-sized penalty, which is usually irrelevant, because these concepts are usually applied to debating what happens in general as the data grow.
The constant-sized penalty makes sense. But I don’t understand the claim that this concept is usually applied in the context of looking at how things grow. Occam’s razor is (apparently) formulated in terms of raw Kolmogorov complexity—the appropriate prior for an event X is 2^(-B), where B is the Kolmogorov Complexity of X.
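If it helps to state my understanding precisely: the formalization I have seen is the Solomonoff prior (my paraphrase, so treat it as approximate):

\[
m(x) = \sum_{p \,:\, U(p) = x} 2^{-|p|} \approx 2^{-K(x)},
\]

where \(U\) is a fixed universal machine, \(|p|\) is the length of program \(p\), and the sum is dominated by the shortest program, whose length is \(K(x)\).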
Let’s say general relativity is being compared against Theory T, and the programming language is Python. Doesn’t it make a huge difference whether you’re allowed to “pip install general-relativity” before you begin?
But there are cases where you can have an intuition that for any reasonable definition of a programming language, X should be simpler than Y.
I agree that these intuitions can exist, but if I’m going to use them, then I detest this process being called a formalization! If I’m allowed to invoke my sense of reasonableness to choose a good programming language to generate my priors, why don’t I instead just invoke my sense of reasonableness to choose good priors? Wisdom of the form “programming languages that generate priors that work tend to have characteristic X” can be transformed into wisdom of the form “priors that work tend to have characteristic X”.
Just an intuition pump: [...]
I have to admit that I kind of bounced off of this. The universe-counting argument makes sense, but it doesn’t seem especially intuitive to me that the whole of reality should consist of one universe for each computer program of a set length written in a set language.
(Actually, I probably never heard explicitly about Kolmogorov complexity at university, but I learned some related concepts that allowed me to recognize what it means and what it implies, when I found it on Less Wrong.)
Can I ask which related concepts you mean?
[...] so it is the complexity of the outside universe.
Oh, that makes sense. In that case, the argument would be that nothing outside MY universe could intervene in the lives of the simulated Life-creatures, since they really just live in the same universe as me. But then my concern just transforms into “what if there’s a powerful entity living in this universe (rather than outside of it) who will punish me if I do X, etc”.
Maybe there’s some hidden line of reasoning we haven’t yet discovered that shows that this universe isn’t a simulation!
I find it difficult to imagine how such an argument could even be constructed. “Our universe isn’t a simulation because it has property X” doesn’t explain why the simulator could not simulate X. The usual argument is “because quantum stuff, the simulation would require insane amounts of computing power”, which is true, but we have no idea what the simulating universe looks like, and what kind of physics it has… maybe what’s an insane amount for us is peanuts for them.
But maybe there is some in-principle argument (like, some mathematical reason) why computing power can never exceed a certain value. And the value may turn out to be insufficient to simulate our universe. And we can somehow make sure that our entire universe is simulated in sufficient resolution (not something like: the Earth or perhaps the entire Solar system is simulated in full quantum physics, but everything else is just a credible approximation). Etc. Well, if such a thing happens, then I would accept the argument.
This sounds like the kind of thing someone might say who is already relatively confident they won’t suffer eternal damnation. Imagine believing with probability at least 1/1000 that, if you act incorrectly during your life, then...
Yeah, I just… stopped worrying about these kinds of things. (In my case, “these kinds of things” refer e.g. to very unlikely Everett branches, which I still consider more likely than gods.) You just can’t win this game. There are a million possible horror scenarios, each of them extremely unlikely, but each of them extremely horrifying, so you would just spend all your life thinking about them; and there is often nothing you can do about them. In some cases you would be required to do contradictory things (you spend your entire life trying to appease the bloodthirsty Jehovah, but it turns out the true master of the universe is the goddess Kali, and she is very displeased with your Christianity...), or it could be some god you don’t even know, because it is a god of the Aztecs, or some future god that will only be revealed to humanity in the year 3000. Maybe humans are just a precursor to an intelligent species that will exist a million years in the future, and from their god’s perspective humans are even less relevant than monkeys are for Christianity. Maybe we are just meant to be food for the space locusts from the Andromeda galaxy. Maybe our entire universe is a simulation on a computer in some alien universe with insane computing power, but they don’t care about the intelligent beings or life in general; they just use the flashing galaxies as a screen saver when they are bored. If you put things into perspective, assigning probability 1/1000 to any specific religion is way too much; all kinds of religions, existing and fictional, put together don’t deserve that much.
And by the way, torturing people forever, because they did not believe in your illogical incoherent statements unsupported by evidence, that is 100% compatible with being an omnipotent, omniscient, and omnibenevolent god, right? Yet another theological mystery...
On the other hand, if you assume an evil god, then… maybe the holy texts and promises of heaven are just a sadistic way he is toying with us, and then he will torture all of us forever regardless.
So… you can’t really win this game. Better to focus on things where you actually can gather evidence, and improve your actual outcomes in life.
Psychologically, if you can’t get rid of the idea of the supernatural, maybe it would be better to believe in an actually good god. Someone who is at least as reasonable and good as any of us, which should not be an impossibly high standard. Such a god certainly wouldn’t spend an eternity torturing random people for “crimes” such as being generally decent people but believing in the wrong religion or no religion at all, or having extramarital sex, etc. (Some theologians would say that this is actually their god. I don’t think so, but whatever.) I don’t really believe in such a god either, honestly, but it is a good fallback plan when you can’t get rid of the idea of gods completely.
Let’s say general relativity is being compared against Theory T, and the programming language is Python. Doesn’t it make a huge difference whether you’re allowed to “pip install general-relativity” before you begin?
That would be cheating, obviously. Unless by the length of code you also mean the length of all the libraries used, in which case it is okay. It is assumed that the original programming language does not favor any specific theory, and just provides very basic capabilities for expressing things like “2+2” or “if A then B else C”. (Abstracting from details such as how many pluses are in one “if”.)
If I’m allowed to invoke my sense of reasonableness to choose a good programming language to generate my priors, why don’t I instead just invoke my sense of reasonableness to choose good priors?
Yeah, ok. The point of all this is: how can we compare the “complexity” of two things where neither is a strict subset of the other? The important part is that “fewer words” does not necessarily mean “smaller complexity”, because that allows obvious cheating (invent a new word that means exactly what you are arguing for, and then insist that your theory is really simple because it can be described by one word—the same trick as importing a library). But even if it is not your intention to cheat, your theory can still benefit from some concepts having shorter words for historical reasons, or even because words related to humans (or life on Earth in general) are already shorter, so you should account for the complexity that is already packed into them. Furthermore, it should be obvious to you that “1000 different laws of physics” is more complex than “1 law of physics, applied in the same way to 1000 particles”. If all of this is obvious to you, then yes, making the analogy to a programming language does not bring any extra value.
But historical evidence shows that humans are quite bad at this. They will insist that stars are just shining dots in the sky instead of distant solar systems, because “one solar system + thousands of shining dots” seems to them less complex than “thousands of solar systems”. They will insist that the Milky Way or Andromeda cannot be composed of stars, because “a thousand stars + one Milky Way + one Andromeda” seems to them less complex than “millions of stars”. More recently (and more controversially), they will insist that “quantum physics + collapse + classical physics” is less complex than “quantum physics all the way up”. The programming analogy helps to express this in a simple way: “Complexity is about the size of the code, not about how large values are stored in the variables.”
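A toy contrast in code (my example, just to illustrate the slogan):

```python
# "1000 different laws of physics": one hand-written rule per particle.
# (Imagine 1000 lines like these; the *code* grows with the particle count.)
#   velocity[0] += 9.8 * dt
#   velocity[1] += 9.7 * dt
#   ...

# "1 law of physics, applied to 1000 particles": the law is one short
# rule; the number of particles is just a large value in a variable.
def step(velocities, dt, g=9.8):
    return [v + g * dt for v in velocities]

velocities = [0.0] * 1000        # big data, cheap to describe
print(step(velocities, 0.1)[0])  # 0.98
```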
Can I ask which related [to Kolmogorov complexity] concepts you mean?
Compression (lossless), like in the “zip” files. Specifically the fact that an average random file cannot be compressed, no matter how smart the method used is. Like, for any given value N, the number of all files of size N is larger than the number of all files of size smaller than N. So, whatever your compression method is, if it is lossless, it needs to be injective (two different files cannot compress to the same output), so there must be at least one file of size N that it will not compress into a smaller file. Even worse, for each file of size N compressed to a smaller size, there must be a file of size smaller than N that gets “compressed” to a file of size N or more.
So how does compression work in the real world? (Because experience shows it does.) It ultimately exploits the fact that most of the files we want to compress are not random, so it is designed to compress non-random files into smaller ones, and random files (containing only noise) into slightly larger ones. Like, you can try it at home: generate a one-megabyte file full of randomly generated bits, then try all the compression programs you have installed, and see what happens.
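That experiment, as a minimal Python sketch (my code; exact sizes vary by run and by zlib version):

```python
import os
import zlib

random_data = os.urandom(1_000_000)       # one megabyte of pure noise
patterned_data = b"0123456789" * 100_000  # one megabyte of regularity

print(len(zlib.compress(random_data, 9)))     # slightly MORE than 1,000,000
print(len(zlib.compress(patterned_data, 9)))  # a few kilobytes
```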
Now, each specific compression algorithm recognizes and exploits some kinds of regularity, and is blind towards others. This is why people sometimes invent better compression algorithms that exploit more regularities. The question is: what is the asymptote of this progress? If you tried to invent the best compression algorithm ever, one that could exploit literally all kinds of non-randomness, what would it look like?
(You may want to think about this for a moment, before reading more.)
The answer is that the hypothetical best compression algorithm ever would transform each file into the shortest possible program that generates this file. There is only one problem with that: finding the shortest possible program for every input is algorithmically impossible—this is related to the halting problem. Regardless, if the file were compressed by this hypothetical ultimate compression algorithm, the size of the compressed file would be the Kolmogorov complexity of the original file.
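To see where the halting problem bites, here is a deliberately naive sketch (my toy setup: “programs” are Python expressions over a tiny alphabet; the real algorithm would have to run candidates that may never halt):

```python
import itertools

def shortest_program(target, alphabet="a'*10", max_len=6):
    """Brute-force the shortest expression that evaluates to `target`."""
    for length in range(1, max_len + 1):
        for chars in itertools.product(alphabet, repeat=length):
            source = "".join(chars)
            try:
                # In general a candidate program might loop forever; these
                # tiny expressions happen to halt, which is exactly the
                # luxury the general algorithm cannot count on.
                if eval(source, {"__builtins__": {}}, {}) == target:
                    return source
            except Exception:
                continue  # syntax errors, name errors, etc.
    return None

print(shortest_program("a" * 10))  # "'a'*10": 6 characters instead of 12
```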
But then my concern just transforms into “what if there’s a powerful entity living in this universe (rather than outside of it) who will punish me if I do X, etc”.
Then we are no longer talking about gods in the modern sense, but about powerful aliens.
Yeah, I just… stopped worrying about these kinds of things. (In my case, “these kinds of things” refer e.g. to very unlikely Everett branches, which I still consider more likely than gods.) You just can’t win this game. There are a million possible horror scenarios, each of them extremely unlikely, but each of them extremely horrifying, so you would just spend all your life thinking about them; [...]
I see. In that case, I think we’re reacting differently to our situations due to being in different epistemic states. The uncertainty involved in Everett branches is much less Knightian—you can often say things like “if I drive to the supermarket today, then approximately 0.001% of my future Everett branches will die in a car crash, and I’ll just eat that cost; I need groceries!”. My state of uncertainty is that I’ve barely put five minutes of thought into the question “I wonder if there are any tremendously important things I should be doing right now, and particularly if any of the things might have infinite importance due to my future being infinitely long.”
And by the way, torturing people forever, because they did not believe in your illogical incoherent statements unsupported by evidence, that is 100% compatible with being an omnipotent, omniscient, and omnibenevolent god, right? Yet another theological mystery...
Well, that’s another reference to “popular” theism. Popular theism is a subset of theism in general, which itself is a subset of “worlds in which there’s something I should be doing that has infinite importance”.
On the other hand, if you assume an evil god, then… maybe the holy texts and promises of heaven are just a sadistic way he is toying with us, and then he will torture all of us forever regardless.
Yikes!! I wish LessWrong had emojis so I could react to this possibility properly :O
So… you can’t really win this game. Better to focus on things where you actually can gather evidence, and improve your actual outcomes in life.
This advice makes sense, though given the state of uncertainty described above, I would say I’m already on it.
Psychologically, if you can’t get rid of the idea of the supernatural, maybe it would be better to believe in an actually good god. [...]
This is a good fallback plan for the contingency in which I can’t figure out the truth and then subsequently fail to acknowledge my ignorance. Fingers crossed that I can at least prevent the latter!
[...] your theory can still benefit from some concepts having shorter words for historical reasons [...]
Well, I would have said that an exactly analogous problem is present in normal Kolmogorov Complexity, but...
But historical evidence shows that humans are quite bad at this.
...but this, to me, explains the mystery. Being told to think in terms of computer programs generating different priors (or more accurately, computer programs generating different universes that entail different sets of perfect priors) really does influence my sense of what constitutes a “reasonable” set of priors.
I would still hesitate to call it a “formalism”, though IIRC you haven’t used that word. In my re-listen of the sequences, I’ve just gotten to the part where Eliezer uses that word. Well, I guess I’ll take it up with somebody who calls it that.
By the way, it’s just popped into my head that I might benefit from doing an adversarial collaboration with somebody about Occam’s razor. I’m nowhere near ready to commit to anything, but just as an offhand question, does that sound like the sort of thing you might be interested in?
[...] The answer is that the hypothetical best compression algorithm ever would transform each file into the shortest possible program that generates this file.
Insightful comments! I see the connection: really, every compression of a file is a compression into the shortest program that will output that file, where the programming language is the decompression algorithm, and the search algorithm that finds the shortest program isn’t guaranteed to be perfect. So the best compression algorithm ever would simply be one with a really, really apt decompression routine (one that captures very well the nuanced non-randomness found in files humans care about) and an oracle for computing shortest programs (rather than a decent but imperfect search algorithm).
> But then my concern just transforms into “what if there’s a powerful entity living in this universe (rather than outside of it) who will punish me if I do X, etc”.
Then we are no longer talking about gods in the modern sense, but about powerful aliens.
Well, if the “inside/outside the universe” distinction is going to mean “is/isn’t causally connected to the universe at all” and a god is required to be outside the universe, then sure. But I think if I discovered that the universe was a simulation and there was a being constantly watching it and supplying a fresh bit of input every hundred Planck intervals in such a way that prayers were occasionally answered, I would say that being is closer to a god than an alien.
But in any case, the distinction isn’t too relevant. If I found out that there was a vessel with intelligent life headed for Earth right now, I’d be just as concerned about that life (actual aliens) as I would be about god-like creatures that should debatably also be called aliens.
By the way, it’s just popped into my head that I might benefit from doing an adversarial collaboration with somebody about Occam’s razor. I’m nowhere near ready to commit to anything, but just as an offhand question, does that sound like the sort of thing you might be interested in?
Definitely not interested. My understanding of these things is kinda intuitive (with intuition based on decent knowledge of math and computer science, but still), so I believe that “I’ll know it when I see it” (give me two options, and I’ll probably tell you whether one of them seems “simpler” than the other), but I wouldn’t try to put it into exact words.
Kk! Thanks for the discussion :)