Thanks for the feedback! Here’s another one for ya. A relatively long time ago I used to be pretty concerned about Pascal’s wager, but then I devised some clever reasoning for why it all cancels out and I don’t need to think about it. I reasoned that one of three things must be true:
1. I don’t have an immortal soul. In this case, I might as well be a good person.
2. I have an immortal soul, and after my bodily death I will be assigned to one of a handful of infinite fates, depending on how good of a person I was. In this case it’s very important that I be a good person.
3. Same as above, but the decision process is something else. In this case I have no way of knowing how my infinite fate will be decided, so I might as well be a good person during my mortal life and hope for the best.
But then, post-LW, I realized that there are two issues with this:
It doesn’t make any sense to separate out case 2 from the enormous ocean of possibilities allowed for by case 3. Or rather, I can separate it, but then I need to probabilistically penalize it relative to case 3, and I also need to slightly shift the “expected judgment criterion” found in case 3 away from “being a good person is the way to get a good infinite fate”, and it all balances out.
More importantly, this argument flippantly supposes that I have no way of discerning what process, if any, will be used to assign me an infinite fate. An infinite fate, mind you. I ought to be putting in more thought than this even if I thought the afterlife only lasted an hour, let alone eternity.
So now I am back to being rather concerned about Pascal’s wager, or more generally, the possibility that I have an immortal soul and need to worry about where it eventually ends up.
From my first read-through of the sequences I remember that it claims to show that the idea of there being a god is somewhat nonsensical, but I didn’t quite catch it the first time around. So my first line of attack is to read through the sequences again, more carefully this time, and see if they really do give a valid reason to believe that.
From my first read-through of the sequences I remember that it claims to show that the idea of there being a god is somewhat nonsensical, but I didn’t quite catch it the first time around.
I think the first step is to ask yourself what do you even mean by saying “god”.
Because if you have a definition like “the spirit of our dead chieftain, who sometimes visits me in my dreams”, I have no problem with that. Like, I find it completely plausible that you see the image of the chieftain in your dreams; nothing unscientific about that. It’s just that the image is generated by your brain, so if you try to communicate with it, it can only give you advice that your brain generated, and it can only grant your wishes in the sense that you accomplish them yourself. But if you agree that this is exactly what you meant, then such a god is perfectly okay with me.
But the modern definition is more like: an intelligent being that exists outside of our universe, but can observe it and change it. Suspending disbelief for a moment: how exactly can a being be “intelligent”, whether inside our universe or not? Intelligence implies processing data. That would imply that the god has some… supernatural neurons, or some other architecture capable of processing data. So the god is a mechanism (in the sense that a human is a biological mechanism) composed of parts, although those parts do not necessarily have to be found in our physics. Still kinda plausible; maybe gods are made out of dark matter, who knows. But then, how did this improbably complicated mechanism come into existence? Humans were made by evolution; were gods too? But then again those gods are not the gods of religion; they are merely powerful aliens. But powerful aliens are neither creators of the universe, nor are they omniscient.
Then the theologians try to come up with some smart-sounding arguments, like “god is actually supremely simple”—therefore Occam’s razor does not disprove him, and no evolution was needed, because simple things are more likely a priori, so a supreme being is supremely likely. Or something like that. But this is nonsense, because “supremely simple” and “capable of processing information” are incompatible.
The last argument would be: okay, we do not have a coherent explanation of god, but you don’t have a coherent explanation of how the universe can exist without one. And a potential answer is that maybe existence is relative. Like, if you run a simulation of Conway’s Game of Life, it could possibly contain evolved intelligent life, and that life would ask “how was our universe created?” And it’s not like you created the universe by simulating it, because you are merely following the mathematical rules; so it’s more like the math created that universe and you are only observing it. If the beings in that mathematical universe pray to gods, there is no way for anyone outside to intervene (while simultaneously following the mathematical rules). So the universe inside the Game of Life is a perfectly godless universe, based on math. But does it exist? Well, the rules of math describe the intelligent beings inside that universe that live, think, and interact with their world. Maybe that’s all that is needed for existence; or rather, the question is what else could be needed? And then, maybe our world is also just an implication of some mathematical equations.
tl;dr—maybe math itself implies existence, and if you try to turn the concept of (intelligent, perceiving, acting) god into something coherent it is no longer the god of religion but merely a powerful alien
That would imply that the god has some… supernatural neurons, or some other architecture capable of processing data.
The supernatural isn’t supposed to be the natural done all over again. The typical theological claim is that God’s wisdom or whatever is an intrinsic quality, not something with moving parts.
Well, “wisdom as an intrinsic quality” is a mysterious answer. And what is “wisdom without moving parts”? A precomputed table of wise answers to all possible questions? Who precomputed it and how?
I agree that this is how theology usually answers it, but it is an answer that doesn’t make any sense when you look at it closer; it’s just some good-sounding words randomly glued together. And if you try to make it refer to something, even a hypothetical something, the whole explanation falls apart.
That those particular things can be understood in terms of the operations of their components
There’s an irreducibly basic level. It’s not turtles all the way down.
If it’s always the case that something that isn’t explicable in terms of its parts is mysterious, then the lowest level is mysterious. And if nothing is allowed to be mysterious, if you apply the argument against mysterious answers without prejudice, then reductionism is false. There isn’t a consistent set of principles here.
Continued ..
Naturalism is the claim that there is a bunch of fundamental properties that just are, at the bottom of the stack, and everything is built up from that. Supernaturalism is the claim that the intrinsic stuff is at the top of the stack, and everything else is derived from it top-down. That may be 100% false, but it is the actual claim.
There’s a thing called the principle of charity, where one party interprets the other’s statements so as to maximise their truth value. This only enhances communication if the truth is not basically in dispute... that’s the easy case. The hard case is when there is a basic dispute about what’s true. In that case, it’s not helpful to fix the other person’s claims by making them more reasonable from your point of view.
Anyway, that’s how we ended up with “God must have superneurons in his superbrain”.
Feels like in the top-down universe, science shouldn’t work at all. I mean, when you take a magnifying glass and look at the details, they are supposedly generated on the fly to fit the larger picture. Then you apply munchkinry to the details and invent an atomic bomb or quantum computer… which means… what exactly, from the top-down perspective?
Yeah, you can find an excuse, e.g. that some of those top-down principles are hidden like Easter eggs, waiting to be discovered later. That the Platonic idea of smartphones has been waiting for us since the creation of the universe, but was only revealed to the recent generation. Which would mean that the top-down universe has some reason to pretend to be bottom-up, at least in some aspects...
Okay, the same argument could be made that quantum physics pretends to be classical physics at larger scale, or relativity pretends to be Newtonian mechanics at low speeds… as if the scientists are trying to make up silly excuses for why their latest magic works but totally “doesn’t contradict” what the previous generations of scientists were telling us...
Well, at least it seems like the bottom-up approach is fruitful, whether the true reason is that the universe is bottom-up, or that the universe is top-down in a way that tries really hard to pretend that it is actually bottom-up (either in the sense that when it generates the—inherently meaningless—details for us, it makes sure that all consequences of those details are compatible with the preexisting Platonic ideas that govern the universe… or like a Dungeon Master who allows the players to invent all kinds of crazy stuff and throw the entire game off balance, because he values consistency above everything).
More importantly, in a universe where there is magic all the way up, what sense does it make to adopt the essentially half-assed approach, where you believe in the supernatural but also kinda use logic, except not too seriously… might as well throw the logic away completely, because in that kind of universe it is not going to do you much good anyway.
The basic claim of a top down universe is a short string that doesn’t contain much information. About the same amount as a basic claim of reductionism.
The top down claim doesn’t imply a universe of immutable physical law, but it doesn’t contradict it either.
The same goes for the bottom-up claim. A universe of randomly moving high entropy gas is useless for science and technology, but compatible with reductionism.
But all this is rather beside the point. Even if supernaturalism is indefensible, you can’t refute it by changing it into something else.
I wouldn’t call the dead chieftain a god—that would just be a word game.
But then, how did this improbably complicated mechanism come into existence? Humans were made by evolution, were gods too? But then again those gods are not the gods of religion; they are merely powerful aliens. But powerful aliens are neither creators of the universe, nor are they omniscient.
Wait wait! You say a god-like being created by evolution cannot be a creator of the universe. But that’s only true if you constrain that particular instance of evolution to have occurred in *this* universe. Maybe this universe is a simulation designed by a powerful “alien” in another universe, who itself came about from an evolutionary process in its own universe.
It might be “omniscient” in the sense that it can think 1000x as fast as us and has 1000x as much working memory and is familiar with thinking habits that are 1000x as good as ours, but that’s a moot point. The real thing I’m worried about isn’t whether there exists an omniscient-omnipotent-benevolent creature, but rather whether there exists *some* very powerful creature who I might need to understand to avoid getting horrible outcomes.
I haven’t yet put much thought into this, since I only recently came to believe that this topic merits serious thought, but the existence of such a powerful creature seems like a plausible avenue to the conclusion “I have an infinite fate and it depends on me doing/avoiding X”.
[...] Occam’s razor [...]
This is another area where my understanding could stand to be improved (and where I expect it will be during my next read-through of the sequences). I’m not sure exactly what kind of simplicity Occam’s razor uses. Apparently it can be formalized as Kolmogorov complexity, but the only definition I’ve ever found for that term is “the Kolmogorov Complexity of X is the length of the shortest computer program that would output X”. But this definition is itself in need of formalization. Which programming language? And what if X is something other than a stream of bits, such as a dandelion? And even once that’s answered, I’m not quite sure how to arrive at the conclusion that Kolmogorov-ly simpler things are more likely to be encountered.
(All that being said, I’d like to note that I’m keeping in mind that just because I don’t understand these things doesn’t mean there’s nothing to them. Do you know of any good learning resources for someone who has my confusions about these topics?)
And it’s not like you created the universe by simulating it, because you are merely following the mathematical rules; so it’s more like the math created that universe and you are only observing it.
If the beings in that mathematical universe will pray to gods, there is no way for anyone outside to intervene (while simultaneously following the mathematical rules). So the universe inside the Game of Life is a perfectly godless universe, based on math.
That much makes sense, but I think it excludes a possibly important class of universe that is based on math but also depends on a constant stream of data from an outside source. Imagine a Life-like simulation ruleset where the state of the array of cells at time T+1 depended on (1) the state of the array at time T and (2) the on/off state of a light switch in my attic at time T. I could listen to the prayers of the simulated creatures and use the light switch to influence their universe such that they are answered.
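For concreteness, here’s a toy sketch of the kind of ruleset I mean. It is not real Game of Life (a 1D automaton with an XOR rule, chosen purely for brevity), and the function name and rule are my own illustrative assumptions; the point is only that the outside bit is part of the laws, so an outsider flipping it genuinely changes the simulated history while every step still follows the rules.

```python
# Toy sketch (hypothetical ruleset, NOT actual Game of Life rules):
# a 1D cellular automaton whose update depends on one external input bit.

def step(cells, outside_input):
    """Next state from current state plus one bit from outside the simulation.

    Each cell XORs its neighborhood, then XORs in the external bit -- the
    universe is fully lawful, yet the "light switch" changes its evolution.
    """
    n = len(cells)
    return [
        (cells[(i - 1) % n] ^ cells[i] ^ cells[(i + 1) % n]) ^ outside_input
        for i in range(n)
    ]

world = [0, 1, 1, 0, 1, 0, 0, 1]
hands_off = step(world, 0)    # "math-only" evolution
intervened = step(world, 1)   # same rules, but the switch was flipped
print(hands_off)   # [0, 0, 0, 0, 1, 1, 1, 1]
print(intervened)  # [1, 1, 1, 1, 0, 0, 0, 0]
```

With this rule the two histories diverge immediately, even though both follow the same mathematical law at every step.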
I wouldn’t call the dead chieftain a god—that would just be a word game.
Some people in history did; specifically, the ancient Romans. But now we don’t. Just making it explicit.
You say a god-like being created by evolution cannot be a creator of the universe. But that’s only true if you constrain that particular instance of evolution to have occurred in *this* universe. Maybe this universe is a simulation designed by a powerful “alien” in another universe, who itself came about from an evolutionary process in its own universe.
And this is similar to the chieftain thing. You can have a religion that defines “god” as “an alien from another universe”, but many religions insist that god is not created but eternal.
The real thing I’m worried about isn’t whether there exists an omniscient-omnipotent-benevolent creature, but rather whether there exists *some* very powerful creature who I might need to understand to avoid getting horrible outcomes.
Yes, this is a practical approach. Well, you cannot disprove such a thing, because it is logically possible. (Obviously, “possible” does not automatically imply “it happened”.) But unless you assume it is “simulations all the way up”, there must be a universe that is not created by an external alien lifeform. Therefore, it is also logically possible that our universe is like that.
There is no reason to assume that the existing religions have anything in common with the simulating alien. When I play Civilization, the “people” in my simulation have a religion, but it doesn’t mean that I believe in it, or even approve of it, or that I am somehow going to reward them for it.
It’s just a cosmic horror that you need to learn to live with. There are more.
the only definition I’ve ever found for that term is “the Kolmogorov Complexity of X is the length of the shortest computer program that would output X”. But this definition is itself in need of formalization. Which programming language?
Any programming language; for large enough values it doesn’t matter. If you believe that e.g. Python is much better in this regard than Java, then for sufficiently complicated things the most efficient way to implement them in Java is to implement a Python emulator (a constant-sized piece of code) and implement the algorithm in Python. So if you chose the wrong language, you pay at most a constant-sized penalty. Which is usually irrelevant, because these things are usually applied to debates about what happens in general as the data grow.
But I agree that if you talk about small data, it is underspecified. I don’t really know what it could mean to “have a universe defined by the following three bits: 0, 0, 1”, and maybe no one has a meaningful answer to this. But there are cases where you can have an intuition that for any reasonable definition of a programming language, X should be simpler than Y.
I’m not quite sure how to arrive at the conclusion that Kolmogorov-ly simpler things are more likely to be encountered.
Just an intuition pump: Imagine that there is a multiverse A containing all universes that can be run by programs having exactly 100 lines of code; and there is a multiverse B containing all universes that can be run by programs having exactly 105 lines of code. The universes from A are more likely, because they also appear in B.
For each program A1 describing a universe from A, you have a set of programs describing exactly the same universe in B, simply by adding an “exit” instruction on line 101, and arbitrary instructions on lines 102-105. If we take a program B1 describing a universe from B in such way that each of the 105 lines is used meaningfully, then… in multiverse A we have one A1 vs zero B1, and in multiverse B we have many A1-equivalents vs one B1; in either case A1 wins.
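To make the counting concrete with toy numbers (a sketch: bitstrings stand in for programs, 3 bits vs 5 bits instead of 100 vs 105 lines):

```python
from itertools import product

# Toy version of the counting argument: "programs" are bitstrings.
# A short program reappears in the longer multiverse as every possible
# padding of itself (think: "exit" after line 3, junk on lines 4-5).
short_len, long_len = 3, 5

short_programs = [''.join(bits) for bits in product('01', repeat=short_len)]
long_programs = [''.join(bits) for bits in product('01', repeat=long_len)]

a1 = short_programs[0]  # one particular short program
paddings = [p for p in long_programs if p.startswith(a1)]

print(len(paddings))  # 2**(long_len - short_len) = 4 copies of A1's universe
```

A genuinely 5-bit program, by contrast, appears exactly once in the longer multiverse, which is why the short program's universe dominates the count.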
Do you know of any good learning resources for someone who has my confusions about these topics?
A part of this is in the standard computer science curriculum, and another part is a philosophical extrapolation. I do not have a recommendation for a specific textbook; I just vaguely remember things like this from university. (Actually, I probably never heard explicitly about Kolmogorov complexity at university, but I learned some related concepts that allowed me to recognize what it means and what it implies when I found it on Less Wrong.)
Imagine a Life-like simulation ruleset where the state of the array of cells at time T+1 depended on (1) the state of the array at time T and (2) the on/off state of a light switch in my attic at time T. I could listen to the prayers of the simulated creatures and use the light switch to influence their universe such that they are answered.
Then I would say the Kolmogorov complexity of the “Life-only” universe is the complexity of the rules of Life; but the complexity of the “Life+inputs” universe is the complexity of the rules plus the complexity of whoever generates the inputs, and everything they depend on, so it is the complexity of the outside universe.
The Civ analogy makes sense, and I certainly wouldn’t stop at disproving all actually-practiced religions (though at the moment I don’t even feel equipped to do that).
Well, you cannot disprove such a thing, because it is logically possible. (Obviously, “possible” does not automatically imply “it happened”.) But unless you assume it is “simulations all the way up”, there must be a universe that is not created by an external alien lifeform. Therefore, it is also logically possible that our universe is like that.
Are you sure it’s logically possible in the strict sense? Maybe there’s some hidden line of reasoning we haven’t yet discovered that shows that this universe isn’t a simulation! (Of course, there’s a lot of question-untangling that has to happen first, like whether “is this a simulation?” is even an appropriate question to ask. See also: Greg Egan’s book Permutation City, a fascinating work of fiction that gives a unique take on what it means for a universe to be a simulation.)
It’s just a cosmic horror that you need to learn to live with. There are more.
This sounds like the kind of thing someone might say who is already relatively confident they won’t suffer eternal damnation. Imagine believing with probability at least 1/1000 that, if you act incorrectly during your life, then...
(WARNING: graphic imagery) …upon your bodily death, your consciousness will be embedded in an indestructible body and put in a 15K degree oven for 100 centuries. (END).
Would you still say it was just another cosmic horror you have to learn to live with? If you wouldn’t still say that, but you say it now because your probability estimate is less than 1/1000, how did you come to have that estimate?
Any programming language; for large enough values it doesn’t matter. If you believe that e.g. Python is much better in this regard than Java, then for sufficiently complicated things the most efficient way to implement them in Java is to implement a Python emulator (a constant-sized piece of code) and implement the algorithm in Python. So if you chose the wrong language, you pay at most a constant-sized penalty. Which is usually irrelevant, because these things are usually applied to debates about what happens in general as the data grow.
The constant-sized penalty makes sense. But I don’t understand the claim that this concept is usually applied in the context of looking at how things grow. Occam’s razor is (apparently) formulated in terms of raw Kolmogorov complexity—the appropriate prior for an event X is 2^(-B), where B is the Kolmogorov Complexity of X.
Let’s say general relativity is being compared against Theory T, and the programming language is Python. Doesn’t it make a huge difference whether you’re allowed to “pip install general-relativity” before you begin?
But there are cases where you can have an intuition that for any reasonable definition of a programming language, X should be simpler than Y.
I agree that these intuitions can exist, but if I’m going to use them, then I detest this process being called a formalization! If I’m allowed to invoke my sense of reasonableness to choose a good programming language to generate my priors, why don’t I instead just invoke my sense of reasonableness to choose good priors? Wisdom of the form “programming languages that generate priors that work tend to have characteristic X” can be transformed into wisdom of the form “priors that work tend to have characteristic X”.
Just an intuition pump: [...]
I have to admit that I kind of bounced off of this. The universe-counting argument makes sense, but it doesn’t seem especially intuitive to me that the whole of reality should consist of one universe for each computer program of a set length written in a set language.
(Actually, I probably never heard explicitly about Kolmogorov complexity at university, but I learned some related concepts that allowed me to recognize what it means and what it implies, when I found it on Less Wrong.)
Can I ask which related concepts you mean?
[...] so it is the complexity of the outside universe.
Oh, that makes sense. In that case, the argument would be that nothing outside MY universe could intervene in the lives of the simulated Life-creatures, since they really just live in the same universe as me. But then my concern just transforms into “what if there’s a powerful entity living in this universe (rather than outside of it) who will punish me if I do X, etc”.
Maybe there’s some hidden line of reasoning we haven’t yet discovered that shows that this universe isn’t a simulation!
I find it difficult to imagine how such an argument could even be constructed. “Our universe isn’t a simulation because it has property X” doesn’t explain why the simulator could not simulate X. The usual argument is “because quantum stuff, the simulation would require insane amounts of computing power”, which is true, but we have no idea what the simulating universe looks like, and what kind of physics it has… maybe what’s an insane amount for us is peanuts for them.
But maybe there is some argument why computing power in principle (like, for some mathematical reason) cannot ever exceed a certain value. And the value may turn out to be insufficient to simulate our universe. And we can somehow make sure that our entire universe is simulated in sufficient resolution (not something like: the Earth or perhaps the entire Solar system is simulated in full quantum physics, but everything else is just a credible approximation). Etc. Well, if such a thing happens, then I would accept the argument.
This sounds like the kind of thing someone might say who is already relatively confident they won’t suffer eternal damnation. Imagine believing with probability at least 1/1000 that, if you act incorrectly during your life, then...
Yeah, I just… stopped worrying about these kinds of things. (In my case, “these kinds of things” refers e.g. to very unlikely Everett branches, which I still consider more likely than gods.) You just can’t win this game. There are a million possible horror scenarios, each of them extremely unlikely, but each of them extremely horrifying, so you would just spend your whole life thinking about them; and there is often nothing you can do about them. In some cases you would be required to do contradictory things (you spend your entire life trying to appease the bloodthirsty Jehovah, but it turns out the true master of the universe is the goddess Kali and she is very displeased with your Christianity...), or it could be some god you don’t even know because it is a god of the Aztecs, or some future god that will only be revealed to humanity in the year 3000. Maybe humans are just a precursor to an intelligent species that will exist a million years in the future, and from their god’s perspective humans are even less relevant than monkeys are for Christianity. Maybe we are just meant to be food for the space locusts from the Andromeda galaxy. Maybe our entire universe is a simulation on a computer in some alien universe with insane computing power, but they don’t care about the intelligent beings or life in general; they just use the flashing galaxies as a screen saver when they are bored. If you put things into perspective, assigning probability 1/1000 to any specific religion is way too much; all kinds of religions, existing and fictional put together, don’t deserve that much.
And by the way, torturing people forever, because they did not believe in your illogical incoherent statements unsupported by evidence, that is 100% compatible with being an omnipotent, omniscient, and omnibenevolent god, right? Yet another theological mystery...
On the other hand, if you assume an evil god, then… maybe the holy texts and promises of heaven are just a sadistic way he is toying with us, and then he will torture all of us forever regardless.
So… you can’t really win this game. Better to focus on things where you actually can gather evidence, and improve your actual outcomes in life.
Psychologically, if you can’t get rid of the idea of the supernatural, maybe it would be better to believe in an actually good god. Someone who is at least as reasonable and good as any of us, which should not be an impossibly high standard. Such a god certainly wouldn’t spend an eternity torturing random people for “crimes” such as being generally decent people but believing in a wrong religion or no religion at all, or having extramarital sex, etc. (Some theologians would say that this is actually their god. I don’t think so, but whatever.) I don’t really believe in such a god either, honestly, but it is a good fallback plan when you can’t get rid of the idea of gods completely.
Let’s say general relativity is being compared against Theory T, and the programming language is Python. Doesn’t it make a huge difference whether you’re allowed to “pip install general-relativity” before you begin?
That would be cheating, obviously. Unless by the length of the code you also mean the length of all the libraries used, in which case it is okay. It is assumed that the original programming language does not favor any specific theory, and just provides very basic capabilities for expressing things like “2+2” or “if A then B else C”. (Abstracting from details such as how many pluses are in one “if”.)
If I’m allowed to invoke my sense of reasonableness to choose a good programming language to generate my priors, why don’t I instead just invoke my sense of reasonableness to choose good priors?
Yeah, ok. The meaning of all this is: how can we compare the “complexity” of two things where neither is a strict subset of the other? The important part is that “fewer words” does not necessarily mean “smaller complexity”, because it allows obvious cheating (invent a new word that means exactly what you are arguing for, and then insist that your theory is really simple because it can be described by one word—the same trick as with importing a library), but even if it is not your intention to cheat, your theory can still benefit from some concepts having shorter words for historical reasons, or even because words related to humans (or life on Earth in general) are already shorter, so you should account for the complexity that is already included in them. Furthermore, it should be obvious to you that “1000 different laws of physics” is more complex than “1 law of physics, applied in the same way to 1000 particles”. If all this is obvious to you, then yes, making the analogy to a programming language does not bring any extra value.
But historical evidence shows that humans are quite bad at this. They will insist that stars are just shining dots in the sky instead of distant solar systems, because “one solar system + thousands of shining dots” seems to them less complex than “thousands of solar systems”. They will insist that Milky Way or Andromeda cannot be composed of stars, because “thousand stars + one Milky Way + one Andromeda” seems to them less complex than “millions of stars”. More recently (and more controversially), they will insist that “quantum physics + collapse + classical physics” is less complex than “quantum physics all the way up”. Programming analogy helps to express this in a simple way: “Complexity is about the size of code, not how large values are stored in the variables.”
Can I ask which related [to Kolmogorov complexity] concepts you mean?
Compression (lossless), like in the “zip” files. Specifically the fact that an average random file cannot be compressed, no matter how clever the method used. Like, for any given value N, the number of all files with size N is larger than the number of all files with size smaller than N. So, whatever your compression method is, if it is lossless, it needs to be injective (two different files cannot compress to the same output), so there must be at least one file of size N that it will not compress into a file of smaller size. Even worse, for each file of size N compressed to a smaller size, there must be a file of size smaller than N that gets “compressed” to a file of size N or more.
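The counting claim is easy to verify directly (a toy check, with file sizes measured in bits):

```python
# There are more files of size exactly N than files of ALL smaller sizes
# combined: 2**N vs 2**N - 1. So a lossless (injective) compressor cannot
# shrink every size-N file; at least one must stay the same size or grow.
N = 10
files_of_size_N = 2 ** N                       # each bit free: 2^N files
files_smaller = sum(2 ** k for k in range(N))  # sizes 0..N-1: 2^N - 1 files
print(files_of_size_N, files_smaller)  # 1024 1023
```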
So how does compression work in real world? (Because experience shows it does.) It ultimately exploits the fact that most of the files we want to compress are not random, so it is designed to compress non-random files into smaller ones, and random (containing only noise) files into slightly larger ones. Like, you can try it at home; generate a one megabyte file full of randomly generated bits, then try all compression programs you have installed, and see what happens.
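The try-it-at-home experiment is a few lines with Python’s standard zlib (seeded here so it is reproducible; the exact output sizes will vary slightly, so the comments only give rough magnitudes):

```python
import random
import zlib

# The claim from the text, checked directly: structured data compresses
# enormously, while random noise does not (it typically grows slightly).
random.seed(0)

structured = b'abab' * 250_000  # 1 MB of obvious regularity
noise = bytes(random.getrandbits(8) for _ in range(1_000_000))  # 1 MB of noise

print(len(zlib.compress(structured)))  # tiny: a few kilobytes at most
print(len(zlib.compress(noise)))       # roughly 1 MB, plus a small overhead
```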
Now, each specific compression algorithm recognizes and exploits some kinds of regularity, and is blind towards others. This is why people sometimes invent better compression algorithms that exploit more regularities. The question is: what is the asymptote of this progress? If you tried to invent the best compression algorithm ever, one that could exploit literally all kinds of non-randomness, what would it look like?
(You may want to think about this for a moment, before reading more.)
The answer is that the hypothetical best compression algorithm ever would transform each file into the shortest possible program that generates this file. There is only one problem with that: finding the shortest possible program for every input is algorithmically impossible—this is related to the halting problem. Regardless, if the file were compressed by this hypothetical ultimate compression algorithm, the size of the compressed file would be the Kolmogorov complexity of the original file.
But then my concern just transforms into “what if there’s a powerful entity living in this universe (rather than outside of it) who will punish me if I do X, etc”.
Then we are no longer talking about gods in the modern sense, but about powerful aliens.
Yeah, I just… stopped worrying about these kinds of things. (In my case, “these kinds of things” refers e.g. to very unlikely Everett branches, which I still consider more likely than gods.) You just can’t win this game. There are a million possible horror scenarios, each of them extremely unlikely, but each of them extremely horrifying, so you would just spend all your life thinking about them; [...]
I see. In that case, I think we’re reacting differently to our situations due to being in different epistemic states. The uncertainty involved in Everett branches is much less Knightian—you can often say things like “if I drive to the supermarket today, then approximately 0.001% of my future Everett branches will die in a car crash, and I’ll just eat that cost; I need groceries!”. My state of uncertainty is that I’ve barely put five minutes of thought into the question “I wonder if there are any tremendously important things I should be doing right now, and particularly if any of the things might have infinite importance due to my future being infinitely long.”
And by the way, torturing people forever, because they did not believe in your illogical incoherent statements unsupported by evidence, that is 100% compatible with being an omnipotent, omniscient, and omnibenevolent god, right? Yet another theological mystery...
Well, that’s another reference to “popular” theism. Popular theism is a subset of theism in general, which itself is a subset of “worlds in which there’s something I should be doing that has infinite importance”.
On the other hand, if you assume an evil god, then… maybe the holy texts and promises of heaven are just a sadistic way he is toying with us, and then he will torture all of us forever regardless.
Yikes!! I wish LessWrong had emojis so I could react to this possibility properly :O
So… you can’t really win this game. Better to focus on things where you actually can gather evidence, and improve your actual outcomes in life.
This advice makes sense, though given the state of uncertainty described above, I would say I’m already on it.
Psychologically, if you can’t get rid of the idea of the supernatural, maybe it would be better to believe in an actually good god. [...]
This is a good fallback plan for the contingency in which I can’t figure out the truth and then subsequently fail to acknowledge my ignorance. Fingers crossed that I can at least prevent the latter!
[...] your theory can still benefit from some concepts having shorter words for historical reasons [...]
Well, I would have said that an exactly analogous problem is present in normal Kolmogorov Complexity, but...
But historical evidence shows that humans are quite bad at this.
...but this, to me, explains the mystery. Being told to think in terms of computer programs generating different priors (or more accurately, computer programs generating different universes that entail different sets of perfect priors) really does influence my sense of what constitutes a “reasonable” set of priors.
I would still hesitate to call it a “formalism”, though IIRC I don’t think you’ve used that word. In my re-listen of the sequences, I’ve just gotten to the part where Eliezer uses that word. Well, I guess I’ll take it up with somebody who calls it that.
By the way, it’s just popped into my head that I might benefit from doing an adversarial collaboration with somebody about Occam’s razor. I’m nowhere near ready to commit to anything, but just as an offhand question, does that sound like the sort of thing you might be interested in?
[...] The answer is that the hypothetical best compression algorithm ever would transform each file into the shortest possible program that generates this file.
Insightful comments! I see the connection: really, every compression of a file is a compression into a short program that will output that file, where the programming language is the decompression algorithm, and the search algorithm that finds the short program isn’t guaranteed to be perfect. So the best compression algorithm ever would simply be one with a really, really apt decompression routine (one that captures very well the nuanced non-randomness found in files humans care about) and an oracle for computing shortest programs (rather than a decent but imperfect search algorithm).
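The “decompressor as programming language” framing can be made concrete with zlib (a sketch in my own framing, not the commenter’s code): the compressed bytes act as a “program”, and the decompression routine is the “interpreter” that runs it.

```python
import zlib

original = b"the quick brown fox jumps over the lazy dog " * 1000

# The compressed bytes play the role of a "program"...
program = zlib.compress(original, level=9)

# ...and the decompressor plays the role of the programming language
# that executes it, reproducing the file exactly.
assert zlib.decompress(program) == original

# len(program) is therefore an upper bound on the file's Kolmogorov
# complexity relative to the "zlib language"; a real compressor just
# isn't guaranteed to find the shortest such program.
print(len(original), len(program))
```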
> But then my concern just transforms into “what if there’s a powerful entity living in this universe (rather than outside of it) who will punish me if I do X, etc”.
Then we are no longer talking about gods in the modern sense, but about powerful aliens.
Well, if the “inside/outside the universe” distinction is going to mean “is/isn’t causally connected to the universe at all” and a god is required to be outside the universe, then sure. But I think if I discovered that the universe was a simulation and there was a being constantly watching it and supplying a fresh bit of input every hundred Planck intervals in such a way that prayers were occasionally answered, I would say that being is closer to a god than an alien.
But in any case, the distinction isn’t too relevant. If I found out that there was a vessel with intelligent life headed for Earth right now, I’d be just as concerned about that life (actual aliens) as I would be about god-like creatures that should debatably also be called aliens.
By the way, it’s just popped into my head that I might benefit from doing an adversarial collaboration with somebody about Occam’s razor. I’m nowhere near ready to commit to anything, but just as an offhand question, does that sound like the sort of thing you might be interested in?
Definitely not interested. My understanding of these things is kinda intuitive (with intuition based on decent knowledge of math and computer science, but still), so I believe that “I’ll know it when I see it” (give me two options, and I’ll probably tell you whether one of them seems “simpler” than the other), but I wouldn’t try to put it into exact words.
Thanks for the feedback! Here’s another one for ya. A relatively long time ago I used to be pretty concerned about Pascal’s wager, but then I devised some clever reasoning why it all cancels out and I don’t need to think about it. I reasoned that one of three things must be true:
I don’t have an immortal soul. In this case, I might as well be a good person.
I have an immortal soul, and after my bodily death I will be assigned to one of a handful of infinite fates, depending on how good of a person I was. In this case it’s very important that I be a good person.
Same as above, but the decision process is something else. In this case I have no way of knowing how my infinite fate will be decided, so I might as well be a good person during my mortal life and hope for the best.
But then, post-LW, I realized that there are two issues with this:
It doesn’t make any sense to separate out case 2 from the enormous ocean of possibilities allowed for by case 3. Or rather, I can separate it, but then I need to probabilistically penalize it relative to case 3, and I also need to slightly shift the “expected judgment criterion” found in case 3 away from “being a good person is the way to get a good infinite fate”, and it all balances out.
More importantly, this argument flippantly supposes that I have no way of discerning what process, if any, will be used to assign me an infinite fate. An infinite fate, mind you. I ought to be putting in more thought than this even if I thought the afterlife only lasted an hour, let alone eternity.
So now I am back to being rather concerned about Pascal’s wager, or more generally, the possibility that I have an immortal soul and need to worry about where it eventually ends up.
From my first read-through of the sequences I remember that it claims to show that the idea of there being a god is somewhat nonsensical, but I didn’t quite catch it the first time around. So my first line of attack is to read through the sequences again, more carefully this time, and see if they really do give a valid reason to believe that.
I think the first step is to ask yourself what you even mean by “god”.
Because if you have a definition like “the spirit of our dead chieftain, who sometimes visits me in my dreams”, I have no problem with that. Like, I find it completely plausible that you see the image of the chieftain in your dreams; nothing unscientific about that. It’s just that the image is generated by your brain, so if you try to communicate with it, it can only give you advice that your brain generated, and it can only grant your wishes in the sense that you accomplish them yourself. But if you agree that this is exactly what you meant, then such a god is perfectly okay with me.
But the modern definition is more like: an intelligent being that exists outside of our universe, but can observe it and change it. Suspending disbelief for a moment: how exactly can a being be “intelligent”, whether inside our universe or not? Intelligence implies processing data. That would imply that the god has some… supernatural neurons, or some other architecture capable of processing data. So the god is a mechanism (in the sense that a human is a biological mechanism) composed of parts, although those parts do not necessarily have to be found in our physics. Still kinda plausible; maybe gods are made out of dark matter, who knows. But then, how did this improbably complicated mechanism come into existence? Humans were made by evolution; were gods too? But then again, those gods are not the gods of religion; they are merely powerful aliens. But powerful aliens are neither creators of the universe, nor are they omniscient.
Then the theologians try to come up with some smart-sounding arguments, like “god is actually supremely simple”—therefore Occam’s razor does not disprove him, and no evolution was needed, because simple things are more likely a priori, so a supreme being is supremely likely. Or something like that. But this is nonsense, because “supremely simple” and “capable of processing information” are incompatible.
The last argument would be: okay, we do not have a coherent explanation of god, but you don’t have a coherent explanation of how the universe can exist without one. And a potential answer is that maybe existence is relative. Like, if you run a simulation of Conway’s Game of Life, it could possibly contain evolved intelligent life, and that life would ask “how was our universe created?” And it’s not like you created the universe by simulating it, because you are merely following the mathematical rules; so it’s more like the math created that universe and you are only observing it. If the beings in that mathematical universe pray to gods, there is no way for anyone outside to intervene (while simultaneously following the mathematical rules). So the universe inside the Game of Life is a perfectly godless universe, based on math. But does it exist? Well, the rules of math describe the intelligent beings inside that universe who live, think, and interact with their world. Maybe that’s all that is needed for existence; or rather, the question is what else could be needed? And then, maybe our world is also just an implication of some mathematical equations.
tl;dr—maybe math itself implies existence, and if you try to turn the concept of an (intelligent, perceiving, acting) god into something coherent, it is no longer the god of religion but merely a powerful alien
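To make “merely following the mathematical rules” concrete, here is a minimal Game of Life step in Python (a sketch; representing the grid as a set of live-cell coordinates is my implementation choice):

```python
from collections import Counter

def step(live_cells):
    """One tick of Conway's Game of Life; live_cells is a set of (x, y)."""
    # Count how many live neighbours every cell has.
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next tick iff it has 3 live neighbours, or it has
    # 2 live neighbours and is already alive.
    return {
        cell
        for cell, count in neighbour_counts.items()
        if count == 3 or (count == 2 and cell in live_cells)
    }

# A "blinker" oscillates with period 2; the rules alone dictate everything,
# and nobody outside can answer a prayer without breaking them.
blinker = {(0, 0), (1, 0), (2, 0)}
assert step(step(blinker)) == blinker
```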
The supernatural isn’t supposed to be the natural done all over again. The typical theological claim is that God’s wisdom or whatever is an intrinsic quality, not something with moving parts.
Well, “wisdom as an intrinsic quality” is a mysterious answer. And what is “wisdom without moving parts”? A precomputed table of wise answers to all possible questions? Who precomputed it and how?
I agree that this is how theology usually answers it, but it is an answer that doesn’t make any sense when you look at it closer; it’s just some good-sounding words randomly glued together. And if you try to make it refer to something, even a hypothetical something, the whole explanation falls apart.
Reductionism is a combination of three claims:
That many things are made of smaller components.
That those particulars can be understood in terms of the operations of their components.
That there’s an irreducibly basic level; it’s not turtles all the way down.
If it’s always the case that something that isn’t explicable in terms of its parts is mysterious, then the lowest level is mysterious. If nothing is mysterious, if you apply the argument against mysterious answers without prejudice, then reductionism is false. There isn’t a consistent set of principles here.
Continued...
Naturalism is the claim that there is a bunch of fundamental properties that just are, at the bottom of the stack, and everything is built up from that. Supernaturalism is the claim that the intrinsic stuff is at the top of the stack, and everything else is derived from it top-down. That may be 100% false, but it is the actual claim.
There’s a thing called the principle of charity, where one party interprets the other’s statements so as to maximise their truth value. This only enhances communication if the truth is not basically in dispute... that’s the easy case. The hard case is when there is a basic dispute about what’s true. In that case, it’s not helpful to fix the other person’s claims by making them more reasonable from your point of view.
Anyway, that’s how we ended up with “God must have superneurons in his superbrain”.
Feels like in the top-down universe, science shouldn’t work at all. I mean, when you take a magnifying glass and look at the details, they are supposedly generated on the fly to fit the larger picture. Then you apply munchkinry to the details and invent an atomic bomb or quantum computer… which means… what exactly, from the top-down perspective?
Yeah, you can find an excuse, e.g. that some of those top-down principles are hidden like Easter eggs, waiting to be discovered later. That the Platonic idea of smartphones has been waiting for us since the creation of the universe, but was only revealed to the recent generation. Which would mean that the top-down universe has some reason to pretend to be bottom-up, at least in some aspects...
Okay, the same argument could be made that quantum physics pretends to be classical physics at larger scale, or relativity pretends to be Newtonian mechanics at low speeds… as if the scientists are trying to make up silly excuses for why their latest magic works but totally “doesn’t contradict” what the previous generations of scientists were telling us...
Well, at least it seems like the bottom-up approach is fruitful, whether the true reason is that the universe is bottom-up, or that the universe is top-down in a way that tries really hard to pretend that it is actually bottom-up (either in the sense that when it generates the—inherently meaningless—details for us, it makes sure that all consequences of those details are compatible with the preexisting Platonic ideas that govern the universe… or like a Dungeon Master who allows the players to invent all kinds of crazy stuff and throw the entire game off balance, because he values consistency above everything).
More importantly, in a universe where there is magic all the way up, what sense does it make to adopt the essentially half-assed approach where you believe in the supernatural but also kinda use logic, except not too seriously… you might as well throw the logic away completely, because in that kind of universe it is not going to do you much good anyway.
The basic claim of a top-down universe is a short string that doesn’t contain much information. About the same amount as the basic claim of reductionism.
The top-down claim doesn’t imply a universe of immutable physical law, but it doesn’t contradict it either.
The same goes for the bottom-up claim. A universe of randomly moving high entropy gas is useless for science and technology, but compatible with reductionism.
But all this is rather beside the point. Even if supernaturalism is indefensible, you can’t refute it by changing it into something else.
I wouldn’t call the dead chieftain a god—that would just be a word game.
Wait wait! You say a god-like being created by evolution cannot be a creator of the universe. But that’s only true if you constrain that particular instance of evolution to have occurred in *this* universe. Maybe this universe is a simulation designed by a powerful “alien” in another universe, who itself came about from an evolutionary process in its own universe.
It might be “omniscient” in the sense that it can think 1000x as fast as us and has 1000x as much working memory and is familiar with thinking habits that are 1000x as good as ours, but that’s a moot point. The real thing I’m worried about isn’t whether there exists an omniscient-omnipotent-benevolent creature, but rather whether there exists *some* very powerful creature who I might need to understand to avoid getting horrible outcomes.
I haven’t yet put much thought into this, since I only recently came to believe that this topic merits serious thought, but the existence of such a powerful creature seems like a plausible avenue to the conclusion “I have an infinite fate and it depends on me doing/avoiding X”.
This is another area where my understanding could stand to be improved (and where I expect it will be during my next read-through of the sequences). I’m not sure exactly what kind of simplicity Occam’s razor uses. Apparently it can be formalized as Kolmogorov complexity, but the only definition I’ve ever found for that term is “the Kolmogorov Complexity of X is the length of the shortest computer program that would output X”. But this definition is itself in need of formalization. Which programming language? And what if X is something other than a stream of bits, such as a dandelion? And even once that’s answered, I’m not quite sure how to arrive at the conclusion that Kolmogorov-ly simpler things are more likely to be encountered.
(All that being said, I’d like to note that I’m keeping in mind that just because I don’t understand these things doesn’t mean there’s nothing to them. Do you know of any good learning resources for someone who has my confusions about these topics?)
That much makes sense, but I think it excludes a possibly important class of universe that is based on math but also depends on a constant stream of data from an outside source. Imagine a Life-like simulation ruleset where the state of the array of cells at time T+1 depended on (1) the state of the array at time T and (2) the on/off state of a light switch in my attic at time T. I could listen to the prayers of the simulated creatures and use the light switch to influence their universe such that they are answered.
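The light-switch universe can be sketched the same way. The specific rule below is hypothetical (my invention for illustration): with the switch off the update is ordinary Life; with it on, lonely cells also survive.

```python
from collections import Counter

def step_with_input(live_cells, input_bit):
    """A Life-like update that depends on one external bit per tick."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Hypothetical rule: the external bit relaxes the survival condition.
    survive = {2, 3} | ({1} if input_bit else set())
    return {
        cell
        for cell, count in neighbour_counts.items()
        if count == 3 or (count in survive and cell in live_cells)
    }

# The same state evolves differently depending on the "light switch",
# so an outside operator can steer the simulated universe.
world = {(0, 0), (1, 0)}
assert step_with_input(world, 0) == set()  # both cells die of loneliness
assert step_with_input(world, 1) == world  # the switch keeps them alive
```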
Some people in history did; specifically, the ancient Romans. But now we don’t. Just making that explicit.
And this is similar to the chieftain thing. You can have a religion that defines “god” as “an alien from another universe”, but many religions insist that god is not created but eternal.
Yes, this is a practical approach. Well, you cannot disprove such a thing, because it is logically possible. (Obviously, “possible” does not automatically imply “it happened”.) But unless you assume it is “simulations all the way up”, there must be a universe that is not created by an external alien lifeform. Therefore, it is also logically possible that our universe is like that.
There is no reason to assume that the existing religions have something in common with the simulating alien. When I play Civilization, the “people” in my simulation have a religion, but that doesn’t mean that I believe in it, or even approve of it, or that I am somehow going to reward them for it.
It’s just a cosmic horror that you need to learn to live with. There are more.
Any programming language; for large enough values it doesn’t matter. If you believe that e.g. Python is much better in this regard than Java, then for sufficiently complicated things the most efficient way to implement them in Java is to implement a Python emulator (a constant-sized piece of code) and implement the algorithm in Python. So if you chose the wrong language, you pay at most a constant-sized penalty. Which is usually irrelevant, because these things are usually applied when debating what happens in general as the data grow.
But I agree that if you talk about small data, it is underspecified. I don’t really know what it could mean to “have a universe defined by the following three bits: 0, 0, 1”, and maybe no one has a meaningful answer to this. But there are cases where you can have an intuition that for any reasonable definition of a programming language, X should be simpler than Y.
Just an intuition pump: Imagine that there is a multiverse A containing all universes that can be run by programs having exactly 100 lines of code; and there is a multiverse B containing all universes that can be run by programs having exactly 105 lines of code. The universes from A are more likely, because they also appear in B.
For each program A1 describing a universe from A, you have a set of programs describing exactly the same universe in B, simply by adding an “exit” instruction on line 101, and arbitrary instructions on lines 102-105. If we take a program B1 describing a universe from B in such a way that each of the 105 lines is used meaningfully, then… in multiverse A we have one A1 vs zero B1, and in multiverse B we have many A1-equivalents vs one B1; in either case A1 wins.
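The padding argument is easy to check by brute force if we let fixed-length binary strings stand in for programs (my simplification: “padding” means appending arbitrary bits after the meaningful prefix, like the arbitrary lines after the “exit” instruction):

```python
from itertools import product

def programs(length):
    """All binary 'programs' of exactly the given length."""
    return ["".join(bits) for bits in product("01", repeat=length)]

short = "101"  # one particular length-3 program

# Every length-3 program reappears as the prefix of 2**(5-3) = 4 distinct
# length-5 programs, so short programs are over-represented among long ones.
padded = [p for p in programs(5) if p.startswith(short)]
print(padded)  # ['10100', '10101', '10110', '10111']
```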
A part of this is in standard computer science curriculum, and another part is a philosophical extrapolation. I do not have a recommendation about a specific textbook, I just vaguely remember things like this from university. (Actually, I probably never heard explicitly about Kolmogorov complexity at university, but I learned some related concepts that allowed me to recognize what it means and what it implies, when I found it on Less Wrong.)
Then I would say the Kolmogorov complexity of the “Life-only” universe is the complexity of the rules of Life; but the complexity of the “Life+inputs” universe is the complexity of the rules plus the complexity of whoever generates the inputs, and everything they depend on, so it is the complexity of the outside universe.
The Civ analogy makes sense, and I certainly wouldn’t stop at disproving all actually-practiced religions (though at the moment I don’t even feel equipped to do that).
Are you sure it’s logically possible in the strict sense? Maybe there’s some hidden line of reasoning we haven’t yet discovered that shows that this universe isn’t a simulation! (Of course, there’s a lot of question-untangling that has to happen first, like whether “is this a simulation?” is even an appropriate question to ask. See also: Greg Egan’s book Permutation City, a fascinating work of fiction that gives a unique take on what it means for a universe to be a simulation.)
This sounds like the kind of thing someone might say who is already relatively confident they won’t suffer eternal damnation. Imagine believing with probability at least 1/1000 that, if you act incorrectly during your life, then...
(WARNING: graphic imagery) …upon your bodily death, your consciousness will be embedded in an indestructible body and put in a 15K degree oven for 100 centuries. (END).
Would you still say it was just another cosmic horror you have to learn to live with? If you wouldn’t still say that, but you say it now because your probability estimate is less than 1/1000, how did you come to have that estimate?
The constant-sized penalty makes sense. But I don’t understand the claim that this concept is usually applied in the context of looking at how things grow. Occam’s razor is (apparently) formulated in terms of raw Kolmogorov complexity—the appropriate prior for an event X is 2^(-B), where B is the Kolmogorov Complexity of X.
Let’s say general relativity is being compared against Theory T, and the programming language is Python. Doesn’t it make a huge difference whether you’re allowed to “pip install general-relativity” before you begin?
I agree that these intuitions can exist, but if I’m going to use them, then I detest this process being called a formalization! If I’m allowed to invoke my sense of reasonableness to choose a good programming language to generate my priors, why don’t I instead just invoke my sense of reasonableness to choose good priors? Wisdom of the form “programming languages that generate priors that work tend to have characteristic X” can be transformed into wisdom of the form “priors that work tend to have characteristic X”.
I have to admit that I kind of bounced off of this. The universe-counting argument makes sense, but it doesn’t seem especially intuitive to me that the whole of reality should consist of one universe for each computer program of a set length written in a set language.
Can I ask which related concepts you mean?
Oh, that makes sense. In that case, the argument would be that nothing outside MY universe could intervene in the lives of the simulated Life-creatures, since they really just live in the same universe as me. But then my concern just transforms into “what if there’s a powerful entity living in this universe (rather than outside of it) who will punish me if I do X, etc”.
I find it difficult to imagine how such an argument could even be constructed. “Our universe isn’t a simulation because it has property X” doesn’t explain why the simulator could not simulate X. The usual argument is “because quantum stuff, the simulation would require insane amounts of computing power”, which is true, but we have no idea what the simulating universe looks like, and what kind of physics it has… maybe what’s an insane amount for us is peanuts for them.
But maybe there is some argument why computing power in principle (like, for some mathematical reason) cannot exceed a certain value, ever. And the value may turn out to be insufficient to simulate our universe. And we can somehow make sure that our entire universe is simulated in sufficient resolution (not something like: the Earth or perhaps the entire Solar system is simulated in full quantum physics, but everything else is just a credible approximation). Etc. Well, if such a thing happens, then I would accept the argument.
Yeah, I just… stopped worrying about these kinds of things. (In my case, “these kinds of things” refers e.g. to very unlikely Everett branches, which I still consider more likely than gods.) You just can’t win this game. There are a million possible horror scenarios, each of them extremely unlikely, but each of them extremely horrifying, so you would just spend all your life thinking about them; and there is often nothing you can do about them; in some cases you would be required to do contradictory things (you spend your entire life trying to appease the bloodthirsty Jehovah, but it turns out the true master of the universe is the goddess Kali, and she is very displeased with your Christianity...), or it could be some god you don’t even know because it is a god of the Aztecs, or some future god that will only be revealed to humanity in the year 3000. Maybe humans are just a precursor to an intelligent species that will exist a million years in the future, and from their god’s perspective humans are even less relevant than monkeys are for Christianity. Maybe we are just meant to be food for the space locusts from the Andromeda galaxy. Maybe our entire universe is a simulation on a computer in some alien universe with insane computing power, but they don’t care about the intelligent beings or life in general; they just use the flashing galaxies as a screen saver when they are bored. If you put things into perspective, assigning probability 1/1000 to any specific religion is way too much; all kinds of religions, existing and fictional, put together don’t deserve that much.
And by the way, torturing people forever, because they did not believe in your illogical incoherent statements unsupported by evidence, that is 100% compatible with being an omnipotent, omniscient, and omnibenevolent god, right? Yet another theological mystery...
On the other hand, if you assume an evil god, then… maybe the holy texts and promises of heaven are just a sadistic way he is toying with us, and then he will torture all of us forever regardless.
So… you can’t really win this game. Better to focus on things where you actually can gather evidence, and improve your actual outcomes in life.
Psychologically, if you can’t get rid of the idea of the supernatural, maybe it would be better to believe in an actually good god. Someone who is at least as reasonable and good as any of us, which should not be an impossibly high standard. Such a god certainly wouldn’t spend an eternity torturing random people for “crimes” such as being generally decent people but believing in a wrong religion or no religion at all, or having extramarital sex, etc. (Some theologians would say that this is actually their god. I don’t think so, but whatever.) I don’t really believe in such a god either, honestly, but it is a good fallback plan when you can’t get rid of the idea of gods completely.
That would be cheating, obviously. Unless by the length of the code you also mean the length of all used libraries, in which case it is okay. It is assumed that the original programming language does not favor any specific theory, just provides very basic capabilities for expressing things like “2+2” or “if A then B else C”. (Abstracting from details such as how many pluses are in one “if”.)
Yeah, OK. The meaning of all this is: how can we compare the “complexity” of two things where neither is a strict subset of the other? The important part is that “fewer words” does not necessarily mean “smaller complexity”, because that allows obvious cheating (invent a new word that means exactly what you are arguing for, and then insist that your theory is really simple because it can be described by one word—the same trick as with importing a library). But even if it is not your intention to cheat, your theory can still benefit from some concepts having shorter words for historical reasons, or even because words related to humans (or life on Earth in general) are already shorter, so you should account for the complexity that is already included in them. Furthermore, it should be obvious to you that “1000 different laws of physics” is more complex than “1 law of physics, applied in the same way to 1000 particles”. If this all is obvious to you, then yes, making the analogy to a programming language does not bring any extra value.
But historical evidence shows that humans are quite bad at this. They will insist that stars are just shining dots in the sky instead of distant solar systems, because “one solar system + thousands of shining dots” seems to them less complex than “thousands of solar systems”. They will insist that Milky Way or Andromeda cannot be composed of stars, because “thousand stars + one Milky Way + one Andromeda” seems to them less complex than “millions of stars”. More recently (and more controversially), they will insist that “quantum physics + collapse + classical physics” is less complex than “quantum physics all the way up”. Programming analogy helps to express this in a simple way: “Complexity is about the size of code, not how large values are stored in the variables.”
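“Complexity is about the size of code, not how large values are stored in the variables” can be illustrated with a toy physics law (entirely my own example): the code implementing one law is the same size whether it is applied to 10 particles or 1000.

```python
# Toy "law of physics": every particle's velocity changes by g each tick.
def apply_gravity(particles, dt=0.1, g=9.8):
    # Each particle is a (x, y, vx, vy) tuple; one rule updates them all.
    return [(x, y, vx, vy + g * dt) for (x, y, vx, vy) in particles]

ten = [(float(i), 0.0, 0.0, 0.0) for i in range(10)]
thousand = [(float(i), 0.0, 0.0, 0.0) for i in range(1000)]

# One law, applied in the same way to any number of particles: the code
# above never grows, only the data does. "1000 different laws" would need
# 1000 different update rules written out.
assert len(apply_gravity(thousand)) == 1000
assert apply_gravity(ten)[0][3] == apply_gravity(thousand)[0][3]
```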
Compression (lossless), like in the “zip” files. Specifically the fact that an average random file cannot be compressed, no matter how smart is the method used. Like, for any given value N, the number of all files with size N is larger than the number of all files with size smaller than N. So, whatever is your compression method, if it is lossless, it needs to be a bijection, so there must be at least one file of size N that it will not compress into a file of smaller size. Even worse, for each file of size N compressed to a smaller size, there must be a file of size smaller than N that gets “compressed” to a file of size N or more.
So how does compression work in the real world? (Because experience shows that it does.) It exploits the fact that most files we want to compress are not random: it is designed to compress non-random files into smaller ones, at the cost of making random files (containing only noise) slightly larger. You can try it at home: generate a one-megabyte file full of randomly generated bits, then run every compression program you have installed on it, and see what happens.
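The try-it-at-home experiment takes a few lines with Python's `zlib` (a rough sketch; exact sizes vary with compression level and library version):

```python
import os
import zlib

N = 1_000_000
random_data = os.urandom(N)            # one megabyte of pure noise
regular_data = b"abcdefgh" * (N // 8)  # same size, highly regular

compressed_random = zlib.compress(random_data, level=9)
compressed_regular = zlib.compress(regular_data, level=9)

# Noise typically comes out slightly *larger* (format overhead),
# while the regular file collapses to a tiny fraction of its size.
print(len(compressed_random))
print(len(compressed_regular))
```

Both outputs decompress back to the originals exactly; the compressor simply gambles that its inputs are non-random, and loses that gamble on noise.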
Now, each specific compression algorithm recognizes and exploits some kinds of regularity and is blind to others. This is why people sometimes invent better compression algorithms that exploit more regularities. The question is: what is the asymptote of this progress? If you tried to invent the best compression algorithm ever, one that could exploit literally every kind of non-randomness, what would it look like?
(You may want to think about this for a moment, before reading more.)
The answer is that the hypothetical best compression algorithm ever would transform each file into the shortest possible program that generates that file. There is only one problem: finding the shortest possible program for every input is algorithmically impossible; this is related to the halting problem. Still, if a file were compressed by this hypothetical ultimate algorithm, the size of the compressed file would be the Kolmogorov complexity of the original file.
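The impossibility applies to programs in a universal (Turing-complete) language. If we restrict ourselves to a toy language in which every “program” halts, brute-force search for the shortest description becomes easy; the following sketch uses a made-up language where a program is a pair `(unit, count)` meaning “output `unit` repeated `count` times”. Repetitive strings then get short programs, and irregular strings don't:

```python
def shortest_repeat_program(s):
    """Exhaustive search over a toy, always-halting 'language':
    a program (unit, count) outputs unit * count. Returns the program
    whose description length len(unit) + len(str(count)) is minimal.
    (A stand-in for Kolmogorov complexity; with a universal language
    this search would be uncomputable.)"""
    best = (s, 1)               # trivial program: print the string literally
    best_cost = len(s) + 1
    for unit_len in range(1, len(s) + 1):
        if len(s) % unit_len:
            continue
        unit, count = s[:unit_len], len(s) // unit_len
        if unit * count == s:
            cost = len(unit) + len(str(count))
            if cost < best_cost:
                best, best_cost = (unit, count), cost
    return best

print(shortest_repeat_program("abababababababab"))  # ('ab', 8)
print(shortest_repeat_program("q7x2kd9w"))          # no regularity: printed literally
```

The regular string compresses to a 3-character description; the irregular one can only be “compressed” to itself.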
Then we are no longer talking about gods in the modern sense, but about powerful aliens.
I see. In that case, I think we’re reacting differently to our situations due to being in different epistemic states. The uncertainty involved in Everett branches is much less Knightian—you can often say things like “if I drive to the supermarket today, then approximately 0.001% of my future Everett branches will die in a car crash, and I’ll just eat that cost; I need groceries!”. My state of uncertainty is that I’ve barely put five minutes of thought into the question “I wonder if there are any tremendously important things I should be doing right now, and particularly if any of the things might have infinite importance due to my future being infinitely long.”
Well, that’s another reference to “popular” theism. Popular theism is a subset of theism in general, which itself is a subset of “worlds in which there’s something I should be doing that has infinite importance”.
Yikes!! I wish LessWrong had emojis so I could react to this possibility properly :O
This advice makes sense, though given the state of uncertainty described above, I would say I’m already on it.
This is a good fallback plan for the contingency in which I can’t figure out the truth and then subsequently fail to acknowledge my ignorance. Fingers crossed that I can at least prevent the latter!
Well, I would have said that an exactly analogous problem is present in normal Kolmogorov Complexity, but...
...but this, to me, explains the mystery. Being told to think in terms of computer programs generating different priors (or more accurately, computer programs generating different universes that entail different sets of perfect priors) really does influence my sense of what constitutes a “reasonable” set of priors.
I would still hesitate to call it a “formalism”, though IIRC you haven’t used that word yourself. In my re-listen of the sequences, I’ve just gotten to the part where Eliezer uses it. Well, I guess I’ll take it up with somebody who calls it that.
By the way, it’s just popped into my head that I might benefit from doing an adversarial collaboration with somebody about Occam’s razor. I’m nowhere near ready to commit to anything, but just as an offhand question, does that sound like the sort of thing you might be interested in?
Insightful comments! I see the connection: really, every compression of a file is a compression into a program that will output that file, where the programming language is the decompression algorithm, and the search algorithm that finds a short program isn’t guaranteed to be perfect. So the best compression algorithm ever would simply be one with a really, really apt decompression routine (one that captures very well the nuanced non-randomness found in files humans care about) and an oracle for computing shortest programs (rather than a decent but imperfect search algorithm).
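That framing can be made concrete in a few lines: the compressed bytes are the “program”, and the decompressor is the “interpreter” that runs it to reproduce the file (a sketch using `zlib`; the function names are just illustrative):

```python
import zlib

def interpreter(program: bytes) -> bytes:
    """The decompression routine, viewed as a programming language:
    it takes a 'program' (compressed bytes) and runs it, producing output."""
    return zlib.decompress(program)

original = b"the quick brown fox " * 50

# The compressor is an (imperfect) search for a short program whose
# output, under `interpreter`, is exactly the original file.
program = zlib.compress(original, level=9)

assert interpreter(program) == original  # lossless: the program reproduces the file
assert len(program) < len(original)      # and it is shorter, so it is a real compression
```

Swapping in a better decompression routine changes the “language”, and with it which files have short programs; the uncomputable part is only the perfect search.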
Well, if the “inside/outside the universe” distinction is going to mean “is/isn’t causally connected to the universe at all” and a god is required to be outside the universe, then sure. But I think if I discovered that the universe was a simulation and there was a being constantly watching it and supplying a fresh bit of input every hundred Planck intervals in such a way that prayers were occasionally answered, I would say that being is closer to a god than an alien.
But in any case, the distinction isn’t too relevant. If I found out that there was a vessel with intelligent life headed for Earth right now, I’d be just as concerned about that life (actual aliens) as I would be about god-like creatures that should debatably also be called aliens.
Definitely not interested. My understanding of these things is kinda intuitive (with intuition based on decent knowledge of math and computer science, but still), so I believe that “I’ll know it when I see it” (give me two options, and I’ll probably tell you whether one of them seems “simpler” than the other), but I wouldn’t try to put it into exact words.
Kk! Thanks for the discussion :)