So let me try to rewrite that (and don’t be afraid to call this word salad):
(Note: the following comment is based on premises which are very probably completely unsound and unusually prone to bias. Read at your own caution and remember the distinction between impressions and beliefs. These are my impressions.)
You’re Eliezer Yudkowsky. You live in a not-too-far-from-a-Singularity world, and a Singularity is a BIG event, decision theoretically and fun theoretically speaking. Isn’t it odd that you find yourself at this time and place, given all the people in your reference class you could have found yourself as? Isn’t that unsettling? Now, if you look out at the stars and galaxies and seemingly infinite space (though you can’t see that far), it looks as if the universe has been assigned measure via a universal prior (and not a speed prior), as it is algorithmically about as simple as you can get while still having life, and yet seemingly very computationally expensive. And yet you find yourself as Eliezer Yudkowsky (staring at a personal computer, no less) in a close-to-Singularity world: surely some extra parameters must have been thrown into the description of this universe; surely your experience is best described not by a universal prior alone, but by a universal prior plus some mixture of agents computing things according to their preferences. In other words, this universe looks conspicuously like it has been optimized around Eliezer-does-something-multiversally-important. (I suppose this should also up your probability that you’re a delusional narcissist, but there’s not much to do about that.)
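(A rough aside on the two priors being contrasted here, since the argument leans on the distinction: Solomonoff’s universal prior weights a program only by its length, while a speed prior in Schmidhuber’s style also penalizes its running time. Glossing over the exact constructions, with U a universal prefix machine and t(p) the running time of program p:

    M(x) ∝ Σ_{p : U(p) starts with x} 2^(-|p|)          [universal prior: description length only]
    S(x) ∝ Σ_{p : U(p) starts with x} 2^(-|p|) / t(p)   [speed prior, roughly: length and running time]

So a universe that is very simple to specify but very expensive to run gets plenty of measure under M and very little under S, which is the asymmetry being pointed at.)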
Now, if such optimization pressures exist, then one has to question some reductionist assumptions: if this universe gets at least some of its measure from the preferences of simulator-agents, then what features of the universe would be affected by those preferences? Computational cost is one. MWI implies a really big universe, and what are the chances that you would find yourself where you are in a really big universe while also finding yourself in a conspicuously-optimized-seeming one? Seemingly the two hypotheses are at odds. And what about cryonics? Do you really expect to die in a universe that seems to be optimized for having you around doing interesting things? (The answer to that could very well be yes, especially if your name is Light.) And when you have simulators in the picture, with explicit values, perhaps they have encoded rightness and wrongness into the fabric of reality via selectively pruning multiverse branches or something. Heaven knows what the gods do for fun.
These are of course ridiculous ideas, but ridiculous ideas that I am nonetheless hesitant to assign negligible probability to.
Maybe you’re a lot less surprised to find yourself in this universe than I am, in which case none of my arguments apply. But I get the feeling that something awfully odd is going on, and this makes me hesitant to be confident about some seemingly basic reductionist conclusions. Thus I advise you to buy a lottery ticket. It’s the rational thing to do.
(Note: Although I personalized this for Eliezer, it applies to pretty much everyone to a greater or lesser degree. I remember (perhaps a secondhand and false memory, so don’t take it too seriously) that at some point Michael Vassar was really confused about why he didn’t find himself as Eliezer Yudkowsky. I think the answer I would have thought up if I were him is that Michael Vassar is more decision theoretically multiversally important than Eliezer. Any other answer makes the question appear silly. Which it might be.)
(Alert to potential bias: I kinda like to be the contrarian-contrarian. Cryonics is dumb, MWI is wrong, buying a lottery ticket is a good idea, moral realism is a decent hypothesis, anthropic reasoning is more important than reductionist reasoning, CEV-like things won’t ever work and are ridiculously easy to hack, TDT is unlikely to lead to any sort of game theoretic advantage and precommitments not to negotiate with blackmailers are fundamentally doomed, winning timeless war is more important than facilitating timeless trade, the Singularity is really near, religion is currently instrumentally rational for almost everyone, most altruists are actually egoists with relatively loose boundaries around identity, et cetera, et cetera.)
It all adds up to normality, damn it!
What whats to what?
More seriously, that aphorism begs the question. Yes, your hypothesis and your evidence have to be in perfectly balanced alignment. That is, from a Bayesian perspective, tautological. However, it doesn’t help you figure out how exactly the adding gets done. It doesn’t help distinguish between hypotheses. For that we need Solomonoff’s lightsaber. I don’t see how saying “it (whatever ‘it’ is) adds up to (whatever ‘adding up to’ means) normality (which I think should be ‘reality’)” is at all helpful. Reality is reality? Evidence shouldn’t contradict itself? Cool story bro, but how does that help me?
http://lesswrong.com/lw/29o/open_thread_may_2010_part_2/22cp?c=1
This is rather tangential to your point, but the universe looks very computationally cheap to me. In terms of the whole ensemble, quantum mechanics is quite cheap. It only looks expensive to us because we measure by a classical slice, which is much smaller. But even if we call it exponential, that is very quick by the standards of the Solomonoff prior.
Hm, I’m not sure I follow: both a classical and a quantum universe are cheap, yes, but if you’re using a speed prior, or any prior that takes computational expense into account, then it’s the cost of the universes relative to each other that tells us which universe we should expect to find ourselves in, not their cost relative to all possible universes.
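(To make the relative-cost point concrete, using the rough speed-prior form sketched above (an illustration, not something stated in the thread): if universes A and B have shortest programs p_A and p_B with running times t_A and t_B, then

    S(A) / S(B) ≈ (2^(-|p_A|) / t_A) · (t_B / 2^(-|p_B|)) = 2^(|p_B| - |p_A|) · (t_B / t_A)

so when the description lengths are comparable, it’s the ratio of running times that does the work; how either universe compares to the whole space of programs drops out.)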
I could very, very well just be confused.
Added: Ah, sorry, I think I missed your point. You’re saying that even infinitely large universes seem computationally cheap in the scheme of things? I mean, compared to all possible programs in which you would expect life to evolve, the universe looks hugeeeeeee to me. It looks infinite, and there are tons of finite computations… though sure, when you compare anything to the multiverse of all things, that computation looks cheap. I guess we’re just using different scales of comparison: I’m comparing to finite computations, you’re comparing to a multiverse.
No, that’s not what I meant; I probably meant something silly in the details, but I think the main point still applies. I think you’re saying that the size of the universe is large compared to the laws of physics. To which I still reply: not large by the standards of computable functions.
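(One hedged way to cash out “not large by the standards of computable functions”: under a length-only prior, what gets charged for is the complexity of the specification, not the size of what it computes. Writing K for prefix Kolmogorov complexity,

    K(laws, “run for N steps”) ≤ K(laws) + K(N) + O(1)

and K(N) can be a few dozen bits even when N is something like 2↑↑100, so a gigantic (or unbounded) run of the laws adds almost nothing to the description length; it only starts to cost once you charge for running time, as a speed prior would.)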