Interesting post. I have a few questions...
Do you think it's fair to say that the question could be rephrased as "After a fundamentally probabilistic event, how can you tell if every outcome happened (in different universes, weighted by probability) versus only one outcome happened, with a particular probability?"
Regarding "Theory 1: Every day, the universe 'splits' into two. The entire contents of B are 'copied' somehow. One copy sees a heads event, the other copy sees a tails event": what does A do in that universe? Or does it not really exist in that model?
Not quite. I don’t think there is a meaningful (as opposed to ‘verbal’) difference between the two options you’ve described. (Eliezer might say that the difference doesn’t “pay any rent” in terms of subjective anticipations.)
The section about incommensurability is trying to argue that there is no “Bayesian” way for a believer in probabilities to dissuade a believer in (probabilityless) branches, or vice versa. This is disturbing because there might actually be a ‘fact of the matter’ about which of Theory 1 and Theory 2 is correct. (After all, we could simulate a coin universe either by simulating all branches or by pruning with the help of a random number generator.)
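(To make that last parenthetical concrete, here's a minimal Python sketch of the two ways of running such a simulation; the function names and the fair-coin default are illustrative assumptions, not anything from the post.)

```python
import random

def simulate_all_branches(days):
    """Theory 1 style: keep every branch of the coin universe."""
    histories = [[]]  # one empty history to start
    for _ in range(days):
        # every existing history splits into a heads-copy and a tails-copy
        histories = ([h + ["heads"] for h in histories] +
                     [h + ["tails"] for h in histories])
    return histories

def simulate_one_branch(days, p_heads=0.5):
    """Theory 2 style: a single history, pruned with a random number generator."""
    return ["heads" if random.random() < p_heads else "tails" for _ in range(days)]

print(simulate_all_branches(3))  # all 8 three-day histories
print(simulate_one_branch(3))    # one three-day history, chosen at random
```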
A is copied as well. Sorry, I didn’t make that very clear.
Agreed. Yeah, I can’t really figure out what difference that causes right now...
What different anticipations would Theory 1 vs 2 cause?
Ah. Thanks.
In that case, what influence does A have on B?
Well, a believer in Theory 2b anticipates ‘heads event tomorrow’ with 90% probability. A believer in Theory 1 doesn’t think the concept of ‘tomorrow’ makes sense unless you’ve also specified tomorrow’s coin event. Hence, the most they’re prepared to say is that they anticipate heads with 100% probability at time (tomorrow, heads) and heads with 0% probability at time (tomorrow, tails).
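(A small sketch of the difference in bookkeeping, in Python; the dictionary layout is just one way to picture it, and the 90/10 split is the Theory 2b figure from above.)

```python
# Theory 2b: 'tomorrow' is a single time, and anticipation is probabilistic.
theory_2b = {"tomorrow": {"heads": 0.9, "tails": 0.1}}

# Theory 1: 'tomorrow' only picks out a moment once the coin event is specified,
# so anticipations are indexed by (time, branch) pairs and are all 0 or 1.
theory_1 = {
    ("tomorrow", "heads"): {"heads": 1.0, "tails": 0.0},
    ("tomorrow", "tails"): {"heads": 0.0, "tails": 1.0},
}

print(theory_2b["tomorrow"]["heads"])            # 0.9
print(theory_1[("tomorrow", "heads")]["heads"])  # 1.0
```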
OK, let’s forget about “A and B”. The “coin universe” is just a universe where there is a “coin event” each day, which can be observed but not influenced.
Could a Theory 1 believer have the same expectation based on anthropic reasoning?
Like, let's say that Theory 1b is that the universe splits into 10, with 9 universes having heads and 1 having tails. 9 out of 10 observers would see heads, so they could say that they expect a heads event tomorrow with 90% probability.
I don’t think so. Let me show you the mental image which motivated this entire post:
Imagine that you’re about to simulate both branches of the second day of a ‘coin universe’, with the following equipment:
(i) a single 'heads' computer weighing seven tons, with a sticker on the side saying "probability 4/5", and (ii) six 'tails' computers each weighing 500 grams, running identical programs, and having stickers on the side saying "probability 1/30".
Now I can’t see any reason why the number of computers as opposed to the weights or the stickers should be the relevant input into the simulated beings’ anthropic reasoning (assuming they knew on day 1 how they were going to be simulated on day 2).
In order to do any anthropic reasoning, they need some kind of “bridge law” which takes all of these (and possibly other) factors and outputs probability weights for heads and tails. But it seems that any choice of bridge law is going to be hopelessly arbitrary.
(If you equalised all but one of the factors, and were left with just the numbers of computers, the weights or the stickers, then you would have a canonical way of assigning probabilities, but just because it “leaps out at you” doesn’t mean it’s “correct”.)
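(To see how far apart the candidate bridge laws land on the example above, here's a small Python sketch; the helper function is mine, and 'seven tons' is read as 7000 kg purely for illustration.)

```python
# Each computer simulating day 2: (branch, weight in kg, sticker probability).
# The six tails computers weigh 0.5 kg each.
computers = [("heads", 7000.0, 4/5)] + [("tails", 0.5, 1/30)] * 6

def p_heads(weight_of):
    """Probability of heads when each computer counts with weight weight_of(computer)."""
    total = sum(weight_of(c) for c in computers)
    return sum(weight_of(c) for c in computers if c[0] == "heads") / total

print(p_heads(lambda c: 1))     # count computers: 1/7 ≈ 0.14
print(p_heads(lambda c: c[1]))  # weigh them:      7000/7003 ≈ 0.9996
print(p_heads(lambda c: c[2]))  # read stickers:   0.8 (they happen to sum to 1)
```

Three perfectly definite rules, three different answers, and nothing in the setup singles one of them out.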
Going back to your Theory 1b, I want to ask why counting up numbers of universes should be the correct bridge law (as opposed to, say, counting the number of distinct universes, or one of infinitely many other weird and wonderful bridge laws).
The dilemma for you is that you can either (i) stipulate that all you mean by "splitting into 10 universes" is that each of the 10 has probability 1/10 (in which case we're back to Theory 2b) or else (ii) you need to somehow justify an arbitrary choice of 'bridge law' (which I don't think is possible).
I’m gonna go ahead and notice that I’m confused.
That being said (and simplifying to the computers simulating one person rather than a universe), I think that the number of computers is more relevant than the weights or the stickers, in that it determines how many people are running. If you simulate someone on a really big computer, no matter how big the computer is, you're only simulating them once. Similarly, you can slap a different sticker on, and nothing would really change observer-wise.
If you simulate someone five times, then there are five simulations running, and five conscious experiences going on.
As for counting the number of distinct universes: I can easily imagine that counting the number of distinct experiences is the proper procedure. Even if the person is on five computers, they're still experiencing the exact same things.
However, you can still intervene in any of those five simulations without affecting the other four. That's probably part of the reason I think it matters that the person is on five different computers.
But that wouldn’t contradict the idea that, until an intervention makes things different, only distinct universes should counted.
I’m trying to figure out if there are any betting rules which would make you want to choose different ways of assigning probability, kind of like how this was approached.
All the ones that I’ve come up with so far involve scoring across universes, which seems like a cheap way out, and still doesn’t boil down to predictions that the universe’s inhabitants can test.
I like this term ('bridge law').
What if the computers in question are two-dimensional, like Ebborians? Then splitting a computer down the middle has the effect of going from one computer weighing x to two computers each weighing x/2. Why should this splitting operation mean that you're simulating 'twice as many people'? And how many people are you simulating if the computer is only 'partially split'?
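(A Python sketch of why the splitting operation is awkward for the computer-counting rule; the weight-per-person constant is purely illustrative.)

```python
def people_by_count(computer_weights):
    """Count-based rule: one person per computer."""
    return len(computer_weights)

def people_by_weight(computer_weights, kg_per_person=7000.0):
    """Weight-based rule (illustrative): people proportional to total mass."""
    return sum(computer_weights) / kg_per_person

unsplit = [7000.0]          # one Ebborian-style computer weighing x
split = [3500.0, 3500.0]    # the same computer split down the middle

print(people_by_count(unsplit), people_by_count(split))    # 1 2     (the count doubles)
print(people_by_weight(unsplit), people_by_weight(split))  # 1.0 1.0 (the mass-based answer doesn't move)
# A 'partially split' computer doesn't fit the count-based rule at all.
```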
The LW consensus on Sleeping Beauty is that there is no such thing as "SB's correct subjective probability that the coin is tails" unless one specifies 'betting rules', and even then the only meaning that "subjective probability" has is "the probability assignments that prevent you from having negative expected winnings". (Where the word 'expected' in the previous sentence relates only to the coin, not the subjective probabilities.)
So in terms of the “fact vs value” or “is vs ought” distinction, there is no purely “factual” answer to Sleeping Beauty, just some strategies that maximize value.
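(A concrete illustration of how the betting rules do the work, in Python; the payout convention, the two rules, and the function name are assumptions made for this sketch, not the canonical treatment.)

```python
def expected_winnings(p_assigned, bets_if_heads, bets_if_tails, p_coin_tails=0.5):
    """Expected winnings (expectation over the coin only) for unit bets on tails,
    accepted at the odds implied by the bettor's assigned probability of tails."""
    payout_per_winning_bet = (1 - p_assigned) / p_assigned  # 'fair' odds at p_assigned
    return ((1 - p_coin_tails) * bets_if_heads * (-1)
            + p_coin_tails * bets_if_tails * payout_per_winning_bet)

# Rule 1: one bet per awakening (heads: 1 awakening, tails: 2).
print(expected_winnings(2/3, bets_if_heads=1, bets_if_tails=2))  #  0.0  'thirder' odds break even
print(expected_winnings(1/2, bets_if_heads=1, bets_if_tails=2))  # +0.5  'halfer' odds don't

# Rule 2: one bet per experiment, however many awakenings there are.
print(expected_winnings(1/2, bets_if_heads=1, bets_if_tails=1))  #  0.0  'halfer' odds break even
print(expected_winnings(2/3, bets_if_heads=1, bets_if_tails=1))  # -0.25 'thirder' odds give negative expected winnings
```

Change the betting rule and the break-even assignment changes with it; that's the sense in which there's no rule-free 'correct' probability.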
The term 'bridge law' comes from the philosophy of mind.