That being said (and simplifying to the computers simulating one person rather than a universe)
Now I can’t see any reason why the number of computers, as opposed to the weights or the stickers, should be the relevant input into the simulated beings’ anthropic reasoning (assuming they knew on day 1 how they were going to be simulated on day 2).
I think that the number of computers is more relevant than weight and stickers in that it determines how many people are running. If you simulate someone on one really big computer, then no matter how big that computer is, you’re only simulating them once. Similarly, you can slap a different sticker on it, and nothing really changes observer-wise.
If you simulate someone five times, then there are five simulations running and five conscious experiences going on.
counting the number of distinct universes
I can easily imagine that counting the number of distinct experiences is the proper procedure. Even if the person is on five computers, they’re still having exactly the same experiences.
However, you can still intervene in any one of those five simulations without affecting the other four. That’s probably part of why I think it matters that the person is on five different computers.
But that wouldn’t contradict the idea that, until an intervention makes things different, only distinct universes should be counted.
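A minimal sketch of that in code (the toy update rule, the initial state, and the intervention step are all my own illustrative choices, not anything from the thread): five copies of the same deterministic simulation stay bit-identical, and so arguably count as one distinct experience, until an intervention makes one of them diverge.

```python
def step(state: int) -> int:
    """One deterministic update step, a toy stand-in for a physics tick."""
    return (6364136223846793005 * state + 1442695040888963407) % 2**64

# Five computers running the same program from the same initial state.
sims = [42] * 5

for t in range(10):
    sims = [step(s) for s in sims]
    if t == 5:
        sims[2] ^= 1  # intervene in simulation 2 only; the other four are untouched

print(len(set(sims)))  # 2 distinct states: four identical copies plus one diverged
```

Before the intervention the set has one element; afterward it has two, which is the sense in which the five computers only start to matter once someone intervenes.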
(ii) you need to somehow justify an arbitrary choice of ‘bridge law’ (which I don’t think is possible).
I’m trying to figure out if there are any betting rules which would make you want to choose different ways of assigning probability, kind of like how this was approached.
All the ones that I’ve come up with so far involve scoring across universes, which seems like a cheap way out, and they still don’t boil down to predictions that the universe’s inhabitants can test.
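Here is the kind of toy calculation I mean (the setup, a fair coin deciding between one and five copies, and the use of the log score are my own illustrative choices): whether the score-maximizing credence is 1/2 or 5/6 depends entirely on whether you sum the score over copies or count each universe once.

```python
import math

# A fair coin decides whether a person is simulated on 1 computer (heads)
# or 5 computers (tails). Every copy states a credence p for "tails" and
# is graded with the logarithmic score.

def expected_score(p: float, per_copy: bool) -> float:
    copies_if_tails = 5 if per_copy else 1  # per-universe scoring counts tails once
    return 0.5 * math.log(1 - p) + 0.5 * copies_if_tails * math.log(p)

grid = [i / 1000 for i in range(1, 1000)]
print(max(grid, key=lambda p: expected_score(p, True)))   # ~0.833, i.e. 5/6
print(max(grid, key=lambda p: expected_score(p, False)))  # 0.5
```

Note that both rules take an expectation over the coin from outside the simulation, which is exactly the "scoring across universes" worry: no single inhabitant is ever in a position to check the average.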
I think that the number of computers is more relevant than weight and stickers in that it determines how many people are running. If you simulate someone on one really big computer, then no matter how big that computer is, you’re only simulating them once.
What if the computers in question are two-dimensional, like Ebborians? Then splitting a computer down the middle turns one computer weighing x into two computers each weighing x/2. Why should this splitting operation mean that you’re simulating ‘twice as many people’? And how many people are you simulating if the computer is only ‘partially split’?
I’m trying to figure out if there are any betting rules which would make you want to choose different ways of assigning probability, kind of like how [the Sleeping Beauty problem] was approached.
The LW consensus on Sleeping Beauty is that there is no such thing as “SB’s correct subjective probability that the coin is tails” unless one specifies ‘betting rules’, and even then the only meaning that “subjective probability” has is “the probability assignments that prevent you from having negative expected winnings”. (Where the word ‘expected’ in the previous sentence refers only to the coin, not to the subjective probabilities.)
So in terms of the “fact vs value” or “is vs ought” distinction, there is no purely “factual” answer to Sleeping Beauty, just some strategies that maximize value.
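A worked version of that claim, under one common set of assumptions (fair coin; tails means two awakenings with amnesia in between; at each awakening Beauty may buy a ticket paying $1 if tails, at price q): the break-even price, with the expectation taken over the coin only, is 2/3 if every awakening’s bet is settled and 1/2 if only one bet per experiment counts.

```python
def expected_winnings(q: float, per_awakening: bool) -> float:
    """Expected profit, over the coin only, of buying a $1-if-tails ticket at price q."""
    bets_if_tails = 2 if per_awakening else 1  # tails means two awakenings
    heads = 0.5 * (0.0 - q)                    # one awakening, ticket expires worthless
    tails = 0.5 * bets_if_tails * (1.0 - q)
    return heads + tails

print(expected_winnings(2 / 3, per_awakening=True))   # ~0: the "thirder" price breaks even
print(expected_winnings(1 / 2, per_awakening=False))  # 0: the "halfer" price breaks even
```

That is the sense in which the betting rules, rather than any further fact, fix which probability assignment counts as correct.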
I’m gonna go ahead and notice that I’m confused.
‘bridge law’
I like this term.
It comes from the philosophy of mind.