Suppose there are a hundred copies of you, in different cells. At random, one will be selected—that one is going to be shot tomorrow. A guard notifies that one that they’re going to be shot.
There is a mercy offered, though—there’s a memory-eraser-ray handy. The one who knows they’re going to be shot is given the option to erase their memory of the warning and everything that followed, putting them in the same information state, more or less, as any of the other copies.
“Of course!” they cry. “Erase my memory, and I could be any of them—why, when you shoot someone tomorrow, there’s a 99% chance it won’t even be me!”
Then the next day comes, and they get shot.
Sure. This setup couldn’t really be exploited for optimizing the universe. If we grant that the self-sampling assumption (SSA) is reasonable here, inducing amnesia doesn’t actually improve outcomes across possible worlds. One out of 100 prisoners still dies.
It can’t even be considered “re-rolling the dice” on whether the specific prisoner that you are dies. Under the SSA, there’s no such thing as a “specific prisoner”: “you” are implemented as all 100 prisoners simultaneously, so regardless of whether you choose to erase your memory or not, 1⁄100 of your measure is still destroyed. Without SSA, on the other hand, if we consider each prisoner’s perspective to be distinct, erasing your memory indeed does nothing: it doesn’t return your perspective to the common pool of prisoner-perspectives, so if “you” were going to get shot, “you” are still going to get shot.
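As a sanity check on the “no re-rolling” point, here’s a rough Monte Carlo sketch (my own framing, not anything from the original setup) contrasting the straightforward reading, in which erasure leaves the external selection untouched, with a hypothetical “re-roll” reading in which erasure really does redraw the victim. In both readings exactly one prisoner per world dies; only the re-roll fiction changes the warned prisoner’s personal odds.

```python
import random

# Toy model of the 100-prisoner setup (illustrative assumptions, not canon):
# one prisoner is selected at random; we compare two readings of what the
# memory erasure does.
N = 100
TRIALS = 100_000

selected_dies_no_reroll = 0  # (a) erasure changes credences, not facts
selected_dies_reroll = 0     # (b) hypothetical: erasure redraws the victim

for _ in range(TRIALS):
    selected = random.randrange(N)  # the prisoner who was warned

    # (a) No re-roll: the external selection is untouched by amnesia.
    shot_a = selected
    selected_dies_no_reroll += (shot_a == selected)

    # (b) Re-roll fiction: pretend the warned prisoner rejoins the pool
    # and the victim is drawn again.
    shot_b = random.randrange(N)
    selected_dies_reroll += (shot_b == selected)

print(selected_dies_no_reroll / TRIALS)  # -> 1.00: the warned prisoner still dies
print(selected_dies_reroll / TRIALS)     # -> ~0.01: only under the re-roll fiction
# Either way, exactly one of the 100 prisoners is shot in every simulated world.
```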
I’m not super interested in that part, though. What I’m interested in is whether there are in fact 100 clones of me: whether, under the SSA, “microscopically different” prisoners could be meaningfully considered a single “high-level” prisoner.
Fair enough.
Yes, it seems totally reasonable for bounded reasoners to consider hypotheses that would be counterfactual or even counterlogical for more idealized reasoners (where a hypothesis like ‘the universe is as it would be from the perspective of prisoner #3’ functions like treating prisoner #3 as ‘an instance of me’).
Typical bounded reasoning weirdness is stuff like seeming to take some counterlogicals (e.g. different hypotheses about the trillionth digit of pi) seriously despite denying 1+1=3, even though there’s a chain of logic connecting one to the other. Projecting this into anthropics, you might have a certain systematic bias about which hypotheses you can consider, and yet deny that that systematic bias is valid when presented with it abstractly.
This seems like it makes drawing general lessons about what counts as ‘an instance of me’ from the fact that I’m a bounded reasoner pretty fraught.
I’ll preface this with: what I’m saying is low confidence—I’m not very educated on the topics in question (reality fluid, consciousness, quantum mechanics, etc).
Nevertheless, I don’t see how the prison example is applicable. In the prison scenario there’s an external truth (which prisoner was picked) that exists independent of memory/consciousness. The memory wipe just makes the prisoner uncertain about this external truth.
But this post is talking about a scenario where your memories/consciousness are the only thing that determines which universes count as ‘you’.
There is no external truth about which universe you’re really in—your consciousness itself defines (encompasses?) which universes contain you. So, when your memories become more coarse, you’re not just becoming uncertain about which universe you’re in—you’re changing which universes count as containing you, since your consciousness is the only arbiter of this.
As it is difficult to sort through the inmates on execution day, an automatic gun, loaded with either blanks or lead ammunition, is placed above each door. The guard enters the cell number into a hashed database before talking to the unlucky prisoner. He recently switched to the night shift, and his eyes droop as he shoots the ray.
When he wakes up, he sees “enter cell number” crossed off on the to-do list, but not “inform the prisoners”. He must have fallen asleep on the job, and now he doesn’t know which prisoner to inform! He figures he may as well offer all the prisoners the amnesia-ray.
“If you noticed a red light blinking above your door last night, it means today is your last day. I may have come to your cell to offer your last rites, but it is a busy prison, so I may have skipped you over. If you would like your last rites now, they are available.”
Most prisoners breathe a sigh of relief. “I was stressing all night, thinking, what if I’m the one? Thank you for telling me about the red light; now I know it is not me.” One out of every hundred of these lookalikes is less grateful. “You told me this six hours ago, and I haven’t slept a wink. Did you have to remind me again?!”
There is another category of clones, though, who all have the same response. “Oh no! I thought I was safe since nothing happened last night. But now, I know I could have just forgotten. Please shoot me again, I can’t bear this.”
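A back-of-the-envelope Bayes check on that last group (my own toy model, with assumptions the story doesn’t state: the ray, when used, also wipes the memory of the red light, and a condemned prisoner chooses erasure with probability p_erase):

```python
def p_condemned_given_no_memory(p_erase: float, n: int = 100) -> float:
    """Posterior probability that you are the condemned prisoner, given
    that you have no memory of a warning or a red light."""
    prior = 1 / n
    # Likelihood of "no memory of anything" under each hypothesis:
    like_condemned = p_erase  # only happens if you chose to be zapped
    like_safe = 1.0           # a safe prisoner never saw anything to forget
    return (prior * like_condemned) / (prior * like_condemned + (1 - prior) * like_safe)

print(p_condemned_given_no_memory(0.0))  # 0.0  -- no ray in play: the relief is justified
print(p_condemned_given_no_memory(1.0))  # 0.01 -- with the ray, you're back at the 1/100 prior
```

Under these assumptions, if every condemned prisoner erases, then “no memory of anything” is exactly as likely for the condemned as for anyone else, so the posterior snaps back to the 1⁄100 prior rather than to roughly zero, which seems to be exactly what the third group is upset about.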