Sure. This setup couldn’t really be exploited for optimizing the universe. If the self-sampling assumption (SSA) is a reasonable assumption to make, inducing amnesia doesn’t actually improve outcomes across possible worlds: one out of 100 prisoners still dies.
It can’t even be considered “re-rolling the dice” on whether the specific prisoner that you are dies. Under the SSA, there’s no such thing as a “specific prisoner”: “you” are implemented as all 100 prisoners simultaneously, so regardless of whether you choose to erase your memory, 1/100 of your measure is still destroyed. Without the SSA, on the other hand, if we consider each prisoner’s perspective to be distinct, erasing your memory indeed does nothing: it doesn’t return your perspective to the common pool of prisoner-perspectives, so if “you” were going to get shot, “you” are still going to get shot.
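To make that arithmetic concrete, here’s a minimal Monte Carlo sketch. The exact setup is my own framing of the scenario: 100 prisoners, one drawn uniformly at random to be shot, with the option to erase memories and re-draw the victim beforehand; prisoner index 0 stands in for “the specific prisoner that you are”. Whether or not the amnesia step happens, exactly one prisoner dies per world and any given prisoner’s chance of dying stays at 1/100:

```python
import random

N_PRISONERS = 100
TRIALS = 100_000
YOU = 0  # index standing in for "the specific prisoner that you are"

you_die_plain = 0    # you die in the no-amnesia variant
you_die_amnesia = 0  # you die after memories are erased and the victim is re-drawn

for _ in range(TRIALS):
    # No amnesia: one victim is drawn uniformly, so exactly one prisoner dies.
    if random.randrange(N_PRISONERS) == YOU:
        you_die_plain += 1

    # With amnesia: memories are erased and the victim is (re-)drawn. It is
    # still one uniform draw over the same 100 prisoners, so still exactly
    # one death per world.
    if random.randrange(N_PRISONERS) == YOU:
        you_die_amnesia += 1

print(you_die_plain / TRIALS)    # ~0.01: your chance of dying without amnesia
print(you_die_amnesia / TRIALS)  # ~0.01: the same with amnesia; nothing improves
```

Under the SSA reading, the same 1/100 shows up as a fixed loss of measure rather than a per-prisoner risk, but the numbers are identical either way.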
I’m not super interested in that part, though. What I’m interested in is whether there are in fact 100 clones of me: whether, under the SSA, “microscopically different” prisoners could be meaningfully considered a single “high-level” prisoner.
Fair enough.
Yes, it seems totally reasonable for bounded reasoners to consider hypotheses that would be counterfactual or even counterlogical for more idealized reasoners (where a hypothesis like ‘the universe is as it would be from the perspective of prisoner #3’ functions like treating prisoner #3 as ‘an instance of me’).
Typical bounded reasoning weirdness is stuff like seeming to take some counterlogicals (e.g. different hypotheses about the trillionth digit of pi) seriously despite denying 1+1=3, even though there’s a chain of logic connecting one to the other. Projecting this into anthropics, you might have a certain systematic bias about which hypotheses you can consider, and yet deny that that systematic bias is valid when presented with it abstractly.
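As a toy illustration of that asymmetry (the representation is mine, not anything from the original setup): a bounded reasoner spreads credence over ten mutually exclusive hypotheses about the trillionth digit of pi, at least nine of which are counterlogical, while giving zero credence to 1+1=3, which is counterlogical in exactly the same sense:

```python
# A toy bounded reasoner. It can't compute the trillionth digit of pi, so it
# spreads credence uniformly over ten hypotheses, nine of which are
# counterlogical, since the digit is in fact logically determined.
pi_digit_credence = {digit: 0.1 for digit in range(10)}

# The same reasoner flatly rejects a counterlogical claim it *can* check,
# even though a chain of logic in principle connects pi's digits back to
# basic arithmetic facts like this one.
credence_in_one_plus_one_equals_three = 0.0

print(pi_digit_credence[7])                   # 0.1: taken "seriously"
print(credence_in_one_plus_one_equals_three)  # 0.0: rejected outright
```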
This seems to make it pretty fraught to draw general lessons about what counts as ‘an instance of me’ from the fact that I’m a bounded reasoner.