I think he is describing the paradox of supernatural predictive power suggested by the Doomsday Argument and SSA in general: they boost the probability of scenarios with a smaller reference class. For example, in the Sleeping Beauty problem, SSA puts the probability of heads at 2/3 after Beauty learns that it is Monday, even though the toss is yet to happen.
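For concreteness, here is a minimal sketch of that SSA (halfer) calculation. It assumes nothing beyond the standard protocol (heads: one awakening on Monday; tails: awakenings on Monday and Tuesday):

```python
# SSA treats you as a random awakening within your own world, so the coin keeps
# its 1/2 prior and the evidence "it is Monday" is weighted per world.
p_heads = 0.5                 # prior on the coin
p_monday_given_heads = 1.0    # heads world: the only awakening is Monday
p_monday_given_tails = 0.5    # tails world: Monday is one of two awakenings

posterior = (p_heads * p_monday_given_heads) / (
    p_heads * p_monday_given_heads + (1 - p_heads) * p_monday_given_tails
)
print(posterior)  # 0.666..., i.e. 2/3, even though the toss is yet to happen
```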
Following similar logic, the astronaut can boost his survival chance by limiting the number of people saved. He can form this intention: select and reheat the passengers one by one, and as soon as he finds that he himself has been reheated, halt the entire process and let all the remaining astronauts die. This links his survival to a smaller reference class, which boosts its probability. How much it helps depends on the "correct" reference class. If the correct reference class contains only the astronauts, the boost is very significant; if it includes all "observers" in the universe, the increase is marginal, almost zero. But nonetheless, his survival probability would end up greater than 50%.
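Here is a rough sketch of why the "correct" reference class matters so much under SSA-style reasoning. The counts and the 50/50 prior are invented purely for illustration, since the thought experiment's exact numbers aren't given:

```python
# Under SSA, finding yourself as a particular observer who exists in both
# candidate worlds weights each world by 1 / (its reference-class size).
def ssa_posterior(prior_small, n_small, n_large):
    """Posterior probability of the 'few reheated' world after SSA updating."""
    w_small = prior_small * (1.0 / n_small)
    w_large = (1.0 - prior_small) * (1.0 / n_large)
    return w_small / (w_small + w_large)

astronauts = 1000          # hypothetical number of frozen passengers
outsiders = 10**12         # hypothetical number of other observers in the universe

# Reference class = reheated astronauts only: 1 member vs 1000 members.
print(ssa_posterior(0.5, 1, astronauts))                           # ~0.999, very significant

# Reference class = all observers in the universe: the astronaut counts barely matter.
print(ssa_posterior(0.5, 1 + outsiders, astronauts + outsiders))   # ~0.50000000025, almost 50/50
```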
This conclusion is very counterintuitive. For example, once I have been reheated, should I keep to the intention of killing the remaining astronauts? How can that still affect my chance of survival? It seems like retro-causation.
I consider this a counterargument against SSA and the Doomsday Argument. But I like this thought experiment. It shows that in order to actually conduct a sampling process among a group of agents that includes the first person, one has to forfeit the first-person perspective, e.g. reason from the viewpoint of an impartial outsider, in this case the security cameras.
Maybe this sounds more like he is preventing possible futures in which he doesn't exist: as if I rig a world-destroying bomb to go off when I die, then a larger percentage of possible futures will contain an older me.
How?