hmm. it seems to me that the sleeping mechanism problem is missing a perspective: there are more kinds of question you could ask the sleeping mechanism that are of interest. I'd say the measure that waking increases can't be used to predict which universe it's in; but that, given waking, the mechanism should estimate the average of the two universes' wake counts, and expect to have 1.5 wakings' worth of causal impact on the environment around it. In other words, it seems to me that the decision-relevant anthropic question is how many places a symmetric process exists: when inferring the properties of the universe around you, it is invalid to update about likely causal processes from the fact that you exist; but on finding that you exist, you can update about where your actions are likely to land, which is a different measure and doesn't license inferences about, e.g., universal constants.
if, for example, the sleeping beauty problem is run ten times, and each waking is written to a log, then after the experiment there will on average be 1.5x as many log entries as there are samples. but the agent should still predict 50%, because the predictive accuracy score is a question of whether the bet the agent makes can be beaten by other knowledge. when the mechanism wakes, it should know it has more action weight in one world than the other, but that doesn't let it update about which bet most accurately predicts the most recent sample. two thirds of the mechanism's actions occur in one world and one third in the other, but the mechanism can't use that knowledge to infer anything about the past.
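this is easy to check with a quick simulation. here's a minimal sketch (the run_experiments helper and the specific counts printed are mine, not part of the original setup), assuming the standard protocol of a fair coin with one waking on heads and two on tails:

```python
# a quick sketch of the log-counting claim: each run flips a fair coin,
# heads -> 1 waking, tails -> 2 wakings, and every waking appends an
# entry to a log. we then compare per-run frequencies (what a bet on the
# coin should track) against per-waking frequencies (where the agent's
# actions land).
import random

def run_experiments(n_runs: int = 100_000, seed: int = 0) -> None:
    rng = random.Random(seed)
    log = []          # one entry per waking, tagged with that run's coin
    tails_runs = 0
    for _ in range(n_runs):
        tails = rng.random() < 0.5
        tails_runs += tails
        wakings = 2 if tails else 1
        log.extend(["tails" if tails else "heads"] * wakings)

    print(f"runs:                {n_runs}")
    print(f"log entries:         {len(log)}  (~1.5 per run)")
    print(f"P(tails) per run:    {tails_runs / n_runs:.3f}  (~0.5)")
    print(f"P(tails) per waking: {log.count('tails') / len(log):.3f}  (~0.667)")

run_experiments()
```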
I get the sense that I might be missing something here. the thirder position makes intuitive sense on some level. but my intuition is that it’s conflating things. I’ve encountered the sleeping beauty problem before and something about it unsettles me—it feels like a confused question, and I might be wrong about this attempted deconfusion.
but this explanation matches my intuition that simulating a billion more copies of myself would be great, but would not make me more likely to have existed.