If you start with a prior that puts all its weight on the coin being 99:1, then no amount of observations will persuade you otherwise. If you start with a prior that is spread out across the possible biases of the coin (even if it is still 99:1 in expectation), then you can update from observations.
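A minimal numerical sketch of this point (not from the original exchange): a point-mass prior on a 99:1 bias never moves, while a spread-out prior with the same 0.99 expectation does update towards the observed frequency. The grid resolution, the Beta(99, 1)-shaped prior, and the 50/50 flip count are illustrative assumptions.

```python
import numpy as np

# Candidate values for the coin's bias p(heads), on a fine grid.
grid = np.linspace(0.001, 0.999, 999)

# Prior A: all probability mass on p = 0.99 ("99:1 and certain of it").
point_prior = np.isclose(grid, 0.99).astype(float)
point_prior /= point_prior.sum()

# Prior B: spread across biases, but still 0.99 in expectation.
# (Proportional to a Beta(99, 1) density -- an illustrative choice.)
spread_prior = grid ** 98
spread_prior /= spread_prior.sum()

def posterior(prior, heads, tails):
    """Bayesian update of a prior over the bias grid on observed flips."""
    likelihood = grid ** heads * (1.0 - grid) ** tails
    post = prior * likelihood
    return post / post.sum()

# Observe 50 heads and 50 tails -- evidence that the coin is roughly fair.
for name, prior in [("point-mass prior", point_prior),
                    ("spread prior", spread_prior)]:
    post = posterior(prior, heads=50, tails=50)
    print(f"{name}: posterior mean of p(heads) = {(grid * post).sum():.3f}")

# The point-mass prior stays at 0.99 no matter what is observed;
# the spread prior moves towards 0.5 as the flips come in.
```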
Decision theory proceeds in exactly the same way: it will “update” towards 50:50 unless it starts with a broken prior.
So essentially there are three things: decision theory, utility, and priors. Using those, you can solve all problems, without needing to define anthropic probabilities.
You can solve all problems, except the ones you care about :-) Most human values seem to be only instrumentally about world states like in UDT, but ultimately about which conscious experiences happen and in what proportions. If you think these proportions come from decision-making, what’s the goal of that decision-making?
It comes back to the same issue again: do you value exact duplicates having exactly the same experience as a sum, or is that the same as one copy having it (equivalent to an average)? Or something in between?
Let’s see. If I had memories of being in many anthropic situations, and the frequencies in these memories agreed with SIA to the tenth decimal place, I would probably value my copies according to SIA. Likewise with SSA. So today, before I have any such memories, I seem to be a value-learning agent who’s willing to adopt either SIA or SSA (or something else) depending on future experiences or arguments.
You seem to be saying that it’s better for me to get rid of value learning, replacing it with some specific way of valuing copies. If so, how should I choose which?
Edit: I’ve expanded this idea to a post.