You can get probabilities from decisions by maximising a proper scoring rule applied to your estimate of how likely an event is to happen. This works in every case where probability works. A broken prior will break both probabilities and decision theory.
In the case of anthropics, the probability breaks down, because the expectation of an event isn't well defined across duplicates, while decision theory doesn't.
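As a minimal sketch of the scoring-rule point above (the Brier rule and the numbers below are illustrative assumptions, not taken from the discussion): if an agent is scored on its stated estimate by a proper scoring rule, the estimate that maximises its expected score is exactly the probability of the event.

```python
import numpy as np

# Illustrative sketch: recover a probability from a "decision" (the reported
# estimate q) by maximising the expected value of a proper scoring rule,
# here the Brier score, written so that higher is better.

true_p = 0.3                          # hypothetical chance the event happens
candidates = np.linspace(0, 1, 1001)  # possible reported estimates q

def expected_brier(q, p):
    # Expected score of reporting q when the event happens with probability p:
    # score is -(outcome - q)^2, averaged over outcome in {1, 0}.
    return p * -(1 - q) ** 2 + (1 - p) * -(0 - q) ** 2

best_q = max(candidates, key=lambda q: expected_brier(q, true_p))
print(best_q)  # ~0.3: the score-maximising report equals the probability
```

The construction breaks down exactly where the comment says it does: with duplicates there is no single well-defined outcome for the score to be conditioned on.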
In my scenario there are two things that can be called “probability”:
1) The 99:1 odds that you use for decisions. We know how this thing works. You’ve shown that it doesn’t work in anthropic situations.
2) The 50:50 odds that you observe regardless of your decisions. Nobody knows how this thing works. You haven’t shown anything about it in anthropic situations.
If you start with a prior that puts all its weight on the coin being 99:1, then no amount of observation will persuade you otherwise. If you start with a prior that is more spread out across the possible biases of the coin, even if it is 99:1 in expectation, then you can update from observations.
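A minimal numerical sketch of this (the two-point grid of biases and the exact prior weights are illustrative assumptions): a prior that puts all its mass on the 99:1 coin never moves, while a prior that keeps even a little mass on other biases is pulled towards whatever the flips look like.

```python
import numpy as np

# Illustrative sketch: Bayesian updating over a tiny grid of coin biases.
biases = np.array([0.5, 0.99])         # candidate values of P(heads)
point_prior = np.array([0.0, 1.0])     # all mass on the 99:1 coin
spread_prior = np.array([0.02, 0.98])  # mostly the 99:1 coin, but not dogmatically

def posterior(prior, heads, tails):
    # Posterior over biases after i.i.d. flips:
    # posterior ∝ prior * bias^heads * (1 - bias)^tails
    likelihood = biases ** heads * (1 - biases) ** tails
    unnorm = prior * likelihood
    return unnorm / unnorm.sum()

# Data that looks like a fair coin.
print(posterior(point_prior, heads=50, tails=50))   # [0. 1.]  -- never moves
print(posterior(spread_prior, heads=50, tails=50))  # ~[1. 0.] -- converges to fair
```

(The spread prior here is not exactly 99:1 in expectation; the only point is that any non-dogmatic prior gets moved by the data, which is the sense in which decision theory "updates" towards 50:50 below.)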
Decision theory proceeds in exactly the same way; it will “update” towards 50:50 unless it starts with a broken prior.
So essentially there are three things: decision theory, utility, and priors. Using those, you can solve all problems, without needing to define anthropic probabilities.
You can solve all problems, except the ones you care about :-) Most human values seem to be only instrumentally about world states like in UDT, but ultimately about which conscious experiences happen and in what proportions. If you think these proportions come from decision-making, what’s the goal of that decision-making?
It comes back to the same issue again: do you value exact duplicates having exactly the same experience as a sum, or is that the same as one copy having it (equivalent to an average)? Or something in between?
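To make the options concrete (the function names and the interpolation parameter are hypothetical, purely for illustration), the question is which aggregation rule to apply to n exact duplicates of the same experience:

```python
# Illustrative sketch: three ways to value n exact duplicates of an experience
# whose single-copy value is u. The interpolation below is just one possibility.

def value_as_sum(u, n):
    # Duplicates add up: twice the copies, twice the value.
    return u * n

def value_as_average(u, n):
    # Duplicates collapse: n identical copies are worth no more than one.
    return u

def value_in_between(u, n, t=0.5):
    # One hypothetical interpolation between the two extremes
    # (t=1 gives the sum, t=0 gives the average).
    return u * n ** t

for n in (1, 2, 10):
    print(n, value_as_sum(1.0, n), value_as_average(1.0, n), value_in_between(1.0, n))
```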
Let’s see. If I had memories of being in many anthropic situations, and the frequencies in these memories agreed with SIA to the tenth decimal place, I would probably value my copies according to SIA. Likewise with SSA. So today, before I have any such memories, I seem to be a value-learning agent who’s willing to adopt either SIA or SSA (or something else) depending on future experiences or arguments.
You seem to be saying that it’s better for me to get rid of value learning, replacing it with some specific way of valuing copies. If so, how should I choose which?
Edit: I’ve expanded this idea to a post.