Personally I feel like saying “yes to all your maybes”—yes, just use some updateless theory to maximize whatever aggregation function over your copies you want.
I guess I have two things to say.
First, I think that thinking in terms of payoffs is better even if you care about credences: you can just prove something like "for any preferences there is a way to factor credences out of them", but in a much less confusing way.
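To make that concrete, here is a minimal sketch of the "factor credences out of preferences" direction, under the simplest assumptions I could pick for illustration (a single binary event, linear utility in money, and an agent described only by which bets it accepts; the names and numbers are mine, not anything established above):

```python
# A minimal sketch: recover a credence from betting preferences alone.
# Assumptions (mine, for illustration): linear utility in money, one binary
# event E, and an agent exposed only as "accepts/rejects a ticket at price q".

def make_agent(hidden_credence: float):
    """An agent that accepts a ticket paying 1 if E whenever its (hidden)
    expected value exceeds the price. Only the accept/reject behaviour,
    i.e. the preferences, is exposed."""
    def accepts(price: float) -> bool:
        return hidden_credence * 1.0 - price > 0
    return accepts

def implied_credence(accepts, lo=0.0, hi=1.0, iters=50) -> float:
    """Binary-search for the indifference price: the point where the agent
    switches from accepting to rejecting. Under the assumptions above,
    that price is the credence factored out of the preferences."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if accepts(mid):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

agent = make_agent(hidden_credence=0.37)
print(round(implied_credence(agent), 3))  # ~0.37: credence recovered from preferences alone
```

The point is just that the credence is a derived quantity here; the preferences over bets are the primitive thing being reasoned about.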
And second:
"I feel like I'm just asking a question about what's true, about what kind of world I'm living in"
Most of the examples so far were not about that; they were about what kind of world many copies of you are living in. And to decide whether those copies answer these questions well enough, you have to score their answers and aggregate the scores somehow.
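A minimal sketch of what I mean by "score and aggregate", using Sleeping Beauty as the toy case (the Brier score and the two aggregation rules are just standard textbook choices I picked for illustration): summing each awakening's score makes credence 1/3 in Heads optimal, while averaging within each world makes 1/2 optimal, so which answer counts as "good enough" falls out of the aggregation function you chose over your copies.

```python
# Toy illustration (my own setup): Sleeping Beauty, where Heads gives one
# awakening and Tails gives two. Each awakening ("copy") states the same
# credence p in Heads and is penalized by the Brier score. Which p is
# optimal depends entirely on how we aggregate over copies.

def brier_penalty(p: float, heads: bool) -> float:
    """Quadratic penalty for stating credence p in Heads."""
    outcome = 1.0 if heads else 0.0
    return (outcome - p) ** 2

def expected_penalty(p: float, aggregate: str) -> float:
    """Expected penalty over the fair coin, aggregating per-copy scores
    either by summing over awakenings or averaging within each world."""
    heads_scores = [brier_penalty(p, heads=True)]        # Heads: 1 awakening
    tails_scores = [brier_penalty(p, heads=False)] * 2   # Tails: 2 awakenings
    if aggregate == "sum":        # total score across copies
        h, t = sum(heads_scores), sum(tails_scores)
    elif aggregate == "average":  # per-world average across copies
        h = sum(heads_scores) / len(heads_scores)
        t = sum(tails_scores) / len(tails_scores)
    else:
        raise ValueError(aggregate)
    return 0.5 * h + 0.5 * t

def best_credence(aggregate: str) -> float:
    grid = [i / 1000 for i in range(1001)]
    return min(grid, key=lambda p: expected_penalty(p, aggregate))

print(best_credence("sum"))      # ~0.333: "thirder" credence wins under summing
print(best_credence("average"))  # ~0.5:   "halfer" credence wins under averaging
```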