I did particularly like the “Sleeping Loop” version, which even manages to confuse the question of how many times you’ve been awakened: just once, or infinitely many times? Congratulations!
My follow-up question for almost all of them, though, is based on the use of the word “should” in the question. Since it is presumably not any moral sense of “should”, it presumably means something in the direction of “best achieves a desired outcome”.
What outcome am I trying to maximize here? Am I trying to maximize some particular metric of prediction accuracy? If so, which metric, and how is it applied? If I give the same answer twice based on the same information, is that scored differently from giving that answer once? If some p-zombie answers the same way that I would have if I were conscious, does that score count for my prediction, or is it considered irrelevant? (Although this comment ends here, don’t worry: I have a lot more questions!)
My follow-up question for almost all of them, though, is based on the use of the word “should” in the question. Since it is presumably not any moral sense of “should”, it presumably means something in the direction of “best achieves a desired outcome”.
The ‘should’ only designates what you think epistemic rationality requires of you in the situation. That might be something consequentialist (which is what I think you mean by “best achieves a desired outcome”), like maximizing accuracy[1], but it need not be; you could think there are other norms[2].
To see why epistemic consequentialism might not be the whole story, consider the following case from Greaves (2013), in which the agent seemingly maximises expected accuracy by ignoring her evidence and believing something obviously false.
Imps. Emily is taking a walk through the Garden of Epistemic Imps. A child plays on the grass in front of her. In a nearby summerhouse are n further children, each of whom may or may not come out to play in a minute. They are able to read Emily’s mind, and their algorithm for deciding whether to play outdoors is as follows. If she forms degree of belief 0 that there is now a child before her, they will come out to play. If she forms degree of belief 1 that there is a child before her, they will roll a fair die, and come out to play iff the outcome is an even number. More generally, the summerhouse children will play with chance (1 − ½q(C0)), where q(C0) is the degree of belief Emily adopts in the proposition C0 that there is now a child before her. Emily’s epistemic decision is the choice of credences in the proposition C0 that there is now a child before her, and, for each j = 1, …, n, the proposition Cj that the jth summerhouse child will be outdoors in a few minutes’ time.

See Konek and Levinstein (2019) for a good discussion, though.
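To make the pull of the case concrete, here is a rough numerical sketch. It is not from Greaves; it assumes, as in footnote [1], that inaccuracy is measured by the Brier score, and that Emily matches her credence in each Cj to the resulting chance that the jth child plays. The function below is just for illustration.

```python
# Rough sketch of the Imps case under two assumptions: inaccuracy is the
# Brier score (footnote [1]), and Emily sets her credence in each Cj equal
# to the chance (1 - q0/2) that the jth summerhouse child comes out to play.

def expected_inaccuracy(q0: float, n: int) -> float:
    """Expected total Brier penalty of adopting credence q0 in C0."""
    p = 1 - q0 / 2                      # chance each summerhouse child plays
    penalty_c0 = (1 - q0) ** 2          # C0 is in fact true: a child is before her
    penalty_each_cj = p * (1 - p)       # expected penalty per Cj when q_j = p
    return penalty_c0 + n * penalty_each_cj

for q0 in (0.0, 0.5, 1.0):
    print(q0, expected_inaccuracy(q0, n=10))
# 0.0 1.0     <- ignore the evidence: a guaranteed small penalty
# 0.5 2.125
# 1.0 2.5     <- believe your eyes: no penalty on C0, but n/4 expected on the Cj
```

With n = 10, adopting credence 0 in C0 has the lowest expected inaccuracy even though Emily can see the child right in front of her, which is the sense in which the case puts pressure on a purely consequentialist reading of ‘should’.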
If I give the same answer twice based on the same information, is that scored differently from giving that answer once?
Once again, this depends on your preferred view of epistemic rationality, and specifically how you want to formulate the accuracy-first perspective. Whether you want to maximize individual, average or total accuracy is up to you! The problems formulated here are supposed to be agnostic with regard to such things; indeed, these are the types of discussions one wants to motivate by formulating philosophical dilemmas.
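To illustrate how much that bookkeeping choice matters, here is a small sketch using the plain two-awakening Sleeping Beauty problem and, again, the Brier score; the setup and the scoring conventions below are assumptions of mine, not anything from the problems discussed here.

```python
# Rough sketch of how "total" vs "average" accuracy treat a repeated answer
# differently in the standard two-awakening Sleeping Beauty problem,
# assuming the Brier score throughout.

def expected_penalty(q: float, mode: str) -> float:
    """Expected Brier penalty of credence q in Heads, under the given bookkeeping."""
    heads_branch = 0.5 * (1 - q) ** 2    # Heads (prob 1/2): one awakening, Heads true
    if mode == "total":                  # Tails: the same answer is scored twice
        tails_branch = 0.5 * 2 * q ** 2
    else:                                # "average": the two identical answers count once
        tails_branch = 0.5 * q ** 2
    return heads_branch + tails_branch

grid = [i / 10000 for i in range(10001)]
for mode in ("total", "average"):
    best = min(grid, key=lambda q: expected_penalty(q, mode))
    print(mode, best)
# total 0.3333     (the 'thirder' answer)
# average 0.5      (the 'halfer' answer)
```

Counting every awakening separately pushes the best credence in Heads to 1/3, while averaging within each world keeps it at 1/2, so the question of whether a repeated answer is scored twice is doing real work.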
[1] This is plausibly cashed out by tying your epistemic utility function to a proper scoring rule, e.g. the Brier score; a quick illustration of what ‘proper’ means here is sketched below.
[2] See e.g. Sylvan (2020) for a discussion of what non-consequentialism might look like in the general, non-anthropic case.
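Since both sketches above lean on the Brier score, here is the quick check behind the word ‘proper’ in footnote [1]: a scoring rule is proper when reporting the chance itself minimises your expected penalty. The chance p = 0.7 below is just a hypothetical number.

```python
# If an event has chance p, the expected Brier penalty of credence q is
# p*(1 - q)**2 + (1 - p)*q**2, which is minimised at q = p: shading your
# credence away from the chance never lowers your expected penalty.

def expected_brier_penalty(q: float, p: float) -> float:
    return p * (1 - q) ** 2 + (1 - p) * q ** 2

p = 0.7                                  # hypothetical chance of the event
grid = [i / 1000 for i in range(1001)]
print(min(grid, key=lambda q: expected_brier_penalty(q, p)))   # prints 0.7
```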