A common trope is for magic to work only when you believe in it. For example, in Harry Potter, you can only get to the magical train platform 9 3⁄4 if you believe that you can pass through the wall to get there.
Are you familiar with Greaves’ (2013) epistemic decision theory? These types of cases are precisely the ones she considers, although she is entirely focused on the epistemic side of things. For example (p. 916):
Leap. Bob stands on the brink of a chasm, summoning up the courage to try and leap across it. Confidence helps him in such situations: specifically, for any value of x between 0 and 1, if Bob attempted to leap across the chasm while having degree of belief x that he would succeed, his chance of success would then be x. What credence in success is it epistemically rational for Bob to have?
And even more interesting cases (p. 917):
Embezzlement. One of Charlie’s colleagues is accused of embezzling funds. Charlie happens to have conclusive evidence that her colleague is guilty. She is to be interviewed by the disciplinary tribunal. But Charlie’s colleague has had an opportunity to randomize the content of several otherwise informative files (files, let us say, that the tribunal will want to examine if Charlie gives a damning testimony). Further, in so far as the colleague thinks that Charlie believes him guilty, he will have done so. Specifically, if x is the colleague’s prediction for Charlie’s degree of belief that he’s guilty, then there is a chance x that he has set in motion a process by which each proposition originally in the files is replaced by its own negation if a fair coin lands Heads, and is left unaltered if the coin lands Tails. The colleague is a very reliable predictor of Charlie’s doxastic states. After such randomization (if any occurred), Charlie has now read the files; they (now) purport to testify to the truth of n propositions P1,…,Pn. Charlie’s credence in each of the propositions Pi, conditional on the proposition that the files have been randomized, is 1/2; her credence in each Pi conditional on the proposition that the files have not been randomized is 1. What credence is it epistemically rational for Charlie to have in the proposition G that her colleague is guilty and in the propositions Pi that the files purport to testify to the truth of?
In particular, Greaves’ (2013, §8, pp. 43-49) epistemic version of Arntzenius’ (2008) deliberational (causal) decision theory might be seen as a way of making sense of the first part of your theory. The idea, inspired by Skyrms (1990), is that deciding on a credence involves a cycle of calculating epistemic expected utility (measured by a proper scoring rule), adjusting credences, and recalculating utilities until an equilibrium is obtained. For example, in Leap above, epistemic D(C)DT would find any credence permissible. And I guess that the second part of your theory serves as a way of breaking ties.
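To make the equilibrium idea concrete, here is a small sketch (my own illustration, not Greaves' formalism) of why any credence is an equilibrium in Leap: since holding credence x makes the chance of success exactly x, and the Brier score is proper, the inaccuracy-minimizing response to your current credence is always that very credence, so the deliberational dynamics never move.

```python
# Sketch of epistemic deliberational dynamics for Leap (my illustration):
# holding credence x in success makes the chance of success exactly x.
# Expected Brier inaccuracy of credence y when the chance of success is x:
def expected_brier(y, x):
    return x * (1 - y) ** 2 + (1 - x) * y ** 2

def best_response(x, grid_size=1001):
    # Credence minimizing expected inaccuracy, given current credence x
    grid = [i / (grid_size - 1) for i in range(grid_size)]
    return min(grid, key=lambda y: expected_brier(y, x))

# Every credence is a deliberational equilibrium: by propriety of the
# Brier score, the best response to holding credence x is x itself.
for x in [0.0, 0.3, 0.7, 1.0]:
    assert abs(best_response(x) - x) < 1e-9
```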
You might also find the following cases interesting (with self-locating uncertainty as an additional dimension), from this post.
Sleeping Newcomb-1. Some researchers, led by the infamous superintelligence Omega, are going to put you to sleep. During the two days that your sleep will last, they will briefly wake you up either once or twice, depending on the toss of a biased coin (Heads: once; Tails: twice). After each waking, they will put you back to sleep with a drug that makes you forget that waking. The weight of the coin is determined by what the superintelligence predicts that you would say when you are awakened and asked to what degree you ought to believe that the outcome of the coin toss is Heads. Specifically, if the superintelligence predicted that you would have a degree of belief p in Heads, then they will have weighted the coin such that the ‘objective chance’ of Heads is p. So, when you are awakened, to what degree ought you believe that the outcome of the coin toss is Heads?
Sleeping Newcomb-2. Some researchers, led by the superintelligence Omega, are going to put you to sleep. During the two days that your sleep will last, they will briefly wake you up either once or twice, depending on the toss of a biased coin (Heads: once; Tails: twice). After each waking, they will put you back to sleep with a drug that makes you forget that waking. The weight of the coin is determined by what the superintelligence predicts your response would be when you are awakened and asked to what degree you ought to believe that the outcome of the coin toss is Heads. Specifically, if Omega predicted that you would have a degree of belief p in Heads, then they will have weighted the coin such that the ‘objective chance’ of Heads is 1−p. Then: when you are in fact awakened, to what degree ought you believe that the outcome of the coin toss is Heads?
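As a sketch of the fixed-point structure in Sleeping Newcomb-2, here is one way to compute a self-consistent credence, under an assumption I'm adding that the case itself leaves open: "thirder"-style weighting, where each awakening counts once, so with chance h of Heads an awakening is a Heads-awakening with probability h / (h + 2(1−h)).

```python
# Sketch of the fixed-point condition in Sleeping Newcomb-2, assuming
# thirder-style anthropic counting (my assumption, not part of the case).
# Omega sets the chance of Heads to h = 1 - p, where p is your credence
# on awakening, so a self-consistent credence satisfies
#   p = (1 - p) / ((1 - p) + 2*p).

def awakening_credence(h):
    # P(Heads | awake): one Heads-awakening vs. two Tails-awakenings
    return h / (h + 2 * (1 - h))

def fixed_point(lo=0.0, hi=1.0, iters=60):
    # Bisection on p - awakening_credence(1 - p), which is increasing in p
    for _ in range(iters):
        mid = (lo + hi) / 2
        if mid - awakening_credence(1 - mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

p = fixed_point()      # solves p**2 + 2*p - 1 = 0, i.e. p = sqrt(2) - 1
print(round(p, 4))     # 0.4142
```

Under "halfer" counting the self-consistency condition would come out differently; the point is just that self-locating uncertainty adds another layer to the fixed-point structure.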
Yes, thanks for citing it here! I should have mentioned it, really.
I see the Skyrms iterative idea as quite different from the “just take a fixed point” theory I sketch here, although clearly they have something in common. FixDT makes it easier to combine both epistemic and instrumental concerns—every fixed point obeys the epistemic requirement; and then the choice between them obeys the instrumental requirement. If we iteratively zoom in on a fixed point instead of selecting from the set, this seems harder?
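The two-step structure can be sketched in a few lines for Leap, with toy payoffs of my own invention (1 for making the jump, 0 for falling):

```python
# Toy sketch of the FixDT two-step structure in Leap (my illustration):
# step 1, the epistemic requirement keeps only fixed points of the
# belief-to-chance map; step 2, the instrumental requirement selects
# among them by expected pragmatic utility.

def chance_of_success(credence):
    # In Leap, the chance of success equals the credence held
    return credence

grid = [i / 100 for i in range(101)]

# Step 1: epistemic filter — keep credences that are fixed points.
fixed_points = [x for x in grid if abs(chance_of_success(x) - x) < 1e-12]

# Step 2: instrumental selection — maximize expected payoff
# (toy payoffs: success is worth 1, falling is worth 0).
def expected_payoff(x):
    return chance_of_success(x) * 1 + (1 - chance_of_success(x)) * 0

best = max(fixed_points, key=expected_payoff)
print(len(fixed_points), best)   # 101 1.0
```

In Leap every credence on the grid passes the epistemic filter, and the instrumental step then picks out full confidence; iterative zooming-in has no analogous place to apply the second criterion.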
If we try the Skyrms iteration thing, maybe the most sensible thing would be to move toward the beliefs of greatest expected utility—but do so in a setting where epistemic utility emerges naturally from pragmatic concerns (such as A Pragmatist's Guide to Epistemic Decision Theory by Ben Levinstein). So the agent is only ever revising its beliefs in pragmatic ways, but we assume enough about the environment that it wants to obey both the epistemic and instrumental constraints? But, possibly, this assumption would just be inconsistent with the sort of decision problem which motivates FixDT (and Greaves).