While I value true knowledge, I carve out an exception for beliefs about my own capabilities, as represented by this modified version of the Litany of Tarski:
If I can X,
then I desire to believe I can X.
If believing that I cannot X would make it such that I could not X,
and it is plausible that I can X,
and there are no dire consequences for failure if I X,
then I desire to believe I can X.
It is plausible that I can X.
There are no dire consequences for failure if I X.
Let me not become attached to beliefs I may not want.
That doesn’t seem appropriate for arbitrary X. It is the sort of thing you would have to use ordinary epistemic rationality to evaluate for a particular X.
I left out a bit of the implied procedure that goes with reciting this. You’re supposed to truth-check those two lines as you say them, and stop if they aren’t true, with the understanding that (as a prior probability) they usually will be.
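To make the branching explicit, here is a minimal sketch of that recitation procedure in Python. The function name, the predicate names, and the example values are my own illustrative assumptions rather than anything from the litany itself, and the "usually true as a prior" part is not modeled; the code only encodes the three premises and the stop-if-false check described above.

```python
def should_believe_i_can(x,
                         belief_affects_ability,  # believing "I cannot X" would make X impossible
                         plausible_i_can,         # it is plausible that I can X
                         failure_is_safe):        # no dire consequences for failure if I X
    """Return True if the modified litany licenses adopting the belief 'I can X'.

    The two checked premises (plausibility and safety of failure) are verified
    as they are recited; if either fails, stop and fall back to ordinary
    epistemic evaluation of whether I can X.
    """
    if not belief_affects_ability(x):
        return False  # belief has no effect on ability; the ordinary Litany of Tarski applies
    if not plausible_i_can(x):
        return False  # stop: "it is plausible that I can X" is false
    if not failure_is_safe(x):
        return False  # stop: failure would have dire consequences
    return True       # all premises hold: desire to believe "I can X"


# Example (entirely hypothetical): deciding whether to adopt "I can give this talk".
print(should_believe_i_can(
    "give this talk",
    belief_affects_ability=lambda x: True,  # confidence plausibly affects delivery
    plausible_i_can=lambda x: True,         # it is plausible that I can give it
    failure_is_safe=lambda x: True,         # a flubbed talk is not dire
))  # -> True
```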
What? Where did that prior probability come from?