Well, it can’t still be instrumental rationality then. Suppose the value being minimized is overall suffering, and you are offered a one-time threat, with non-zero probability of being carried out and with no other possible infinitary outcomes in play, that if you don’t believe some false claim X, a god will create an infinite amount of suffering. You also know, before choosing whether to believe the false claim, that no side effect of believing it will raise expected suffering enough to outweigh the harm of refusing to believe it.
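To spell out the expected-value arithmetic behind that (just a sketch; here $p$ stands for whatever non-zero credence you assign to the threat being carried out, and the $s$ terms for the finite amounts of suffering involved):

$$E[\text{suffering} \mid \text{refuse}] = p \cdot \infty + (1-p)\, s_{\text{refuse}} = \infty,$$
$$E[\text{suffering} \mid \text{believe } X] = s_{\text{believe}} < \infty.$$

So for any non-zero $p$, minimizing expected suffering mandates the self-deception, however large the finite costs of believing X turn out to be.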
But the real rub is what you say about the situation where the rocks turn out to be rocks cleverly disguised as people. You still have every indication that, in convincing yourself, you are attempting to believe a false statement, yet the statement is actually true.
Consider the decision procedure that says whatever you want it to say normally, but adds a special exception permitting you to deceive yourself if (insert here a description of this situation which happens to pick it out uniquely in the actual world). Is that procedure better or worse than the one without the exception?
In other words, is it a relation to truth that you demand? In that case the rule gets better whenever you add exceptions that happen, no matter how unlikely it was, to generate true and instrumentally useful beliefs in the actual world. Or is it some notion of following the evidence?
If the latter, you seem to be committed to the existence of something like Carnap’s logical probability, i.e., something deducible from pure reason that assigns priors to all possible theories of the world. This is a notoriously unsolvable problem, in the sense that no such assignment exists.
At the very least, can you state some formal conditions constraining a rule for deciding between actions (or however you want to model it) that capture the constraint you want?
Thanks for the reply. If we change the story as you describe, I guess the moral would become “investigate thoroughly”. Obviously Bayesians are never really certain, but deliberate manipulation of one’s own map of probabilities is unwise unless there is an overwhelmingly good reason (your hypothetical would probably be one, but I believe we rarely run into that species of situation in real life).
The story itself is not the argument, but an illustration of it. The argument is: “a calculation of the instrumentality of various options ought to include a generalised weighting toward the truth (resistance to self-deception), because the consequences of self-deception tend to be hidden and negative”. I additionally feel that this weighting is neglected when the focus is on “winning”. I can’t prove the empirical part of the first claim, because it’s based on general life experience, but I don’t feel it’s going to be challenged by any reasonable person (does anyone here think self-deception doesn’t generally lead to unseen, negative consequences?).
I don’t feel confident prescribing a specific formula to quantify that weighting at this time. I’m merely suggesting that the weight should be something, and be significant in most situations.
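Purely as an illustration of the shape I have in mind (not a formula I’m endorsing), it would be something like

$$V(a) = U(a) - \lambda\, D(a), \qquad \lambda > 0,$$

where $U(a)$ is the ordinary instrumental value of option $a$, $D(a)$ is the degree of self-deception it involves, and $\lambda$ is a penalty weight large enough that it is only outweighed in cases as extreme as your hypothetical. The only claim I’m making is that $\lambda$ should not be zero.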