The conflict seems to be that, according to the advice, a rationalist ought to (a) try to find out which of their ideas are false, and (b) evict those ideas. A policy of strategic ignorance avoids having to do (b) by deliberately doing a crappy job of (a).
In the few specific situations I drilled down on, I found that “deliberately doing a crappy job of (a)” never came up. Sometimes, however, the choice was between doing (a)+(b) on topic (d) or doing (a)+(b) on topic (e), where knowing (d) is unproductive. The choice is clearly to do (a)+(b) on (e), because it is more productive.
Then there is no conflict with “Whatever can be destroyed by the truth, should be.” because what needs to be destroyed is prioritized.
Can you provide a specific example where a conflict with “Whatever can be destroyed by the truth, should be.” is guaranteed?
Okay, I think this example from the OP works:
> A person who wants to have multiple sexual partners may resist getting himself tested for sexual disease. If he was tested, he might find out he had a disease, and then he’d be accused of knowingly endangering others if he didn’t tell them about his disease. If he isn’t tested, he’ll only be accused of not finding out that information, which is often considered less serious.
Let’s call the person Alex. Alex avoids getting tested in order to avoid possible blame; assuming Alex is selfish and doesn’t care about their partners’ sexual health (or the knock-on effects of people in general not caring about their partners’ sexual health) at all, then this is the right choice instrumentally.
However, by acting this way Alex is deliberately protecting an invalid belief from being destroyed by the truth. Alex currently believes, or should believe, that they have a low probability (at the demographic average) of carrying a sexual disease. If Alex got tested, then this belief would be destroyed one way or the other: if the test was positive, the posterior probability goes way up, and if the test is negative, it goes down by a smaller but still non-trivial amount.
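To make the size of that update concrete, here is a minimal Bayes’-rule sketch. The base rate, sensitivity, and specificity below are purely hypothetical numbers chosen for illustration, not real epidemiology:

```python
# Posterior probability of infection given a test result, via Bayes' rule.
# All numbers are illustrative assumptions, not real epidemiology.

def posterior(prior, sensitivity, specificity, positive):
    """P(infected | test result) for a binary test."""
    if positive:
        num = sensitivity * prior                    # P(+ | D) * P(D)
        den = num + (1 - specificity) * (1 - prior)  # ... + P(+ | not D) * P(not D)
    else:
        num = (1 - sensitivity) * prior              # P(- | D) * P(D)
        den = num + specificity * (1 - prior)        # ... + P(- | not D) * P(not D)
    return num / den

prior = 0.02  # hypothetical demographic base rate
print(posterior(prior, 0.98, 0.95, positive=True))   # ~0.29: way above the prior
print(posterior(prior, 0.98, 0.95, positive=False))  # ~0.0004: well below the prior
```

Either way the 0.02 prior does not survive contact with the evidence, which is the sense in which the untested belief is “destroyable.”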
Instead of doing this, Alex simply acts as though they already knew the results of the test to be negative in advance, and even goes on to spread the truth-destroyable-belief by encouraging others to take it on as well. By avoiding evidence, particularly useful evidence (where by useful I mean easy to gather and having a reasonably high magnitude of impact on your priors if gathered), Alex is being epistemically irrational (even though they might well be instrumentally rational).
This is what I meant by “deliberately doing a crappy job of [finding out which ideas are false]”.
This does bring up an interesting idea, though, which is that it might not be (instrumentally) rational to be maximally (epistemically) rational.
An additional necessary assumption seems to be that Alex cares about “Whatever can be destroyed by the truth, should be.” He is selfish, but does his best to act rationally.
> Let’s call the person Alex. Alex avoids getting tested in order to avoid possible blame; assuming Alex is selfish and doesn’t care about their partners’ sexual health (or the knock-on effects of people in general not caring about their partners’ sexual health) at all, then this is the right choice instrumentally.
Therefore Alex does not value knowing whether or not he has an STD, and instead pursues other knowledge.
> However, by acting this way Alex is deliberately protecting an invalid belief from being destroyed by the truth. Alex currently believes or should believe that they have a low probability (at the demographic average) of carrying a sexual disease. If Alex got tested, then this belief would be destroyed one way or the other; if the test was positive, then the posterior probability goes way upwards, and if the test is negative, then it goes downwards a smaller but still non-trivial amount.
Alex is faced with the choice of getting an STD test and improving his probability estimate of his state of infection, or spending his time doing something he considers more valuable. He chooses not to get an STD test because the information is not very valuable to him, and focuses on more important matters.
> Instead of doing this, Alex simply acts as though they already knew the results of the test to be negative in advance, and even goes on to spread the truth-destroyable-belief by encouraging others to take it on as well.
Alex is selfish and does not care that he is misleading people.
> By avoiding evidence, particularly useful evidence (where by useful I mean easy to gather and having a reasonably high magnitude of impact on your priors if gathered), Alex is being epistemically irrational (even though they might well be instrumentally rational).
Avoiding the evidence would be irrational. Focusing on more important evidence is not. Alex is not doing a “crappy” job of finding out what is false; he has just maximized finding out the truths he cares about.
I tried to present a rational, selfish, uncaring Alex who chooses not to get an STD test even though he cares deeply about “Whatever can be destroyed by the truth, should be.”, as far as his personal beliefs are concerned.
> Avoiding the evidence would be irrational. Focusing on more important evidence is not.
I disagree. In the least convenient world where the STD test imposes no costs on Alex, he would still be instrumentally rational to not take it. This is because Alex knows the plausibility of his claims that he does not have an STD will be sabotaged if the test comes out positive, because he is not a perfect liar.
(I don’t think this situation is even particularly implausible. Some situation at a college where they’ll give you a cookie if you take an STD test seems quite likely, along the same lines as free condoms.)
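The incentive in this least-convenient world can be made explicit with a toy expected-cost comparison. Every probability and “blame cost” below is a made-up placeholder, chosen only to show the structure of the argument:

```python
# Toy comparison of expected social cost for a selfish, imperfect liar:
# testing vs. not testing, when the test itself is free.
# All numbers are hypothetical placeholders.

p_infected     = 0.05  # prior probability of actually having an STD
p_lie_detected = 0.5   # an imperfect liar's chance of being caught knowing
blame_knowing  = 10.0  # cost of being seen to knowingly endanger partners
blame_ignorant = 2.0   # smaller cost of mere culpable ignorance

# Skip the test: the worst accusation is "you didn't bother to find out".
cost_untested = p_infected * blame_ignorant

# Take the free test: a positive result plus imperfect lying risks the
# much larger "you knew" accusation.
cost_tested = p_infected * p_lie_detected * blame_knowing

print(cost_untested < cost_tested)  # True: ignorance wins for this agent
```

On these numbers, strategic ignorance beats even a free test whenever `p_lie_detected * blame_knowing` exceeds `blame_ignorant`, which is the shape of the claim above.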
> I disagree. In the least convenient world where the STD test imposes no costs on Alex, he would still be instrumentally rational to not take it. This is because Alex knows the plausibility of his claims that he does not have an STD will be sabotaged if the test comes out positive, because he is not a perfect liar.
In a world where STD tests cost absolutely nothing, including time, effort, and thought, there would be no excuse not to have taken a test, and I do not see a method for generating plausible deniability by not knowing.
> Some situation at a college where they’ll give you a cookie if you take an STD test seems quite likely
Your situation does not really count as no cost, though. In a world in which you must spend effort to avoid getting an STD test, it seems unlikely that plausible deniability can be generated in the first place.
You are correct: “Avoiding the evidence would be irrational.” does seem to be incorrect in general, and I generalized too strongly from the example I was working on.
Though this does not seem to answer my original question: is there a by-definition conflict between “Whatever can be destroyed by the truth, should be.” and generating plausible deniability? The answer I still come up with is no conflict. Some truths should be destroyed before others, and this allows some plausible deniability for untruths that are low in priority.
> In a world where STD tests cost absolutely nothing, including time, effort, and thought, there would be no excuse not to have taken a test, and I do not see a method for generating plausible deniability by not knowing.
That’s not really the least convenient possible world though, is it? The least convenient possible world is one where STD tests impose no additional cost on him, but other people don’t know this, so he still has plausible deniability. Let’s say that he’s taking a sexuality course where the students are assigned to take STD tests, or if they have some objection, are forced to do a make up assignment which imposes equivalent inconvenience. Nobody he wants to have sex with is aware that he’s in this class or that it imposes this assignment.
> Some situation at a college where they’ll give you a cookie if you take an STD test seems quite likely
> Your situation does not really count as no cost, though. In a world in which you must spend effort to avoid getting an STD test, it seems unlikely that plausible deniability can be generated in the first place.
I don’t see how the situation is meaningfully different from no cost. “I couldn’t be bothered to get it done” is hardly an acceptable excuse on the face of it, but despite that, people will judge you more harshly when you harm knowingly than when you harm through avoidable ignorance, even though that ignorance is your own fault. I don’t think they do so because they perceive a justifying cost.
I think the point that others have been trying to make is that gaining the evidence isn’t merely of lower importance to the agent than some other pursuits, it’s that gaining the evidence appears to be actually harmful to what the agent wants.
> I think the point that others have been trying to make is that gaining the evidence isn’t merely of lower importance to the agent than some other pursuits, it’s that gaining the evidence appears to be actually harmful to what the agent wants.
Yes; I proposed the alternative situation, where the evidence is merely considered lower value, as an alternative that produces the same result.
> I don’t see how the situation is meaningfully different from no cost. “I couldn’t be bothered to get it done” is hardly an acceptable excuse on the face of it
At zero cost (in the economic sense, not just the monetary sense) you cannot say it was a bother to get it done, because a bother would be a cost.
This is a very good point. We cannot gather all possible evidence all the time, and trying to do so would certainly be instrumentally irrational.
Is the standard then that it’s instrumentally rational to prioritize Bayesian experiments by how likely their outcomes are to affect one’s decisions?
It weighs into the decision, but it seems insufficient by itself. An experiment can change my decision radically yet be on unimportant topics: topics that do not affect my ability to achieve goals. It is possible to imagine spending one’s time on experiments that change one’s decisions and never getting close to achieving any goals. The vague answer seems to be: prioritize by how much the experiments are likely to help achieve one’s goals.
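That prioritization rule can be sketched as a toy value-of-information calculation: the same belief shift is worth far more when it steers a decision whose stakes matter for one’s goals. All utilities and probabilities below are invented for illustration:

```python
# Toy value of information: an experiment's worth depends not just on how
# much it could change your decision, but on the stakes of that decision.
# All utilities and probabilities are invented for illustration.

def best_eu(p, actions):
    """Expected utility of the best action; each action is a pair
    (payoff if the hypothesis is true, payoff if it is false)."""
    return max(p * u_true + (1 - p) * u_false for u_true, u_false in actions)

def value_of_information(prior, p_pos_if_true, p_pos_if_false, actions):
    """Expected gain from observing a binary experiment before acting."""
    p_pos = p_pos_if_true * prior + p_pos_if_false * (1 - prior)
    post_pos = p_pos_if_true * prior / p_pos            # Bayes, positive result
    post_neg = (1 - p_pos_if_true) * prior / (1 - p_pos)  # Bayes, negative result
    eu_with = (p_pos * best_eu(post_pos, actions)
               + (1 - p_pos) * best_eu(post_neg, actions))
    return eu_with - best_eu(prior, actions)

high_stakes = [(10.0, -10.0), (0.0, 0.0)]  # act on the hypothesis vs. abstain
low_stakes  = [(0.1, -0.1), (0.0, 0.0)]    # identical structure, trivial stakes

print(value_of_information(0.5, 0.9, 0.1, high_stakes))  # 4.0
print(value_of_information(0.5, 0.9, 0.1, low_stakes))   # ~0.04
```

The experiment shifts the belief identically in both cases (from 0.5 to 0.9 or 0.1), but it is worth a hundred times more on the high-stakes topic, which matches the point that decision-changing power alone is not the right yardstick.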