This stands directly in the way of the maxim, “Whatever can be destroyed by the truth, should be.”
The maxim is incorrect (or at least overgeneralized in order to sound deeply wise).
Cultivating ignorance in an adversary or competitor can give you a comparative advantage. A child taking the advice of trained and informed mental health professionals that they are not ready to learn about something, say human sexuality, might preserve their emotional development. A person living under a totalitarian regime might do well to avoid sources of classified information, if learning that information makes them a threat to the state. Not telling my friend that their religious literature contains various harmful prescriptions makes sense until I can convince them that the literature is not morally infallible. Not reading up on certain mathematical results increases my expected future fun, since I can then derive them on my own. Privacy is often considered intrinsically valuable. Double-blind experimental procedure is used to filter out cognitive bias. For many more examples of hazardous information and strategic ignorance, see Nick Bostrom’s draft paper on the subject here (.pdf).
Perhaps, but I’m skeptical that anyone’s emotional development is really harmed by learning about human sexuality at an early age, provided it’s not done in a particularly shocking way. Sure, plenty of kids find it discomforting and don’t want to think about their parents “doing it,” but does it cause lasting psychological harm? Without actual research backing up that conclusion, my initial guess would be “almost never.”
Data point: I used to take books out of the adult section of the library as a fairly young child (8-9 years old), and though I was a little baffled by the sexual content, I don’t remember finding it at all disturbing. I’ve been told that I now have an unusually open attitude to sex, though I’m still a little baffled by the whole phenomenon.
At most, school teaches people which of the things they’ve already heard about sexuality aren’t true. Peers and the media tell you what there is to know at a young age, along with plenty of untrue things besides.
It was more a template for hazardous information than an object-level factual claim. Feel free to insert anything you would keep away from a child to protect their innocence. A number of shock sites come to mind.
Yes, the maxim is overly broad. It is the nature of maxims.
EDIT: I understand where I erred now. In quoting EY, I accidentally claimed more than I meant to. It’s clear to me now that the above factors into two values: minimizing personal delusion, and minimizing other people’s delusions. I hold the former, and I’m not as picky about the latter. (E.g., I have no problem refraining from going around disabusing people of their theism.)
I’m concerned that this was an ad hoc justification made after reading your laundry list of counterexamples, but I can’t unread them, so...
I think the article is describing some existing mechanisms rather than prescribing what a rationalist should be doing.
But rationalists should win.
Good point. How would you resolve this contradiction, then?
I personally strive to know as much as I can about myself, even if it ultimately means that I believe a lot of less than flattering things.
Then I try to either use this knowledge to fix the problems, or figure out workarounds in presenting myself to others.
Some people are pretty okay with you knowing bad things about yourself as long as you wish they weren’t true. A lot of my closer friends are like that, so I can continue being totally honest with them. If someone isn’t okay with that, then I either preempt all complaints by saying I messed up (many people find that less offensive than evasiveness), or avoid the conversations entirely.
In extreme cases, I’d rather know something about myself and hide it (whether by omission or by outright lying), or just let other people judge me for knowing it.
One convenient thing about allowing yourself to learn inconvenient truths is that it’s easier to realize when you’re wrong and should apologize. Apologies tend to work really well when you mean them and understand why the other person is mad at you.
There are three things you could want:
1. You could want the extra dollar ($6 instead of $5).
2. You could want to feel like someone who cares about others.
3. You could genuinely care about others.
The point of the research in the post, if I understand it, is that (many) people want 1 and 2, and often the best way to get both those things is to be ignorant of the actual effects of your behavior. In my view a rationalist should decide either that they want 1 (throwing 2 and 3 out the window) or that they want 3 (forgetting 1). Either way you can know the truth and still win.
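To make the setup concrete, here is a minimal sketch of the kind of hidden-information game the post describes. The 50% state probability, the caring weight, and the “self-image” bonus are my own illustrative assumptions, not figures from the research:

```python
# Hidden-information dictator game, in the spirit of the research described above.
# Choosing A gives me $6, choosing B gives me $5. In the "conflict" state A gives
# the other player $1 and B gives them $5; in the "aligned" state it is reversed.
# The state can be revealed at no cost, or left unknown.
# All numbers below are illustrative assumptions, not figures from the study.

P_CONFLICT = 0.5    # assumed probability that my gain is the other player's loss
CARE = 0.0          # weight on the other player's dollars (0 = purely selfish)
IMAGE_BONUS = 2.0   # assumed dollar-equivalent value of feeling like a caring person

def utility(mine, theirs, feels_caring):
    return mine + CARE * theirs + (IMAGE_BONUS if feels_caring else 0.0)

def stay_ignorant():
    # Without looking, I can take the $6 and still feel caring,
    # because I never *knew* I was hurting anyone.
    expected_theirs = P_CONFLICT * 1 + (1 - P_CONFLICT) * 5
    return utility(6, expected_theirs, feels_caring=True)

def reveal_then_choose():
    # If I look, taking $6 in the conflict state destroys the caring self-image,
    # so in that state I face a real trade-off.
    conflict = max(utility(6, 1, feels_caring=False),   # grab the dollar, feel bad
                   utility(5, 5, feels_caring=True))    # give it up, feel good
    aligned = utility(6, 5, feels_caring=True)          # no trade-off at all
    return P_CONFLICT * conflict + (1 - P_CONFLICT) * aligned

print(stay_ignorant(), reveal_then_choose())  # 8.0 vs 7.5 with these numbers
```

With these made-up numbers ignorance comes out ahead, which is the pattern described above: someone who wants both the extra dollar (1) and the caring self-image (2) does best by not looking. A purely selfish agent with no image concern (IMAGE_BONUS = 0) gains nothing from ignorance, and a genuinely caring one (CARE = 1) actively loses by it.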
The problem with strategic ignorance is when the situation is something like 6/1 vs. 5/1000.
Most people care more about themselves than about others, but I think that at that level most people would just choose to lose a dollar and give the other person $999 more.
If you choose not to learn something, then you don’t know what you’re causing to happen, even if it would entirely change what you would want to do.
So it’s not only strategic ignorance, but selective ignorance too. By which I mean to say it only applies highly selectively.
If you have enough knowledge about the situation to know it’s going to be 6/1 and 5/5, or 5/1 and 6/5, then that’s a pretty clear distinction. You have quite a bit of knowledge, enough to narrow it down to only two situations.
But as you raised, it could be 6/1 & 5/5, or 6/1 & 5/1000, or 6/(a 0.0001% increase in global existential risk) & 5/(a 0.0001% increase in the chance of a singularity within your lifetime).
The implication of your point being: if you don’t know what’s at stake, it’s better to learn what’s at stake.
Yeah, pretty much.
Then I guess sometimes, ---ists (as I like to refer to them) should remain purposefully ignorant, in contradiction to the maxim—if, that is, they actually care about the advantages of ignorance.
I do not see an obvious and direct conflict, can you provide an example?
The conflict seems to be that, according to the advice, a rationalist ought to (a) try to find out which of their ideas are false, and (b) evict those ideas. A policy of strategic ignorance avoids having to do (b) by deliberately doing a crappy job of (a).
In the few specific situations that I drilled down on, I found that “deliberately doing a crappy job of (a)” never came up. Sometimes, however, the choice was between doing (a)+(b) with topic (d) or doing (a)+(b) with topic (e), where it is unproductive to know (d). The choice is clearly to do (a)+(b) with (e), because it is more productive.
Then there is no conflict with “Whatever can be destroyed by the truth, should be,” because what needs to be destroyed is prioritized.
Can you provide a specific example where conflict with “Whatever can be destroyed by the truth, should be” is guaranteed?
Okay, I think this example from the OP works:
A person who wants to have multiple sexual partners may resist getting himself tested for sexual disease. If he was tested, he might find out he had a disease, and then he’d be accused of knowingly endangering others if he didn’t tell them about his disease. If he isn’t tested, he’ll only be accused of not finding out that information, which is often considered less serious.
Let’s call the person Alex. Alex avoids getting tested in order to avoid possible blame; assuming Alex is selfish and doesn’t care at all about their partners’ sexual health (or about the knock-on effects of people in general not caring about their partners’ sexual health), this is the right choice instrumentally.
However, by acting this way Alex is deliberately protecting an invalid belief from being destroyed by the truth. Alex currently believes, or should believe, that they have a low probability (around the demographic base rate) of carrying a sexual disease. If Alex got tested, this belief would be destroyed one way or the other: if the test came back positive, the posterior probability would go way up, and if it came back negative, it would go down by a smaller but still non-trivial amount.
Instead of doing this, Alex simply acts as though they already knew in advance that the test result would be negative, and even goes on to spread the truth-destroyable belief by encouraging others to take it on as well. By avoiding evidence, particularly useful evidence (where by useful I mean easy to gather and having a reasonably high impact on your priors if gathered), Alex is being epistemically irrational (even though they might well be instrumentally rational).
This is what I meant by “deliberately doing a crappy job of [finding out which ideas are false]”.
This does bring up an interesting idea, though, which is that it might not be (instrumentally) rational to be maximally (epistemically) rational.
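To put rough numbers on the updating claim above, here is a quick Bayes’-rule sketch; the 5% prior and the 95% sensitivity/specificity are made-up illustrative figures, not properties of any real test:

```python
# Bayes' rule for the STD-test example. The prior and the test accuracy
# figures are illustrative assumptions, not real numbers.

prior = 0.05        # assumed demographic base rate of infection
sensitivity = 0.95  # assumed P(test positive | infected)
specificity = 0.95  # assumed P(test negative | not infected)

def posterior(positive_result):
    p_result_given_infected = sensitivity if positive_result else 1 - sensitivity
    p_result_given_clean = (1 - specificity) if positive_result else specificity
    joint_infected = p_result_given_infected * prior
    return joint_infected / (joint_infected + p_result_given_clean * (1 - prior))

print(f"after a positive test: {posterior(True):.3f}")   # ~0.500, way up from 0.05
print(f"after a negative test: {posterior(False):.4f}")  # ~0.0028, down by a smaller amount
```

Either result moves the belief well away from the prior, which is why declining the test is a decision to protect the current belief rather than a neutral omission.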
An additional necessary assumption seems to be that Alex cares about “Whatever can be destroyed by the truth, should be.” He is selfish, but does his best to act rationally.
Therefore Alex does not value knowing whether or not he has an STD, and instead pursues other knowledge.
Alex is faced with the choice of getting an STD test and improving his probability estimate of his infection status, or spending his time doing something he considers more valuable. He chooses not to get an STD test because the information is not very valuable to him, and focuses on more important matters.
Alex is selfish and does not care that he is misleading people.
Avoiding the evidence would be irrational. Focusing on more important evidence is not. Alex is not doing a “crappy” job of finding out what is false; he has just maximized finding out the truths he cares about.
I tried to present a rational, selfish, uncaring Alex who chooses not to get an STD test even though he cares deeply about “Whatever can be destroyed by the truth, should be,” as far as his personal beliefs are concerned.
I disagree. In the least convenient world, where the STD test imposes no costs on Alex, it would still be instrumentally rational for him not to take it. This is because Alex knows the plausibility of his claims that he does not have an STD would be sabotaged if the test came out positive, since he is not a perfect liar.
(I don’t think this situation is even particularly implausible. Some situation at a college where they’ll give you a cookie if you take an STD test seems quite likely, along the same lines as free condoms.)
In a world where STD tests cost absolutely nothing, including time, effort, and thought, there would be no excuse not to have taken a test, and I do not see a method for generating plausible deniability by not knowing.
Your situation does not really count as no cost, though. In a world in which you must spend effort to avoid getting an STD test, it seems unlikely that plausible deniability can be generated in the first place.
You are correct: “Avoiding the evidence would be irrational” does seem to be incorrect in general, and I generalized too strongly from the example I was working on.
Though this does not seem to answer my original question: is there, by definition, a conflict between “Whatever can be destroyed by the truth, should be” and generating plausible deniability? The answer I still come up with is no conflict. Some truths should be destroyed before others, and this allows for some plausible deniability about untruths that are low in priority.
That’s not really the least convenient possible world, though, is it? The least convenient possible world is one where STD tests impose no additional cost on him, but other people don’t know this, so he still has plausible deniability. Let’s say that he’s taking a sexuality course where the students are assigned to take STD tests, or, if they have some objection, must do a make-up assignment that imposes equivalent inconvenience. Nobody he wants to have sex with is aware that he’s in this class or that it imposes this assignment.
I don’t see how the situation is meaningfully different from no cost. “I couldn’t be bothered to get it done” is hardly an acceptable excuse on the face of it, but despite that, people will judge you more harshly when you harm knowingly than when you harm through avoidable ignorance, even though that ignorance is your own fault. I don’t think they do so because they perceive a justifying cost.
I think the point that others have been trying to make is that gaining the evidence isn’t merely of lower importance to the agent than some other pursuits; it’s that gaining the evidence appears to be actually harmful to what the agent wants.
Yes, I was proposing the alternative situation, in which the evidence is just considered to be of lower value, as an alternative that produces the same result.
At zero cost (in the economic sense, not just the monetary sense), you cannot say it was a bother to get it done, because a bother would be a cost.
This is a very good point. We cannot gather all possible evidence all the time, and trying to do so would certainly be instrumentally irrational.
Is the standard then that it’s instrumentally rational to prioritize Bayesian experiments by how likely their outcomes are to affect one’s decisions?
It weighs into the decision, but it seems insufficient by itself. An experiment can change my decision radically but be on an unimportant topic, one that does not affect my ability to achieve my goals. It is possible to imagine spending one’s time on experiments that change one’s decisions and never getting close to achieving any goals. The vague answer seems to be: prioritize by how much the experiments are likely to help achieve one’s goals.
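One way to make that vague answer a little less vague is a standard value-of-information calculation: an experiment is worth running to the extent that the decisions it might change actually matter for your goals. A minimal sketch, with entirely made-up numbers:

```python
# Expected value of information (VOI) for a simple two-state, two-action choice.
# All payoffs and probabilities are illustrative assumptions.

def value_of_information(p_state, utilities):
    """utilities[action][state] = goal-relevant utility of taking `action` in `state`.
    VOI = (expected utility if we learn the state before acting)
        - (expected utility of the best single action chosen in ignorance)."""
    states = range(len(p_state))
    actions = list(utilities)
    eu_ignorant = max(sum(p_state[s] * utilities[a][s] for s in states) for a in actions)
    eu_informed = sum(p_state[s] * max(utilities[a][s] for a in actions) for s in states)
    return eu_informed - eu_ignorant

# An experiment that would flip my decision, but on a topic with tiny stakes:
trivial = value_of_information([0.5, 0.5], {"A": [1.0, 0.0], "B": [0.0, 1.0]})

# An experiment that flips my decision on something that matters for my goals:
important = value_of_information([0.5, 0.5], {"A": [100.0, 0.0], "B": [0.0, 100.0]})

print(trivial, important)  # 0.5 vs 50.0
```

Both hypothetical experiments are equally likely to change the decision, but the second is worth far more, because the decision it changes bears on something the agent actually cares about. That seems to capture the “prioritize by how much it helps achieve one’s goals” answer above.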