This post is seeing some pretty heavy downvoting, but the opinions I’m seeing in the comments so far seem to be more mixed; I suppose this isn’t unusual.
I have a question, then, for people who downvoted this post: what specifically did you dislike about it? This is a data-gathering exercise that will hopefully allow me to identify flaws in my writing and/or thinking and then correct them. Was the argument being made just obviously wrong? Was it insufficiently justified? Did my examples suck? Were there rhetorical tactics that you particularly disliked? Was it structured badly? Are you incredibly annoyed by the formatting errors I can’t figure out how to fix?
Those are broadly the sorts of answers I’m looking for. I am specifically not looking for justifications for downvotes; really, all I want is your help in becoming stronger. With luck, I will be able to waste less of your time in the future.
Thanks.
I’ve just identified something else that was nagging at me about this post: the irony of the author of this post making an argument that closely parallels an argument some thoughtful conservatives make against condoning alternative lifestyles like polyamory.
The essence of that argument is that humans are not sufficiently intelligent, rational or self-controlled to deal with the freedom to pursue their own happiness without the structure and limits imposed by evolved cultural and social norms that keep their baser instincts in check. The argument holds that cultural norms exist for a reason (a kind of cultural selection favouring societies whose norms give them a competitive advantage) and that it is dangerous to mess with traditional norms when we don’t fully understand why they exist.
I don’t really subscribe to the conservative argument (though I have more sympathy for it than for the argument made in this post), but it takes a similar form when it suggests that some things are too dangerous for mere humans to meddle with.
While there are some superficial parallels, I don’t think the two cases are actually very similar.
Humans don’t have a polyamory-bias; if the scientific consensus on neurotransmitters like oxytocin and vasopressin is accurate, it’s quite the opposite. Deliberate action in defiance of bias is not dangerous. There’s no back door for evolution to exploit.
This just seems unreasoned to me.
Erm, how so?
It occurs to me that I should clarify that when I said deliberate action in defiance of bias is not dangerous, I meant that it is not dangerous thinking of the sort I have attempted to describe.
Maybe I just don’t see the distinction or the argument that you are making, but I still don’t. Do you really think that thinking about polyamory isn’t likely to impact values somewhat relative to unquestioned monogamy?
Oh, it’s quite likely to impact values. But it won’t impact your values without some accompanying level of conscious awareness. It’s unconscious value shifts that the post is concerned about.
How can you be so sure? As in, I disagree.
How people value different kinds of sexual behaviours seems to be very strongly influenced by the subconscious.
I think it would’ve been better received if some attention had been given to defense mechanisms: rather than phrasing it as some true things being unconditionally bad to know, phrase it as some true things being bad to know unless you have the appropriate prerequisites in place. For example, knowing about differences between races is bad unless you are very good at avoiding confirmation bias, and knowing how to detect errors in reasoning is bad unless you are very good at avoiding motivated cognition.
I upvoted your post, because I think that you raise a possibility that we should consider. It should not be dismissed out of hand.
However, your examples do kind of suck :). As Sarah pointed out, none of us is likely to become a dictator, and dictators are probably not typical people. So the history of dictators is not great information about how we ought to tend to our epistemological garden. Your claims about how data on group differences in intelligence affect people would be strong evidence if they were backed up by more than anecdote and speculation. As it is, though, it is at least as likely that you are suffering from confirmation bias.
Thank you. I should have held off on making the post for a few days and worked out better examples at the very least. I will do better.
This, primarily. It is at least obviously wrong by my value system, where believing true things is a core value. To the extent that this is also the value system of Less Wrong as a whole, the post seems contrary to the core values of the site without acknowledging the conflict explicitly enough.
I didn’t think the examples were very good either. I think the argument is wrong even for value systems that place a lower value on truth than mine, and the examples aren’t enough to persuade me otherwise.
I also found what was presumably a joke about hunting down and killing anyone who disagrees with you jarring and in rather poor taste. I’m generally in favour of tasteless and offensive jokes, but this one just didn’t work for me.
Beware identity. It seems that a hero shouldn’t kill, ever, but sometimes it’s the right thing to do. Unless it’s your sole value, there will be situations where it should give way.
This seems like it should generally be true, but in practice I haven’t encountered any plausible examples where I prefer ignorance. This includes a number of hypotheticals where many people claim they would prefer ignorance, which leads me to believe the value I place on truth is outside the norm.
Truth/knowledge is a little paradoxical in this sense as well. I believe that killing is generally wrong, but there is no paradox in killing in certain situations where it appears to be the right choice. The feedback effect of truth on your decision-making / value-defining apparatus makes it unlike other core values that might sometimes be abandoned.
I agree with this; my objection is to the particular argument you used, not necessarily the implied conclusion.
I really don’t think that the OP can be called “obviously wrong”. For example, your brain is imperfect, so it may be that believing some true things makes it less likely that you will believe other more important true things. Then, even if your core value is to believe true things, you are going to want to be careful about letting the dangerous beliefs into your head.
And the circularity that WrongBot and Vladimir Nesov have pointed out rears its head here, too. Suppose that the possibility that I pose above is true. Then, if you knew this, it might undermine the extent to which you hold believing true things to be a core value. That is precisely the kind of unwanted utility-function change that WrongBot is warning us about.
It’s probably too pessimistic to say that you could never believe the dangerous true things. But it seems reasonably possible that some true beliefs are too dangerous unless you are very careful about the way in which you come to believe them. It may be unwise to just charge in and absorb true facts willy-nilly.
Here’s another way to come at WrongBot’s argument. It’s obvious that we sometimes should keep secrets. Sometimes more harm than good would result if someone else knew something that we know. It’s not obvious, but it is at least plausible, that the “harm” could be that the other person’s utility function would change in a way that we don’t want. At least, this is certainly not obviously wrong. The final step in the argument is then to acknowledge that the “other person” might be the part of yourself over which you do not have perfect control — which is, after all, most of you.
I believe some other people’s reports that there are things they would prefer not to know, and I would be inclined to honor their preference if I knew such a secret, but I can’t think of any examples of such secrets for myself. In almost all cases I can think of, I would want to be informed of any true information that was being withheld from me. The only possible exceptions are ‘pleasant surprises’ that are being kept secret on a strictly time-limited basis to enhance enjoyment (surprise gifts, parties, etc.), but I think these are not really what we’re talking about.
I can certainly think of many examples of secrets that people keep secret out of self-interest and attempt to justify by claiming they are doing it in the best interests of the ignorant party. In most such cases the ‘more harm than good’ would accrue to the party requesting the keeping of the secret rather than the party from whom the secret is being withheld. Sometimes keeping such secrets might be the ‘right thing’ morally (the Nazi at the door looking for fugitives) but this is not because you are acting in the interests of the party from whom you are keeping information.
Maybe this is an example:
I was once working hard to meet a deadline. Then I saw in my e-mail that I’d just received the referee reports for a journal article that I’d submitted. Even when a referee report recommends acceptance, it will almost always request changes, however minor. I knew that if I looked at the reports, I would feel a very strong pull to work on whatever was in them, which would probably take at least several hours. Even if I resisted this pull, resistance alone would be a major tax on my attention. My brain, of its own accord, would grab mental CPU cycles from my current project to compose responses to whatever the referees said. I decided that I couldn’t spare this distraction before I met my deadline. So I left the reports unread until I’d completed my project.
In short, I kept myself ignorant because I expected that knowledge of the reports’ contents would induce me to pursue the wrong actions.
This is an example of a pretty different kind of thing from what WrongBot is talking about. It’s a hack for rationing attention, or a technique for avoiding distraction and keeping focus for a period of time. You read the email once your current time-critical priority was dealt with; you didn’t permanently delete it. Such tactics can be useful, and I use them myself. It is quite different from permanently avoiding some information for fear of permanent corruption of your brain.
I’m a little surprised that you would have thought that this example fell into the same class of things that WrongBot or I were talking about. Perhaps we need to define what kinds of ‘dangerous thought’ we are talking about a little more clearly. I’m rather bemused that people are conflating this kind of avoidance of viscerally unpleasant experiences with ‘dangerous thoughts’ as well. It seems others are interpreting the scope of the article massively more broadly than I am.
Or putting it differently:
One thing is to operationally avoid gaining certain data at a certain moment in order to function better overall, because we need to keep our attention focused.
Another thing is to strategically avoid gaining certain kinds of information that could possibly lead us astray.
I’d guess most people here agree with the kind of “self-deception” that the former entails. And it seems that the post is arguing for this kind of “self-deception” in the latter case as well, although there isn’t as much consensus — some people seem to welcome any kind of truth whatsoever, at any time.
However… it seems to me now that, frankly, both cases are incredibly similar! So I may be conflating them, too.
The major difference seems to be the scale adopted: checking your email is an information hazard at that moment, and you want to postpone it for a couple of hours. Knowing about certain truths is an information hazard at this moment, and you want to postpone it for a couple of… decades. If ever. When your brain is strong enough to handle it smoothly.
It all boils down to knowing we are not robots, that our brains are a kludge, and that certain stimuli (however real or true) are undesired.
I think that you can just twiddle some parameters with my example to see something more like WrongBot’s examples. My example had a known deadline, after which I knew it would be safe to read the reports. But suppose that I didn’t know exactly when it would be safe to read the reports. My current project is the sort of thing where I don’t currently know when I will have done enough. I don’t yet know what the conditions for success are, so I don’t yet know what I need to do to create safe conditions to read the reports. It is possible that it will never be safe to read the reports, that I will never be able to afford the distraction of suppressing my brain’s desire to compose responses.
My understanding is that WrongBot views group-intelligence differences analogously. The argument is that it’s not safe to learn such truths now, and we don’t yet know what we need to do to create safe conditions for learning these truths. Maybe we will never find such conditions. At any rate, we should be very careful about exposing our brains to these truths before we’ve figured out the safe conditions. That is my reading of the argument.
More or less. I’m generally sufficiently optimistic about the future that I don’t think that there are kinds of true knowledge that will continue to be dangerous indefinitely; I’m just trying to highlight things I think might not be safe right now, when we’re all stuck doing serious thinking with opaquely-designed sacks of meat.
Like Matt, I don’t think your example does the same thing as WrongBot’s, even with your twiddling.
WrongBot doesn’t want the “dangerous thoughts” to influence him to revise his beliefs and values. That wasn’t the case for you: you didn’t want to avoid revising your beliefs about your paper; you just didn’t want to deal with the cognitive distraction of it during the short term. If you avoided reading your reports because you wanted to avoid believing that your article needed any improvement, then I think your situation would be more analogous to WrongBot’s.
But there’s another difference here: when you decided to not expose yourself to that knowledge, you knew at the time when the safe conditions would occur, and that those conditions would occur very soon. That’s not the case for WrongBot, who has sworn off certain kinds of knowledge indefinitely.
Putting oneself at risk of error for a short and capped time frame is much different from putting oneself at risk of error indefinitely.
The beliefs that I didn’t want to revise were my beliefs about the contents of the reports. Before I read them, my beliefs about their contents were general and vague. Were I to read the reports, I would have specific knowledge about what they said. My worry was that this would revise my values: after gaining that specific knowledge, my brain would excessively value replying to the reports over working on my current project. Despite my intention to focus solely on my current project, my brain would allocate significant resources to composing responses to what I’d read in the reports.
But in the “twiddled” version, I don’t know when the safe conditions will occur…
To be fair, WrongBot thinks that we will be able to learn this knowledge eventually. We just shouldn’t take it as obvious that we know what the safe conditions are yet.
I still say that there is a difference between what you and WrongBot are doing, even if you’re successfully shooting down my attempts to articulate it. I might need a few more tries to be able to correctly articulate that intuition.
These are not the same types of values. You were worried about your values about priorities changing, while under time pressure. WrongBot is worried about his moral values changing regarding how he treats certain groups of people.
True, but there wasn’t the same magnitude or type of uncertainty, right? You knew that you would probably be able to read your reports after your deadline...? All predictions about the future are uncertain, but not all types of uncertainty are created equal.
I would be interested to hear your opinion of a little thought experiment. What if I were a creationist, and you recommended me a book debunking creationism? I say that I won’t read it because it might change my values, at least not until the conditions are safe for me. If I say that I can’t read it this week because I have a deadline, but maybe next week, you’ll probably give me a pass. But what if I put off reading it indefinitely? Is that rational?
It seems that since we recognize that rationalists are human, we can and should give them a pass on scrutinizing certain thoughts or investigating certain ideas when they are under time pressure or emotional pressure in the short term, like in your example. But how long can one dodge inquiry in a certain area before one’s rationalist creds become suspect?
I’m having trouble seeing this distinction. What if I had a moral obligation to do as well as possible on my current project, because people were depending on me, say? My concern would be that, if I read the reports, I would feel a pull to act immorally. I might even rationalize away the immorality under the influence of this pull. In effect, I would act according to different moral values. Would that make the situation more analogous in your view, or would something still be missing?
I’m getting the sense that the problem with my example is that it has nothing to do with political correctness. Is it key for you that WrongBot wants to keep information out of his/her brain because of political correctness specifically?
I called it a “twiddled” version because I was thinking of the uncertainty as a continuous parameter that I could set to a wide spectrum of values. In the actual situation, the dial was pegged at “almost complete certainty”. But I can imagine situations where I’m very uncertain. It looks like part of your problem with this is that such a quantitative change amounts to a qualitative change in your view. Is that right?
I take it that your concern would be that losing creationism would change your moral values in a dangerous way. Whether you are being rational then depends on what “put off reading it indefinitely” means. I would say that you are being rational to avoid the book for now only if you are making a good-faith effort to determine rationally the conditions under which it would be safe to read the book, with the intention of reading the book once you’ve found sufficiently safe conditions.
Part of the problem I’m having with your example is my perception of the magnitude of the gap between what you are talking about and WrongBot’s examples. While they share certain similarities, it is roughly as if a discussion about losing your entire life savings were being compared to the time you dropped a dime down the back of the sofa.
Sometimes a sufficiently large difference of magnitude can be treated for most purposes as a difference in kind.
Quantity has a quality all of its own.
What is the axis along which the gap lies? Is it the degree of uncertainty about when it will be safe to learn the dangerous knowledge?
Multiple axes:
Degree of uncertainty about, and sheer length of, the time before it will be ‘safe’.
Degree of effort involved in avoidance (temporarily holding off on reading a specific email vs. actively avoiding certain knowledge and filtering all information for a long and unspecified duration).
Severity of consequences (delayed or somewhat sub-standard performance on a near-term project deadline vs. fundamental change or damage to your core values).
Scope of filtering (avoiding the detailed contents of a specific email with a known and clearly delineated area of significance vs. general avoidance of whole areas of knowledge, where you may not even have a good idea of what knowledge you may be missing out on).
Mental resources emphasized (short-term attentional resources vs. deeply considered core beliefs, modes of thought, and high-level knowledge and understanding).
That’s part of it, and also how far into the future one thinks that might occur.
In my perception, the gap is less about certainty and more about timescale; I’d draw a line between “in a normal human lifetime” and “when I have a better brain” as the two qualitatively different timescales that you’re talking about.
But this is the way to think of WrongBot’s claim. The conscious you, the part over which you have deliberate control, is but a small part of the goal-seeking activity that goes on in your brain. Some of that goal-seeking activity is guided by interests that aren’t really yours. Sometimes you ought to ignore the interests of these other agents in your brain. There is some possibility that you should sometimes do this by keeping information from reaching those other agents, even though this means keeping the information from yourself as well.