Like Matt, I don’t think your example does the same thing as WrongBot’s, even with your twiddling.
WrongBot doesn’t want the “dangerous thoughts” to influence him to revise his beliefs and values. That wasn’t the case for you: you didn’t want to avoid revising your beliefs about your paper; you just didn’t want to deal with the cognitive distraction of it during the short term. If you avoided reading your reports because you wanted to avoid believing that your article needed any improvement, then I think your situation would be more analogous to WrongBot’s.
The argument is that it’s not safe to learn such truths now, and we don’t yet know what we need to do to create safe conditions for learning these truths. Maybe we will never find such conditions. At any rate, we should be very careful about exposing our brains to these truths before we’ve figured out the safe conditions.
But there’s another difference here: when you decided to not expose yourself to that knowledge, you knew at the time when the safe conditions would occur, and that those conditions would occur very soon. That’s not the case for WrongBot, who has sworn off certain kinds of knowledge indefinitely.
Putting oneself at risk of error for a short and capped time frame is much different from putting oneself at risk of error indefinitely.
WrongBot doesn’t want the “dangerous thoughts” to influence him to revise his beliefs and values. That wasn’t the case for you: you didn’t want to avoid revising your beliefs about your paper; you just didn’t want to deal with the cognitive distraction of it during the short term.
The beliefs that I didn’t want to revise were my beliefs about the contents of the reports. Before I read them, my beliefs about their contents were general and vague. Were I to read the reports, I would have specific knowledge about what they said. My worry was that this would revise my values: after gaining that specific knowledge, my brain would excessively value replying to the reports over working on my current project. Despite my intention to focus solely on my current project, my brain would allocate significant resources to composing responses to what I’d read in the reports.
But there’s another difference here: when you decided to not expose yourself to that knowledge, you knew at the time when the safe conditions would occur, and that those conditions would occur very soon.
But in the “twiddled” version, I don’t know when the safe conditions will occur . . .
That’s not the case for WrongBot, who has sworn off certain kinds of knowledge indefinitely.
To be fair, WrongBot thinks that we will be able to learn this knowledge eventually. We just shouldn’t take it as obvious that we know what the safe conditions are yet.
I still say that there is a difference between what you and WrongBot are doing, even if you’re successfully shooting down my attempts to articulate it. I might need a few more tries to be able to correctly articulate that intuition.
My worry was that this would revise my values: after gaining that specific knowledge, my brain would excessively value replying to the reports over working on my current project.
These are not the same types of values. You were worried about your values about priorities changing while under time pressure. WrongBot is worried about his moral values changing with respect to how he treats certain groups of people.
But in the “twiddled” version, I don’t know when the safe conditions will occur . . .
True, but there wasn’t the same magnitude or type of uncertainty, right? You knew that you would probably be able to read your reports after your deadline...? All predictions about the future are uncertain, but not all types of uncertainty are created equal.
I would be interested to hear your opinion of a little thought experiment. What if I were a creationist, and you recommended me a book debunking creationism? I say that I won’t read it because it might change my values, at least not until the conditions are safe for me. If I say that I can’t read it this week because I have a deadline, but maybe next week, you’ll probably give me a pass. But what if I put off reading it indefinitely? Is that rational?
It seems that since we recognize that rationalists are human, we can and should give them a pass on declining to scrutinize certain thoughts or investigate certain ideas when they are under time pressure or emotional pressure in the short term, as in your example. But how long can one dodge inquiry in a certain area before one’s rationalist creds become suspect?
My worry was that this would revise my values: after gaining that specific knowledge, my brain would excessively value replying to the reports over working on my current project.
These are not the same types of values. You were worried about your values about priorities changing while under time pressure. WrongBot is worried about his moral values changing with respect to how he treats certain groups of people.
I’m having trouble seeing this distinction. What if I had a moral obligation to do as well as possible on my current project, because people were depending on me, say? My concern would be that, if I read the reports, I would feel a pull to act immorally. I might even rationalize away the immorality under the influence of this pull. In effect, I would act according to different moral values. Would that make the situation more analogous in your view, or would something still be missing?
I’m getting the sense that the problem with my example is that it has nothing to do with political correctness. Is it key for you that WrongBot wants to keep information out of his/her brain because of political correctness specifically?
But in the “twiddled” version, I don’t know when the safe conditions will occur . . .
True, but there wasn’t the same magnitude or type of uncertainty, right? You knew that you would probably be able to read your reports after your deadline...? All predictions about the future are uncertain, but not all types of uncertainty are created equal.
I called it a “twiddled” version because I was thinking of the uncertainty as a continuous parameter that I could set to a wide spectrum of values. In the actual situation, the dial was pegged at “almost complete certainty”. But I can imagine situations where I’m very uncertain. It looks like part of your problem with this is that such a quantitative change amounts to a qualitative change in your view. Is that right?
I would be interested to hear your opinion of a little thought experiment. What if I were a creationist, and you recommended me a book debunking creationism? I say that I won’t read it because it might change my values, at least not until the conditions are safe for me. If I say that I can’t read it this week because I have a deadline, but maybe next week, you’ll probably give me a pass. But what if I put off reading it indefinitely? Is that rational?
I take it that your concern would be that losing creationism would change your moral values in a dangerous way. Whether you are being rational then depends on what “put off reading it indefinitely” means. I would say that you are being rational to avoid the book for now only if you are making a good-faith effort to determine rationally the conditions under which it would be safe to read the book, with the intention of reading the book once you’ve found sufficiently safe conditions.
Part of the problem I’m having with your example is my perception of the magnitude of the gap between what you are talking about and WrongBot’s examples. While they share certain similarities, it’s roughly as if you were comparing a discussion about losing your entire life savings to the time you dropped a dime down the back of the sofa.
Sometimes a sufficiently large difference of magnitude can be treated for most purposes as a difference in kind.
Quantity has a quality all of its own.
Part of the problem I’m having with your example is my perception of the magnitude of the gap between what you are talking about and WrongBot’s examples.
What is the axis along which the gap lies? Is it the degree of uncertainty about when it will be safe to learn the dangerous knowledge?
Multiple axes:
Degree of uncertainty about, and magnitude of, the length of time before it will be ‘safe’.
Degree of effort involved in avoidance (temporarily holding off on reading a specific email vs. actively avoiding certain knowledge and filtering all information for a long and unspecified duration).
Severity of consequences (delayed or somewhat substandard performance on a near-term project deadline vs. fundamental change or damage to your core values).
Scope of filtering (avoiding the detailed contents of a specific email with a known and clearly delineated area of significance vs. general avoidance of whole areas of knowledge, where you may not even have a good idea of what knowledge you may be missing out on).
Mental resources emphasized (short-term attentional resources vs. deeply considered core beliefs, modes of thought, and high-level knowledge and understanding).
That’s part of it, and also how far into the future one thinks that might occur.
In my perception, the gap is less about certainty and more about timescale; I’d draw a line between “in a normal human lifetime” and “when I have a better brain” as the two qualitatively different timescales that you’re talking about.