My impression from the “phone” allegory etc. was that Looking is just supposed to be such a difficult concept that most people have almost no tools in their epistemic arsenal to understand it. This is very different from saying that people already know in their hearts what Looking is but don’t want to acknowledge it because it would disrupt some self-deception.
People don’t need to already know it in order for this dynamic to play out. All that’s required is that the person have some kind of idea of what type of impact it’ll have on their mental architecture — and that “some kind of idea” needn’t be accurate.
This gets badly exacerbated if the concept is hard to understand. See e.g. “consciousness collapses the wave function” type beliefs. A belief like that does a reasonably good job of immunizing a mind against more materialist orientations to quantum phenomena.
But to illustrate in a little more detail how this might make Looking more difficult to understand, here’s a slightly fictionalized exchange I’ve had with many, many people:
Them: “Give me an example of Looking.”
Me: “Okay. If you Look at your hand, you can separate the interpretation of ‘hand’ and ‘blood flow’ and all that, and just directly experience the this-ness of what’s there…”
Them: “That sounds like woo.”
Me: “I’m not sure what you mean by ‘woo’ here. I’m inviting you to pay attention to something that’s already present in your experience.”
Them: “Nope, I don’t believe you. You’re trying to sell me snake oil.”
After a few months of exploring this, I gathered that the problem was that Looking didn’t have a conceptual place to land in their framework that didn’t set off “mystical woo” alarm bells. Suddenly I’m talking to their epistemic immunization maximizer, which has some sense that whatever “Looking” is might affect its epistemic methods and therefore is Bad™. Everything from that point forward in the conversation just plays out that subsystem’s need to justify its predetermined rejection of attempts to understand what I’m saying.
Certainly not everyone does this particular one. I’m just offering one specific example of a type.
Alright, I think I now understand much better what you mean, thank you. It is true that there are things that set off epistemic immune responses despite being “innocuous” (e.g. X-risk vs. “doomsday prophecies”, or the rationalist community vs. “cult”). However, it is also the case that these immune responses are there for a reason. If you want to promote an idea that sets off such responses then, IMHO, you need to make it clear as early as possible how your idea differs from the “pathogens” against which the response is intended. Specifically in the case of Looking, what rings my alarm bells is not so much the “this-ness” etc. but the claim that Looking is beyond rational explanation (which Kaj seems to be challenging in this post).
Alright, I think I now understand much better what you mean, thank you.
Great. :-)
[…] these immune responses are there for a reason.
Of course. As with all other systems.
Specifically in the case of Looking, what rings my alarm bells is not so much the “this-ness” etc. but the claim that Looking is beyond rational explanation (which Kaj seems to be challenging in this post).
This has been said many times already, but I’ll reiterate it here once more: I was not trying to claim that Looking is beyond rational explanation.