Of course we can use reductionist materialism to reason about processes that happen in our brain when we are doing this very reasoning.
I’m not disagreeing with that. I’m saying that:
1. It’s pretty normal to miss the confusion in this case.
2. Looking isn’t reasoning.
The reason the paperclip maximizer won’t listen is that it doesn’t care, not that it doesn’t understand what you’re saying. So this allegory would only make sense if some parts of our mind don’t care about the benefits of Looking while other parts do. Even then, that shouldn’t be an impediment to understanding what Looking is.
…unless it suspects that understanding what Looking is might make it less effective at maximizing paperclips.
How can understanding something make you less effective at doing something? Are you modeling the mind as an adversarial system, where one subagent wants to prevent another from gaining some knowledge? Or is Looking some kind of infohazard that can damage a mind just via the knowledge itself? In either case, it makes Looking sound like something very dangerous.
How can understanding something make you less effective at doing something? Are you modeling the mind as an adversarial system, where one subagent wants to prevent another from gaining some knowledge?
This is roughly the kind of thing that can happen. For example, suppose it’s an important feature of your identity / self-concept that you’re a good, kind person, such that seeing strong evidence that you’re not such a person would be psychologically devastating to you: you wouldn’t be able to trust yourself to interact with other people, so you’d hole up in your room and just be depressed. Or at least, some part of you is afraid that something like this is possible. That part of you will then be highly motivated to ignore evidence that you’re not a good, kind person, and to avoid situations or thoughts that might lead you to see such evidence.
My experience is that many or even most people have a thing like this (and don’t know it). At CFAR we use the term “load-bearing bug” for a bug that actively resists being solved, because some part of you is worried that solving it might be devastating in this way. In my view, the study of rationality doesn’t really begin until you encounter your first such bug.
So yes, you’re correct that Looking can be dangerous, in loosely the same way that telling your parents the truth about something might be dangerous if they’re not ready to handle it and might respond by e.g. kicking you out of the house. But that’s mostly a fact about your parents, and mostly not a fact about the nature of truth-telling.
It’s certainly true that there are truths about which people are lying to themselves. However, I’m confused about how this explains why Looking is so difficult to explain. My impression from the “phone” allegory etc. was that Looking is just supposed to be such a difficult concept that most people have almost no tools in their epistemic arsenal to understand it. This is very different from saying that people already know in their hearts what Looking is but don’t want to acknowledge it because it would disrupt some self-deception.
My impression from the “phone” allegory etc. was that Looking is just supposed to be such a difficult concept that most people have almost no tools in their epistemic arsenal to understand it. This is very different from saying that people already know in their hearts what Looking is but don’t want to acknowledge it because it would disrupt some self-deception.
People don’t need to already know it in order for this dynamic to play out. All that’s required is that the person have some kind of idea of what impact it’ll have on their mental architecture, and that “some kind of idea” needn’t be accurate.
This gets badly exacerbated if the concept is hard to understand. See, e.g., “consciousness collapses quantum uncertainty”-type beliefs: a belief like that does a reasonably good job of immunizing a mind against more materialist orientations to quantum phenomena.
But to illustrate in a little more detail how this might make Looking more difficult to understand, here’s a slightly fictionalized exchange I’ve had with many, many people:
Them: “Give me an example of Looking.”
Me: “Okay. If you Look at your hand, you can separate the interpretation of ‘hand’ and ‘blood flow’ and all that, and just directly experience the this-ness of what’s there…”
Them: “That sounds like woo.”
Me: “I’m not sure what you mean by ‘woo’ here. I’m inviting you to pay attention to something that’s already present in your experience.”
Them: “Nope, I don’t believe you. You’re trying to sell me snake oil.”
After a few months of exploring this, I gathered that the problem was that Looking didn’t have a conceptual place to land in their framework that didn’t set off “mystical woo” alarm bells. Suddenly I’m no longer talking to the person but to their epistemic immunization maximizer, which has some sense that whatever “Looking” is might affect its epistemic methods and is therefore Bad™. Everything from that point forward in the conversation just plays out that subsystem’s need to justify its predetermined rejection of any attempt to understand what I’m saying.
Certainly not everyone does this particular one. I’m just offering one specific example of a type.
Alright, I think I now understand much better what you mean, thank you. It is true that there are things that set off epistemic immune responses despite being “innocuous” (e.g. X-risk vs. “doomsday prophecies”, and the rationalist community vs. “cult”). However, it is also the case that these immune responses are there for a reason. If you want to promote an idea that sets off such responses then, IMHO, you need to make it clear as early as possible how your idea is different from the “pathogens” against which the response is intended. Specifically in the case of Looking, what rings my alarm bells is not so much the “this-ness” etc. but the claim that Looking is beyond rational explanation (which Kaj seems to be challenging in this post).
Alright, I think I now understand much better what you mean, thank you.
Great. :-)
[…]these immune responses are there for a reason.
Of course. As with all other systems.
Specifically in the case of Looking, what rings my alarm bells is not so much the “this-ness” etc. but the claim that Looking is beyond rational explanation (which Kaj seems to be challenging in this post).
This has been said many times already, but to reiterate: I was not trying to claim that Looking is beyond rational explanation.