So my brother was watching Bullshit, and saw an exorcist claim that whenever a kid mentions having an invisible friend, they (the exorcist) tell the kid that the friend is a demon that needs exorcising.
Now, being a professional exorcist does not give a high prior for rationality.
But still, even given that background, that’s a really uncritically stupid thing to say. And it occurred to me that in general, humans say some really uncritically stupid things to children.
I wonder if this uncriticality has anything to do with, well, not expecting to be criticized. If most of the hacks that humans use in place of rationality are socially motivated, we can safely turn them off when speaking to a child who doesn’t know any better.
I wonder how much benefit we’d get, then, by imagining ourselves in all our internal dialogues to be speaking to someone very critical, and far smarter than us?
Probably not very much, because we can't actually imagine what that hypothetical person would say to us. It would probably end up being used as a way to affirm your positions by testing only their strong points.
While I have difficulty imagining what someone far smarter than myself would say, what I can do is imagine explaining myself to a smart person who doesn’t have my particular set of biases and hangups; and I find that does sometimes help.
I do it sometimes, and I think it helps.
I do it too (using some of the smarter and more critical posters on LW, actually), and I also think it helps. I think this defuses some of LucasSloan's criticisms below: if it's a real person, you can, to a reasonable extent, imagine how they might reply.
I think it works because placing yourself in a conflict (even an imaginary one) narrows and sharpens your focus, as it activates the subconscious processes that try to 'win' the argument.
The risk, though, is that, like any opinion formed or argued under the influence of an emotion, you become unreasonably certain of it.
I don’t get the ‘conflict’ feeling when I do it. It feels more like ‘betting mode’, but with more specific counterarguments. Since it’s all imaginary anyway, I don’t feel committed enough to one side to activate conflict mode.