[General pushback in the opposite direction, offering an alternate view.]
Counterclaim 1: It’s less of a status / posture thing. Most people just aren’t thinking most of the time, but they totally have the ability to form beliefs if pressed. Thinking of them as lazy evaluators might make more sense. “Smart” people are just those who have more metacognitive activity and thus ask themselves questions and answer them with less need for external prompts from the environment.
Counterclaim 2: Yes, I think this model might be useful in providing better explanations for why “normal” people do things. But I also think that it can limit the way that “smart” people interact with “normal” people.
The models I tend to use focus on answering the question, “How can I take actions that improve this person’s worldview / life trajectory?” That might involve using the concept that they don’t have well-formed beliefs to inform how I move forward, but it certainly doesn’t end with noting that they’re being mindless and writing them off as hopeless.
I guess I’m just worried that these sorts of models become an excuse for “smart” people to not even try when it comes to communicating “complex” ideas to “normal” people. I think there’s something good that happens on both sides when your focus is on bridging inferential gaps and less on just modeling the other party as some sort of mindless adaptation executor.
I mean, that’s part of why we end up with ontologies that refer to objects that only exist phenomenologically, right? Because it turns out that we get all sorts of cool additional functions when we start looking a little deeper.