This isn’t in itself culty—but it is the cult attractor that’s causing your problems, in a roundabout way.
When we hear people talking about some proposition, we normally either throw it out entirely or tentatively integrate its consequences into our thinking, modulo considerations of status and so forth. Normally higher-impact propositions assume higher priority in our thinking: a perfect stranger shouting “My God, the office is on fire!” takes higher priority than a friend telling you your shoelace is untied.
The memetic ecosystem we live in does contain cults and similar predators, though, which are best recognized (almost defined) by wildly overvaluing their core values. That translates into communication as what might be described as great vehemence. Very high-impact propositions, therefore, carry strong if unconscious connotations of MIND-KILLER STAY AWAY STAY AWAY; after some inflection point, they’ll start getting discarded often enough that the priority effects are overtaken.
This isn’t just theory. We’re all constantly bombarded with exhortations to save the world, and for people who are not domain experts or highly skilled rational thinkers there’s no good way to differentiate reliable world-saving imperatives from unreliable ones. The obvious priority-preserving move is to make sympathetic noises and refuse to update—which indeed turns out to be the polite, socially expected response. If you don’t want that to happen, expressing your views in terms of saving the world is strongly contraindicated.
So I guess the “save the world” part should get dropped then. Entirely.
Upon further reflection, it seems like a lot of people are already trying to do that (biomedical research, environmental causes, various anti-poverty charities, etc.).
So now the question is “How do you teach rationality to people in a way that helps them do what they’re already doing, no strings attached, such that they actually use the information to improve?” People still do whatever they were choosing to do, just more effectively.
Would that work better?
The kind of rationality we’re investigating is inextricably bound to improvement; if it’s being transmitted effectively, we don’t need to attach extra semantic content to it to get people to adopt better practices, look at the future through critical rather than ideological eyes, et cetera. I’d actually strongly advise against attaching that sort of content; doing that would implicitly carry the message that rationality is tribal, like Lysenkoism or intelligent design.
This is true, at least, for improving in terms of habits of thought; improvement in habits of action has to do with instrumental rationality, and hasn’t received much attention here. That does seem to be changing, though.
Er, there seems to have been miscommunication.
I’m not suggesting adding semantic content; I’m asking how you transmit rationality effectively.
Helpful comment with regard to cultishness. A pretty large number of people are already working to save the world in various ways. Tentatively, doing 2 without reference to saving the world would probably be better.
And that’s all I can say before I go to school.