FWIW, I am inclined to think that “rationality” is a bad brand identification for a good thing. Rationality conjures up “Spock” (the Star Trek character), not “Spock” (the compassionate and wise child-rearing guru). It puts an emphasis on a very inhuman part of the kind of human being you feel you are becoming.
Whatever it means in your context, as a brand to evangelize to others about its benefits, it is lacking. It is better, in the sense of offering a positive vision, than “atheism” or “secularism,” but still not grounded and humane enough. I like “naturalist” better, although it is loaded with the connotation of bird watching, and also “humanist,” although without the modifier “secular” that term can mean little more than someone who gives a damn. “Enlightened” (as in the Enlightenment era) might be a good term if it weren’t so damned arrogant in the modern vernacular.
I think the sense you are trying to capture is something like the sense conveyed by the title of Carl Sagan’s book “The Demon-Haunted World.” You want to convey the joys of having exorcised the demons and opened yourself to seeing the world more clearly. But to sell it to others, I think it is necessary to find a better marketing plan.
On the “Spock” front, I dislike the identification of “rational” with “inhuman.” These, too, are human qualities! However, I certainly agree that many people do see this negatively.
There’s an interesting tension in marketing plans—how far can we go in using marketing, which is normally about exploiting irrational responses, in pushing rationality?
If people see rationalists using irrational arguments to push rationality, does it blow our credibility?
The local jargon term appears to be “dark arts”.
The tricky thing is that it’s hard to effectively interact with the typical not-particularly-rational human in a manner that someone, somewhere, couldn’t conceivably interpret as dark arts.
I tend to resolve this by doing something that seems to have a reasonable chance of working, while not actively seeking to deceive and while seeking a win-win outcome. Would the subject feel socially ripped off? If not, then fine. (This heuristic is somewhat inchoate and may not stand up to detailed examination, which I would welcome.)
Dunno about detailed examination, but will you settle for equally inchoate thoughts?
If I think about how N independent, perfectly rational AI agents might communicate about the world, assuming they all intend to cooperate in a shared enterprise of learning as much as they can about it… one approach is for each agent to upload all of its observations to a well-indexed central repository, and for each agent to periodically download all novel observations and then update on them.
They might also upload their inferences, in order to save one another the trouble of computing them… basically a performance optimization.
And they might have a mechanism for calibrating their inference engines… that is, agents A1 and A2 might periodically ensure that they are drawing the same conclusions from the same data, and engage in some diagnostic/repair work if not.
So that’s more or less my understanding of communication on the “light side of the Force:” share well-indexed data, avoid double-counting evidence, share the results of computationally expensive inferences (clearly labeled as such), and compare the inference process and point out discrepancies to support self-diagnostics and repair.
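To make the thought experiment concrete, here is a minimal sketch of that protocol, assuming a single proposition tracked in log-odds form; the Repository and Agent classes, their method names, and the numbers are all hypothetical illustrations, not an established design.

```python
# Minimal sketch of the "light side" protocol above (all names hypothetical).
import math

class Repository:
    """Well-indexed central store; keying by id prevents double-counting."""
    def __init__(self):
        self.observations = {}  # obs_id -> log-likelihood ratio for the claim
        self.inferences = {}    # cached conclusions, labeled separately so
                                # they are never mistaken for raw evidence

    def upload_observation(self, obs_id, log_likelihood_ratio):
        self.observations.setdefault(obs_id, log_likelihood_ratio)

    def upload_inference(self, key, conclusion):
        self.inferences[key] = conclusion  # a shared performance optimization

class Agent:
    def __init__(self, prior=0.5):
        self.log_odds = math.log(prior / (1 - prior))
        self.seen = set()  # observation ids already updated on

    def sync(self, repo):
        """Download novel observations and update exactly once on each."""
        for obs_id, llr in repo.observations.items():
            if obs_id not in self.seen:
                self.log_odds += llr  # Bayesian update in log-odds form
                self.seen.add(obs_id)

    def belief(self):
        return 1 / (1 + math.exp(-self.log_odds))

# Calibration check: two agents fed the same data should agree; if they
# don't, that is the cue for diagnostic/repair work.
repo = Repository()
repo.upload_observation("obs-001", 0.7)
a1, a2 = Agent(), Agent()
a1.sync(repo)
a2.sync(repo)
a2.sync(repo)  # re-syncing is harmless: no evidence is double-counted
assert abs(a1.belief() - a2.belief()) < 1e-12
```

The two details doing the real work here are the `seen` set (which prevents double-counting evidence) and keeping inferences in a separate, labeled store.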
Humans don’t come anywhere near being able to do that, of course. But we can treat that as an ideal, and ask how well we are approximating it.
One obvious divergence from that ideal is that we’re dealing with other humans, who are not only just as flawed as we are, but are sometimes not even playing the same game: they may be actively distorting their transmissions in order to manipulate our behavior in various ways.
So right away, one thing I have to do is build models of other agents and estimate how they are likely to distort their output, and then apply correction algorithms to my human-generated inputs accordingly. And since they’re all doing the same thing, I have to model their likely models of me, and adjust my output to compensate for their distortions (aka corrections) of it.
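A toy numeric version of that two-level adjustment, with every number invented for illustration: a speaker who expects to be discounted pre-inflates, and a listener who expects inflation discounts.

```python
# Toy model of mutual distortion-correction (all factors invented).
def speaker_transmit(true_value, expected_listener_discount=1.5):
    """Speaker's model of the listener: pre-inflate to survive the discount."""
    return true_value * expected_listener_discount

def listener_correct(report, assumed_speaker_inflation=1.5):
    """Listener's model of the speaker: divide out the assumed distortion."""
    return report / assumed_speaker_inflation

heard = listener_correct(speaker_transmit(10.0))
print(heard)  # 10.0, but only because the two models happen to agree;
              # any mismatch between the factors propagates as error
```

The toy’s point: communication succeeds only to the extent that each side’s model of the other’s distortion is accurate.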
So before either of us even opens our mouths, we are already two levels deep into a duel of the dark arts. The question is, how far am I willing to go?
In general, I draw my lines based on goals, not tactics.
What am I trying to accomplish? If I’m trying to understand someone, or be understood, or make progress towards a goal they value, or act in their interests, I’m generally cool with that. If I’m acting against their interests, I’m not so cool with that. If I’m trying to protect myself from damage (including social damage) or advance my own interests, I’m generally cool with that. These factors are sometimes in mutual opposition.
And then multiply that pairwise computation by the mutual interactions of all the other people we know, plus some dogs I really like, and approximate ruthlessly because I don’t have a hope of doing that matrix computation.
One doesn’t have to use irrational arguments to push rationality. But one of the lessons we draw from studying how people make decisions is that people simply do not decide how to view and understand the world, even when deciding to do so rationally, in an entirely rational way. The emotional connection matters as well.
Rational ideas proffered without an emotional counterpart wither. The political landscape is full of people who advanced good, rational programs, policy ideas, or views about science that crashed and burned for long periods because the audience didn’t respond.
Look at the argument of SarahC’s original post itself. It isn’t a philosophical proof in Boolean logic; it is a testimonial about the emotional benefits of this kind of outlook. That is perfectly valid evidence, even if it is not obtained by a “reasoning process” of deduction. In the same way, I took particular pride when my non-superstitiously raised daughter won the highest good-character award in her elementary school, because it showed that rational thinking isn’t inconsistent with good moral character.
While one doesn’t want to undermine one’s own credibility with the approach one uses to make an argument, it is also important to defuse the false inferences found in arguments against rationality. One such false inference is that “rational” is synonymous with “amoral.” Another is that “rational” is synonymous with “emotionally vacant and unfulfilling.” A third is the sense that rationality implies using individual reason alone, without the benefit of a social network and context, because that is the character of many activities (e.g., math homework, tax return preparation, or logic problems) that are commonly characterized as “rational.” Simple anecdote can show that these stereotypes aren’t always present. Evidence from a variety of sources can show that they are usually inapt.
When one chooses a worldview for oneself, it isn’t enough to argue that rationality gives correct answers; one must establish that it gives answers in a way that allows you to feel good about how you are living your life. Without testimonials and other emotional evidence, you don’t establish that there are no hidden costs being withheld from your audience.
Moreover, marketing, in the sense I am using the word, is not about “exploiting irrational responses.” It is about something much more basic: using words that will convey to the intended audience the message that you actually intend to convey. Care in one’s use of words, so as to avoid confusing one’s audience, is quintessentially consistent with the good practice of someone seeking to apply a rational method in philosophy.
Sam Harris has argued: “I think that ‘atheist’ is a term that we do not need, in the same way that we don’t need a word for someone who rejects astrology. We simply do not call people ‘non-astrologers.’ All we need are words like ‘reason’ and ‘evidence’ and ‘common sense’ and ‘bullshit’ to put astrologers in their place, and so it could be with religion.”
I think Sam Harris gets it mostly right.
I’d like to bring up a comparison with a similar term that isn’t used much any more: “abolitionist”. It’s very rare to find anyone these days who wouldn’t agree with those in the pre-Civil War United States who called themselves abolitionists. We don’t need the term today, but we did need it back then...
“Reason” and “evidence-based” are both quite nice words to convey the idea.
Thanks for this. That talk was an informative read.
NYLW has done some preliminary testing, asking people what they think of when they hear the word “rational”. So far the results have been positive.
So far as I know I’ve been the one doing most of the asking, and I don’t have a large enough sample size to declare anything, just seven people. The results have been mostly neutral, with one enthusiastically positive and two slightly negative. If I were to extrapolate from this, I’d say that enough people are at least neutral to the word that it won’t harm us to use it.
If our goal were to find an optimal marketing word, I’d wait until we’d done much more substantial testing. But I think there’s benefit to changing the Spock perception, so as long as people are mostly neutral towards the word, it’s worth using. (I’d still want more than seven responses before committing to it.)
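For what it’s worth, a quick back-of-the-envelope check supports the caution about seven responses. Here is a sketch, assuming the counts above (one positive, four neutral, two slightly negative, so five of seven “neutral or better”) and a standard Wilson score interval; the wilson_interval helper is written for this sketch, not taken from any library.

```python
# Rough 95% confidence interval on "neutral or better", given 5 of 7
# responses (counts from the comment above; helper written for this sketch).
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - half, center + half

lo, hi = wilson_interval(5, 7)
print(f"{lo:.2f} to {hi:.2f}")  # about 0.36 to 0.92: consistent with anything
                                # from "mostly negative" to "almost everyone is
                                # fine with it", hence the need for more data
```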
The specific question I’ve been asking people is:
“I’m just curious, if someone were to describe themselves as a Rationalist to you, what stereotypes would come to your mind about that person?”
(The first time, I started by asking “What thoughts and feelings come to your mind if I say ‘Rationality’?” That prompted some questions and confusion that I don’t have time for in the typical elevator ride, which is where I do the asking. By the third query I had narrowed it down to the phrasing above, because it seemed to cut to the heart of the matter. People might be okay with “rationality,” but are they okay with people who strongly “identify” with rationality?)
When I hit some arbitrary milestone that triggers warm fuzzies, I’ll post the results so far and invite analysis. (I’m thinking 20 is a decent number to start with.)
I’ve been using the word “Luminous” to explicitly refer to “LessWrong rationality” (as opposed to “Spock rationality”). It’s a bit of a kludge, but the concept has always felt central to what I get out of LessWrong. I’m not sure how true this is for others.
Tongue-in-cheek, I’d also suggest “Illuminati” ;)
Luminosity is already a technical term for a subset of rationality skills. If it’s the subset you usually have cause to talk about, there’s nothing wrong with that, but calling the entire thing that seems just mistaken.
nods I am aware it’s a subset, thus calling it a kludge.
Certainly, I’m open to a better term, but I happen to deal with a lot of “Spock” rationalists, as have many of the people I talk to, so having some way of saying “no, I don’t mean that idiocy” is important to me, and this is the best fit that I’ve found so far.
The chain of thought, if you’re curious: on a non-verbal/intuitive level, I feel like the sub-skill of Luminosity is a lot of what distinguishes “LessWrong” rational from “Spock” rational. Since “LessWrong Rationality” is itself a fairly awkward phrase (referring as it does to a single specific community), I substituted “Luminous rationality”, and that eventually got short-handed back to just “Luminous”. English allows for all sorts of weird, confusing things where a word refers to both the set and a specific subset (frex, “man” referring to both “humans” and “humans who are male”), so while it’s kludgy, it works for me.
I can completely understand this word not working well for others :)
Have you heard of the Brights movement?
It was kind of inspired by the gay movement: an attempt to find a word for atheism that was more socially acceptable, i.e. without all the negative baggage, and to embrace and popularise it.
I have heard of it.
I think it’s an awful name, exactly on the grounds of having huge negative baggage. For me, at least, it has strong associations of smug, superior, condescending, and other such qualities.
Yep—correlating it with “being intelligent” seems to be a bit of a PR disaster… which the Brights have tried to counter by calling non-Brights “supers”.
Not sure if that’s worked at all… I keep occasional tabs on what’s happening in that community but don’t really consider myself an active member. I think the heart is in the right place, especially in the US, where religiosity is at a much more fervent level, but I’m not sure it’s really proven effective yet… but then I might be able to say similar things about this community :)
I’ve never been particularly fond of it; it always struck me as too self-aggrandizing. It particularly upset me when my sister started identifying me as a Bright to other people without my permission.
I proposed a new logo for the Brights in 2007 :-)
The images don’t seem to work there?
When I open the image in a separate window, I get a message that I don’t have permission to access it.
It took me looking at Brights on Wikipedia and then a moment’s imagination to work out what he would have come up with.
I have, and even started to mention it, but figured that I was going too far afield. I think the problem there is that the established meaning of “Bright” as intelligent overshadows the secondary meaning that is sought. I think “light” as a metaphor is promising, but the word “Bright” in particular is inapt.