I think this is a good distinction, and anyone somehow trying to shift social norms (perhaps within a subcommunity) might be well-advised to shift the norms in order: First, teach people that others have a right to criticize their opinion; then, teach them that they have no right to an opinion.
“teach them that they have no right to an opinion.”
I know people throw the term around (I try not to), but this is maybe the most fascist thing I’ve seen on this board. They have no right to an opinion? You might want to rephrase this, as many of my opinions are somewhat involuntary.
It seems that in this article, Robin is co-defining “opinion” with “belief”. This isn’t, exactly, incorrect, but I don’t think it maps completely onto the common use, which may be causing misunderstanding. If I say “it’s my opinion that [insert factual proposition here]”, then Robin’s remarks certainly apply. But if it’s my opinion that chocolate chip cookie dough ice cream is delicious—which is certainly a way people often use the word “opinion”—then in what way might I not be entitled to that? Unless I turn out to be mistaken in my use of the term “chocolate chip cookie dough ice cream”, or something, but assume I’m not.
Robin was clear about what he meant by “opinion”. From his first paragraph, with emphasis added:
You are entitled to your desires, and sometimes to your choices. You might own a choice, and if you can choose your preferences, you may have the right to do so. But your beliefs are not about you; beliefs are about the world. Your beliefs should be your best available estimate of the way things are; anything else is a lie.
Though I agree that using “opinion” in an unusual way can cause problems, even when one explicitly states the unusual definition, since people are going to quote the conclusion as a slogan out of the clarifying context.
On the other hand, “You are entitled to your utility function but not your epistemology” would not make an effective slogan. (Well maybe, if it has enough “secret knowledge” appeal to motivate people to figure out what it means.)
In this case, it means that you’re not entitled to refuse to change a belief that’s been proven wrong.
If you think “everyone likes chocolate ice cream”, and I introduce you to my hypothetical friend Bill who doesn’t like chocolate ice cream, you’re not entitled to still believe that ‘everyone’ likes chocolate ice cream. You could still believe that ‘most people’ like chocolate ice cream, but if I was able to come up with a competent survey showing that 51% of people do not like chocolate ice cream, you wouldn’t be entitled to that belief, either, unless you could point me to an even more definitive study that agreed with you.
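A minimal sketch of the kind of update that survey would force, with made-up numbers and a simple Beta-Binomial model (the prior, the survey counts, and the model choice are my own illustration, not anything from the original comment):

```python
# Hypothetical illustration: a prior leaning toward "most people like
# chocolate ice cream", updated on a made-up survey of 1000 people
# in which 51% say they do not like it (conjugate Beta-Binomial update).
prior_alpha, prior_beta = 8, 2      # prior belief: roughly 80% like it
likes, dislikes = 490, 510          # hypothetical survey results

post_alpha = prior_alpha + likes
post_beta = prior_beta + dislikes

posterior_mean = post_alpha / (post_alpha + post_beta)
print(f"Estimated share who like it: {posterior_mean:.2f}")
# -> about 0.49, so "most people like chocolate ice cream" is no longer
#    a belief you can keep, absent stronger contrary evidence.
```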
Even the belief “I like chocolate ice cream” could be proven false in some situations—people’s tastes do change over time, and you could try it one summer and discover that you just don’t enjoy it any more.
It also implies that you’re supposed to go looking for proof of your claims before you make them—that you’re not ‘entitled’ to have or spread an opinion, but instead must earn the right by doing or referencing research.
That article is entitled “You Are Never Entitled to Your Opinion” and says:
If you ever feel tempted to resist an argument or conclusion by saying “everyone is entitled to their opinion,” stop! This is as clear a bias indicator as they come.
I don’t think Robin really means that people aren’t entitled to their opinions. I think what he really means is people aren’t allowed to say “I’m entitled to my opinion”—that is, to use that phrase as a defense.
There’s a big difference. When people use that defense they don’t really mean “I’m entitled to have an opinion”, but instead “I’m entitled to express my opinion without having it criticised”.
In other words “I’m entitled to my opinion” is really a code for “all opinions are equally valid and thus can’t be criticised”.
That said, I do think it is valid to say “I am entitled to an opinion” in situations where your right to expression is being attacked.
I’m not saying you always do have a right to freely and fully express yourself. But in situations when you do have some measure of this, it can be unfairly stomped on.
For example, you might be in a business meeting where you should be able to have input on a matter but one person keeps cutting you off.
Or say you’re with friends and you’re outlining your view on some topic and, though you’re able to get your view out there, someone else always responds with personal attacks.
Sometimes people are just trying to shut you down.
For example, you might be in a business meeting where you should be able to have input on a matter but one person keeps cutting you off.
Or say you’re with friends and you’re outlining your view on some topic and, though you’re able to get your view out there, someone else always responds with personal attacks.
I don’t see how “I’m entitled to my opinion” is a particularly optimal or meaningful response to these situations. What about “it’s unfair not to give me a chance to express my position” in the former situation, and “concluding I’m an asshole because I’m pro-X isn’t justified” in the latter?
Right, “opinion” is so overloaded with meaning that in order to determine whether the use of “I’m entitled to my opinion” or “You are not entitled to your opinion” is virtuous, one should taboo “opinion”, and probably “entitled” as well, and express the thought in a way that is specific to the situation, such as in your examples. And of course, having gone through the mental exercise of validating that what you say makes sense, you should give everyone else the benefit of this thought process and actually communicate the alternate form, so they also can tell if it is virtuous.
Agreed, absolutely. I have nothing against hearing about people’s half-baked theories—something about the theory or their logic may turn out to be useful, or give me an idea about something else, even if the theory is wrong. But it’d be nice to be able to ask “so why do you think that?” without risking an unpleasant reaction. It might even lead me to figure out that some idea that I would have otherwise dismissed is actually correct!
Yes, but maybe if there was a social norm such that if I asked that and they couldn’t answer, they would take the social-status hit, instead of me, they wouldn’t act that way.
Social pressure is pretty much the only thing that can force normal people to acknowledge failures of rationality, in my experience. In a milieu in which a rationalization of that failure will be accepted or even merely tolerated, they’ll short-circuit directly to explaining the failure away rather than forcing themselves to acknowledge the problem.
Yeah, it’d be nice, but it’s probably not going to happen.
It took me years to even recognize that I was doing that, and I still haven’t managed to stop completely.
One obstacle: as long as they aren’t expected to produce obvious results to meet your expectations, people really, really like being given too much credit. And they really, really dislike being given precisely enough credit when they’re nothing special, even if it lets them off the hook.
Many of my social ‘problems’ began once I recognized that other people didn’t think like I did, and were usually profoundly stupid. That’s not a recognition that lends itself to frictionless interaction with others.
This little tidbit highlights so much of what’s wrong with this community:
“Many of my social ‘problems’ began once I recognized that other people didn’t think like I did, and were usually profoundly stupid. That’s not a recognition that lends itself to frictionless interaction with others.”
You’d think a specimen of your gargantuan brainpower would have the social intelligence to handily conceal your disdain for the commonfolk. Perhaps it’s some sort of signaling?
I think you’re underestimating the degree of social intelligence required. To pull that off while still keeping the rationalistic habits that such people find offensive, you’d have to:
Recognize the problem, which is nontrivial,
Find a way of figuring out who falls on which side of the line, without tipping people off,
Determine all of the rationalistic habits that are likely to offend people who are not trying to become more rational,
Find non-offensive ways of achieving those goals, or find ways of avoiding those situations entirely,
Find a way not to slip up in conversation and apply the habits anyway—again, nontrivial. Keeping this degree of focus in realtime is hard.
You’d also probably have to at least to some degree integrate the idea that it’s ‘okay’ (not correct, just acceptable) to be irrational into your general thought process, to avoid unintentional signaling that you think poorly of them. If anything, irrational people are more likely to notice such subtle signals, since so much of their communication is based on them.
You’d also probably have to at least to some degree integrate the idea that it’s ‘okay’ (not correct, just acceptable) to be irrational into your general thought process, to avoid unintentional signaling that you think poorly of them. If anything, irrational people are more likely to notice such subtle signals, since so much of their communication is based on them.
Or, you could just treat the existence of irrationality as a mere fact, like the fact that water freezes or runs downhill. Facts are not a matter of correctness or acceptability, they just are.
In fact (no pun intended), assigning “should-ness” to facts or their opposites in our brains is a significant force in our own irrationality. To say that people “should” be rational is like saying that water “should” run uphill—it says more about your value system than about the thing supposedly being pointed to.
Functionally, beliefs about “should” and “should not” assign aversive consequences to current reality—if I say water “should” run uphill, then I am saying that it is bad that it does not. The practical result of this is to incur an aversive emotional response every time I am exposed to the fact that water runs downhill—a response which does not benefit me in any way.
A saner, E-prime-like translation of “water should run uphill” might be, “I would prefer that water ran uphill”. My preference is just as unlikely to be met in that case, but I do not experience any aversion to the fact that reality does not currently match my preference. And I can still experience a positive emotional response from, say, crafting nice fountains that pump water uphill.
It seems to me that a rationalist would experience better results in life if he or she did not experience aversive emotions from exposure to common facts… such as the fact that human beings run on hardware that’s poorly designed for rationality.
Without such aversions, it would be unnecessary to craft complex strategies to avoid signaling them to others. And, equally important, having aversive responses to impersonal facts is a strong driver of motivated reasoning that’s hard to detect in ourselves!
Good summary; the confusion of treating natural, mindless phenomena with the intentional stance was addressed in the Three Fallacies of Teleology post.
When it is possible to change the situation, emotion directed the right way acts as a reinforcement signal, and helps to learn the correct behavior (and generally to focus on figuring out a way of improving the situation). Attaching the right amount of the right emotions to the right situations is an indispensable tool, good for efficiency and comfort.
When it is possible to change the situation, emotion directed the right way acts as a reinforcement signal, and helps to learn the correct behavior (and generally to focus on figuring out a way of improving the situation). Attaching the right amount of the right emotions to the right situations is an indispensable tool, good for efficiency and comfort.
The piece you may have missed is that even if the situation can be changed, it is still sufficient to use positive reinforcement to motivate action, and in human beings it is generally most useful to use positive reinforcement to motivate positive action.
This is because, on the human platform at least, positive reinforcement leads to exploratory, creative, and risk-taking behaviors, whereas negative reinforcement leads to defensive, risk-avoidance, and passive behaviors. So if the best way to change a situation is to avoid it, then by all means, use negative reinforcement.
However, if the best way to change the situation is to engage with it, then negative emotions and “shoulds” are your enemy, not your friend, as they will cause your mind and body to suggest less-useful behaviors (and signals to others).
IAWYC, modulo the use of “should”: at least with connotations assumed on Less Wrong, it isn’t associated with compulsion or emotional load, it merely denotes preference. “Ought” would be closer.
IAWYC, modulo the use of “should”: at least with connotations assumed on Less Wrong, it isn’t associated with compulsion or emotional load, it merely denotes preference. “Ought” would be closer.
It’s true that in technical contexts “should” has less emotional connotation; however even in say, standards documents, one capitalizes SHOULD and MUST to highlight the technical, rather than colloquial sense of these words. Banishing them from one’s personal vocabulary greatly reduces suffering, and is the central theme of “The Work” of Byron Katie (who teaches a simple 4-question model for turning “shoulds” into facts and felt-preferences).
Among a community of rationalists striving for better communication, it would be helpful to either taboo the words or create alternatives. As it is, a lot of “shoulds” get thrown around here without reference to what goal or preference the shoulds are supposed to serve.
“One should X” conveys no information about what positive or negative consequences are being asserted to stem from doing or not-doing X—and that’s precisely the sort of information that we would like to have if we are to understand each other.
Agreed. Even innocuous-looking exceptions, like phrases of the form, “if your goal is to X, then you should Y”, have to make not-necessarily-obvious assumptions about what exactly Y is optimizing.
Avoiding existing words is in many cases a counterproductive injunction; it’s normal practice for words to get stolen as terms of art. “Should” refers to the sum total of ideal preference, the top-level terminal goal, over all of the details (consequences) together.
Should may require a consequentialist explanation for instrumental actions, or a moral argument for preference over consequences.
The problems you cite in bullets are only nontrivial if you don’t sufficiently value social cohesion. My biggest faux pas have sufficiently conditioned me to make them less often because I put a high premium on that cohesion. So I think it’s less a question of social intelligence and more one of priorities. I don’t have to keep “constant focus”—after a few faux pas it becomes plainly apparent which subjects are controversial and which aren’t, and when we do come around to touchy ones I watch myself a little more.
I thought I would get away with that simplification. Heh.
Those skills do come naturally to some people, but not everyone. They certainly don’t come naturally to me. Even if I’m in a social group with rules that allow me to notice that a faux pas has occurred (not all do; some groups consider it normal to obscure such things to the point where I’ll find out weeks or months later, if at all), it’s still not usually obvious what I did wrong or what else I could do instead, and I have to intentionally sit down and come up with theories that I may or may not even have a chance to test.
Right, I get that people fare differently when it comes to this stuff, but I do think it’s a matter of practice and attention more than innate ability (for most people). And this is really my point, that the sort of monastic rationality frequently espoused on these boards can have politically antirational effects. It’s way easier to influence others if you first establish a decent rapport with them.
I don’t at all disagree that the skills are good to learn, especially if you’re going to be focusing on tasks that involve dealing with non-rationalists. I think it may be a bit of an overgeneralization to say that they should be a high priority for everyone, but probably not much of one.
I do have a problem with judging people for not having already mastered those skills, or for having higher priorities than tackling those skills immediately with all their energy, though, which seems to be what you’re doing. Am I inferring too much when I come to that conclusion?
Look, this whole thread started because of Annoyance’s judgment of people who have higher priorities than rationality, right? Did you have a problem with that?
All I’m saying is that this community in general gives way too short shrift to the utility of social cohesion. Sorry if that bothers you.
Most of what he said condenses to “people who are not practicing rationality are irrational”, which is only an insult if you consider ‘irrational’ to be an insult, which I didn’t see any evidence of. I saw frustration at the difficulty in dealing with them without social awkwardness, but that’s not the same.
Yes, and most of what I said reduces to “Annoyance is not practicing rationality with statements like “‘social cohesion is one of the enemies of rationality.’” You said you had a “problem” with my contention and then I pointed out that Annoyance had made a qualitatively similar claim that hadn’t bothered you. Aside from our apparent disagreement on the point I don’t get how my claim could be a problem for you.
I think I’ve made myself clear and this is getting tiresome so I’ll invite you to have the last word.
I hope I’m not the only one who sees the irony in you refusing to answer my question about your reasoning, given where this thread started.
I guess the best option now is to sum this disagreement up in condensations. For simplicity’s sake, I’m only going to do comments on the branch that leads directly here. I’m starting with this comment.
Annoyance: counterargument: “Most people are not interested enough in being rational for that suggestion to work; they’ll find a way around it, instead”
Me: disagreement with Annoyance—I was wrong
Annoyance: Pointed out my mistake
Me: “Oh, right”
Annoyance: “That is a common mistake, and one that I haven’t fully overcome yet, which means I still have trouble communicating with people who are not practicing rationality” (probably intended to make me feel better)
You: “I object to the above exchange; you’re just masking your prejudice against irrational people by refusing to communicate clearly with them”
Me: “Actually, it’s not a refusal, it’s just hard.”
You: “No, it’s not hard, and refusal to do it means that you don’t value social cohesion.” with a personal example of it not being hard.
Me: “Okay, you got me. It’s only hard for some people.”
You: “Okay, it is hard for some people, but it’s still learnable, and harmful to the cause of rationality if you present yourself as a rationalist without having those skills.”
Me: “They’re good to learn, but I think you’re over-valuing them, and judging people for not sharing your values.”
You: “Why are you complaining about me being judgmental when you didn’t complain about Annoyance being judgmental?”, plus what appears to be some social-signaling stuff intended to indicate that I’m a bad person because I don’t care about social cohesion. I don’t know enough about what you mean by “social cohesion” to make sense of that part of the thread, but I suspect that your assertion that I don’t value it is correct.
Me: “Where was Annoyance judgmental? I didn’t see him being judgmental anywhere.”
This brings us to your comment directly above, which doesn’t condense well. You didn’t answer my question (and I don’t take this as proof that there is no instance of Annoyance being judgmental—I may have missed something somewhere—but I consider it pretty unlikely that you’d refuse to defend your assertion if there was a clear one, so it’s at least strong evidence that there isn’t), accused Annoyance of being irrational, and claimed that I should be accepting your claim even though you refuse to actually defend it.
I do agree with you that the skills involved in dealing with irrational people are useful to learn. But we obviously disagree in many, many ways on what kinds of support should be necessary for an argument to be taken seriously here.
That’s not a judgment against less intelligent people; it’s a judgment against all of us, himself included. I recognize his view as the more rational one in the situation I mentioned here, a situation that I’m failing at from a rationalist standpoint, and I’m not going to bother challenging a rational view on a rationalist forum when the best defense I can think of is “yes, but you shouldn’t say that to the muggles”.
Social cohesion is one of the enemies of rationality.
It’s not necessarily so, in that social cohesion isn’t always opposed to rationality, but it is incompatible with the mechanisms that bring rationality about and permit it to error-correct. It tends to reinforce error. When it happens to reinforce correctness, it’s not needed, and when it doesn’t, it makes the errors significantly harder to correct.
“When it happens to reinforce correctness, it’s not needed”
Can you elaborate?
I’ll note that rationality isn’t an end. My ideal world state would involve a healthy serving of both rationality and social cohesion. There are many situations in which these forces work in tandem and many where they’re at odds.
A perfect example is this site. There are rules the community follows to maintain a certain level of social cohesion, which in turn aids us in the pursuit of rationality. Or are the rules not needed?
It’s demonstrated by the fact that you can up/down vote and report anyone’s posts, and that you need a certain number of upvotes to write articles. This is a method of policing the discourse on the site so that social cohesion doesn’t break down to an extent which impairs our discussion. These mechanisms “reinforce correctness,” in your terms. So I’ll ask again, can we do away with them?
I don’t think humanity follows obviously from rationality, which is what I meant about rationality being a means rather than an end.
There are rules the community follows to maintain a certain level of social cohesion, which in turn aids us in the pursuit of rationality.
How is that demonstrated?
Those rules are rarely discussed outright, at least not comprehensively.
I’m pretty sure if I started posting half of my comments in pig-Latin or French or something, for no apparent reason, and refused to explain or stop, I’d be asked to leave fairly quickly, though. That all communication will be in plain English unless there’s a reason for it not to be is one example. I’m sure there are others.
I disagree. It is rational to exploit interpersonal communication for clarity between persons and for comfortable use. If the ‘language of rationality’ can’t be understood by the ‘irrational people’, it is rational to translate as best you can, and that can include utilizing societal norms. (For clarity and lubrication of the general process.)
Oh, I’m sorry I misunderstood you. Yeah, it can be tiring. I’m a fairly introverted person and need a good amount of downtime between socializing. I guess I was projecting a little—I used to think social norms were garbage and useless, until I realized that neglecting their utility was irrational and that it was primarily an emotional bias against them from never feeling like I ‘fit in’. Sometimes it feels like you never stop discovering unfortunate things about yourself...
I agree here: Reading stuff like this totally makes me cringe. I don’t know why people of above average intelligence want to make everyone else feel like useless proles, but it seems pretty rampant. Some humility is probably a blessing here, I mean, as frustrating as it is to deal with the ‘profoundly stupid’, at least you yourself aren’t profoundly stupid.
Of course, they probably think given the same start the ‘profoundly stupid’ person was given, they would have made the best of it and would be just as much of a genius as they are currently.
It’s a difficult realization, when you become aware you’re more intelligent than average, to be dropped into the pool with a lot of other smart people and realize you really aren’t that special. I mean, in a world of some six billion odd, if you are a one-in-a-million genius, that still means you likely aren’t in the top hundred smartest people in the world and probably not in the top thousand. It kind of reminds me of grad school stories I’ve read, with kids who think they are going to be a total gift to their chosen subject ending up extremely cynical and disappointed.
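A quick back-of-the-envelope check of that arithmetic, using the six-billion figure from the comment above (the numbers are only illustrative):

```python
# Rough check: how many "one-in-a-million" people a 6-billion world contains.
population = 6_000_000_000
rarity = 1_000_000            # "one-in-a-million genius"

peers = population // rarity
print(peers)                  # 6000 -- so being one-in-a-million still leaves
                              # you well outside the top hundred, and very
                              # likely outside the top thousand.
```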
I think people online like to exaggerate their eccentricity and disregard for societal norms in an effort to appeal to the stereotypes for geniuses. I’ve met a few real geniuses IRL and I know you can be a genius without being horribly dysfunctional.
Rationality and intelligence are not the same thing—I’ve seen plenty of discussions here despairing about the existence of obviously-intelligent people, masters in their fields, who haven’t decided to practice rationality. I also know people who are observably less intelligent than I am, who practice rationality about as well as I do. One major difference between people in that latter group, and people who are not practicing rationality, no matter what the irrational people’s intelligence levels are, is that those people don’t get offended when someone points out a flaw in their reasoning, just as I don’t get offended when they, or even people who are not practicing rationality, point out a flaw in mine. People who are less intelligent will probably progress more slowly with rationality, as with any mental skill-set, but that’s not under discussion here. The irrational unwillingness to accept criticism is.
Being called ‘profoundly stupid’ is not exactly a criticism of someone’s reasoning. (Not that anybody was called that.) I think we’re objecting to this because of how it’ll offend people outside of the ‘in group’ anyway. Besides that, as much as we might wish we were immune to the emotional shock or glee of having our thoughts and concepts ridiculed or praised, I think it would be a rarity here to find someone who is. People socializing and exchanging ideas is a type of system—it has to be understood and used effectively in order to produce the best results—and calling, essentially, everybody who disagrees with you ‘profoundly stupid’ is not good social lubrication.
You appear to be putting words into my mouth, but I’m currently too irritated to detangle this much beyond that point.
“Giving people too much credit” was a reference to people’s desire to be rational. I tend to assume that that’s significantly above zero in every case, even though the evidence does not seem to support that assumption. This is a failure to be rational on my part. (I doubt I’ll fix that; it’s the basis for most of my faith in humanity.)
I make no such assumption about intelligence (I do not assume that people want to be more intelligent than they are), and make a conscious effort to remove irrational biases toward intelligent people from my thought process when I encounter them. I have been doing so for years, with a significant degree of success, especially considering that I was significantly prejudiced against less intelligent people, before I realized that it was wrong to hold that view.
I have also put significant effort into learning how to bridge both of those communication gaps, and the skills required in each case are different. When I’m simply dealing with someone who’s less intelligent, I moderate my vocabulary, use lots of supporting social signaling, make smaller leaps of logic, and request feedback frequently to make sure I haven’t lost them. (Those skills are just as useful in regular conversation as they are in explaining things.) When I’m dealing with someone who’s not practicing rationality, I have to be very aware of their particular worldview, and only thoughtfully challenge it—which requires lots of complicated forethought, and can require outright lies.
The lack of either of those sets of communication skills will make dealing with the relevant people difficult, and can lead to them thinking poorly of you, whether you actually are prejudiced against them or not. Assuming that someone who does not have one of those sets of skills is prejudiced does not, in practice, work—there’s a very high risk of getting a false-positive.
When I’m dealing with someone who’s not practicing rationality, I have to be very aware of their particular worldview, and only thoughtfully challenge it -
A person who is ‘thinking’ irrationally can only be challenged to the degree that they’re being rational. If they eschew rationality completely, there isn’t any way to communicate with them.
What have you actually accomplished, if you use social signals to get someone to switch their concept-allegiances?
I thought we’d already defined “practicing rationality” as “intentionally trying to make rational decisions and intentionally trying to become more rational”. Whether we had or not, that was what I meant by the term.
Someone can be being somewhat rational without ‘practicing’ rationality, and to the degree that they can accurately predict what effects follow what causes, or accomplish other tasks that depend on rationality, every person I know is at least somewhat rational. Even animals can be slightly rational—cats for example are well known for learning that the sound of a can opener is an accurate sign that they may be fed in the near future, even if they aren’t rational enough to make stronger predictions about which instances of that sound signal mealtime.
While social signaling can be used on its own to cause someone to switch their allegiances to concepts that they don’t value especially highly, that’s not the only possible use of it, and it’s not a use I consider acceptable. The use of social-signaling that I recommend is intended to keep a person from becoming defensive while ‘rationality-level appropriate’ rational arguments are used to actually encourage them to change their mind.
I thought we’d already defined “practicing rationality” as “intentionally trying to make rational decisions and intentionally trying to become more rational”.
No, only if you rationally try to make rational decisions and rationally try to become more rational.
If you’re acting irrationally, you’re not practicing rationality, in the same way that you’re not practicing vegetarianism if you’re eating meat.
You should expand this into a top-level post. Communication is difficult and I think most people could use advice about it. As it stands, it sounds like broad strokes which are obviously good ideas, but probably hard to implement without more details.
I’ve been considering it, actually, for my own use if not to post here. I think it’d be useful in several ways to try to come up with actual wordings for the tricks I’ve picked up.
I don’t know why people of above average intelligence want to make everyone else feel like useless proles
Isn’t it obvious? Almost everyone is a “useless prole”, as you put it, and even the people who aren’t have to sweat blood to avoid that fate.
Recognizing that unpleasant truth is the first step towards becoming non-useless—but most people can’t think usefully enough to recognize it in the first place, so the problem perpetuates itself.
I know I’m usually a moron. I’ve also developed the ability to distinguish quality thinking from moronicity, which makes it possible for me to (slowly, terribly slowly) wean myself away from stupid thinking and reinforce what little quality I can produce. That’s what makes it possible for me to occasionally NOT be a moron, at least at a rate greater than chance alone would permit.
It’s the vast numbers of morons who believe they’re smart, reasonable, worthwhile people that are the problem.
I was reading around on the site today, and I think I’ve figured out why this attitude sends me running the other way. What clued me in was Eliezer’s description of Spock in his post “Why Truth? And...”.
Eliezer’s point there is that Spock’s behavior goes against the actual ideals of rationality, so people who actually value rationality won’t mimic him. (He’s well enough known that people who want to signal that they’re rational will likely mimic him, and people who want to both be and signal being rational will probably mimic him in at least some ways, and also note that the fact that reversed stupidity is not intelligence is relevant.)
It may come as a shock, but in my case, being rational is not my highest priority. I haven’t actually come up with a proper wording for my highest priority yet, but one of my major goals in pursuing that priority is to facilitate a universal ability for people to pursue their own goals (with the normal caveats about not harming or overly interfering with other people, of course). One of the primary reasons I pursue rationality is to support that goal. I suspect that this is not an uncommon kind of reason for pursuing rationality, even here.
As I mentioned in the comment that I referenced, I’ve avoided facing the fact that most people prefer not to pursue rationality, because it appears that that realization leads directly to the attitude you’re showing here, and I can reasonably predict that if I were to have the attitude you’re showing here, I would no longer support the idea that everyone should have as much freedom as can be arranged, and I don’t want to do that. Very few people would want to take the pill that’d turn them into a psychopath, even if they’d be perfectly okay with being a psychopath after they took the pill.
But there’s an assumption going on in there. Does accepting that fact actually have to lead to that attitude? Is it impossible to be an x-rationalist and still value people?
Is it impossible to be an x-rationalist and still value people?
This is something I’ve thought a lot about. I’m worried about the consequences of certain negative ideologies present here on Less Wrong, but, actually, I feel that x-rationality, combined with greater self-awareness, would be the best weapon against them. X-rationality—identifying facts that are true and strategies that work—is inherently neutral. The way you interpret those facts (and what you use your strategies for) is the result of your other values.
Consider, to begin with, the tautology that 99.7% of the population is less intelligent than 0.3% of the population, by some well-defined, arbitrary metric of intelligence. Suppose also, that someone determined they were in the top 0.3%. They could feel any number of ways about this fact: completely neutral, for example, or loftily superior, or weightily responsible. Seen in this way, feeling contempt for “less intelligent” people is clearly the result of a worldview biased in some negative way.
Generally, humanity is so complex that however anyone feels about humanity says more about them than it does about humanity. Various forces (skepticism and despair; humanism and a sense of purpose) have been vying throughout history: rationality isn’t going to settle it now. We need to pick our side and move on … and notice which sides other people have picked when we evaluate their POV.
I always find it ironic, when ‘rationalists’ are especially misanthropic here on Less Wrong, that Eliezer wants to develop a friendly AI. Implicit with this goal—built right in—is the awareness that rationality alone would not induce the machine to be friendly. So why would we expect that a single-minded pursuit of rationality would not leave us vulnerable to misanthropic forces? Just as we would build friendliness into a perfectly logical, intelligent machine; we must build friendliness into our ideology before we let go of “intuition” and other irrational ways we have of “feeling” what is right, because they contain our humanism, which is outside rationality.
We do not want to be completely rational because being rational is neutral. Being more neutral without perfect rationality would leave us vulnerable to negative forces, and, anyway, we want to be a positive force.
If we assume he has goals other than simply being a self-abasing misanthrope, the attitude Annoyance is showing is far from rational. Arbitrarily defining the vast majority of humans as useless “problems” is, ironically, itself a useless and problematic belief, and it represents an even more fundamental failure than being Spocklike—Spock, at least, does not repeatedly shoot himself in the foot and then seek to blame anything but himself.
I’ve pretty much figured that out. If nothing else, Annoyance is being an excellent example of that right now.
Next question: Is it something about this method of approaching rationality that encourages that failure mode? How did Annoyance fall off the path, and can I avoid doing the same if I proceed?
I’m starting to think that the answer to that last question is yes, though.
How did Annoyance fall off the path, and can I avoid doing the same if I proceed?
While I find conversations with Annoyance rather void, I would encourage you not to try to lift (him?) up as an example of falling off the path or entering failure modes. If you care about the question I would make a post using generic examples. This does a few things:
Gets you away from any emotional responses to Annoyance (both in yourself and anyone else).
Provides a clear-cut example that can be picked apart without making this entire thread required reading. It also cleans up many straw men and red herrings before they happen, since the specifics in the thread are mostly unneeded with relation to the question you have just asked.
Brings attention to the core problem that needs to be addressed and avoids any specific diagnoses of Annoyance (for better or worse)
That’s very good advice. However, I’m not going to take it today, and probably won’t at all. It seems more useful at this point to take a break from this entirely and give myself a chance to sort out the information I’ve already gained.
I’ll definitely be interested in looking at it, in a few days, if someone else wants to come up with that example and continue thinking about it here.
If we assume he has goals other than simply being a self-abasing misanthrope, the attitude Annoyance is showing is far from rational.
A logically incorrect statement. An attitude is rational if it consistently and explicitly follows from data gathered about the world and its functioning. As there are other consequences from my behavior other than the one you so contemptuously dismiss, and you have no grounds for deciding what my goals are or whether my actions achieve them, your claim is simply wrong. Trivially so, in fact.
Arbitrarily defining the vast majority of humans as useless “problems”
It’s not arbitrary.
The rational thing to do when confronted with a position you don’t understand is ask yourself “Why did that person adopt that position?”
If your actions accomplish your goals, fine. However, it’s safe to say most of the people here don’t want to be Annoyances, and it’s important to point out that your behavior does not reflect a requirement or implication of rationality.
If you disagree, I hope you will explicitly list the assumptions leading to your belief that it’s a good idea to treat people with condescension.
The rational thing to do when confronted with a position you don’t understand is ask yourself “Why did that person adopt that position?”
[...]
Worthwhile questions are rarely answered easily.
A search for an answer requires the question to be worthwhile, which is far from the prior expectation for research into the inane-sounding positions people hold.
A search for an answer requires the question to be worthwhile, which is far from the prior expectation for inane-sounding positions.
If you want to convince someone of something, it’s generally a good idea to understand why they believe what they believe now. People generally have to be convinced out of one belief before they can be convinced into another, and you can’t refute or reframe their evidence unless you know what the evidence is.
Even if their reasoning is epistemologically unsound, if you know how it’s unsound, you can utilize the same type of reasoning to change their belief. For example, if someone only believes things they “see with their own eyes”, you would then know it is a waste of time to try to prove something to them mathematically.
I agree, but in this case the benefit comes not from the expectation of finding insight in the person’s position, but from the expectation of successful communication (education), which was not the motivation referred to in Annoyance’s comment.
It may come as a shock, but in my case, being rational is not my highest priority. I haven’t actually come up with a proper wording for my highest priority yet, but one of my major goals in pursuing that priority is to facilitate a universal ability for people to pursue their own goals (with the normal caveats about not harming or overly interfering with other people, of course). One of the primary reasons I pursue rationality is to support that goal.
Once I realized that achieving anything, no matter what, required my being rational, I quickly bumped “being rational” to the top of my to-do list.
Is it impossible to be an x-rationalist and still value people?
‘People’ do not lend themselves to any particular utility. The Master of the Way treats people as straw dogs.
It may come as a shock, but in my case, being rational is not my highest priority. I haven’t actually come up with a proper wording for my highest priority yet, but one of my major goals in pursuing that priority is to facilitate a universal ability for people to pursue their own goals (with the normal caveats about not harming or overly interfering with other people, of course). One of the primary reasons I pursue rationality is to support that goal.
Once I realized that achieving anything, no matter what, required my being rational, I quickly bumped “being rational” to the top of my to-do list.
Yes, I see that you did that. Why would I want to do that, given my current utility function? I appear to be accomplishing things reasonably well as is, and it looks like if I made that change, I wouldn’t wind up accomplishing things that my current utility function values at all.
Is it impossible to be an x-rationalist and still value people?
‘People’ do not lend themselves to any particular utility. The Master of the Way treats people as straw dogs.
Why would I want to do that, given my current utility function?
What’s the function you use to evaluate your utility function?
And what function do I use to evaluate that, and on to infinity. Right. Or, I can just accept that my core utility function is not actually rational, examine it to make sure it’s something that’s not actually impossible, and get on with my life.
Or does Eliezer have a truly-rational reason behind the kind of altruism that’s leading him to devote his life to FAI that I’m not aware of?
Persuasiveness: You fail at it.
Persuasiveness: what I was not aiming for.
Oh, silly me for assuming that you were trying to raise the rationality level around here. It’s only the entire point of the blog, after all.
So if you’re not actually trying to convince me that being more rational would actually be a good thing, what have you been doing? Self-signaling? Making pointless appeals to your own non-existent authority? Performing some bizarre experiment regarding your karma score?
Sets of terminal values can be coherent. Logical specifications for computing terminal values can be consistent. What would it mean for one to be rational?
Or, I can just accept that my core utility function is not actually rational,
If there isn’t a tiny grain of rationality at the core of that infinite regression, you’re in great trouble.
The ability to anticipate how reality will react to something you do depends entirely on the ability to update your mental models to match data derived from reality. That’s rationality right there.
If there’s even a tiny spark, it can be fanned into flame. But if there’s no spark there’s nothing to build on. I strongly suspect that some degree of rationality is present in your utility function, but if not, your case is hopeless.
Oh, silly me for assuming that you were trying to raise the rationality level around here.
Why would I try to do that? Nothing I do can cause the rationality level to go up. Only the people here can do that. If I could ‘make’ people be rational, I would. But there’s no spoon, there.
All I can do is point to the sky and hope that people will choose to pay less attention to the finger than what it indicates.
If there’s even a tiny spark, it can be fanned into flame. But if there’s no spark there’s nothing to build on. I strongly suspect that some degree of rationality is present in your utility function, but if not, your case is hopeless.
Out of curiosity, can someone who does not have a grain of rationality in them ever become more rational? In other words, can someone be so far gone that they literally can never be rational?
I am honestly having trouble picturing such a person. Perhaps that is because I never thought about it that way before.
Out of curiosity, can someone who does not have a grain of rationality in them ever become more rational?
They may stumble across rationality as life causes their core functions to randomly vary. As far as I can tell, that’s how explicit and self-referential standards of thought first arose—they seem to have occurred in societies where there were many different ideas and claims being made about everything, and people needed a way to sift through the rich bed of assertions.
So complex and mutually incompatible cultural fluxes seem not only to be necessary to produce the first correct standards, but to encourage their development as well. That argument applies more to societies than individuals, but I think a similar one holds there too.
Understood. I guess the followup question is about where the general human being starts. Do we start with any rationality in us? My guess is that it is somewhat random. Some do; some do not.
The opposite of rational is “wrong” or “ineffective”. A person can’t be wrong or ineffective about everything, that’s senseless. I think all the confusion has arisen from Annoyance claiming that terminal values must have some spark of rationality, but Eliezer explained that he might have meant they must be coherent. So if I may paraphrase your question (which interests me as well), the question is: how may terminal values be incoherent?
You need to be more careful with problem statement, it seems too confused. For example, taboo “rational” (to distinguish irrational people from rocks), taboo “never” (to distinguish the deep properties of the phenomenon from limitations created by life span and available cultural environment).
Yeah, I would agree. I meant it as a specific response to what Annoyance wrote and figured I could just reuse the term. I didn’t expect so many people to jump in. :)
“Never” as in “This scenario is impossible and cannot happen.”
“Become more rational” can be restated “gain more rationality.”
Rewording the entire question:
Can someone who has no rationality in them ever gain more rationality?
The tricky clause is now “rationality in them.” Any more defining of terms brings this into a bigger topic. It would probably make a good top-level post, if anyone is interested.
I’d like to see a top post on this. My example of cats having a degree of rationality may be useful:
Even animals can be slightly rational—cats for example are well known for learning that the sound of a can opener is an accurate sign that they may be fed in the near future, even if they aren’t rational enough to make stronger predictions about which instances of that sound signal mealtime.
(Warning) This is a huge mind-dump created while on lunch break. By all means pick it apart, but I am not planning on defending it in any way. Take it with all the salt in the world.
Personally, I find the concept of animal rationality to be more of a distraction. For some reason, my linguistic matrix prefers the word “intelligent” to describe cats responding to a can opener. Animals are very smart. Humans are very smart. But smart does not imply rational, and a smart human is not necessarily a rational one.
I tend to reserve rationality for describing the next “level” of intelligence. Rationality is the form or method of increasing intelligence. An analogy is speed versus acceleration. Acceleration increases speed; rationality increases intelligence. This is more of a rough, instinctive definition, however, and one of my personal reasons for being here at Less Wrong is to learn more about rationality. My analogy does not seem accurate in application. Rationality seems connected to intelligence, but to say that rationality implies a change in intelligence does not fit with its reverse: irrationality does not decrease intelligence.
I am missing something, but it seems that whatever I am looking for in my definitions is not found in cats. But, as you may have meant, if cats have no rationality and cannot have rationality, is it because they have no rationality?
If this were the case, and rationality builds on itself, where does our initial rationality come from? If I claim to be rational, should I be able to point to a sequence of events in my life and say, “There it started”? It seems that fully understanding rationality implies knowing its limits; its beginning and ending. To further our rationality we should be able to know what helps or hinders our rationality.
Annoyance claims that the first instances of rationality may be caused by chance. If this were true, could we remove the chance? Could we learn what events chanced our own rationality and inflict similar events on other people?
Annoyance also seems to claim that rationality begets rationality. But something else must produce that first spark in us. That spark is worth studying. That spark is annoyingly difficult to define and observe. How do we stop and examine ourselves to know if we have the spark? If two people walk before us claiming rationality yet one is lying, how do we test and observe the truth?
Right now, we do so by their actions. But if the liar knows the rational actions and mimics them without believing in their validity or truth, how would we know? Would such a liar really be lying? Does the liar’s beliefs matter? Does rationality imply more than correct actions?
To make this more extreme, if I build a machine to mimic rationality, is it rational? This is a classic question with many forms. If I make a machine that acts human, is it human? I claim that “rationality” cannot be measured in a cat. Could it be measured in a machine? A program? Why am I so fixated on humanity? Is this bias?
Rationality is a label attached to a behavior but I believe it will eventually be reattached to a particular source of the behavior. I do not think that rational behavior is impossible to fake. Pragmatically, a Liar that acts rational is not much different from a rational person. If the Liar penetrates our community and suddenly goes ape then the lies are obvious. How do we predict the Liars before they reveal themselves? What if the Liars believe their own lies?
I do not mean “believe” as in “having convinced themselves”. What if they are not rational but believe they are? The lie is not conscious; it is a desire to be rational but not possessing the Way. How do we spot the fake rationalists? More importantly, how do I know that I, myself, have rationality?
Does this question have a reasonable answer? What if the answer is “No”? If I examine myself and find myself to be irrational, what do I do? What if I desire to be rational? Is it possible for me to become rational? Am I denied the Way?
I think much of the confusion comes from the inability to define rationality. We cannot offer a rationality test or exam. We can only describe behavior. I believe this currently necessary but I believe it will change. I think the path to this change has to do with finding the causations behind rationality and developing a finer measuring stick for determining rational behavior. I see this as the primary goal of Less Wrong.
Once we gather more information about the causes of our own rationality we can begin developing methods for causing rationality in others along with drastically increasing our own rationality. I see this as the secondary goal of Less Wrong.
This is why I do not think Annoyance’s answer was sufficient. “Chance” may be how we describe our fortune, but this is an inoculative answer. In Eliezer’s comments on vitalism, he says this:
I call theories such as vitalism mysterious answers to mysterious questions. These are the signs of mysterious answers: First, the explanation acts as a curiosity-stopper rather than an anticipation-controller. Second, the hypothesis has no moving parts—the model is not a specific complex mechanism, but a blankly solid substance or force. The mysterious substance or mysterious force may be said to be here or there, to do this or that; but the reason why the mysterious force behaves thus is wrapped in a blank unity. Third, those who proffer the explanation cherish their ignorance; they speak proudly of how the phenomenon defeats ordinary science or is unlike merely mundane phenomena. Fourth, even after the answer is given, the phenomenon is still a mystery and possesses the same quality of sacred inexplicability that it had at the start.
(Emphasis original. You will have to search for the paragraph; it is about three-quarters of the way down the page.)
“Chance” hits 3 of the 4, giving Annoyance the benefit of the doubt and assuming there is no cherished ignorance. So “chance” works for now, because we have no better words to describe the beginning of rationality, but there is a true cause out there flipping the light bulbs on inside of heads and producing the behavior we have labeled “rationality.” Let’s go find it.
(PS) Annoyance, this wasn’t meant to pick on what you said, it just happened to be in my mind and relevant to the discussion. You were answering a very specific question and the answer satisfied what was asked at the time.
My point was that some animals do appear to be able to be rational, to a degree. (I’m defining ‘rational’ as something like ‘able to create accurate representations of how the world works, which can be used to make accurate predictions’.)
I can even come up with examples of some animals being able to be more rational than some humans. I used to work in a nursing home, and one of the residents there was mentally retarded as part of her condition, and never did figure out that the cats could not understand her when she talked to them, and sometimes seemed to actually expect them to talk. On the other hand, most animals that have been raised around humans seem to have a pretty reasonable grasp on what we can and can’t understand of their forms of communication. Unfortunately, most of my data for the last assertion there is personal observation. The bias against even considering that animals could communicate intentionally is strong enough in modern society that it’s rarely studied at all, as far as I know. Still, consider the behavior of not-formally-trained domesticated animals that you’ve known, compared to feral examples of the same species.
Basic prediction-ability seems like such a universally useful skill that I’d be pretty surprised if we didn’t find it in at least a minimal form in any creature with a brain. It may not look like it does in humans, in those cases, but then, given what’s been discussed about possible minds, that shouldn’t be too much of a problem.
The bias against even considering that animals could communicate intentionally is strong enough in modern society that it’s rarely studied at all, as far as I know.
Animals obviously communicate with one another. The last I heard, there was a lot of studying being done on dolphins and whales. Anyone who has trained a dog in anything can tell you that dogs can “learn” English words. The record I remember hearing about was a Border Collie with a vocabulary of over 100 words. (No reference, sorry. It was in a trivia book.)
As for your point, I understand and acknowledge it. I think of rationality as something different, I guess. I do not know how useful continuing the cat analogy is when we seem to think of “rational” differently.
Hmm, maybe you could define ‘intelligence’ as you use it here:
Rationality is the form or method of increasing intelligence.
I define intelligence as the ability to know how to do things (talk, add, read, write, do calculus, convince a person of something—yes, there are different forms of intelligence) and rationality as the ability to know which things to do in a given situation to get what you want out of that situation, which involves knowing what things can be gotten out of a given situation in the first place.
Well, the mind dump from earlier was mostly food for thought, not an attempt to stake out claims or definitions. I guess my rough definition of intelligence fits what I find in the dictionary:
The ability to acquire and apply knowledge and skills
The same dictionary, however, defines rationality as a form of the word rational:
Based on or in accordance with reason or logic
I take intelligence to mean, “the ability to accomplish stuff,” and rationality to mean, “how to get intelligence.” Abstracted, rationality more or less becomes, “how to get the ability to accomplish stuff.” This is contrasted with “learning” which is:
Gaining or acquiring knowledge of or skill in (something) by study, experience, or being taught
I am not proposing this definition of rationality is what anyone else should use. Rather, it is a placeholder concept until I feel comfortable sitting down and tackling the problem as a whole. Right now I am still in aggregation mode which is essentially collecting other people’s thoughts on the subject.
Honestly, all of this discussion is interesting but it may not be helpful. I think Eliezer’s concept of the nameless virtue is good to keep in mind during these kinds of discussions:
You may try to name the highest principle with names such as “the map that reflects the territory” or “experience of success and failure” or “Bayesian decision theory”. But perhaps you describe incorrectly the nameless virtue. How will you discover your mistake? Not by comparing your description to itself, but by comparing it to that which you did not name.
Further information: The person I mentioned was able to do some intelligence-based things that I would not expect cats to do, like read and write (though not well). She may also have been able to understand that cats don’t speak English if someone actually explained it to her—I don’t think anyone ever actually did. Even so, nobody sits cats or dogs down and explains our limitations to them, either, so I think the playing field is pretty level in that respect.
Seriously, doing this in non-silly manner is highly nontrivial.
Oh, no joke. But we have to start somewhere. :)
Honestly, until we have a better word/definition than “rationality,” we get to play with fuzzy words. I am happy with that for now but it is a dull future.
I made more causal comments on this subject in a different comment and would appreciate your thoughts. It is kind of long, however, so no worries if you would rather not. :)
You’ve never thought about it that way before because it’s completely silly. How on earth does Annoyance make these judgments? I’m not nearly prideful enough to think I can know others’ minds to the extent Annoyance can, or, in other words, I imagine there are circumstances which could change most people in profound ways, both for ill and good. So the only thing judging people in this manner does is reinforce one’s social prejudices. Writing off people who seem resistant to reason only encourages their ignorance, and remedying their condition is both an exercise and example of reason’s power, which, incidentally, is why I’m trying so hard with Annoyance!
If there isn’t a tiny grain of rationality at the core of that infinite regression, you’re in great trouble.
You did catch that I’m talking about a terminal value, right? It’s the nature of those that you want them because you want them, not because they lead to something else that you want. I want everybody to be happy. That’s a terminal value. If you ask me why I want that, I’m going to have some serious trouble answering, because there is no answer. I just want it, and there’s nothing that I know of that I want more, or that I would consider a good reason to give up that goal.
All I can do is point to the sky and hope that people will choose to pay less attention to the finger than what it indicates.
Right now, it’s pointing at “don’t make this mistake”, which I was unlikely to do anyway, but now I have the opportunity to point the mistake out to you, so you can (if you choose to; I can’t force you) stop making it, which would raise the rationality around here, which seems like a good thing to me. Or, I can not point it out, and you keep doing what you’re doing. It’s like one of those lottery problems, and I concluded that the chance of one or both of us becoming more rational was worth the cost of having this discussion. (And, it paid off at least somewhat—I think I have enough insight into that particular mistake to be able to avoid it without avoiding the situation entirely, now.)
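To make that lottery-style tradeoff concrete, here is a minimal sketch in Haskell with entirely made-up numbers; the probability, benefit, and cost below are illustrative placeholders, not anything either of us actually estimated:

```haskell
-- A toy expected-value check for "is this discussion worth having?"
-- Every number here is a hypothetical placeholder, not a real estimate.
expectedGain :: Double
expectedGain = pUpdate * valueOfUpdate - costOfDiscussion
  where
    pUpdate          = 0.2   -- assumed chance that at least one of us becomes more rational
    valueOfUpdate    = 10.0  -- assumed utility of that outcome
    costOfDiscussion = 1.0   -- assumed utility cost of the time and friction

main :: IO ()
main = print expectedGain   -- 1.0: positive under these assumptions, so the discussion is worth it
```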
“Heaven and earth are ruthless, and treat the myriad creatures as straw dogs; the sage is ruthless, and treats the people as straw dogs.”
One might accuse this of falling afoul of the appeal to nature, but that would assume a fact not in evidence, to wit, that Annoyance’s motivations resemble those of a typical LW poster (to the extent that such a beast exists).
Once I realized that achieving anything, no matter what, required my being rational, I quickly bumped “being rational” to the top of my to-do list.
Voted down because your realization is flawed. Achieving anything does not require you to be rational, as evidenced by this post.
The Master of the Way treats people as straw dogs.
Your strategy of dealing with people is also flawed: does the Master of the Way always defect? If you were a skilled exploiter, you wouldn’t give obvious signals that you are an exploiter. Instead, you seem to be signaling “Vote me off the island!” to society, and this community. You may want to reconsider that position.
Annoyance, you’re still dodging the question. Joe didn’t ask whether or not in your opinion everyone is a useless prole; he asked why it’s useful to make people feel like that. Your notion that “social cohesion is the enemy of rationality” was best debunked, I think, by pjeby’s point here:
Annoyance, your argument has devolved into inanity. If you don’t want to popularly cultivate rationality then you disagree with one of the core tenets of this community. It’s in the second paragraph of the “about” page:
“Less Wrong is devoted to refining the art of human rationality—the art of thinking. The new math and science deserves to be applied to our daily lives, and heard in our public voices.”
Your circular word games do no good for this community.
Or perhaps simply the recognition that it’s sometimes impossible to fluff other people’s egos and drive discussion along rational paths at the same time.
If people become offended when you point out weaknesses in their arguments—if they become offended if you even examine them and don’t automatically treat their ideas as inherently beyond reproach—there’s no way to avoid offending them while also acting rationally. It becomes necessary to choose.
there’s no way to avoid offending them while also acting rationally. It becomes necessary to choose.
Really? Have you tried, maybe, just not pointing out the weaknesses in their arguments? Mightn’t that be the rational thing to do? Just a polite smile and nod, or a gentle, “Have you considered some alternative?” Or even, “You may well be right.” (This is true of pretty much any non-contradictory statement.) Or there are many different ways to argue with someone without being confrontational. Asking curious-sounding questions works fairly well.
It’s generally easy to recognize how well a person will react to an argument against him. If you have basic people skills, you’ll be able to understand what type of argument/approach will communicate your point effectively, and when you simply don’t have a chance. The idea that it’s necessary to offend people to act rationally seems completely absurd (at least in this context). If it’s going to offend them, it’s going to accomplish the opposite of your goal, so, rationally, you shouldn’t do it.
This whole discussion reminds me of the Dave Barry quote that may well have been used earlier on this site:
“I argue very well. Ask any of my remaining friends. I can win an argument on any topic, against any opponent. People know this, and steer clear of me at parties. Often, as a sign of their great respect, they don’t even invite me.”
I was going to say “there are more workarounds than you think”, but that’s probably my selection bias talking again. That said, there are workarounds, in some situations. It’s still not a trivial thing to learn, though.
It’s not just nontrivial, it’s incredibly hard. Engaging “system 2” reasoning takes a lot of effort, lowering sensitivity to, and acute awareness of, social cues and signals.
The mindset of “let’s analyze arguments to find weaknesses,” aka Annoyance’s “rational paths,” is a completely different ballgame than most people are willing to play. Rationalists may opt for that game, but they can’t win, and may be reinforcing illogical behavior. Such a rationalist is focused on whether arguments about a particular topic are valid and sound, not the other person’s rational development. If the topic is a belief, attempting to reason it out with the person is counterproductive. Gaining no ground when engaging with people on a topic should be a red flag: “maybe I’m doing the wrong thing.”
Does anyone care enough for me to make a post about workarounds? Maybe we can collaborate somehow, Adelene; I have a little experience in this area.
It’s not just nontrivial, it’s incredibly hard. Engaging “system 2” reasoning takes a lot of effort, lowering sensitivity to, and acute awareness of, social cues and signals.
Engaging system 2 is precisely what you don’t want to do, since evolutionarily speaking, a big function of system 2 is to function as a decoy/shield mechanism for keeping ideas out of a person. And increasing a person’s skill at system 2 reasoning just increases their resistance to ideas.
To actually change attitudes and beliefs requires the engagement of system 1. Otherwise, even if you convince someone that something is logical, they’ll stick with their emotional belief and just avoid you so they don’t have to deal with the cognitive dissonance.
(Note that this principle also applies to changing your own beliefs and attitudes—it’s not your logical mind that needs convincing. See Eliezer’s story about overcoming a fear of lurking serial killers for an example of mapping System 2 thinking to System 1 thinking to change an emotional-level belief.)
pjeby, sorry I wasn’t clear, I should have given some context. I am referencing system 1 and 2 as simplified categories of thinking as used by cognitive science, particularly in behavioral economics. Here’s Daniel Kahneman discussing them. I’m not sure what you’re referring to with decoys and shields, which I’ll just leave at that.
To add to my quoted statement, workarounds are incredibly hard, and focusing on reasoning (system 2) about an issue or belief leaves few cycles for receiving and sending social cues and signals. While reasoning, we can pick up those cues and signals, but they’ll break our concentration, so we tend to ignore them while reasoning carefully. The automatic, intuitive processing of the face interferes with the reasoning task; e.g. we usually look somewhere else when reasoning during a conversation. To execute a workaround strategy, however, we need to be attuned to the other person.
When I refer to belief, I’m not referring to fear of the dark or serial killers, or phobias. Those tend to be conditioned responses—the person knows the belief is irrational—and they can be treated easily enough with systematic desensitization and a little CBT thrown in for good measure. Calling them beliefs isn’t wrong, but since the person usually knows they’re irrational, they’re outside my intended scope of discussion: beliefs that are perceived by the believer to be rational.
People are automatically resistant to being asked to question their beliefs. Usually it’s perceived as unfair, if not an actual attack on them as a person: those beliefs are associated with their identity, which they won’t abandon outright. We shouldn’t expect them to. It’s unrealistic.
What should we do, then? Play at the periphery of belief. To reformulate the interaction as a parable: We’ll always lose if we act like the wind, trying to blow the cloak off the traveller. If we act like the sun, the traveller might remove his cloak on his own. I’ll think about putting a post together on this.
I’m not sure what you’re referring to with decoys and shields, which I’ll just leave at that.
My hypothesis is that reasoning as we know it evolved as a mechanism to both persuade others, and to defend against being persuaded by others.
Consider priming, which works as long as you’re not aware of it and therefore not defending against it. But it makes no sense to evolve a mechanism to avoid being primed, unless the priming mechanism were being exploited by our tribe-mates. (After all, they’re the only ones besides us with the language skill to trigger it.)
In other words, once we evolved language, we became more gullible, because we were now verbally suggestible. This would then have resulted in an arms race of intelligence to both persuade, and defend against persuasion, with tribal status and resources as the prize.
And once we evolved to the point of being able to defend ourselves against any belief-change we’re determined to avoid, the prize would’ve become being able to convince neutral bystanders who didn’t already have something at stake.
The system 1/2 distinctions cataloged by Stanovich & West don’t quite match my own observation, in that I consider any abstract processing to be system 2, whether it’s good reasoning or fallacious, and whether it’s cached or a work-in-progress. (Cached S2 reasoning isn’t demanding of brainpower, and in fact can be easily parroted back in many forms once an appropriate argument has been heard, without the user ever needing to figure it out for themselves.)
In my view, the primary functional purpose of human reasoning is to persuade or prevent persuasion, with other uses being an extra bonus. So in this view, using system 2 for truly rational thought is actually an abuse of the system… which would explain why it’s so demanding of cognitive capacity, compared to using it as a generator of confabulation and rhetoric. And it also explains why it requires so much learning to use properly: it’s not what the hardware was put there for.
The S&W model is IMO a bit biased by the desire to find “normative” reasoning (i.e., correct reasoning) in the brain, even though there’s really no evolutionary reason for us to have truly rational thought or to be particularly open-minded. In fact, there’s every evolutionary reason for us to not be persuadable whenever we have something at stake, and to not reason things out in a truly fair or logical manner.
Hence, some of the attributes they give system 2 are (in my view) attributes of learned reasoning running on top of system 2 in real time, rather than native attributes of system 2 itself, or reflective of cached system 2 thinking.
Anyway, IAWYC re: the rest, I just wanted to clarify this particular bit.
Actually, system one can handle a surprising amount of abstraction; I don’t have a reference handy, but any comprehensive description of conceptual synesthesia should do a good job of explaining it. (I’m significantly enough conceptually synesthetic that I don’t need it explained, and have never actually needed an especially good reference before.)
The fact that I can literally see that the concept ‘deserve X’ depends on the emotional version of the concept ‘should do X’, because the pattern for one contains the pattern for the other, makes it very clear to me that such abstractions are not dependent on the rational processing system.
It’s also noteworthy that synesthesia appears to be a normal developmental phase; it seems pretty likely to me that I’m merely more aware of how my brain is processing things, rather than having a radically different mode of processing altogether.
Actually, system one can handle a surprising amount of abstraction; I don’t have a reference handy, but any comprehensive description of conceptual synesthesia should do a good job of explaining it.
I’d certainly be interested in that. My own definitions are aimed at teaching people not to abstract away from experience, including emotional experience. Certainly there is some abstraction at that level, it’s just a different kind of abstraction (ISTM) than system 2 abstraction.
In particular, what I’m calling system 1 does not generally use complex sentence structure or long utterances, and the referents of its “sentences” are almost always concrete nouns, with its principal abstractions being emotional labels rather than conceptual ones.
The fact that I can literally see that the concept ‘deserve X’ depends on the emotional version of the concept ‘should do X’, because the pattern for one contains the pattern for the other, makes it very clear to me that such abstractions are not dependent on the rational processing system.
I consider “should X” and “deserve X” to both be emotional labels, since they code for attitude and action towards X, and so both are well within system 1 scope. When used by system 2, they may carry totally different connotations, and have nothing to do with what the speaker actually believes they deserve or should do, and especially little to do with what they’ll actually do.
For example, a statement like, “People should respect the rights of others and let them have what they deserve” is absolutely System 2, whereas, a statement like “I don’t deserve it” (especially if experienced emotionally) is well within System 1 territory.
It’s entirely possible that my definition of system 1/2 is more than a little out of whack with yours or the original S&W definition, but under my definition it’s pretty easy to learn to distinguish S1 utterances from S2 utterances, at least within the context of mind hacking, where I or someone else is trying to find out what’s really going on in System 1 in relation to a topic, and distinguish it from System 2’s confabulated theories.
However, since you claim to be able to observe system 1 directly, this would seem to put you in a privileged position with respect to changing yourself—in principle you should be able to observe what beliefs create any undesired behaviors or emotional responses. Since that’s the hard part of mind hacking IME, I’m a bit surprised you haven’t done more with the “easy” part (i.e. changing the contents of System 1).
In particular, what I’m calling system 1 does not generally use complex sentence structure or long utterances, and the referents of its “sentences” are almost always concrete nouns, with its principal abstractions being emotional labels rather than conceptual ones.
Yep, it mostly uses nouns, simple verbs, relatedness categorizations (‘because’), behavior categorizations (‘should’, ‘avoid with this degree of priority’), and a few semi-abstract concepts like ‘this week’. Surprisingly, I don’t often ‘see’ the concepts of good or bad—they seem to be more built-in to certain nouns and verbs, and changing my opinion of a thing causes it to ‘look’ completely different. (That’s also not the only thing that can cause a concept to change appearance—one of my closest friends has mellowed from a very nervous shade of orange to a wonderfully centered and calm medium-dark chocolate color over the course of the last year or so.)
I consider “should X” and “deserve X” to both be emotional labels, since they code for attitude and action towards X, and so both are well within system 1 scope. When used by system 2, they may carry totally different connotations, and have nothing to do with what the speaker actually believes they deserve or should do, and especially little to do with what they’ll actually do.
For example, a statement like, “People should respect the rights of others and let them have what they deserve” is absolutely System 2, whereas, a statement like “I don’t deserve it” (especially if experienced emotionally) is well within System 1 territory.
Hmm… heh, it actually sounds like I just don’t use system 2, then.
However, since you claim to be able to observe system 1 directly, this would seem to put you in a privileged position with respect to changing yourself—in principle you should be able to observe what beliefs create any undesired behaviors or emotional responses. Since that’s the hard part of mind hacking IME, I’m a bit surprised you haven’t done more with the “easy” part (i.e. changing the contents of System 1).
I have and do, actually, and there’s very little that’s ‘undesirable’ left in there that I’m aware of (an irrational but so far not problematic fear of teenagers and a rationally-based but problematic fear of mental health professionals and, by extension, doctors are the only two things that come to mind that I’d change, and I’ve already done significant work on the second or I wouldn’t be able to calmly have this conversation with you). The major limitation is that I can only see what’s at hand, and it takes a degree of concentration to do so. I can’t detangle my thought process directly while I’m trying to carry on a conversation, unless it’s directly related to exactly what I’m doing at the moment, and I can’t fix problems that I haven’t noticed or have forgotten about.
I’m going to be putting together a simple display on conceptual synesthesia for my Neuroversity project this week… I’ll be sure to send you a link when it’s done.
I’ve been thinking more about this… or, not really. One of the downsides to my particular mind-setup is that it takes a long time to retrieve things from long-term memory, but I did retrieve something interesting just now.
When I was younger, I think I did use system two moderately regularly. I do vaguely remember intentionally trying to ‘figure things out’ using non-synesthetic reasoning—before I realized that the synesthesia was both real and useful—and coming to conclusions. I very distinctly remember having a mindset more than once of “I made this decision, so this is what I’m going to do, whether it makes sense now or not”. I also remember that I was unable to retain the logic behind those decisions, which made me very inflexible about them—I couldn’t use new data to update my decision, because I didn’t know how I’d come to the conclusion or how the new data should fit in. Using that system is demanding enough that it simply wasn’t possible to re-do my logic every single time a potentially-relevant piece of data turned up, and in fact I couldn’t remember enough of my reasoning to even figure out which pieces of data were likely to be relevant. The resulting single-mindedness is much less useful than the ability to actually be flexible about your actions, and after having that forcibly pointed out by reality a few times, I stopped using that method altogether.
There does seem to be a degree of epistemic hygiene necessary to switch entirely to using system one, though. I do remember, vaguely, that one problem I had when I first started using system one for actual problems was that I was fairly easy to persuade—it took a while to really get comfortable with the idea that someone could have an opinion that was well-formed and made sense but still not be something that I would ‘have to’ support or even take into consideration, for example. Essentially my own concepts of what I wanted were not strong enough to handle being challenged directly, at first. (I got better.)
I feel I should jump in here, as you appear to be talking past each other. There is no confusion in the system 1/system 2 distinction; you’re both using the same definition, but the bit about decoys and shields was actually the core of PJ’s post, and of the difference between your positions. PJ holds that to change someone’s mind you must focus on their S1 response, because if they engage S2, it will just rationalize and confabulate to defend whatever position their S1 holds. Now, I have no idea how one would go about altering the S1 response of someone who didn’t want their response altered, but I do know that many people respond very badly to rational arguments that go against their intuition, increasing their own irrationality as much as necessary to avoid admitting their mistake.
I don’t believe we are, because I know of no evidence of the following:
evolutionarily speaking, a big function of system 2 is to function as a decoy/shield mechanism for keeping ideas out of a person. And increasing a person’s skill at system 2 reasoning just increases their resistance to ideas.
Originally, I was making a case that attempting to reason was the wrong strategy. Given your interpretation, it looks like pjeby didn’t understand I was suggesting that, and then suggested essentially the same thing.
My experience, across various believers (Christian, Jehovah’s Witness, New Age woo-de-doo) is that system 2 is never engaged on the defensive, and the sort of rationalization we’re talking about never uses it. Instead, they construct and explain rationalizations that are narratives. I claim this largely because I observed how “disruptable” they were during explanations—not very.
How to approach changing belief: avoid resistance by avoiding the issue and finding something at the periphery of belief. Assist in developing rational thinking where the person has no resistance, and empower them. Strategically, them admitting their mistake is not the goal. It’s not even in the same ballpark. The goal is rational empowerment.
Part of the problem, which I know has been mentioned here before, is unfamiliarity with fallacies and what they imply. When we recognize fallacies, most of the time it’s intuitive. We recognize a pattern likely to be a fallacy, and respond. We’ve built up that skill in our toolbox, but it’s still intuitive, like a chess master who can walk by a board and say “white mates in three.”
Now, I have no idea how one would go about altering the S1 response of someone who didn’t want their response altered,
Tell them stories. If you’ll notice, that’s what Eliezer does. Even his posts that don’t use fiction per se use engaging examples with sensory detail. That’s the stuff S1 runs on.
Eliezer uses a bit more S2 logic in his stories than is perhaps ideal for a general audience; it’s about right for a sympathetic audience with some S2+ skills, though.
On a general audience, what might be called “trance logic” or “dramatic logic” works just fine on its own. The key is that even if your argument can be supported by S2 logic, to really convince someone you must get a translation to S1 logic.
A person who’s being “reasonable” may or may not do the S2->S1 translation for you. A person who’s being “unreasonable” will not do it for you; you have to embed S1 logic in the story so that any effort to escape it with S2 will be unconvincing by comparison.
This, by the way, is how people who promote things like intelligent design work: they set up analogies and metaphors that are much more concretely convincing on the S1 level, so that the only way to refute them is to use a massive burst of S2 reasoning that leaves the audience utterly unconvinced, because the “proof” is sitting right there in S1 without any effort being required to accept it.
I hadn’t actually found the system 1/system 2 meme before this, but it maps nicely onto how I handle those situations. The main trick is to make lots of little leaps of logic, instead of one big one, while pushing as few emotional buttons as you can get away with, and using the emotional buttons you do push to guide the conversation along.
An example of that is here. In the original example, telling someone directly that they’re wrong pushes all kinds of emotional buttons, and a fully thought out explanation of why is obviously too much for them to handle with system one, so it’s going to fall flat, unless they want to understand why they’re wrong, which you’ve already interfered with by pushing their buttons.
In my example, I made a much smaller leap of logic—“you’re using a different definition of ‘okay’ than most people do”—which can be parsed by system one, I think. I also used social signaling rather than words to communicate that the definition is not okay, which is a good idea because social signaling can communicate that with much more finesse and fewer emotional buttons pushed, and because people are simply wired to go along with that kind of influence more easily.
My sanity-saver … but obviously not rationality-saver… has been to learn to encourage the people I’m dealing with to be more rational, at least when dealing with me. My inner circle of friends is made up almost entirely of people who ask themselves and each other that kind of question just as a matter of course, now, and dissect the answers to make sure they’re correct and rational and well-integrated with the other things we know about each other.
That doesn’t help at all when I’m trying to think about society in general, though.
They establish conclusions, then go searching for ‘reasons’ to cite.
And worse, they can cite completely incoherent “reasons”, which can be observed by noting that the sequence resulting from repeated application of “what do you mean by X” basically diverges. It reminds me of the value “bottom” in a lifted type system. It denotes an informationless “result”, such as that of a non-terminating computation.
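For anyone who hasn’t run into the term, here is a minimal Haskell sketch of what “bottom” means in a lifted type system; the definitions and names below are my own illustration, not anything from the thread:

```haskell
-- In a lifted type system, every type implicitly contains an extra value,
-- "bottom": the informationless "result" of a computation that never
-- actually produces anything.
bottom :: a
bottom = bottom   -- forcing this value diverges (it never terminates)

-- Repeatedly asking "what do you mean by X?" without ever grounding out
-- behaves the same way: the chain of "answers" never yields information.
whatDoYouMeanByX :: Int
whatDoYouMeanByX = 1 + whatDoYouMeanByX   -- also diverges if evaluated

main :: IO ()
main = putStrLn "Evaluating either definition above would loop forever."
```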
put another way, I think the problem is a norm that says “the right to have an opinion means the right to not have it criticised”
I think this is a good distinction, and anyone somehow trying to shift social norms (perhaps within a subcommunity) might be well-advised to shift the norms in order: First, teach people that others have a right to criticize their opinion; then, teach them that they have no right to an opinion.
“teach them that they have no right to an opinion.”
I know people throw the term around (I try not to), but this is maybe the most fascist thing I’ve seen on this board. They have no right to an opinion? You might want to rephrase this, as many of my opinions are somewhat involuntary.
http://www.overcomingbias.com/2006/12/you_are_never_e.html
It seems that in this article, Robin is co-defining “opinion” with “belief”. This isn’t, exactly, incorrect, but I don’t think it maps completely onto the common use, which may be causing misunderstanding. If I say “it’s my opinion that [insert factual proposition here]”, then Robin’s remarks certainly apply. But if it’s my opinion that chocolate chip cookie dough ice cream is delicious—which is certainly a way people often use the word “opinion”—then in what way might I not be entitled to that? Unless I turn out to be mistaken in my use of the term “chocolate chip cookie dough ice cream”, or something, but assume I’m not.
Robin was clear about what he meant by “opinion”. From his first paragraph, with emphasis added:
Though I agree that it can cause problems to use “opinion” in an unusual way, even in the context of explicitly stating one’s unusual definition, when people are going to quote the conclusion as a slogan out of the clarifying context.
On the other hand, “You are entitled to your utility function but not your epistemology” would not make an effective slogan. (Well maybe, if it has enough “secret knowledge” appeal to motivate people to figure out what it means.)
Thank you. An opinion is a thought. What does it mean to say that you are not entitled to a thought?
In this case, it means that you’re not entitled to refuse to change a belief that’s been proven wrong.
If you think “everyone likes chocolate ice cream”, and I introduce you to my hypothetical friend Bill who doesn’t like chocolate ice cream, you’re not entitled to still believe that ‘everyone’ likes chocolate ice cream. You could still believe that ‘most people’ like chocolate ice cream, but if I was able to come up with a competent survey showing that 51% of people do not like chocolate ice cream, you wouldn’t be entitled to that belief, either, unless you could point me to an even more definitive study that agreed with you.
Even the belief “I like chocolate ice cream” could be proven false in some situations—peoples’ tastes do change over time, and you could try it one summer and discover that you just don’t enjoy it any more.
It also implies that you’re supposed to go looking for proof of your claims before you make them—that you’re not ‘entitled’ to have or spread an opinion, but instead must earn the right by doing or referencing research.
(And I agree with the two posters in the other comment-branches who pointed out that it’s a poor wording.)
That article is entitled “You Are Never Entitled to Your Opinion” and says:
I don’t think Robin really means that people aren’t entitled to their opinions. I think what he really means is people aren’t allowed to say “I’m entitled to my opinion”—that is, to use that phrase as a defense.
There’s a big difference. When people use that defense they don’t really mean “I’m entitled to have an opinion”, but instead “I’m entitled to express my opinion without having it criticised”.
In other words “I’m entitled to my opinion” is really a code for “all opinions are equally valid and thus can’t be criticised”.
That said, I do think it is valid to say “I am entitled to an opinion” in situations where your right to expression is being attacked.
I’m not saying you always do have a right to freely and fully express yourself. But in situations when you do have some measure of this, it can be unfairly stomped on.
For example, you might be in a business meeting where you should be able to have input on a matter but one person keeps cutting you off.
Or say you’re with friends and you’re outlining your view on some topic and, though you’re able to get your view out there, someone else always responds with personal attacks.
Sometimes people are just trying to shut you down.
I don’t see how “I’m entitled to my opinion” is a particularly optimal or meaningful response to these situations. What about “it’s unfair not to give me a chance to express my position” in the former situation, and “concluding I’m an asshole because I’m pro-X isn’t justified” in the latter?
Right, “opinion” is so overloaded with meaning that in order to determine if the use of “I’m entitled to my opinion” or “You are not entitled to your opinion” is virtuous, one should taboo “opinion”, and probably “entitled” as well, and express the thought in a way that is specific to the situation, such as in your examples. And of course, having gone through the mental exercise of validating that what you say makes sense, you should give everyone else the benefit of this thought process and actually communicate the alternate form, so they also can tell if it is virtuous.
Agreed, absolutely. I have nothing against hearing about people’s half-baked theories—something about the theory or their logic may turn out to be useful, or give me an idea about something else, even if the theory is wrong. But it’d be nice to be able to ask “so why do you think that?” without risking an unpleasant reaction. It might even lead me to figure out that some idea that I would have otherwise dismissed is actually correct!
Most people don’t derive their conclusions from reasons. They establish conclusions, then go searching for ‘reasons’ to cite.
Asking for the reasons for the conclusion, in a way that indicates the conclusion ought to follow from them, is perceived by most people as an attack.
The only way not to risk receiving an unpleasant reaction is to avoid talking to such people.
Yes, but maybe if there were a social norm such that, when I asked that and they couldn’t answer, they took the social-status hit instead of me, they wouldn’t act that way.
Social pressure is pretty much the only thing that can force normal people to acknowledge failures of rationality, in my experience. In a milieu in which a rationalization of that failure will be accepted or even merely tolerated, they’ll short-circuit directly to explaining the failure away rather than forcing themselves to acknowledge the problem.
Yeah, it’d be nice, but it’s probably not going to happen.
Yes, I was giving people too much credit again, wasn’t I?
It took me years to even recognize that I was doing that, and I still haven’t managed to stop completely.
One obstacle: as long as they aren’t expected to produce obvious results to meet your expectations, people really, really like being given too much credit. And they really, really dislike being given precisely enough credit when they’re nothing special, even if it lets them off the hook.
Many of my social ‘problems’ began once I recognized that other people didn’t think like I did, and were usually profoundly stupid. That’s not a recognition that lends itself to frictionless interaction with others.
This little tidbit highlights so much of what’s wrong with this community:
“Many of my social ‘problems’ began once I recognized that other people didn’t think like I did, and were usually profoundly stupid. That’s not a recognition that lends itself to frictionless interaction with others.”
You’d think a specimen of your gargantuan brainpower would have the social intelligence to handily conceal your disdain for the commonfolk. Perhaps it’s some sort of signaling?
I think you’re underestimating the degree of social intelligence required. To pull that off while still keeping the rationalistic habits that such people find offensive, you’d have to:
Recognize the problem, which is nontrivial,
Find a way of figuring out who falls on which side of the line, without tipping people off,
Determine all of the rationalistic habits that are likely to offend people who are not trying to become more rational,
Find non-offensive ways of achieving those goals, or find ways of avoiding those situations entirely,
Find a way not to slip up in conversation and apply the habits anyway—again, nontrivial. Keeping this degree of focus in realtime is hard.
You’d also probably have to at least to some degree integrate the idea that it’s ‘okay’ (not correct, just acceptable) to be irrational into your general thought process, to avoid unintentional signaling that you think poorly of them. If anything, irrational people are more likely to notice such subtle signals, since so much of their communication is based on them.
Or, you could just treat the existence of irrationality as a mere fact, like the fact that water freezes or runs downhill. Facts are not a matter of correctness or acceptability, they just are.
In fact (no pun intended), assigning “should-ness” to facts or their opposites in our brains is a significant force in our own irrationality. To say that people “should” be rational is like saying that water “should” run uphill—it says more about your value system than about the thing supposedly being pointed to.
Functionally, beliefs about “should” and “should not” assign aversive consequences to current reality—if I say water “should” run uphill, then I am saying that it is bad that it does not. The practical result of this is to incur an aversive emotional response every time I am exposed to the fact that water runs downhill—a response which does not benefit me in any way.
A saner, E-prime-like translation of “water should run uphill” might be, “I would prefer that water ran uphill”. My preference is just as unlikely to be met in that case, but I do not experience any aversion to the fact that reality does not currently match my preference. And I can still experience a positive emotional response from, say, crafting nice fountains that pump water uphill.
It seems to me that a rationalist would experience better results in life if he or she did not experience aversive emotions from exposure to common facts… such as the fact that human beings run on hardware that’s poorly designed for rationality.
Without such aversions, it would be unnecessary to craft complex strategies to avoid signaling them to others. And, equally important, having aversive responses to impersonal facts is a strong driver of motivated reasoning that’s hard to detect in ourselves!
Good summary; the confusion of treating natural mindless phenomena with the intentional stance was addressed in the Three Fallacies of Teleology post.
When it is possible to change the situation, emotion directed the right way acts as a reinforcement signal, and helps to learn the correct behavior (and generally to focus on figuring out a way of improving the situation). Attaching the right amount of the right emotions to the right situations is an indispensable tool, good for efficiency and comfort.
The piece you may have missed is that even if the situation can be changed, it is still sufficient to use positive reinforcement to motivate action, and in human beings, it is generally most useful to use positive reinforcement to motivate positive action.
This is because, on the human platform at least, positive reinforcement leads to exploratory, creative, and risk-taking behaviors, whereas negative reinforcement leads to defensive, risk-avoidance, and passive behaviors. So if the best way to change a situation is to avoid it, then by all means, use negative reinforcement.
However, if the best way to change the situation is to engage with it, then negative emotions and “shoulds” are your enemy, not your friend, as they will cause your mind and body to suggest less-useful behaviors (and signals to others).
IAWYC, modulo the use of “should”: at least with the connotations assumed on Less Wrong, it isn’t associated with compulsion or emotional load; it merely denotes preference. “Ought” would be closer.
It’s true that in technical contexts “should” has less emotional connotation; however even in say, standards documents, one capitalizes SHOULD and MUST to highlight the technical, rather than colloquial sense of these words. Banishing them from one’s personal vocabulary greatly reduces suffering, and is the central theme of “The Work” of Byron Katie (who teaches a simple 4-question model for turning “shoulds” into facts and felt-preferences).
Among a community of rationalists striving for better communication, it would be helpful to either taboo the words or create alternatives. As it is, a lot of “shoulds” get thrown around here without reference to what goal or preference the shoulds are supposed to serve.
“One should X” conveys no information about what positive or negative consequences are being asserted to stem from doing or not-doing X—and that’s precisely the sort of information that we would like to have if we are to understand each other.
Agreed. Even innocuous-looking exceptions, like phrases of the form, “if your goal is to X, then you should Y”, have to make not-necessarily-obvious assumptions about what exactly Y is optimizing.
Avoiding existing words is in many cases a counterproductive injunction; it’s normal practice for words to get borrowed as terms of art. Should refers to the sum total of ideal preference, the top-level terminal goal, over all of the details (consequences) together.
Should may require a consequentialist explanation for instrumental actions, or a moral argument for preference over consequences.
Agreed. This is one of the major themes of some (most?) meditation practices and seems to be one of the most useful.
I seriously doubt we’re capable of not associating it with those things, though.
I think of “should” and “ought” as exactly synonymous, btw.
Thanks to both of you for expressing so clearly what I failed to, and with links!
That’s just what I was trying to get at. Thanks for the clarification.
The problems you cite in bullets are only nontrivial if you don’t sufficiently value social cohesion. My biggest faux pas have sufficiently conditioned me to make them less often because I put a high premium on that cohesion. So I think it’s less a question of social intelligence and more one of priorities. I don’t have to keep “constant focus”—after a few faux pas it becomes plainly apparent which subjects are controversial and which aren’t, and when we do come around to touchy ones I watch myself a little more.
I thought I would get away with that simplification. Heh.
Those skills do come naturally to some people, but not everyone. They certainly don’t come naturally to me. Even if I’m in a social group with rules that allow me to notice that a faux pas has occurred (not all do; some groups consider it normal to obscure such things to the point where I’ll find out weeks or months later, if at all), it’s still not usually obvious what I did wrong or what else I could do instead, and I have to intentionally sit down and come up with theories that I may or may not even have a chance to test.
Right, I get that people fare differently when it comes to this stuff, but I do think it’s a matter of practice and attention more than innate ability (for most people). And this is really my point, that the sort of monastic rationality frequently espoused on these boards can have politically antirational effects. It’s way easier to influence others if you first establish a decent rapport with them.
I don’t at all disagree that the skills are good to learn, especially if you’re going to be focusing on tasks that involve dealing with non-rationalists. I think it may be a bit of an overgeneralization to say that they should be a high priority for everyone, but probably not much of one.
I do have a problem with judging people for not having already mastered those skills, or for having higher priorities than tackling those skills immediately with all their energy, though, which seems to be what you’re doing. Am I inferring too much when I come to that conclusion?
Look, this whole thread started because of Annoyance’s judgment of people who have higher priorities than rationality, right? Did you have a problem with that?
All I’m saying is that this community in general gives way too short shrift to the utility of social cohesion. Sorry if that bothers you.
Quote, please?
Most of what he said condenses to “people who are not practicing rationality are irrational”, which is only an insult if you consider ‘irrational’ to be an insult, which I didn’t see any evidence of. I saw frustration at the difficulty in dealing with them without social awkwardness, but that’s not the same.
Have I missed something?
Yes, and most of what I said reduces to “Annoyance is not practicing rationality with statements like ‘social cohesion is one of the enemies of rationality.’” You said you had a “problem” with my contention and then I pointed out that Annoyance had made a qualitatively similar claim that hadn’t bothered you. Aside from our apparent disagreement on the point I don’t get how my claim could be a problem for you.
I think I’ve made myself clear and this is getting tiresome so I’ll invite you to have the last word.
I hope I’m not the only one who sees the irony in you refusing to answer my question about your reasoning, given where this thread started.
I guess the best option now is to sum this disagreement up in condensations. For simplicity’s sake, I’m only going to do comments on the branch that leads directly here. I’m starting with this comment.
JamesCole: Quoted hypothetical social-norm suggestion, disagreed, offered an alternate suggestion, offered supporting logic.
JamesCole: Restated supporting logic.
Me: Agreed, offered more support.
Annoyance: counterargument: “Most people are not interested enough in being rational for that suggestion to work; they’ll find a way around it, instead”
Me: disagreement with Annoyance—I was wrong
Annoyance: Pointed out my mistake
Me: “Oh, right”
Annoyance: “That is a common mistake, and one that I haven’t fully overcome yet, which means I still have trouble communicating with people who are not practicing rationality” (probably intended to make me feel better)
You: “I object to the above exchange; you’re just masking your prejudice against irrational people by refusing to communicate clearly with them”
Me: “Actually, it’s not a refusal, it’s just hard.”
You: “No, it’s not hard, and refusal to do it means that you don’t value social cohesion,” with a personal example of it not being hard.
Me: “Okay, you got me. It’s only hard for some people.”
You: “Okay, it is hard for some people, but it’s still learnable, and harmful to the cause of rationality if you present yourself as a rationalist without having those skills.”
Me: “They’re good to learn, but I think you’re over-valuing them, and judging people for not sharing your values.”
You: “Why are you complaining about me being judgmental when you didn’t complain about Annoyance being judgmental?”, plus what appears to be some social-signaling stuff intended to indicate that I’m a bad person because I don’t care about social cohesion. I don’t know enough about what you mean by “social cohesion” to make sense of that part of the thread, but I suspect that your assertion that I don’t value it is correct.
Me: “Where was Annoyance judgmental? I didn’t see him being judgmental anywhere.”
This brings us to your comment directly above, which doesn’t condense well. You didn’t answer my question (and I don’t take this as proof that there is no instance of Annoyance being judgmental—I may have missed something somewhere—but I consider it pretty unlikely that you’d refuse to defend your assertion if there was a clear one, so it’s at least strong evidence that there isn’t), accused Annoyance of being irrational, and claimed that I should be accepting your claim even though you refuse to actually defend it.
I do agree with you that the skills involved in dealing with irrational people are useful to learn. But we obviously disagree in many, many ways on what kinds of support should be necessary for an argument to be taken seriously here.
Hmm, might you have been referring to this?
That’s not a judgment against less intelligent people; it’s a judgment against all of us, himself included. I recognize it as the more rational decision in the situation I mentioned here (one that I’m failing at from a rationalist standpoint), and I’m not going to bother challenging his rational view on a rational forum when the best defense I can think of is “yes, but you shouldn’t say that to the muggles”.
Social cohesion is one of the enemies of rationality.
It’s not necessarily so, in that it’s not always opposed to rationality, but it is incompatible with the mechanisms that bring rationality about and permit it to error-correct. It tends to reinforce error. When it happens to reinforce correctness, it’s not needed, and when it doesn’t, it makes it significantly harder to correct the errors.
“When it happens to reinforce correctness, it’s not needed”
Can you elaborate?
I’ll note that rationality isn’t an end. My ideal world state would involve a healthy serving of both rationality and social cohesion. There are many situations in which these forces work in tandem and many where they’re at odds.
A perfect example is this site. There are rules the community follows to maintain a certain level of social cohesion, which in turn aids us in the pursuit of rationality. Or are the rules not needed?
Why can’t it be?
How is that demonstrated?
It’s demonstrated by the fact that you can up/down vote and report anyone’s posts, and that you need a certain number of upvotes to write articles. This is a method of policing the discourse on the site so that social cohesion doesn’t break down to an extent which impairs our discussion. These mechanisms “reinforce correctness,” in your terms. So I’ll ask again, can we do away with them?
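(For concreteness, here is a minimal sketch of the kind of gating being described; the threshold number is an assumption for illustration, not the site’s actual rule.)

```haskell
-- Hypothetical sketch of vote-gated posting: users below an assumed karma
-- threshold can comment and vote but cannot write top-level articles.
data User = User { userName :: String, karma :: Int }

articleThreshold :: Int
articleThreshold = 20   -- assumed value, purely illustrative

canWriteArticles :: User -> Bool
canWriteArticles u = karma u >= articleThreshold

main :: IO ()
main = print (canWriteArticles (User "example" 25))   -- True under these assumptions
```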
I don’t think humanity follows obviously from rationality, which is what I meant about rationality being a means rather than an end.
You’re assuming a fact not in evidence.
So you tell me what you think they’re for, then.
Those rules are rarely discussed outright, at least not comprehensively.
I’m pretty sure if I started posting half of my comments in pig-Latin or French or something, for no apparent reason, and refused to explain or stop, I’d be asked to leave fairly quickly, though. That all communication will be in plain English unless there’s a reason for it not to be is one example. I’m sure there are others.
I disagree. It is rational to exploit interpersonal communication for clarity between persons and comfortable use. If the ‘language of rationality’ can’t be understood by the ‘irrational people’, it is rational to translate as best you can, and that can include utilizing societal norms. (For clarity and lubrication of the general process.)
Yes, I agree—my point was that the skill of translating is a difficult one to acquire, not that it’s irrational to acquire it.
Oh, I’m sorry I misunderstood you. Yeah, it can be tiring. I’m a fairly introverted person and need a good amount of downtime between socializing. I guess I was projecting a little—I used to think social norms were garbage and useless, until I realized that neglecting their utility was irrational, and that it was primarily an emotional bias against them from never feeling like I ‘fit in’. Sometimes it feels like you never stop discovering unfortunate things about yourself...
I agree here: Reading stuff like this totally makes me cringe. I don’t know why people of above average intelligence want to make everyone else feel like useless proles, but it seems pretty rampant. Some humility is probably a blessing here, I mean, as frustrating as it is to deal with the ‘profoundly stupid’, at least you yourself aren’t profoundly stupid.
Of course, they probably think given the same start the ‘profoundly stupid’ person was given, they would have made the best of it and would be just as much of a genius as they are currently.
It’s a difficult realization, when you become aware you’re more intelligent than average, to be dropped into the pool with a lot of other smart people and realize you really aren’t that special. I mean, in a world of some six billion odd, if you are a one-in-a-million genius, that still means you likely aren’t in the top hundred smartest people in the world and probably not in the top thousand. It kind of reminds me of grad school stories I’ve read, with kids who think they are going to be a total gift to their chosen subject ending up extremely cynical and disappointed.
I think people online like to exaggerate their eccentricity and disregard for societal norms in an effort to appeal to the stereotypes for geniuses. I’ve met a few real geniuses IRL and I know you can be a genius without being horribly dysfunctional.
Rationality and intelligence are not the same thing—I’ve seen plenty of discussions here despairing about the existence of obviously-intelligent people, masters in their fields, who haven’t decided to practice rationality. I also know people who are observably less intelligent than I am, who practice rationality about as well as I do. One major difference between people in that latter group, and people who are not practicing rationality, no matter what the irrational peoples’ intelligence levels are, is that those people don’t get offended when someone points out a flaw in their reasoning, just as I don’t get offended when they, or even people who are not practicing rationality, point out a flaw in mine. People who are less intelligent will probably progress more slowly with rationality, as with any mental skill-set, but that’s not under discussion here. The irrational unwillingness to accept criticism is.
Being called ‘profoundly stupid’ is not exactly a criticism of someone’s reasoning. (Not that anybody was called that.) I think we’re objecting to this because of how it’ll offend people outside of the ‘in group’ anyway. Besides that, as much as we might wish we were immune to the emotional shock or glee at our thoughts and concepts being ridiculed or praised, I think it would be a rarity here to find someone who actually is immune. People socializing and exchanging ideas is a type of system—it has to be understood and used effectively in order to produce the best results—and calling, essentially, everybody who disagrees with you ‘profoundly stupid’ is not good social lubrication.
You appear to be putting words into my mouth, but I’m currently too irritated to detangle this much beyond that point.
“Giving people too much credit” was a reference to people’s desire to be rational. I tend to assume that that’s significantly above zero in every case, even though the evidence does not seem to support that assumption. This is a failure to be rational on my part. (I doubt I’ll fix that; it’s the basis for most of my faith in humanity.)
I make no such assumption about intelligence (I do not assume that people want to be more intelligent than they are), and make a conscious effort to remove irrational biases toward intelligent people from my thought process when I encounter them. I have been doing so for years, with a significant degree of success, especially considering that I was significantly prejudiced against less intelligent people, before I realized that it was wrong to hold that view.
I have also put significant effort into learning how to bridge both of those communication gaps, and the skills required in each case are different. When I’m simply dealing with someone who’s less intelligent, I moderate my vocabulary, use lots of supporting social signaling, make smaller leaps of logic, and request feedback frequently to make sure I haven’t lost them. (Those skills are just as useful in regular conversation as they are in explaining things.) When I’m dealing with someone who’s not practicing rationality, I have to be very aware of their particular worldview, and only thoughtfully challenge it—which requires lots of complicated forethought, and can require outright lies.
The lack of either of those sets of communication skills will make dealing with the relevant people difficult, and can lead to them thinking poorly of you, whether you actually are prejudiced against them or not. Assuming that someone who does not have one of those sets of skills is prejudiced does not, in practice, work—there’s a very high risk of getting a false-positive.
A person who is ‘thinking’ irrationally can only be challenged to the degree that they’re being rational. If they eschew rationality completely, there isn’t any way to communicate with them.
What have you actually accomplished, if you use social signals to get someone to switch their concept-allegiances?
I thought we’d already defined “practicing rationality” as “intentionally trying to make rational decisions and intentionally trying to become more rational”. Whether we had or not, that was what I meant by the term.
Someone can be somewhat rational without ‘practicing’ rationality, and to the degree that they can accurately predict what effects follow what causes, or accomplish other tasks that depend on rationality, every person I know is at least somewhat rational. Even animals can be slightly rational—cats, for example, are well known for learning that the sound of a can opener is an accurate sign that they may be fed in the near future, even if they aren’t rational enough to make stronger predictions about which instances of that sound signal mealtime.
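(As a side note, the sort of minimal, cue-based prediction described above can be made concrete with a toy calculation. This is purely an illustrative sketch of my own; the function name and the observations are hypothetical, not drawn from any real data. It just estimates how reliably one event predicts another from raw co-occurrence counts.)

```haskell
-- Toy sketch: how strongly does a cue (e.g. a can-opener sound) predict an
-- outcome (e.g. being fed)?  Estimated here as the fraction of observed cues
-- that were followed by the outcome.  Data and names are purely hypothetical.
predictiveStrength :: [(Bool, Bool)] -> Double   -- (cue occurred, outcome followed)
predictiveStrength history =
  let outcomesAfterCue = [outcome | (cue, outcome) <- history, cue]
      hits             = length (filter id outcomesAfterCue)
  in  fromIntegral hits / fromIntegral (length outcomesAfterCue)

main :: IO ()
main =
  -- Five hypothetical observations: the cue was followed by the outcome 3 of 4 times.
  print (predictiveStrength
           [(True, True), (True, True), (True, False), (True, True), (False, False)])
  -- prints 0.75
```

Nothing deeper than counting is needed for that level of ‘rationality’, which is rather the point.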
While social signaling can be used on its own to cause someone to switch their allegiances to concepts that they don’t value especially highly, that’s not the only possible use of it, and it’s not a use I consider acceptable. The use of social-signaling that I recommend is intended to keep a person from becoming defensive while ‘rationality-level appropriate’ rational arguments are used to actually encourage them to change their mind.
No, only if you rationally try to make rational decisions and rationally try to become more rational.
If you’re acting irrationally, you’re not practicing rationality, in the same way that you’re not practicing vegetarianism if you’re eating meat.
I wrote this rant before I saw the thing above. I’m not deleting it, because someone may find this useful, but the issue has been resolved. :)
You should expand this into a top-level post. Communication is difficult and I think most people could use advice about it. As it stands, it sounds like broad strokes which are obviously good ideas, but probably hard to implement without more details.
I’ve been considering it, actually, for my own use if not to post here. I think it’d be useful in several ways to try to come up with actual wordings for the tricks I’ve picked up.
Isn’t it obvious? Almost everyone is a “useless prole”, as you put it, and even the people who aren’t have to sweat blood to avoid that fate.
Recognizing that unpleasant truth is the first step towards becoming non-useless—but most people can’t think usefully enough to recognize it in the first place, so the problem perpetuates itself.
I know I’m usually a moron. I’ve also developed the ability to distinguish quality thinking from moronicity, which makes it possible for me to (slowly, terribly slowly) wean myself away from stupid thinking and reinforce what little quality I can produce. That’s what makes it possible for me to occasionally NOT be a moron, at least at a rate greater than chance alone would permit.
It’s the vast numbers of morons who believe they’re smart, reasonable, worthwhile people that are the problem.
I was reading around on the site today, and I think I’ve figured out why this attitude sends me running the other way. What clued me in was Eliezer’s description of Spock in his post “Why Truth? And...”.
Eliezer’s point there is that Spock’s behavior goes against the actual ideals of rationality, so people who actually value rationality won’t mimic him. (He’s well enough known that people who want to signal that they’re rational will likely mimic him, and people who want to both be and signal being rational will probably mimic him in at least some ways; also note that the fact that reversed stupidity is not intelligence is relevant here.)
It may come as a shock, but in my case, being rational is not my highest priority. I haven’t actually come up with a proper wording for my highest priority yet, but one of my major goals in pursuing that priority is to facilitate a universal ability for people to pursue their own goals (with the normal caveats about not harming or overly interfering with other people, of course). One of the primary reasons I pursue rationality is to support that goal. I suspect that this is not an uncommon kind of reason for pursuing rationality, even here.
As I mentioned in the comment that I referenced, I’ve avoided facing the fact that most people prefer not to pursue rationality, because it appears that that realization leads directly to the attitude you’re showing here. I can reasonably predict that if I were to adopt that attitude, I would no longer support the idea that everyone should have as much freedom as can be arranged, and I don’t want that to happen. Very few people would want to take the pill that’d turn them into a psychopath, even if they’d be perfectly okay with being a psychopath after they took it.
But there’s an assumption going on in there. Does accepting that fact actually have to lead to that attitude? Is it impossible to be an x-rationalist and still value people?
This is something I’ve thought a lot about. I’m worried about the consequences of certain negative ideologies present here on Less Wrong, but, actually, I feel that x-rationality, combined with greater self-awareness, would be the best weapon against them. X-rationality—identifying facts that are true and strategies that work—is inherently neutral. The way you interpret those facts (and what you use your strategies for) is the result of your other values.
Consider, to begin with, the tautology that 99.7% of the population is less intelligent than the top 0.3%, by some well-defined but arbitrary metric of intelligence. Suppose also that someone determined they were in that top 0.3%. They could feel any number of ways about this fact: completely neutral, for example, or loftily superior, or weightily responsible. Seen in this way, feeling contempt for “less intelligent” people is clearly the result of a worldview biased in some negative way.
Generally, humanity is so complex that however anyone feels about humanity says more about them than it does about humanity. Various forces (skepticism and despair; humanism and a sense of purpose) have been vying throughout history: rationality isn’t going to settle it now. We need to pick our side and move on … and notice which sides other people have picked when we evaluate their POV.
I always find it ironic, when ‘rationalists’ are especially misanthropic here on Less Wrong, that Eliezer wants to develop a friendly AI. Implicit in this goal—built right in—is the awareness that rationality alone would not induce the machine to be friendly. So why would we expect that a single-minded pursuit of rationality would not leave us vulnerable to misanthropic forces? Just as we would build friendliness into a perfectly logical, intelligent machine, we must build friendliness into our ideology before we let go of “intuition” and other irrational ways we have of “feeling” what is right, because they contain our humanism, which lies outside rationality.
We do not want to be completely rational, because being purely rational is neutral. Becoming more neutral without reaching perfect rationality would leave us vulnerable to negative forces; and anyway, we want to be a positive force, not a neutral one.
If we assume he has goals other than simply being a self-abasing misanthrope, the attitude Annoyance is showing is far from rational. Arbitrarily defining the vast majority of humans as useless “problems” is, ironically, itself a useless and problematic belief, and it represents an even more fundamental failure than being Spocklike—Spock, at least, does not repeatedly shoot himself in the foot and then seek to blame anything but himself.
I’ve pretty much figured that out. If nothing else, Annoyance is being an excellent example of that right now.
Next question: Is it something about this method of approaching rationality that encourages that failure mode? How did Annoyance fall off the path, and can I avoid doing the same if I proceed?
I’m starting to think that the answer to that last question is yes, though.
While I find conversations with Annoyance rather void, I would encourage you not to try to hold him (?) up as an example of falling off the path or entering failure modes. If you care about the question, I would make a post using generic examples. This does a few things:
Gets you away from any emotional responses to Annoyance (both in yourself and anyone else).
Provides a clear-cut example that can be picked apart without making this entire thread required reading. It also heads off many straw men and red herrings before they happen, since the specifics in the thread are mostly unneeded in relation to the question you have just asked.
Brings attention to the core problem that needs to be addressed, and avoids any specific diagnoses of Annoyance (for better or worse).
That’s very good advice. However, I’m not going to take it today, and probably won’t at all. It seems more useful at this point to take a break from this entirely and give myself a chance to sort out the information I’ve already gained.
I’ll definitely be interested in looking at it, in a few days, if someone else wants to come up with that example and continue thinking about it here.
I would agree.
I pass. The discussion of that topic would be interesting to me but writing the article is not. I have too many partial articles as it is… :P
A logically incorrect statement. An attitude is rational if it consistently and explicitly follows from data gathered about the world and its functioning. As my behavior has consequences other than the one you so contemptuously dismiss, and you have no grounds for deciding what my goals are or whether my actions achieve them, your claim is simply wrong. Trivially so, in fact.
It’s not arbitrary.
The rational thing to do when confronted with a position you don’t understand is ask yourself “Why did that person adopt that position?”
If your actions accomplish your goals, fine. However, it’s safe to say most of the people here don’t want to be Annoyances, and it’s important to point out that your behavior does not reflect a requirement or implication of rationality.
If you disagree, I hope you will explicitly list the assumptions leading to your belief that it’s a good idea to treat people with condescension.
This is of low value, if the answer doesn’t come easily.
Easy answers are rarely worthwhile. Worthwhile questions are rarely answered easily.
Searching for an answer requires the question to be worthwhile, which is far from the prior expectation when researching the inane-sounding positions people hold.
If you want to convince someone of something, it’s generally a good idea to understand why they believe what they believe now. People generally have to be convinced out of one belief before they can be convinced into another, and you can’t refute or reframe their evidence unless you know what the evidence is.
Even if their reasoning is epistemologically unsound, if you know how it’s unsound, you can utilize the same type of reasoning to change their belief. For example, if someone only believes things they “see with their own eyes”, you would then know it is a waste of time to try to prove something to them mathematically.
I agree, but in this case the benefit comes not from the expectation of finding insight in the person’s position, but from the expectation of successful communication (education), which was not the motivation referred to in Annoyance’s comment.
Once I realized that achieving anything, no matter what, required my being rational, I quickly bumped “being rational” to the top of my to-do list.
‘People’ do not lend themselves to any particular utility. The Master of the Way treats people as straw dogs.
Yes, I see that you did that. Why would I want to do that, given my current utility function? I appear to be accomplishing things reasonably well as is, and it looks like if I made that change, I wouldn’t wind up accomplishing things that my current utility function values at all.
Persuasiveness: You fail at it.
What’s the function you use to evaluate your utility function?
Persuasiveness: what I was not aiming for.
And what function do I use to evaluate that, and so on to infinity? Right. Or, I can just accept that my core utility function is not actually rational, examine it to make sure it’s something that’s not actually impossible, and get on with my life.
Or does Eliezer have a truly-rational reason behind the kind of altruism that’s leading him to devote his life to FAI that I’m not aware of?
Oh, silly me for assuming that you were trying to raise the rationality level around here. It’s only the entire point of the blog, after all.
So if you’re not actually trying to convince me that being more rational would be a good thing, what have you been doing? Self-signaling? Making pointless appeals to your own non-existent authority? Performing some bizarre experiment regarding your karma score?
Sets of terminal values can be coherent. Logical specifications for computing terminal values can be consistent. What would it mean for one to be rational?
I have no idea.
As far as I can tell, my terminal values are not rational in the same sense that blue is not greater than three.
If there isn’t a tiny grain of rationality at the core of that infinite regression, you’re in great trouble.
The ability to anticipate how reality will react to something you do depends entirely on the ability to update your mental models to match data derived from reality. That’s rationality right there.
If there’s even a tiny spark, it can be fanned into flame. But if there’s no spark there’s nothing to build on. I strongly suspect that some degree of rationality is present in your utility function, but if not, your case is hopeless.
Why would I try to do that? Nothing I do can cause the rationality level to go up. Only the people here can do that. If I could ‘make’ people be rational, I would. But there’s no spoon, there.
All I can do is point to the sky and hope that people will choose to pay less attention to the finger than what it indicates.
It’s usually more effective if you don’t use your middle finger to do the pointing.
Out of curiosity, can someone who does not have a grain of rationality in them ever become more rational? In other words, can someone be so far gone that they literally can never be rational?
I am honestly having trouble picturing such a person. Perhaps that is because I never thought about it that way before.
They may stumble across rationality as life causes their core functions to randomly vary. As far as I can tell, that’s how explicit and self-referential standards of thought first arose—they seem to have occurred in societies where there were many different ideas and claims being made about everything, and people needed a way to sift through the rich bed of assertions.
So complex and mutually incompatible cultural fluxes seem not only to be necessary to produce the first correct standards, but also to encourage their development. That argument applies more to societies than individuals, but I think a similar one holds there too.
Short answer: only by chance, I think.
Understood. I guess the followup question is about where the general human being starts. Do we start with any rationality in us? My guess is that it is somewhat random. Some do; some do not.
The opposite of rational is “wrong” or “ineffective”. A person can’t be wrong or ineffective about everything, that’s senseless. I think all the confusion has arisen from Annoyance claiming that terminal values must have some spark of rationality, but Eliezer explained that he might have meant they must be coherent. So if I may paraphrase your question (which interests me as well), the question is: how may terminal values be incoherent?
You need to be more careful with the problem statement; it seems too confused. For example, taboo “rational” (to distinguish irrational people from rocks), and taboo “never” (to distinguish the deep properties of the phenomenon from limitations created by life span and available cultural environment).
Yeah, I would agree. I meant it as a specific response to what Annoyance wrote and figured I could just reuse the term. I didn’t expect so many people to jump in. :)
“Never” as in “This scenario is impossible and cannot happen.” “Become more rational” can be restated “gain more rationality.”
Rewording the entire question:
The tricky clause is now “rationality in them.” Any more defining of terms brings this into a bigger topic. It would probably make a good top-level post, if anyone is interested.
I’d like to see a top post on this. My example of cats having a degree of rationality may be useful:
(Warning) This is a huge mind-dump created while on lunch break. By all means pick it apart, but I am not planning on defending it in any way. Take it with all the salt in the world.
Personally, I find the concept of animal rationality to be more of a distraction. For some reason, my linguistic matrix reaches for the word “intelligent” to describe cats responding to a can opener. Animals are very smart. Humans are very smart. But smart does not imply rational, and a smart human is not necessarily a rational one.
I tend to reserve rationality for describing the next “level” of intelligence. Rationality is the form or method of increasing intelligence. An analogy is speed versus acceleration: acceleration increases speed; rationality increases intelligence. This is more of a rough, instinctive definition, however, and one of my personal reasons for being here at Less Wrong is to learn more about rationality. My analogy does not seem accurate in application. Rationality seems connected to intelligence, but saying that rationality implies a change in intelligence does not fit with its reverse: irrationality does not decrease intelligence.
I am missing something, but it seems that whatever I am looking for in my definitions is not found in cats. But, as you may have meant, if cats have no rationality and cannot acquire it, is that because they have no rationality to begin with?
If this were the case, and rationality builds on itself, where does our initial rationality come from? If I claim to be rational, should I be able to point to a sequence of events in my life and say, “There it started”? It seems that fully understanding rationality implies knowing its limits; its beginning and ending. To further our rationality we should be able to know what helps or hinders our rationality.
Annoyance claims that the first instances of rationality may be caused by chance. If this were true, could we remove the chance? Could we learn what events chanced our own rationality and inflict similar events on other people?
Annoyance also seems to claim that rationality begets rationality. But something else must produce that first spark in us. That spark is worth studying. That spark is annoyingly difficult to define and observe. How do we stop and examine ourselves to know if we have the spark? If two people walk before us claiming rationality yet one is lying, how do we test and observe the truth?
Right now, we do so by their actions. But if the liar knows the rational actions and mimics them without believing in their validity or truth, how would we know? Would such a liar really be lying? Do the liar’s beliefs matter? Does rationality imply more than correct actions?
To make this more extreme, if I build a machine to mimic rationality, is it rational? This is a classic question with many forms. If I make a machine that acts human, is it human? I claim that “rationality” cannot be measured in a cat. Could it be measured in a machine? A program? Why am I so fixated on humanity? Is this bias?
Rationality is a label attached to a behavior, but I believe it will eventually be reattached to a particular source of the behavior. I do not think that rational behavior is impossible to fake. Pragmatically, a Liar that acts rational is not much different from a rational person. If the Liar penetrates our community and suddenly goes ape, then the lies are obvious. How do we predict the Liars before they reveal themselves? What if the Liars believe their own lies?
I do not mean “believe” as in “having convinced themselves”. What if they are not rational but believe they are? The lie is not conscious; it is a desire to be rational without possessing the Way. How do we spot the fake rationalists? More importantly, how do I know that I, myself, have rationality?
Does this question have a reasonable answer? What if the answer is “No”? If I examine myself and find myself to be irrational, what do I do? What if I desire to be rational? Is it possible for me to become rational? Am I denied the Way?
I think much of the confusion comes from the inability to define rationality. We cannot offer a rationality test or exam; we can only describe behavior. I believe this is currently necessary, but I believe it will change. I think the path to this change has to do with finding the causes behind rationality and developing a finer measuring stick for determining rational behavior. I see this as the primary goal of Less Wrong.
Once we gather more information about the causes of our own rationality, we can begin developing methods for causing rationality in others, along with drastically increasing our own rationality. I see this as the secondary goal of Less Wrong.
This is why I do not think Annoyance’s answer was sufficient. “Chance” may be how we describe our fortune, but it is an inoculative answer, one that stops curiosity rather than satisfying it. During Eliezer’s comments on vitalism he says this:
(Emphasis original. You will have to search for the paragraph; it is about three-quarters down the page.)
“Chance” hits 3 of 4, giving Annoyance the benefit of the doubt and assuming there is no cherished ignorance. So “chance” works for now, because we have no better words to describe the beginning of rationality, but there is a true cause out there flipping the light bulbs on inside heads and producing the behavior we have labeled “rationality.” Let’s go find it.
(PS) Annoyance, this wasn’t meant to pick on what you said, it just happened to be in my mind and relevant to the discussion. You were answering a very specific question and the answer satisfied what was asked at the time.
Rationality-as-acceleration seems to match the semi-serious label of x-rationality.
My point was that some animals do appear to be able to be rational, to a degree. (I’m defining ‘rational’ as something like ‘able to create accurate representations of how the world works, which can be used to make accurate predictions’.)
I can even come up with examples of some animals being able to be more rational than some humans. I used to work in a nursing home, and one of the residents there was mentally retarded as part of her condition, and never did figure out that the cats could not understand her when she talked to them, and sometimes seemed to actually expect them to talk. On the other hand, most animals that have been raised around humans seem to have a pretty reasonable grasp on what we can and can’t understand of their forms of communication. Unfortunately, most of my data for the last assertion there is personal observation. The bias against even considering that animals could communicate intentionally is strong enough in modern society that it’s rarely studied at all, as far as I know. Still, consider the behavior of not-formally-trained domesticated animals that you’ve known, compared to feral examples of the same species.
Basic prediction-ability seems like such a universally useful skill that I’d be pretty surprised if we didn’t find it in at least a minimal form in any creature with a brain. It may not look like it does in humans, in those cases, but then, given what’s been discussed about possible minds, that shouldn’t be too much of a problem.
Animals obviously communicate with one another. The last I heard, there was a lot of studying being done on dolphins and whales. Anyone who has trained a dog in anything can tell you that dogs can “learn” English words. The record I remember hearing about was a Border Collie with a vocabulary of over 100 words. (No reference, sorry. It was in a trivia book.)
As for your point, I understand and acknowledge it. I think of rationality as something different, I guess. I do not know how useful continuing the cat analogy is when we seem to think of “rational” differently.
Hmm, maybe you could define ‘intelligence’ as you use it here:
I define intelligence as the ability to know how to do things (talk, add, read, write, do calculus, convince a person of something—yes, there are different forms of intelligence) and rationality as the ability to know which things to do in a given situation to get what you want out of that situation, which involves knowing what things can be gotten out of a given situation in the first place.
Well, the mind dump from earlier was mostly food for thought, not an attempt to stake out claims or definitions. I guess my rough definition of intelligence fits what I find in the dictionary:
The same dictionary, however, defines rationality as a form of the word rational:
I take intelligence to mean, “the ability to accomplish stuff,” and rationality to mean, “how to get intelligence.” Abstracted, rationality more or less becomes, “how to get the ability to accomplish stuff.” This is contrasted with “learning” which is:
I am not proposing this definition of rationality is what anyone else should use. Rather, it is a placeholder concept until I feel comfortable sitting down and tackling the problem as a whole. Right now I am still in aggregation mode which is essentially collecting other people’s thoughts on the subject.
Honestly, all of this discussion is interesting but it may not be helpful. I think Eliezer’s concept of the nameless virtue is good to keep in mind during these kinds of discussions:
Further information: The person I mentioned was able to do some intelligence-based things that I would not expect cats to do, like read and write (though not well). She may also have been able to understand that cats don’t speak English if someone actually explained it to her—I don’t think anyone ever actually did. Even so, nobody sits cats or dogs down and explains our limitations to them, either, so I think the playing field is pretty level in that respect.
If you can develop it well.
Yeah. If I were to do it I would probably start from the question of defining someone’s level of rationality (one toy sketch of how such a level might be scored follows the list below). The topic itself assumes:
“Rationality” is not boolean. People can be more or less rational on a scale.
People can be completely irrational in the sense that they score a 0 on the scale.
The question becomes: Can such a person increase their level on the scale?
Further thoughts:
How does one increase their level on the scale?
Does it require rationality to get more rationality?
Is there an upper bound? If the lower bound is 0...
If there is an upper bound, can this upper bound be achieved?
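As a purely illustrative sketch (my own toy construction, not something anyone in this thread has proposed), one concrete way to put a number on such a scale is to score how well-calibrated a person’s explicit probability judgments are, for instance with a Brier score. All names and numbers below are hypothetical:

```haskell
-- Toy "rationality scale" via calibration: a lower Brier score means stated
-- probabilities that better match what actually happened.  Purely illustrative.
brierScore :: [(Double, Bool)] -> Double   -- (stated probability, actual outcome)
brierScore preds =
  sum [ (p - outcome) ** 2
      | (p, happened) <- preds
      , let outcome = if happened then 1 else 0 ]
    / fromIntegral (length preds)

main :: IO ()
main = do
  print (brierScore [(0.9, True), (0.8, True), (0.3, False)])  -- ~0.047: well calibrated
  print (brierScore [(0.9, False), (0.2, True)])               -- 0.725: poorly calibrated
```

This obviously captures only one narrow slice of ‘rationality’ (calibration of explicit predictions), but it shows what a non-boolean scale with a floor and a ceiling could look like.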
...and then you prove that the level of rationality and operations on it correspond to Bayesian probability up to isomorphism. ;-)
Seriously, doing this in a non-silly manner is highly nontrivial.
Oh, no joke. But we have to start somewhere. :)
Honestly, until we have a better word/definition than “rationality,” we get to play with fuzzy words. I am happy with that for now but it is a dull future.
I made more causal comments on this subject in a different comment and would appreciate your thoughts. It is kind of long, however, so no worries if you would rather not. :)
You’ve never thought about it that way before because it’s completely silly. How on earth does Annoyance make these judgments? I’m not nearly prideful enough to think I can know others’ minds to the extent Annoyance can, or, in other words, I imagine there are circumstances which could change most people in profound ways, both for ill and good. So the only thing judging people in this manner does is reinforce one’s social prejudices. Writing off people who seem resistant to reason only encourages their ignorance, and remedying their condition is both an exercise and example of reason’s power, which, incidentally, is why I’m trying so hard with Annoyance!
You did catch that I’m talking about a terminal value, right? It’s the nature of those that you want them because you want them, not because they lead to something else that you want. I want everybody to be happy. That’s a terminal value. If you ask me why I want that, I’m going to have some serious trouble answering, because there is no answer. I just want it, and there’s nothing that I know of that I want more, or that I would consider a good reason to give up that goal.
Right now, it’s pointing at “don’t make this mistake”, which I was unlikely to do anyway, but now I have the opportunity to point the mistake out to you, so you can (if you choose to; I can’t force you) stop making it, which would raise the rationality around here, which seems like a good thing to me. Or, I can not point it out, and you keep doing what you’re doing. It’s like one of those lottery problems, and I concluded that the chance of one or both of us becoming more rational was worth the cost of having this discussion. (And, it paid off at least somewhat—I think I have enough insight into that particular mistake to be able to avoid it without avoiding the situation entirely, now.)
What are you aiming for?
Could you elucidate what you intend with this gem?
“The Master of the Way treats people as straw dogs.”
It’s from the Tao Te Ching:
“Heaven and earth are ruthless, and treat the myriad creatures as straw dogs; the sage is ruthless, and treats the people as straw dogs.”
One might accuse this of falling afoul of the appeal to nature, but that would assume a fact not in evidence, to wit, that Annoyance’s motivations resemble that of a typical LW poster (to the extent that such a beast exists).
Voted down because your realization is flawed. Achieving anything does not require you to be rational, as evidenced by this post.
Your strategy of dealing with people is also flawed: does the Master of the Way always defect? If you were a skilled exploiter, you wouldn’t give obvious signals that you are an exploiter. Instead, you seem to be signaling “Vote me off the island!” to society, and this community. You may want to reconsider that position.
Wanting to accomplish thing X, and being able to expect it to occur as a result of actions I take, requires rationality.
Your objection is incorrect.
Your understanding of my strategy is incorrect, as evidenced by your question.
Annoyance, you’re still dodging the question. Joe didn’t ask whether or not, in your opinion, everyone is a useless prole; he asked why it’s useful to make people feel like that. Your notion that “social cohesion is the enemy of rationality” was best debunked, I think, by pjeby’s point here:
http://lesswrong.com/lw/za/a_social_norm_against_unjustified_opinions/rrk
More flies with honey, and all that.
I don’t want to catch flies.
Annoyance, your argument has devolved into inanity. If you don’t want to popularly cultivate rationality then you disagree with one of the core tenets of this community. It’s in the second paragraph of the “about” page:
“Less Wrong is devoted to refining the art of human rationality—the art of thinking. The new math and science deserves to be applied to our daily lives, and heard in our public voices.”
Your circular word games do no good for this community.
Or perhaps simply the recognition that it’s sometimes impossible to fluff other people’s egos and drive discussion along rational paths at the same time.
If people become offended when you point out weaknesses in their arguments—if they become offended if you even examine them and don’t automatically treat their ideas as inherently beyond reproach—there’s no way to avoid offending them while also acting rationally. It becomes necessary to choose.
Really? Have you tried, maybe, just not pointing out the weaknesses in their arguments? Mightn’t that be the rational thing to do? Just a polite smile and nod, or a gentle, “Have you considered some alternative?” Or even, “You may well be right.” (This is true of pretty much any non-contradictory statement.) Or there are many different ways to argue with someone without being confrontational. Asking curious-sounding questions works fairly well.
It’s generally easy to recognize how well a person will react to an argument against him. If you have basic people skills, you’ll be able to understand what type of argument/approach will communicate your point effectively, and when you simply don’t have a chance. The idea that it’s necessary to offend people to act rationally seems completely absurd (at least in this context). If it’s going to offend them, it’s going to accomplish the opposite of your goal, so, rationally, you shouldn’t do it.
This whole discussion reminds me of the Dave Barry quote that may well have been used earlier on this site:
“I argue very well. Ask any of my remaining friends. I can win an argument on any topic, against any opponent. People know this, and steer clear of me at parties. Often, as a sign of their great respect, they don’t even invite me.”
This. Is. Not. Winning.
I was going to say “there are more workarounds than you think”, but that’s probably my selection bias talking again. That said, there are workarounds, in some situations. It’s still not a trivial thing to learn, though.
It’s not just nontrivial, it’s incredibly hard. Engaging “system 2” reasoning takes a lot of effort, lowering sensitivity to, and acute awareness of, social cues and signals.
The mindset of “let’s analyze arguments to find weaknesses,” aka Annoyance’s “rational paths,” is a completely different ballgame than most people are willing to play. Rationalists may opt for that game, but they can’t win, and may be reinforcing illogical behavior. Such a rationalist is focused on whether arguments about a particular topic are valid and sound, not on the other person’s rational development. If the topic is a belief, attempting to reason it out with the person is counterproductive. Making no ground when engaging with people on a topic should be a red flag: “maybe I’m doing the wrong thing.”
Does anyone care enough for me to make a post about workarounds? Maybe we can collaborate somehow Adelene, I have a little experience in this area.
Engaging system 2 is precisely what you don’t want to do, since evolutionarily speaking, a big function of system 2 is to act as a decoy/shield mechanism for keeping ideas out of a person. And increasing a person’s skill at system 2 reasoning just increases their resistance to ideas.
To actually change attitudes and beliefs requires the engagement of system 1. Otherwise, even if you convince someone that something is logical, they’ll stick with their emotional belief and just avoid you so they don’t have to deal with the cognitive dissonance.
(Note that this principle also applies to changing your own beliefs and attitudes—it’s not your logical mind that needs convincing. See Eliezer’s story about overcoming a fear of lurking serial killers for an example of mapping System 2 thinking to System 1 thinking to change an emotional-level belief.)
pjeby, sorry I wasn’t clear, I should have given some context. I am referencing system 1 and 2 as simplified categories of thinking as used by cognitive science, particularly in behavioral economics. Here’s Daniel Kahneman discussing them. I’m not sure what you’re referring to with decoys and shields, which I’ll just leave at that.
To add to my quoted statement, workarounds are incredibly hard, and focusing on reasoning (system 2) about an issue or belief leaves few cycles for receiving and sending social cues and signals. While reasoning, we can pick up those cues and signals, but they’ll break our concentration, so we tend to ignore them while reasoning carefully. The automatic, intuitive processing of the face interferes with the reasoning task; e.g. we usually look somewhere else when reasoning during a conversation. To execute a workaround strategy, however, we need to be attuned to the other person.
When I refer to belief, I’m not referring to fear of the dark or serial killers, or phobias. Those tend to be conditioned responses—the person knows the belief is irrational—and they can be treated easily enough with systematic desensitization and a little CBT thrown in for good measure. Calling them beliefs isn’t wrong, but since the person usually knows they’re irrational, they’re outside my intended scope of discussion: beliefs that are perceived by the believer to be rational.
People are automatically resistant to being asked to question their beliefs. Usually it’s perceived as unfair, if not an actual attack on them as a person: those beliefs are associated with their identity, which they won’t abandon outright. We shouldn’t expect them to. It’s unrealistic.
What should we do, then? Play at the periphery of belief. To reformulate the interaction as a parable: We’ll always lose if we act like the wind, trying to blow the cloak off the traveller. If we act like the sun, the traveller might remove his cloak on his own. I’ll think about putting a post together on this.
My hypothesis is that reasoning as we know it evolved as a mechanism to both persuade others, and to defend against being persuaded by others.
Consider priming, which works as long as you’re not aware of it and therefore defending against it. But it makes no sense to evolve a mechanism to avoid being primed, unless the priming mechanism were being exploited by our tribe-mates. (After all, they’re the only ones besides us with the language skill to trigger it.)
In other words, once we evolved language, we became more gullible, because we were now verbally suggestible. This would then have resulted in an arms race of intelligence to both persuade, and defend against persuasion, with tribal status and resources as the prize.
And once we evolved to the point of being able to defend ourselves against any belief-change we’re determined to avoid, the prize would’ve become being able to convince neutral bystanders who didn’t already have something at stake.
The system 1/2 distinctions cataloged by Stanovich & West don’t quite match my own observation, in that I consider any abstract processing to be system 2, whether it’s good reasoning or fallacious, and whether it’s cached or a work-in-progress. (Cached S2 reasoning isn’t demanding of brainpower, and in fact can be easily parroted back in many forms once an appropriate argument has been heard, without the user ever needing to figure it out for themselves.)
In my view, the primary functional purpose of human reasoning is to persuade or to prevent persuasion, with other uses being an extra bonus. So in this view, using system 2 for truly rational thought is actually an abuse of the system… which would explain why it’s so demanding of cognitive capacity, compared to using it as a generator of confabulation and rhetoric. And it also explains why it requires so much learning to use properly: it’s not what the hardware was put there for.
The S&W model is IMO a bit biased by the desire to find “normative” reasoning (i.e., correct reasoning) in the brain, even though there’s really no evolutionary reason for us to have truly rational thought or to be particularly open-minded. In fact, there’s every evolutionary reason for us to not be persuadable whenever we have something at stake, and to not reason things out in a truly fair or logical manner.
Hence, some of the attributes they give system 2 are (in my view) attributes of learned reasoning running on top of system 2 in real time, rather than native attributes of system 2 itself, or reflective of cached system 2 thinking.
Anyway, IAWYC re: the rest, I just wanted to clarify this particular bit.
Actually, system one can handle a surprising amount of abstraction; I don’t have a reference handy, but any comprehensive description of conceptual synesthesia should do a good job of explaining it. (I’m significantly enough conceptually synesthetic that I don’t need it explained, and have never actually needed an especially good reference before.)
The fact that I can literally see that the concept ‘deserve X’ depends on the emotional version of the concept ‘should do X’, because the pattern for one contains the pattern for the other, makes it very clear to me that such abstractions are not dependent on the rational processing system.
It’s also noteworthy that synesthesia appears to be a normal developmental phase; it seems pretty likely to me that I’m merely more aware of how my brain is processing things, rather than having a radically different mode of processing altogether.
I’d certainly be interested in that. My own definitions are aimed at teaching people not to abstract away from experience, including emotional experience. Certainly there is some abstraction at that level, it’s just a different kind of abstraction (ISTM) than system 2 abstraction.
In particular, what I’m calling system 1 does not generally use complex sentence structure or long utterances, and the referents of its “sentences” are almost always concrete nouns, with its principal abstractions being emotional labels rather than conceptual ones.
I consider “should X” and “deserve X” to both be emotional labels, since they code for attitude and action towards X, and so both are well within system 1 scope. When used by system 2, they may carry totally different connotations, and have nothing to do with what the speaker actually believes they deserve or should do, and especially little to do with what they’ll actually do.
For example, a statement like, “People should respect the rights of others and let them have what they deserve” is absolutely System 2, whereas, a statement like “I don’t deserve it” (especially if experienced emotionally) is well within System 1 territory.
It’s entirely possible that my definition of system 1/2 is more than a little out of whack with yours or the original S&W definition, but under my definition it’s pretty easy to learn to distinguish S1 utterances from S2 utterances, at least within the context of mind hacking, where I or someone else is trying to find out what’s really going on in System 1 in relation to a topic, and distinguish it from System 2’s confabulated theories.
However, since you claim to be able to observe system 1 directly, this would seem to put you in a privileged position with respect to changing yourself—in principle you should be able to observe what beliefs create any undesired behaviors or emotional responses. Since that’s the hard part of mind hacking IME, I’m a bit surprised you haven’t done more with the “easy” part (i.e. changing the contents of System 1).
Yep, it mostly uses nouns, simple verbs, relatedness categorizations (‘because’), behavior categorizations (‘should’, ‘avoid with this degree of priority’), and a few semi-abstract concepts like ‘this week’. Surprisingly, I don’t often ‘see’ the concepts of good or bad—they seem to be more built-in to certain nouns and verbs, and changing my opinion of a thing causes it to ‘look’ completely different. (That’s also not the only thing that can cause a concept to change appearance—one of my closest friends has mellowed from a very nervous shade of orange to a wonderfully centered and calm medium-dark chocolate color over the course of the last year or so.)
Hmm… heh, it actually sounds like I just don’t use system 2, then.
I have and do, actually, and there’s very little that’s ‘undesirable’ left in there that I’m aware of (an irrational but so far not problematic fear of teenagers and a rationally-based but problematic fear of mental health professionals and, by extension, doctors are the only two things that come to mind that I’d change, and I’ve already done significant work on the second or I wouldn’t be able to calmly have this conversation with you). The major limitation is that I can only see what’s at hand, and it takes a degree of concentration to do so. I can’t detangle my thought process directly while I’m trying to carry on a conversation, unless it’s directly related to exactly what I’m doing at the moment, and I can’t fix problems that I haven’t noticed or have forgotten about.
I’m going to be putting together a simple display on conceptual synesthesia for my Neuroversity project this week… I’ll be sure to send you a link when it’s done.
I’ve been thinking more about this… or, not really. One of the downsides to my particular mind-setup is that it takes a long time to retrieve things from long-term memory, but I did retrieve something interesting just now.
When I was younger, I think I did use system two moderately regularly. I do vaguely remember intentionally trying to ‘figure things out’ using non-synesthetic reasoning—before I realized that the synesthesia was both real and useful—and coming to conclusions. I very distinctly remember having a mindset more than once of “I made this decision, so this is what I’m going to do, whether it makes sense now or not”. I also remember that I was unable to retain the logic behind those decisions, which made me very inflexible about them—I couldn’t use new data to update my decision, because I didn’t know how I’d come to the conclusion or how the new data should fit in. Using that system is demanding enough that it simply wasn’t possible to re-do my logic every single time a potentially-relevant piece of data turned up, and in fact I couldn’t remember enough of my reasoning to even figure out which pieces of data were likely to be relevant. The resulting single-mindedness is much less useful than the ability to actually be flexible about your actions, and after having that forcibly pointed out by reality a few times, I stopped using that method altogether.
There does seem to be a degree of epistemic hygiene necessary to switch entirely to using system one, though. I do remember, vaguely, that one problem I had when I first started using system one for actual problems was that I was fairly easy to persuade—it took a while to really get comfortable with the idea that someone could have an opinion that was well-formed and made sense but still not be something that I would ‘have to’ support or even take into consideration, for example. Essentially my own concepts of what I wanted were not strong enough to handle being challenged directly, at first. (I got better.)
I feel I should jump in here, as you appear to be talking past each other. There is no confusion in the system 1/system 2 distinction; you’re both using the same definition, but the bit about decoys and shields was actually the core of PJ’s post, and of the difference between your positions. PJ holds that to change someone’s mind you must focus on their S1 response, because if they engage S2, it will just rationalize and confabulate to defend whatever position their S1 holds. Now, I have no idea how one would go about altering the S1 response of someone who didn’t want their response altered, but I do know that many people respond very badly to rational arguments that go against their intuition, increasing their own irrationality as much as necessary to avoid admitting their mistake.
I don’t believe we are, because I know of no evidence of the following:
Perhaps one or both of us misunderstands the model. Here is a better description of the two.
Originally, I was making a case that attempting to reason was the wrong strategy. Given your interpretation, it looks like pjeby didn’t understand I was suggesting that, and then suggested essentially the same thing.
My experience, across various believers (Christian, Jehovah’s Witness, New Age woo-de-doo) is that system 2 is never engaged on the defensive, and the sort of rationalization we’re talking about never uses it. Instead, they construct and explain rationalizations that are narratives. I claim this largely because I observed how “disruptable” they were during explanations—not very.
How to approach changing belief: avoid resistance by avoiding the issue and finding something at the periphery of belief. Assist in developing rational thinking where the person has no resistance, and empower them. Strategically, them admitting their mistake is not the goal. It’s not even in the same ballpark. The goal is rational empowerment.
Part of the problem, which I know has been mentioned here before, is unfamiliarity with fallacies and what they imply. When we recognize fallacies, most of the time it’s intuitive. We recognize a pattern likely to be a fallacy, and respond. We’ve built up that skill in our toolbox, but it’s still intuitive, like a chess master who can walk by a board and say “white mates in three.”
This. Exactly this. YES.
Tell them stories. If you’ll notice, that’s what Eliezer does. Even his posts that don’t use fiction per se use engaging examples with sensory detail. That’s the stuff S1 runs on.
Eliezer uses a bit more S2 logic in his stories than is perhaps ideal for a general audience; it’s about right for a sympathetic audience with some S2+ skills, though.
On a general audience, what might be called “trance logic” or “dramatic logic” works just fine on its own. The key is that even if your argument can be supported by S2 logic, to really convince someone you must get a translation to S1 logic.
A person who’s being “reasonable” may or may not do the S2->S1 translation for you. A person who’s being “unreasonable” will not do it for you; you have to embed S1 logic in the story so that any effort to escape it with S2 will be unconvincing by comparison.
This, by the way, is how people who promote things like intelligent design work: they set up analogies and metaphors that are much more concretely convincing on the S1 level, so that the only way to refute them is to use a massive burst of S2 reasoning that leaves the audience utterly unconvinced, because the “proof” is sitting right there in S1 without any effort being required to accept it.
I hadn’t actually found the system 1/system 2 meme before this, but it maps nicely onto how I handle those situations. The main trick is to make lots of little leaps of logic, instead of one big one, while pushing as few emotional buttons as you can get away with, and using the emotional buttons you do push to guide the conversation along.
An example of that is here. In the original example, telling someone directly that they’re wrong pushes all kinds of emotional buttons, and a fully thought out explanation of why is obviously too much for them to handle with system one, so it’s going to fall flat, unless they want to understand why they’re wrong, which you’ve already interfered with by pushing their buttons.
In my example, I made a much smaller leap of logic—“you’re using a different definition of ‘okay’ than most people do”—which can be parsed by system one, I think. I also used social signaling rather than words to communicate that the definition is not okay, which is a good idea because social signaling can communicate that with much more finesse and fewer emotional buttons pushed, and because people are simply wired to go along with that kind of influence more easily.
No kidding.
My sanity-saver (though obviously not rationality-saver) has been to learn to encourage the people I’m dealing with to be more rational, at least when dealing with me. My inner circle of friends is now made up almost entirely of people who ask themselves and each other that kind of question just as a matter of course, and who dissect the answers to make sure they’re correct, rational, and well-integrated with the other things we know about each other.
That doesn’t help at all when I’m trying to think about society in general, though.
And worse, they can cite completely incoherent “reasons”, which can be observed by noting that the sequence resulting from repeated application of “what do you mean by X” basically diverges. It reminds me of the value “bottom” in a lifted type system. It denotes an informationless “result”, such as that of a non-terminating computation.
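(For anyone unfamiliar with the term, here is a minimal Haskell sketch of what ‘bottom’ looks like under the usual lifted-type semantics. The names loop and broken are just illustrative.)

```haskell
-- "Bottom" is the informationless value inhabiting every lifted type:
-- it never yields a result when forced.
loop :: a
loop = loop          -- a non-terminating computation

broken :: Int
broken = undefined   -- the standard library's explicit bottom value

main :: IO ()
main = do
  -- Laziness lets bottom be passed around without being evaluated...
  print (length [loop, broken])   -- prints 2; the elements are never forced
  -- ...but demanding the value itself diverges (loop) or throws (undefined):
  -- print broken
```

The analogy in the comment above is that repeatedly asking “what do you mean by X” never terminates in anything informative, much as forcing bottom never terminates in a value.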