I agree here: Reading stuff like this totally makes me cringe. I don’t know why people of above average intelligence want to make everyone else feel like useless proles, but it seems pretty rampant. Some humility is probably a blessing here, I mean, as frustrating as it is to deal with the ‘profoundly stupid’, at least you yourself aren’t profoundly stupid.
Of course, they probably think given the same start the ‘profoundly stupid’ person was given, they would have made the best of it and would be just as much of a genius as they are currently.
It’s a difficult realization, when you become aware you’re more intelligent than average, to be dropped into the pool with a lot of other smart people and realize you really aren’t that special. I mean, in a world of some six billion odd, if you are a one-in-a-million genius, that still means you likely aren’t in the top hundred smartest people in the world and probably not in the top thousand. It kind of reminds me of grad school stories I’ve read, with kids who think they are going to be a total gift to their chosen subject ending up extremely cynical and disappointed.
I think people online like to exaggerate their eccentricity and disregard for societal norms in an effort to appeal to the stereotypes for geniuses. I’ve met a few real geniuses IRL and I know you can be a genius without being horribly dysfunctional.
Rationality and intelligence are not the same thing—I’ve seen plenty of discussions here despairing over the existence of obviously intelligent people, masters of their fields, who haven’t decided to practice rationality. I also know people who are observably less intelligent than I am who practice rationality about as well as I do. One major difference between that latter group and people who are not practicing rationality, whatever the irrational people’s intelligence levels, is that the former don’t get offended when someone points out a flaw in their reasoning, just as I don’t get offended when they, or even people who are not practicing rationality, point out a flaw in mine. People who are less intelligent will probably progress more slowly with rationality, as with any mental skill-set, but that’s not under discussion here. The irrational unwillingness to accept criticism is.
Being called ‘profoundly stupid’ is not exactly a criticism of someone’s reasoning. (Not that anybody was called that.) I think we’re objecting to this because of how it’ll offend people outside of the ‘in group’ anyway. Besides that, as much as we might wish we were immune to the emotional shock or glee of having our thoughts and concepts ridiculed or praised, I think it would be a rarity here to find someone who was. People socializing and exchanging ideas is a type of system—it has to be understood and used effectively in order to produce the best results—and calling, essentially, everybody who disagrees with you ‘profoundly stupid’ is not good social lubrication.
You appear to be putting words into my mouth, but I’m currently too irritated to detangle this much beyond that point.
“Giving people too much credit” was a reference to peoples’ desire to be rational. I tend to assume that that’s significantly above zero in every case, even though the evidence does not seem to support that assumption. This is a failure to be rational on my part. (I doubt I’ll fix that; it’s the basis for most of my faith in humanity.)
I make no such assumption about intelligence (I do not assume that people want to be more intelligent than they are), and make a conscious effort to remove irrational biases toward intelligent people from my thought process when I encounter them. I have been doing so for years, with a significant degree of success, especially considering that I was significantly prejudiced against less intelligent people, before I realized that it was wrong to hold that view.
I have also put significant effort into learning how to bridge both of those communication gaps, and the skills required in each case are different. When I’m simply dealing with someone who’s less intelligent, I moderate my vocabulary, use lots of supporting social signaling, make smaller leaps of logic, and request feedback frequently to make sure I haven’t lost them. (Those skills are just as useful in regular conversation as they are in explaining things.) When I’m dealing with someone who’s not practicing rationality, I have to be very aware of their particular worldview, and only thoughtfully challenge it—which requires lots of complicated forethought, and can require outright lies.
The lack of either of those sets of communication skills will make dealing with the relevant people difficult, and can lead to them thinking poorly of you, whether you actually are prejudiced against them or not. Assuming that someone who does not have one of those sets of skills is prejudiced does not, in practice, work—there’s a very high risk of getting a false-positive.
When I’m dealing with someone who’s not practicing rationality, I have to be very aware of their particular worldview, and only thoughtfully challenge it -
A person who is ‘thinking’ irrationally can only be challenged to the degree that they’re being rational. If they eschew rationality completely, there isn’t any way to communicate with them.
What have you actually accomplished, if you use social signals to get someone to switch their concept-allegiances?
I thought we’d already defined “practicing rationality” as “intentionally trying to make rational decisions and intentionally trying to become more rational”. Whether we had or not, that was what I meant by the term.
Someone can be being somewhat rational without ‘practicing’ rationality, and to the degree that they can accurately predict what effects follow what causes, or accomplish other tasks that depend on rationality, every person I know is at least somewhat rational. Even animals can be slightly rational—cats for example are well known for learning that the sound of a can opener is an accurate sign that they may be fed in the near future, even if they aren’t rational enough to make stronger predictions about which instances of that sound signal mealtime.
While social signaling can be used on its own to cause someone to switch their allegiances to concepts that they don’t value especially highly, that’s not the only possible use of it, and it’s not a use I consider acceptable. The use of social-signaling that I recommend is intended to keep a person from becoming defensive while ‘rationality-level appropriate’ rational arguments are used to actually encourage them to change their mind.
I thought we’d already defined “practicing rationality” as “intentionally trying to make rational decisions and intentionally trying to become more rational”.
No, only if you rationally try to make rational decisions and rationally try to become more rational.
If you’re acting irrationally, you’re not practicing rationality, in the same way that you’re not practicing vegetarianism if you’re eating meat.
You should expand this into a top-level post. Communication is difficult and I think most people could use advice about it. As it stands, it sounds like broad strokes which are obviously good ideas, but probably hard to implement without more details.
I’ve been considering it, actually, for my own use if not to post here. I think it’d be useful in several ways to try to come up with actual wordings for the tricks I’ve picked up.
I don’t know why people of above average intelligence want to make everyone else feel like useless proles
Isn’t it obvious? Almost everyone is a “useless prole”, as you put it, and even the people who aren’t have to sweat blood to avoid that fate.
Recognizing that unpleasant truth is the first step towards becoming non-useless—but most people can’t think usefully enough to recognize it in the first place, so the problem perpetuates itself.
I know I’m usually a moron. I’ve also developed the ability to distinguish quality thinking from moronicity, which makes it possible for me to (slowly, terribly slowly) wean myself away from stupid thinking and reinforce what little quality I can produce. That’s what makes it possible for me to occasionally NOT be a moron, at least at a rate greater than chance alone would permit.
It’s the vast numbers of morons who believe they’re smart, reasonable, worthwhile people that are the problem.
I was reading around on the site today, and I think I’ve figured out why this attitude sends me running the other way. What clued me in was Eliezer’s description of Spock in his post “Why Truth? And...”.
Eliezer’s point there is that Spock’s behavior goes against the actual ideals of rationality, so people who actually value rationality won’t mimic him. (He’s well enough known that people who want to signal that they’re rational will likely mimic him, people who want to both be and signal being rational will probably mimic him in at least some ways, and the fact that reversed stupidity is not intelligence is also relevant here.)
It may come as a shock, but in my case, being rational is not my highest priority. I haven’t actually come up with a proper wording for my highest priority yet, but one of my major goals in pursuing that priority is to facilitate a universal ability for people to pursue their own goals (with the normal caveats about not harming or overly interfering with other people, of course). One of the primary reasons I pursue rationality is to support that goal. I suspect that this is not an uncommon kind of reason for pursuing rationality, even here.
As I mentioned in the comment that I referenced, I’ve avoided facing the fact that most people prefer not to pursue rationality, because that realization appears to lead directly to the attitude you’re showing here. I can reasonably predict that if I were to adopt that attitude, I would no longer support the idea that everyone should have as much freedom as can be arranged, and I don’t want to do that. Very few people would want to take the pill that’d turn them into a psychopath, even if they’d be perfectly okay with being a psychopath after they took it.
But there’s an assumption going on in there. Does accepting that fact actually have to lead to that attitude? Is it impossible to be an x-rationalist and still value people?
Is it impossible to be an x-rationalist and still value people?
This is something I’ve thought a lot about. I’m worried about the consequences of certain negative ideologies present here on Less Wrong, but, actually, I feel that x-rationality, combined with greater self-awareness, would be the best weapon against them. X-rationality—identifying facts that are true and strategies that work—is inherently neutral. The way you interpret those facts (and what you use your strategies for) is the result of your other values.
Consider, to begin with, the tautology that 99.7% of the population is less intelligent than 0.3% of the population, by some well-defined, arbitrary metric of intelligence. Suppose also, that someone determined they were in the top 0.3%. They could feel any number of ways about this fact: completely neutral, for example, or loftily superior, or weightily responsible. Seen in this way, feeling contempt for “less intelligent” people is clearly the result of a worldview biased in some negative way.
Generally, humanity is so complex that however anyone feels about humanity says more about them than it does about humanity. Various forces (skepticism and despair; humanism and a sense of purpose) have been vying throughout history: rationality isn’t going to settle it now. We need to pick our side and move on … and notice which sides other people have picked when we evaluate their POV.
I always find it ironic, when ‘rationalists’ are especially misanthropic here on Less Wrong, that Eliezer wants to develop a friendly AI. Implicit in this goal—built right in—is the awareness that rationality alone would not induce the machine to be friendly. So why would we expect that a single-minded pursuit of rationality would not leave us vulnerable to misanthropic forces? Just as we would build friendliness into a perfectly logical, intelligent machine, we must build friendliness into our ideology before we let go of “intuition” and other irrational ways we have of “feeling” what is right, because they contain our humanism, which is outside rationality.
We do not want to be completely rational because being rational is neutral. Being more neutral without perfect rationality would leave us vulnerable to negative forces, and, anyway, we want to be a positive force.
If we assume he has goals other than simply being a self-abasing misanthrope, the attitude Annoyance is showing is far from rational. Arbitrarily defining the vast majority of humans as useless “problems” is, ironically, itself a useless and problematic belief, and it represents an even more fundamental failure than being Spocklike—Spock, at least, does not repeatedly shoot himself in the foot and then seek to blame anything but himself.
I’ve pretty much figured that out. If nothing else, Annoyance is being an excellent example of that right now.
Next question: Is it something about this method of approaching rationality that encourages that failure mode? How did Annoyance fall off the path, and can I avoid doing the same if I proceed?
I’m starting to think that the answer to that last question is yes, though.
How did Annoyance fall off the path, and can I avoid doing the same if I proceed?
While I find conversations with Annoyance rather void, I would encourage you to not try and lift (him ?) up as an example of falling off the path or entering failure modes. If you care about the question I would make a post using generic examples. This does a few things:
Gets you away from any emotional responses to Annoyance (both in yourself and anyone else).
Provides a clear-cut example that can be picked apart without making this entire thread required reading. It also cleans up many straw men and red herrings before they happen, since the specifics in the thread are mostly unneeded with relation to the question you have just asked.
Brings attention to the core problem that needs to be addressed and avoids any specific diagnoses of Annoyance (for better or worse)
That’s very good advice. However, I’m not going to take it today, and probably won’t at all. It seems more useful at this point to take a break from this entirely and give myself a chance to sort out the information I’ve already gained.
I’ll definitely be interested in looking at it, in a few days, if someone else wants to come up with that example and continue thinking about it here.
If we assume he has goals other than simply being a self-abasing misanthrope, the attitude Annoyance is showing is far from rational.
A logically incorrect statement. An attitude is rational if it consistently and explicitly follows from data gathered about the world and its functioning. As there are other consequences from my behavior other than the one you so contemptuously dismiss, and you have no grounds for deciding what my goals are or whether my actions achieve them, your claim is simply wrong. Trivially so, in fact.
Arbitrarily defining the vast majority of humans as useless “problems”
It’s not arbitrary.
The rational thing to do when confronted with a position you don’t understand is ask yourself “Why did that person adopt that position?”
If your actions accomplish your goals, fine. However, it’s safe to say most of the people here don’t want to be Annoyances, and it’s important to point out that your behavior does not reflect a requirement or implication of rationality.
If you disagree, I hope you will explicitly list the assumptions leading to your belief that it’s a good idea to treat people with condescension.
The rational thing to do when confronted with a position you don’t understand is ask yourself “Why did that person adopt that position?”
[...]
Worthwhile questions are rarely answered easily.
Search for an answer requires the question to be worthwhile, which is far from prior expectation for the research of inane-sounding positions people hold.
Search for an answer requires the question to be worthwhile, which is far from prior expectation for inane-sounding positions.
If you want to convince someone of something, it’s generally a good idea to understand why they believe what they believe now. People generally have to be convinced out of one belief before they can be convinced into another, and you can’t refute or reframe their evidence unless you know what the evidence is.
Even if their reasoning is epistemologically unsound, if you know how it’s unsound, you can utilize the same type of reasoning to change their belief. For example, if someone only believes things they “see with their own eyes”, you would then know it is a waste of time to try to prove something to them mathematically.
I agree, but in this case the benefit comes not from the expectation of finding insight in the person’s position, but from the expectation of successful communication (education), which was not the motivation referred to in Annoyance’s comment.
It may come as a shock, but in my case, being rational is not my highest priority. I haven’t actually come up with a proper wording for my highest priority yet, but one of my major goals in pursuing that priority is to facilitate a universal ability for people to pursue their own goals (with the normal caveats about not harming or overly interfering with other people, of course). One of the primary reasons I pursue rationality is to support that goal.
Once I realized that achieving anything, no matter what, required my being rational, I quickly bumped “being rational” to the top of my to-do list.
Is it impossible to be an x-rationalist and still value people?
‘People’ do not lend themselves to any particular utility. The Master of the Way treats people as straw dogs.
It may come as a shock, but in my case, being rational is not my highest priority. I haven’t actually come up with a proper wording for my highest priority yet, but one of my major goals in pursuing that priority is to facilitate a universal ability for people to pursue their own goals (with the normal caveats about not harming or overly interfering with other people, of course). One of the primary reasons I pursue rationality is to support that goal.
Once I realized that achieving anything, no matter what, required my being rational, I quickly bumped “being rational” to the top of my to-do list.
Yes, I see that you did that. Why would I want to do that, given my current utility function? I appear to be accomplishing things reasonably well as is, and it looks like if I made that change, I wouldn’t wind up accomplishing things that my current utility function values at all.
Is it impossible to be an x-rationalist and still value people?
‘People’ do not lend themselves to any particular utility. The Master of the Way treats people as straw dogs.
Why would I want to do that, given my current utility function?
What’s the function you use to evaluate your utility function?
And what function do I use to evaluate that, and on to infinity. Right. Or, I can just accept that my core utility function is not actually rational, examine it to make sure it’s something that’s not actually impossible, and get on with my life.
Or does Eliezer have a truly-rational reason behind the kind of altruism that’s leading him to devote his life to FAI that I’m not aware of?
Persuasiveness: You fail at it.
Persuasiveness: what I was not aiming for.
Oh, silly me for assuming that you were trying to raise the rationality level around here. It’s only the entire point of the blog, after all.
So if you’re not actually trying to convince me that being more rational would be a good thing, what have you been doing? Self-signaling? Making pointless appeals to your own non-existent authority? Performing some bizarre experiment regarding your karma score?
Sets of terminal values can be coherent. Logical specifications for computing terminal values can be consistent. What would it mean for one to be rational?
Or, I can just accept that my core utility function is not actually rational,
If there isn’t a tiny grain of rationality at the core of that infinite regression, you’re in great trouble.
The ability to anticipate how reality will react to something you do depends entirely on the ability to update your mental models to match data derived from reality. That’s rationality right there.
If there’s even a tiny spark, it can be fanned into flame. But if there’s no spark there’s nothing to build on. I strongly suspect that some degree of rationality is present in your utility function, but if not, your case is hopeless.
Oh, silly me for assuming that you were trying to raise the rationality level around here.
Why would I try to do that? Nothing I do can cause the rationality level to go up. Only the people here can do that. If I could ‘make’ people be rational, I would. But there’s no spoon, there.
All I can do is point to the sky and hope that people will choose to pay less attention to the finger than what it indicates.
If there’s even a tiny spark, it can be fanned into flame. But if there’s no spark there’s nothing to build on. I strongly suspect that some degree of rationality is present in your utility function, but if not, your case is hopeless.
Out of curiosity, can someone who does not have a grain of rationality in them ever become more rational? In other words, can someone be so far gone that they literally can never be rational?
I am honestly having trouble picturing such a person. Perhaps that is because I never thought about it that way before.
Out of curiosity, can someone who does not have a grain of rationality in them ever become more rational?
They may stumble across rationality as life causes their core functions to randomly vary. As far as I can tell, that’s how explicit and self-referential standards of thought first arose—they seem to have occurred in societies where there were many different ideas and claims being made about everything, and people needed a way to sift through the rich bed of assertions.
So complex and mutually-incompatible cultural fluxes seem to not only be necessary to produce the first correct standards, but encourage them to be developed as well. That argument applies more to societies than individuals, but I think a similar one holds there too.
Understood. I guess the followup question is about where the general human being starts. Do we start with any rationality in us? My guess is that it is somewhat random. Some do; some do not.
The opposite of rational is “wrong” or “ineffective”. A person can’t be wrong or ineffective about everything, that’s senseless. I think all the confusion has arisen from Annoyance claiming that terminal values must have some spark of rationality, but Eliezer explained that he might have meant they must be coherent. So if I may paraphrase your question (which interests me as well), the question is: how may terminal values be incoherent?
You need to be more careful with the problem statement; it seems too confused. For example, taboo “rational” (to distinguish irrational people from rocks), and taboo “never” (to distinguish the deep properties of the phenomenon from limitations created by life span and available cultural environment).
Yeah, I would agree. I meant it as a specific response to what Annoyance wrote and figured I could just reuse the term. I didn’t expect so many people to jump in. :)
“Never” as in “This scenario is impossible and cannot happen.”
“Become more rational” can be restated “gain more rationality.”
Rewording the entire question:
Can someone who has no rationality in them ever gain more rationality?
The tricky clause is now “rationality in them.” Any more defining of terms brings this into a bigger topic. It would probably make a good top-level post, if anyone is interested.
I’d like to see a top post on this. My example of cats having a degree of rationality may be useful:
Even animals can be slightly rational—cats for example are well known for learning that the sound of a can opener is an accurate sign that they may be fed in the near future, even if they aren’t rational enough to make stronger predictions about which instances of that sound signal mealtime.
(Warning) This is a huge mind-dump created while on lunch break. By all means pick it apart, but I am not planning on defending it in any way. Take it with all the salt in the world.
Personally, I find the concept of animal rationality to be more of a distraction. For some reason, my linguistic matrix prefers the word “intelligent” to describe cats responding to a can opener. Animals are very smart. Humans are very smart. But smart does not imply rational, and a smart human is not necessarily a rational one.
I tend to reserve rationality for describing the next “level” of intelligence. Rationality is the form or method of increasing intelligence. An analogy is speed versus acceleration: acceleration increases speed; rationality increases intelligence. This is more of a rough, instinctive definition, however, and one of my personal reasons for being here at Less Wrong is to learn more about rationality. My analogy does not seem accurate in application. Rationality seems connected to intelligence, but to say that rationality implies a change in intelligence does not fit with its reverse: irrationality does not decrease intelligence.
I am missing something, but it seems that whatever I am looking for in my definitions is not found in cats. But, as you may have meant, if cats have no rationality and cannot have rationality, is it because they have no rationality?
If this were the case, and rationality builds on itself, where does our initial rationality come from? If I claim to be rational, should I be able to point to a sequence of events in my life and say, “There it started”? It seems that fully understanding rationality implies knowing its limits; its beginning and ending. To further our rationality we should be able to know what helps or hinders our rationality.
Annoyance claims that the first instances of rationality may be caused by chance. If this were true, could we remove the chance? Could we learn what events chanced our own rationality and inflict similar events on other people?
Annoyance also seems to claim that rationality begets rationality. But something else must produce that first spark in us. That spark is worth studying. That spark is annoyingly difficult to define and observe. How do we stop and examine ourselves to know if we have the spark? If two people walk before us claiming rationality yet one is lying, how do we test and observe the truth?
Right now, we do so by their actions. But if the liar knows the rational actions and mimics them without believing in their validity or truth, how would we know? Would such a liar really be lying? Does the liar’s beliefs matter? Does rationality imply more than correct actions?
To make this more extreme, if I build a machine to mimic rationality, is it rational? This is a classic question with many forms. If I make a machine that acts human, is it human? I claim that “rationality” cannot be measured in a cat. Could it be measured in a machine? A program? Why am I so fixated on humanity? Is this bias?
Rationality is a label attached to a behavior, but I believe it will eventually be reattached to a particular source of the behavior. I do not think that rational behavior is impossible to fake. Pragmatically, a Liar who acts rational is not much different from a rational person. If the Liar penetrates our community and suddenly goes ape, then the lies are obvious. How do we predict the Liars before they reveal themselves? What if the Liars believe their own lies?
I do not mean “believe” as in “having convinced themselves”. What if they are not rational but believe they are? The lie is not conscious; it is a desire to be rational but not possessing the Way. How do we spot the fake rationalists? More importantly, how do I know that I, myself, have rationality?
Does this question have a reasonable answer? What if the answer is “No”? If I examine myself and find myself to be irrational, what do I do? What if I desire to be rational? Is it possible for me to become rational? Am I denied the Way?
I think much of the confusion comes from the inability to define rationality. We cannot offer a rationality test or exam. We can only describe behavior. I believe this is currently necessary, but I believe it will change. I think the path to this change has to do with finding the causations behind rationality and developing a finer measuring stick for determining rational behavior. I see this as the primary goal of Less Wrong.
Once we gather more information about the causes of our own rationality, we can begin developing methods for causing rationality in others, along with drastically increasing our own rationality. I see this as the secondary goal of Less Wrong.
This is why I do not think Annoyance’s answer was sufficient. “Chance” may be how we describe our fortune, but this is an inoculative answer. During Eliezer’s comments on vitalism he says this:
I call theories such as vitalism mysterious answers to mysterious questions. These are the signs of mysterious answers: First, the explanation acts as a curiosity-stopper rather than an anticipation-controller. Second, the hypothesis has no moving parts—the model is not a specific complex mechanism, but a blankly solid substance or force. The mysterious substance or mysterious force may be said to be here or there, to do this or that; but the reason why the mysterious force behaves thus is wrapped in a blank unity. Third, those who proffer the explanation cherish their ignorance; they speak proudly of how the phenomenon defeats ordinary science or is unlike merely mundane phenomena. Fourth, even after the answer is given, the phenomenon is still a mystery and possesses the same quality of sacred inexplicability that it had at the start.
(Emphasis original. You will have to search for the paragraph, it is about three-quarters down the page.)
“Chance” hits 3 of 4, giving Annoyance the benefit of the doubt and assuming there is no cherished ignorance. So “chance” works for now, because we have no better words to describe the beginning of rationality, but there is a true cause out there flipping the light bulbs on inside of heads and producing the behavior we have labeled “rationality.” Let’s go find it.
(PS) Annoyance, this wasn’t meant to pick on what you said, it just happened to be in my mind and relevant to the discussion. You were answering a very specific question and the answer satisfied what was asked at the time.
My point was that some animals do appear to be able to be rational, to a degree. (I’m defining ‘rational’ as something like ‘able to create accurate representations of how the world works, which can be used to make accurate predictions’.)
I can even come up with examples of some animals being able to be more rational than some humans. I used to work in a nursing home, and one of the residents there was mentally retarded as part of her condition, and never did figure out that the cats could not understand her when she talked to them, and sometimes seemed to actually expect them to talk. On the other hand, most animals that have been raised around humans seem to have a pretty reasonable grasp on what we can and can’t understand of their forms of communication. Unfortunately, most of my data for the last assertion there is personal observation. The bias against even considering that animals could communicate intentionally is strong enough in modern society that it’s rarely studied at all, as far as I know. Still, consider the behavior of not-formally-trained domesticated animals that you’ve known, compared to feral examples of the same species.
Basic prediction-ability seems like such a universally useful skill that I’d be pretty surprised if we didn’t find it in at least a minimal form in any creature with a brain. It may not look like it does in humans, in those cases, but then, given what’s been discussed about possible minds, that shouldn’t be too much of a problem.
The bias against even considering that animals could communicate intentionally is strong enough in modern society that it’s rarely studied at all, as far as I know.
Animals obviously communicate with one another. The last I heard, there was a lot of studying being done on dolphins and whales. Anyone who has trained a dog in anything can tell you that dogs can “learn” English words. The record I remember hearing about was a Border Collie with a vocabulary of over 100 words. (No reference, sorry. It was in a trivia book.)
As for your point, I understand and acknowledge it. I think of rationality as something different, I guess. I do not know how useful continuing the cat analogy is when we seem to think of “rational” differently.
Hmm, maybe you could define ‘intelligence’ as you use it here:
Rationality is the form or method of increasing intelligence.
I define intelligence as the ability to know how to do things (talk, add, read, write, do calculus, convince a person of something—yes, there are different forms of intelligence) and rationality as the ability to know which things to do in a given situation to get what you want out of that situation, which involves knowing what things can be gotten out of a given situation in the first place.
Well, the mind dump from earlier was mostly food for thought, not a staking out claims or definitions. I guess my rough definition of intelligence fits what I find in the dictionary:
The ability to acquire and apply knowledge and skills
The same dictionary, however, defines rationality as a form of the word rational:
Based on or in accordance with reason or logic
I take intelligence to mean, “the ability to accomplish stuff,” and rationality to mean, “how to get intelligence.” Abstracted, rationality more or less becomes, “how to get the ability to accomplish stuff.” This is contrasted with “learning” which is:
Gaining or acquiring knowledge of or skill in (something) by study, experience, or being taught
I am not proposing this definition of rationality is what anyone else should use. Rather, it is a placeholder concept until I feel comfortable sitting down and tackling the problem as a whole. Right now I am still in aggregation mode which is essentially collecting other people’s thoughts on the subject.
Honestly, all of this discussion is interesting but it may not be helpful. I think Eliezer’s concept of the nameless virtue is good to keep in mind during these kinds of discussions:
You may try to name the highest principle with names such as “the map that reflects the territory” or “experience of success and failure” or “Bayesian decision theory”. But perhaps you describe incorrectly the nameless virtue. How will you discover your mistake? Not by comparing your description to itself, but by comparing it to that which you did not name.
Further information: The person I mentioned was able to do some intelligence-based things that I would not expect cats to do, like read and write (though not well). She may also have been able to understand that cats don’t speak English if someone actually explained it to her—I don’t think anyone ever actually did. Even so, nobody sits cats or dogs down and explains our limitations to them, either, so I think the playing field is pretty level in that respect.
Seriously, doing this in non-silly manner is highly nontrivial.
Oh, no joke. But we have to start somewhere. :)
Honestly, until we have a better word/definition than “rationality,” we get to play with fuzzy words. I am happy with that for now but it is a dull future.
I made more causal comments on this subject in a different comment and would appreciate your thoughts. It is kind of long, however, so no worries if you would rather not. :)
You’ve never thought about it that way before because it’s completely silly. How on earth does Annoyance make these judgments? I’m not nearly prideful enough to think I can know others’ minds to the extent Annoyance can, or, in other words, I imagine there are circumstances which could change most people in profound ways, both for ill and good. So the only thing judging people in this manner does is reinforce one’s social prejudices. Writing off people who seem resistant to reason only encourages their ignorance, and remedying their condition is both an exercise and example of reason’s power, which, incidentally, is why I’m trying so hard with Annoyance!
If there isn’t a tiny grain of rationality at the core of that infinite regression, you’re in great trouble.
You did catch that I’m talking about a terminal value, right? It’s the nature of those that you want them because you want them, not because they lead to something else that you want. I want everybody to be happy. That’s a terminal value. If you ask me why I want that, I’m going to have some serious trouble answering, because there is no answer. I just want it, and there’s nothing that I know of that I want more, or that I would consider a good reason to give up that goal.
All I can do is point to the sky and hope that people will choose to pay less attention to the finger than what it indicates.
Right now, it’s pointing at “don’t make this mistake”, which I was unlikely to do anyway, but now I have the opportunity to point the mistake out to you, so you can (if you choose to; I can’t force you) stop making it, which would raise the rationality around here, which seems like a good thing to me. Or, I can not point it out, and you keep doing what you’re doing. It’s like one of those lottery problems, and I concluded that the chance of one or both of us becoming more rational was worth the cost of having this discussion. (And, it paid off at least somewhat—I think I have enough insight into that particular mistake to be able to avoid it without avoiding the situation entirely, now.)
“Heaven and earth are ruthless, and treat the myriad creatures as straw dogs; the sage is ruthless, and treats the people as straw dogs.”
One might accuse this of falling afoul of the appeal to nature, but that would assume a fact not in evidence, to wit, that Annoyance’s motivations resemble that of a typical LW poster (to the extent that such a beast exists).
Once I realized that achieving anything, no matter what, required my being rational, I quickly bumped “being rational” to the top of my to-do list.
Voted down because your realization is flawed. Achieving anything does not require you to be rational, as evidenced by this post.
The Master of the Way treats people as straw dogs.
Your strategy of dealing with people is also flawed: does the Master of the Way always defect? If you were a skilled exploiter, you wouldn’t give obvious signals that you are an exploiter. Instead, you seem to be signaling “Vote me off the island!” to society, and this community. You may want to reconsider that position.
Annoyance, you’re still dodging the question. Joe didn’t ask whether or not in your opinion everyone is a useless prole, he asked why it’s useful to make people feel like that. Your notion that “social cohesion is the enemy of rationality” was best debunked, I think by pjeby’s point here:
Annoyance, your argument has devolved into inanity. If you don’t want to popularly cultivate rationality then you disagree with one of the core tenets of this community. It’s in the second paragraph of the “about” page:
“Less Wrong is devoted to refining the art of human rationality—the art of thinking. The new math and science deserves to be applied to our daily lives, and heard in our public voices.”
Your circular word games do no good for this community.
Being called ‘profoundly stupid’ is not exactly a criticism of someone’s reasoning. (Not that anybody was called that.) I think we’re objecting to this because of how it’ll offend people outside of the ‘in group’ anyway. Besides that, as much as we might wish we were immune to the emotional shock or glee of having our thoughts and concepts ridiculed or praised, I think it would be a rarity here to find someone who was. People socializing and exchanging ideas is a type of system—it has to be understood and used effectively in order to produce the best results—and calling essentially everybody who disagrees with you ‘profoundly stupid’ is not good social lubrication.
You appear to be putting words into my mouth, but I’m currently too irritated to detangle this much beyond that point.
“Giving people too much credit” was a reference to peoples’ desire to be rational. I tend to assume that that’s significantly above zero in every case, even though the evidence does not seem to support that assumption. This is a failure to be rational on my part. (I doubt I’ll fix that; it’s the basis for most of my faith in humanity.)
I make no such assumption about intelligence (I do not assume that people want to be more intelligent than they are), and make a conscious effort to remove irrational biases toward intelligent people from my thought process when I encounter them. I have been doing so for years, with a significant degree of success, especially considering that I was significantly prejudiced against less intelligent people, before I realized that it was wrong to hold that view.
I have also put significant effort into learning how to bridge both of those communication gaps, and the skills required in each case are different. When I’m simply dealing with someone who’s less intelligent, I moderate my vocabulary, use lots of supporting social signaling, make smaller leaps of logic, and request feedback frequently to make sure I haven’t lost them. (Those skills are just as useful in regular conversation as they are in explaining things.) When I’m dealing with someone who’s not practicing rationality, I have to be very aware of their particular worldview, and only thoughtfully challenge it—which requires lots of complicated forethought, and can require outright lies.
The lack of either of those sets of communication skills will make dealing with the relevant people difficult, and can lead to them thinking poorly of you, whether you actually are prejudiced against them or not. Assuming that someone who does not have one of those sets of skills is prejudiced does not, in practice, work—there’s a very high risk of getting a false-positive.
A person who is ‘thinking’ irrationally can only be challenged to the degree that they’re being rational. If they eschew rationality completely, there isn’t any way to communicate with them.
What have you actually accomplished, if you use social signals to get someone to switch their concept-allegiances?
I thought we’d already defined “practicing rationality” as “intentionally trying to make rational decisions and intentionally trying to become more rational”. Whether we had or not, that was what I meant by the term.
Someone can be being somewhat rational without ‘practicing’ rationality, and to the degree that they can accurately predict what effects follow what causes, or accomplish other tasks that depend on rationality, every person I know is at least somewhat rational. Even animals can be slightly rational—cats for example are well known for learning that the sound of a can opener is an accurate sign that they may be fed in the near future, even if they aren’t rational enough to make stronger predictions about which instances of that sound signal mealtime.
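The cat’s can-opener inference can be framed as simple frequency tracking: estimate P(meal | sound) from observed co-occurrences, no explicit reasoning required. A toy sketch—the counts are invented, purely for illustration:

```python
# Toy frequency-based predictor, in the spirit of the cat example:
# estimate P(meal | can-opener sound) from co-occurrence counts.
# The numbers below are invented for illustration only.
sound_events = 20      # times the can-opener sound was heard
sound_then_meal = 15   # times the sound was followed by a meal

p_meal_given_sound = sound_then_meal / sound_events
print(p_meal_given_sound)  # 0.75 — enough to justify running to the kitchen
```

Even this crude conditional-frequency estimate is an “anticipation-controller” in the earlier sense: it constrains what the animal expects to happen next.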
While social signaling can be used on its own to cause someone to switch their allegiances to concepts that they don’t value especially highly, that’s not the only possible use of it, and it’s not a use I consider acceptable. The use of social-signaling that I recommend is intended to keep a person from becoming defensive while ‘rationality-level appropriate’ rational arguments are used to actually encourage them to change their mind.
No, only if you rationally try to make rational decisions and rationally try to become more rational.
If you’re acting irrationally, you’re not practicing rationality, in the same way that you’re not practicing vegetarianism if you’re eating meat.
I wrote this rant before I saw the thing above. I’m not deleting it, because someone may find this useful, but the issue has been resolved. :)
You should expand this into a top-level post. Communication is difficult and I think most people could use advice about it. As it stands, it sounds like broad strokes which are obviously good ideas, but probably hard to implement without more details.
I’ve been considering it, actually, for my own use if not to post here. I think it’d be useful in several ways to try to come up with actual wordings for the tricks I’ve picked up.
Isn’t it obvious? Almost everyone is a “useless prole”, as you put it, and even the people who aren’t have to sweat blood to avoid that fate.
Recognizing that unpleasant truth is the first step towards becoming non-useless—but most people can’t think usefully enough to recognize it in the first place, so the problem perpetuates itself.
I know I’m usually a moron. I’ve also developed the ability to distinguish quality thinking from moronicity, which makes it possible for me to (slowly, terribly slowly) wean myself away from stupid thinking and reinforce what little quality I can produce. That’s what makes it possible for me to occasionally NOT be a moron, at least at a rate greater than chance alone would permit.
It’s the vast numbers of morons who believe they’re smart, reasonable, worthwhile people that are the problem.
I was reading around on the site today, and I think I’ve figured out why this attitude sends me running the other way. What clued me in was Eliezer’s description of Spock in his post “Why Truth? And...”.
Eliezer’s point there is that Spock’s behavior goes against the actual ideals of rationality, so people who actually value rationality won’t mimic him. (He’s well enough known that people who want to signal that they’re rational will likely mimic him, and people who want to both be and signal being rational will probably mimic him in at least some ways, and also note that the fact that reversed stupidity is not intelligence is relevant.)
It may come as a shock, but in my case, being rational is not my highest priority. I haven’t actually come up with a proper wording for my highest priority yet, but one of my major goals in pursuing that priority is to facilitate a universal ability for people to pursue their own goals (with the normal caveats about not harming or overly interfering with other people, of course). One of the primary reasons I pursue rationality is to support that goal. I suspect that this is not an uncommon kind of reason for pursuing rationality, even here.
As I mentioned in the comment that I referenced, I’ve avoided facing the fact that most people prefer not to pursue rationality, because it appears that that realization leads directly to the attitude you’re showing here, and I can reasonably predict that if I were to have the attitude you’re showing here, I would no longer support the idea that everyone should have as much freedom as can be arranged, and I don’t want to do that. Very few people would want to take the pill that’d turn them into a psychopath, even if they’d be perfectly okay with being a psychopath after they took the pill.
But there’s an assumption going on in there. Does accepting that fact actually have to lead to that attitude? Is it impossible to be an x-rationalist and still value people?
This is something I’ve thought a lot about. I’m worried about the consequences of certain negative ideologies present here on Less Wrong, but, actually, I feel that x-rationality, combined with greater self-awareness, would be the best weapon against them. X-rationality—identifying facts that are true and strategies that work—is inherently neutral. The way you interpret those facts (and what you use your strategies for) is the result of your other values.
Consider, to begin with, the tautology that 99.7% of the population is less intelligent than 0.3% of the population, by some well-defined, arbitrary metric of intelligence. Suppose also, that someone determined they were in the top 0.3%. They could feel any number of ways about this fact: completely neutral, for example, or loftily superior, or weightily responsible. Seen in this way, feeling contempt for “less intelligent” people is clearly the result of a worldview biased in some negative way.
Generally, humanity is so complex that however anyone feels about humanity says more about them than it does about humanity. Various forces (skepticism and despair; humanism and a sense of purpose) have been vying throughout history: rationality isn’t going to settle it now. We need to pick our side and move on … and notice which sides other people have picked when we evaluate their POV.
I always find it ironic, when ‘rationalists’ are especially misanthropic here on Less Wrong, that Eliezer wants to develop a friendly AI. Implicit in this goal—built right in—is the awareness that rationality alone would not induce the machine to be friendly. So why would we expect that a single-minded pursuit of rationality would not leave us vulnerable to misanthropic forces? Just as we would build friendliness into a perfectly logical, intelligent machine, we must build friendliness into our ideology before we let go of “intuition” and other irrational ways we have of “feeling” what is right, because they contain our humanism, which is outside rationality.
We do not want to be completely rational, because being rational is neutral. Becoming more neutral without achieving perfect rationality would leave us vulnerable to negative forces, and, anyway, we want to be a positive force.
If we assume he has goals other than simply being a self-abasing misanthrope, the attitude Annoyance is showing is far from rational. Arbitrarily defining the vast majority of humans as useless “problems” is, ironically, itself a useless and problematic belief, and it represents an even more fundamental failure than being Spocklike—Spock, at least, does not repeatedly shoot himself in the foot and then seek to blame anything but himself.
I’ve pretty much figured that out. If nothing else, Annoyance is being an excellent example of that right now.
Next question: Is it something about this method of approaching rationality that encourages that failure mode? How did Annoyance fall off the path, and can I avoid doing the same if I proceed?
I’m starting to think that the answer to that last question is yes, though.
While I find conversations with Annoyance rather void, I would encourage you not to hold him (?) up as an example of falling off the path or entering failure modes. If you care about the question, I would make a post using generic examples. This does a few things:
Gets you away from any emotional responses to Annoyance (both in yourself and anyone else).
Provides a clear-cut example that can be picked apart without making this entire thread required reading. It also cleans up many straw men and red herrings before they happen, since the specifics in the thread are mostly unneeded with relation to the question you have just asked.
Brings attention to the core problem that needs to be addressed and avoids any specific diagnoses of Annoyance (for better or worse)
That’s very good advice. However, I’m not going to take it today, and probably won’t at all. It seems more useful at this point to take a break from this entirely and give myself a chance to sort out the information I’ve already gained.
I’ll definitely be interested in looking at it, in a few days, if someone else wants to come up with that example and continue thinking about it here.
I would agree.
I pass. The discussion of that topic would be interesting to me but writing the article is not. I have too many partial articles as it is… :P
A logically incorrect statement. An attitude is rational if it consistently and explicitly follows from data gathered about the world and its functioning. As there are other consequences from my behavior other than the one you so contemptuously dismiss, and you have no grounds for deciding what my goals are or whether my actions achieve them, your claim is simply wrong. Trivially so, in fact.
It’s not arbitrary.
The rational thing to do when confronted with a position you don’t understand is ask yourself “Why did that person adopt that position?”
If your actions accomplish your goals, fine. However, it’s safe to say most of the people here don’t want to be Annoyances, and it’s important to point out that your behavior does not reflect a requirement or implication of rationality.
If you disagree, I hope you will explicitly list the assumptions leading to your belief that it’s a good idea to treat people with condescension.
This is of low value, if the answer doesn’t come easily.
Easy answers are rarely worthwhile. Worthwhile questions are rarely answered easily.
The search for an answer requires the question to be worthwhile, which is far from the prior expectation when researching the inane-sounding positions people hold.
If you want to convince someone of something, it’s generally a good idea to understand why they believe what they believe now. People generally have to be convinced out of one belief before they can be convinced into another, and you can’t refute or reframe their evidence unless you know what the evidence is.
Even if their reasoning is epistemologically unsound, if you know how it’s unsound, you can utilize the same type of reasoning to change their belief. For example, if someone only believes things they “see with their own eyes”, you would then know it is a waste of time to try to prove something to them mathematically.
I agree, but in this case the benefit comes not from the expectation of finding insight in the person’s position, but from the expectation of successful communication (education), which was not the motivation referred to in Annoyance’s comment.
Once I realized that achieving anything, no matter what, required my being rational, I quickly bumped “being rational” to the top of my to-do list.
‘People’ do not lend themselves to any particular utility. The Master of the Way treats people as straw dogs.
Yes, I see that you did that. Why would I want to do that, given my current utility function? I appear to be accomplishing things reasonably well as is, and it looks like if I made that change, I wouldn’t wind up accomplishing things that my current utility function values at all.
Persuasiveness: You fail at it.
What’s the function you use to evaluate your utility function?
Persuasiveness: what I was not aiming for.
And what function do I use to evaluate that, and on to infinity. Right. Or, I can just accept that my core utility function is not actually rational, examine it to make sure it’s something that’s not actually impossible, and get on with my life.
Or does Eliezer have a truly-rational reason behind the kind of altruism that’s leading him to devote his life to FAI that I’m not aware of?
Oh, silly me for assuming that you were trying to raise the rationality level around here. It’s only the entire point of the blog, after all.
So if you’re not actually trying to convince me that being more rational would be a good thing, what have you been doing? Self-signaling? Making pointless appeals to your own non-existent authority? Performing some bizarre experiment regarding your karma score?
Sets of terminal values can be coherent. Logical specifications for computing terminal values can be consistent. What would it mean for one to be rational?
I have no idea.
As far as I can tell, my terminal values are not rational in the same sense that blue is not greater than three.
If there isn’t a tiny grain of rationality at the core of that infinite regression, you’re in great trouble.
The ability to anticipate how reality will react to something you do depends entirely on the ability to update your mental models to match data derived from reality. That’s rationality right there.
If there’s even a tiny spark, it can be fanned into flame. But if there’s no spark there’s nothing to build on. I strongly suspect that some degree of rationality is present in your utility function, but if not, your case is hopeless.
Why would I try to do that? Nothing I do can cause the rationality level to go up. Only the people here can do that. If I could ‘make’ people be rational, I would. But there’s no spoon, there.
All I can do is point to the sky and hope that people will choose to pay less attention to the finger than what it indicates.
It’s usually more effective if you don’t use your middle finger to do the pointing.
Out of curiosity, can someone who does not have a grain of rationality in them ever become more rational? In other words, can someone be so far gone that they literally can never be rational?
I am honestly having trouble picturing such a person. Perhaps that is because I never thought about it that way before.
They may stumble across rationality as life causes their core functions to randomly vary. As far as I can tell, that’s how explicit and self-referential standards of thought first arose—they seem to have occurred in societies where there were many different ideas and claims being made about everything, and people needed a way to sift through the rich bed of assertions.
So complex and mutually-incompatible cultural fluxes seem not only to be necessary for producing the first correct standards, but to encourage their development as well. That argument applies more to societies than individuals, but I think a similar one holds there too.
Short answer: only by chance, I think.
Understood. I guess the followup question is about where the general human being starts. Do we start with any rationality in us? My guess is that it is somewhat random. Some do; some do not.
The opposite of rational is “wrong” or “ineffective”. A person can’t be wrong or ineffective about everything, that’s senseless. I think all the confusion has arisen from Annoyance claiming that terminal values must have some spark of rationality, but Eliezer explained that he might have meant they must be coherent. So if I may paraphrase your question (which interests me as well), the question is: how may terminal values be incoherent?
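One standard answer to how terminal values can be incoherent is preference intransitivity: an agent that prefers A over B, B over C, and C over A can be “money-pumped,” paying a small fee for each upgrade and ending up back where it started, strictly poorer. A minimal sketch, assuming a hypothetical agent with exactly that cyclic preference (these are not anyone’s actual values):

```python
# Money-pump sketch: an agent with cyclic (intransitive) preferences
# A > B > C > A will pay a small fee for each "upgrade" and cycle
# back to its starting item, losing wealth on every round.

# Hypothetical preference cycle: each key is preferred over its value.
prefers_over = {"A": "B", "B": "C", "C": "A"}

def money_pump(start_item, start_wealth, fee, trades):
    """Repeatedly offer the agent the item it prefers, for a small fee."""
    item, wealth = start_item, start_wealth
    for _ in range(trades):
        # Find the item the agent prefers over its current one.
        preferred = next(k for k, v in prefers_over.items() if v == item)
        item, wealth = preferred, wealth - fee
    return item, wealth

item, wealth = money_pump("B", start_wealth=10.0, fee=1.0, trades=3)
print(item, wealth)  # "B" 7.0 — same item, three units poorer
```

The point of the sketch is just that incoherence here is a structural property of the value set (the cycle), not a property of any single value taken alone.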
You need to be more careful with problem statement, it seems too confused. For example, taboo “rational” (to distinguish irrational people from rocks), taboo “never” (to distinguish the deep properties of the phenomenon from limitations created by life span and available cultural environment).
Yeah, I would agree. I meant it as a specific response to what Annoyance wrote and figured I could just reuse the term. I didn’t expect so many people to jump in. :)
“Never” as in “This scenario is impossible and cannot happen.” “Become more rational” can be restated “gain more rationality.”
Rewording the entire question:
The tricky clause is now “rationality in them.” Any more defining of terms brings this into a bigger topic. It would probably make a good top-level post, if anyone is interested.
I’d like to see a top post on this. My example of cats having a degree of rationality may be useful:
(Warning) This is a huge mind-dump created while on lunch break. By all means pick it apart, but I am not planning on defending it in any way. Take it with all the salt in the world.
Personally, I find the concept of animal rationality to be more of a distraction. For some reason, my linguistic matrix prefers the word “intelligent” to describe cats responding to a can opener. Animals are very smart. Humans are very smart. But smart does not imply rational, and a smart human is not necessarily a rational one.
I tend to reserve rationality for describing the next “level” of intelligence. Rationality is the form or method of increasing intelligence. An analogy is speed versus acceleration. Acceleration increases speed; rationality increases intelligence. This is more of a rough, instinctive definition, however, and one of my personal reasons for being here at Less Wrong is to learn more about rationality. My analogy does not seem accurate in application. Rationality seems connected to intelligence, but to say that rationality implies a change in intelligence does not fit with its reverse: irrationality does not decrease intelligence.
I am missing something, but it seems that whatever I am looking for in my definitions is not found in cats. But, as you may have meant, if cats have no rationality and cannot acquire it, is that because they started with none?
If this were the case, and rationality builds on itself, where does our initial rationality come from? If I claim to be rational, should I be able to point to a sequence of events in my life and say, “There it started”? It seems that fully understanding rationality implies knowing its limits; its beginning and ending. To further our rationality we should be able to know what helps or hinders our rationality.
Annoyance claims that the first instances of rationality may be caused by chance. If this were true, could we remove the chance? Could we learn what events chanced our own rationality and inflict similar events on other people?
Annoyance also seems to claim that rationality begets rationality. But something else must produce that first spark in us. That spark is worth studying. That spark is annoyingly difficult to define and observe. How do we stop and examine ourselves to know if we have the spark? If two people walk before us claiming rationality yet one is lying, how do we test and observe the truth?
Right now, we do so by their actions. But if the liar knows the rational actions and mimics them without believing in their validity or truth, how would we know? Would such a liar really be lying? Do the liar’s beliefs matter? Does rationality imply more than correct actions?
To make this more extreme, if I build a machine to mimic rationality, is it rational? This is a classic question with many forms. If I make a machine that acts human, is it human? I claim that “rationality” cannot be measured in a cat. Could it be measured in a machine? A program? Why am I so fixated on humanity? Is this bias?
Rationality is a label attached to a behavior, but I believe it will eventually be reattached to a particular source of the behavior. I do not think that rational behavior is impossible to fake. Pragmatically, a Liar who acts rational is not much different from a rational person. If the Liar penetrates our community and suddenly goes ape, then the lies become obvious. How do we predict the Liars before they reveal themselves? What if the Liars believe their own lies?
I do not mean “believe” as in “having convinced themselves”. What if they are not rational but believe they are? The lie is not conscious; it is a desire to be rational but not possessing the Way. How do we spot the fake rationalists? More importantly, how do I know that I, myself, have rationality?
Does this question have a reasonable answer? What if the answer is “No”? If I examine myself and find myself to be irrational, what do I do? What if I desire to be rational? Is it possible for me to become rational? Am I denied the Way?
I think much of the confusion comes from the inability to define rationality. We cannot offer a rationality test or exam. We can only describe behavior. I believe this is currently necessary, but I believe it will change. I think the path to this change has to do with finding the causations behind rationality and developing a finer measuring stick for determining rational behavior. I see this as the primary goal of Less Wrong.
Once we gather more information about the causes of our own rationality, we can begin developing methods for causing rationality in others, along with drastically increasing our own rationality. I see this as the secondary goal of Less Wrong.
This is why I do not think Annoyance’s answer was sufficient. “Chance” may be how we describe our fortune, but it is a curiosity-stopping answer. In his comments on vitalism, Eliezer says this:
(Emphasis original. You will have to search for the paragraph, it is about three-quarters down the page.)
“Chance” hits 3 of 4, giving Annoyance the benefit of the doubt and assuming there is no cherished ignorance. So, “chance” works for now because we have no better words to describe the beginning of rationality, but there is a true cause out there flipping the light bulbs on inside of heads and producing the behavior we have labeled “rationality.” Let’s go find it.
(PS) Annoyance, this wasn’t meant to pick on what you said, it just happened to be in my mind and relevant to the discussion. You were answering a very specific question and the answer satisfied what was asked at the time.
Rationality-as-acceleration seems to match the semi-serious label of x-rationality.
My point was that some animals do appear to be able to be rational, to a degree. (I’m defining ‘rational’ as something like ‘able to create accurate representations of how the world works, which can be used to make accurate predictions.’)
I can even come up with examples of some animals being able to be more rational than some humans. I used to work in a nursing home, and one of the residents there was mentally retarded as part of her condition, and never did figure out that the cats could not understand her when she talked to them, and sometimes seemed to actually expect them to talk. On the other hand, most animals that have been raised around humans seem to have a pretty reasonable grasp on what we can and can’t understand of their forms of communication. Unfortunately, most of my data for the last assertion there is personal observation. The bias against even considering that animals could communicate intentionally is strong enough in modern society that it’s rarely studied at all, as far as I know. Still, consider the behavior of not-formally-trained domesticated animals that you’ve known, compared to feral examples of the same species.
Basic prediction-ability seems like such a universally useful skill that I’d be pretty surprised if we didn’t find it in at least a minimal form in any creature with a brain. It may not look like it does in humans, in those cases, but then, given what’s been discussed about possible minds, that shouldn’t be too much of a problem.
Animals obviously communicate with one another. The last I heard, there was a lot of studying being done on dolphins and whales. Anyone who has trained a dog in anything can tell you that dogs can “learn” English words. The record I remember hearing about was a Border Collie with a vocabulary of over 100 words. (No reference, sorry. It was in a trivia book.)
As for your point, I understand and acknowledge it. I think of rationality as something different, I guess. I do not know how useful continuing the cat analogy is when we seem to think of “rational” differently.
Hmm, maybe you could define ‘intelligence’ as you use it here:
I define intelligence as the ability to know how to do things (talk, add, read, write, do calculus, convince a person of something—yes, there are different forms of intelligence) and rationality as the ability to know which things to do in a given situation to get what you want out of that situation, which involves knowing what things can be gotten out of a given situation in the first place.
Well, the mind dump from earlier was mostly food for thought, not a staking-out of claims or definitions. I guess my rough definition of intelligence fits what I find in the dictionary:
The same dictionary, however, defines rationality as a form of the word rational:
I take intelligence to mean, “the ability to accomplish stuff,” and rationality to mean, “how to get intelligence.” Abstracted, rationality more or less becomes, “how to get the ability to accomplish stuff.” This is contrasted with “learning” which is:
I am not proposing this definition of rationality is what anyone else should use. Rather, it is a placeholder concept until I feel comfortable sitting down and tackling the problem as a whole. Right now I am still in aggregation mode which is essentially collecting other people’s thoughts on the subject.
Honestly, all of this discussion is interesting but it may not be helpful. I think Eliezer’s concept of the nameless virtue is good to keep in mind during these kinds of discussions:
Further information: The person I mentioned was able to do some intelligence-based things that I would not expect cats to do, like read and write (though not well). She may also have been able to understand that cats don’t speak English if someone actually explained it to her—I don’t think anyone ever actually did. Even so, nobody sits cats or dogs down and explains our limitations to them, either, so I think the playing field is pretty level in that respect.
If you can develop it well.
Yeah. If I were to do it I would probably start from the question of defining someone’s level of rationality. The topic itself assumes:
“Rationality” is not boolean. People can be more or less rational on a scale.
People can be completely irrational in the sense that they score a 0 on the scale.
The question becomes: Can such a person increase their level on the scale?
Further thoughts:
How does one increase their level on the scale?
Does it require rationality to get more rationality?
Is there an upper bound? If the lower bound is 0...
If there is an upper bound, can this upper bound be achieved?
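The assumptions and questions above can be made concrete with a toy model. This is only an illustrative sketch under assumptions that are mine, not the thread’s: scores live on [0, 1], and each step of practice yields a gain proportional to the current score, so one possible answer to “does it require rationality to get more rationality?” (namely, “yes”) falls out of the model:

```python
# Toy model of the "rationality scale" questions above.
# Assumptions (mine, not the thread's): scores live in [0.0, 1.0],
# and practice gain is proportional to the current score, so a score
# of exactly 0 can never move, and 1.0 acts as an unreachable-from-
# below upper bound.

def practice(score: float, effort: float = 0.1) -> float:
    """One step of practice; returns the new score, clamped to [0, 1]."""
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must lie on the scale [0, 1]")
    gain = effort * score * (1.0 - score)  # logistic-style growth
    return min(1.0, score + gain)

# A nonzero score creeps upward with repeated practice;
# a score of exactly zero is stuck forever under this model.
s = 0.2
for _ in range(50):
    s = practice(s)
```

Under these (entirely made-up) dynamics, the lower bound 0 is an absorbing state and the upper bound 1 is approached but never reached from below, which is one self-consistent way the four questions could come out. Different assumptions would give different answers.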
...and then you prove that the level of rationality and operations on it correspond to Bayesian probability up to isomorphism. ;-)
Seriously, doing this in non-silly manner is highly nontrivial.
Oh, no joke. But we have to start somewhere. :)
Honestly, until we have a better word/definition than “rationality,” we get to play with fuzzy words. I am happy with that for now but it is a dull future.
I made more causal comments on this subject in a different comment and would appreciate your thoughts. It is kind of long, however, so no worries if you would rather not. :)
You’ve never thought about it that way before because it’s completely silly. How on earth does Annoyance make these judgments? I’m not nearly prideful enough to think I can know others’ minds to the extent Annoyance can, or, in other words, I imagine there are circumstances which could change most people in profound ways, both for ill and good. So the only thing judging people in this manner does is reinforce one’s social prejudices. Writing off people who seem resistant to reason only encourages their ignorance, and remedying their condition is both an exercise and example of reason’s power, which, incidentally, is why I’m trying so hard with Annoyance!
You did catch that I’m talking about a terminal value, right? It’s the nature of those that you want them because you want them, not because they lead to something else that you want. I want everybody to be happy. That’s a terminal value. If you ask me why I want that, I’m going to have some serious trouble answering, because there is no answer. I just want it, and there’s nothing that I know of that I want more, or that I would consider a good reason to give up that goal.
Right now, it’s pointing at “don’t make this mistake”, which I was unlikely to do anyway, but now I have the opportunity to point the mistake out to you, so you can (if you choose to; I can’t force you) stop making it, which would raise the rationality around here, which seems like a good thing to me. Or, I can not point it out, and you keep doing what you’re doing. It’s like one of those lottery problems, and I concluded that the chance of one or both of us becoming more rational was worth the cost of having this discussion. (And, it paid off at least somewhat—I think I have enough insight into that particular mistake to be able to avoid it without avoiding the situation entirely, now.)
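The “lottery problem” reasoning here is just an expected-value comparison. As a sketch only, with numbers that are invented for illustration (the poster gives none):

```python
# Sketch of the expected-value reasoning behind "worth the cost of
# having this discussion". All numbers are hypothetical placeholders.
p_improvement = 0.3        # chance at least one participant gains insight
value_of_insight = 10.0    # subjective value of that gain
cost_of_discussion = 2.0   # time and effort spent on the argument

expected_gain = p_improvement * value_of_insight - cost_of_discussion
worth_it = expected_gain > 0  # positive expectation -> have the discussion
```

With these placeholder numbers the expectation comes out positive; the actual judgment of course depends entirely on one’s own estimates.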
What are you aiming for?
Could you elucidate what you intend with this gem?
“The Master of the Way treats people as straw dogs.”
It’s from the Tao Te Ching:
“Heaven and earth are ruthless, and treat the myriad creatures as straw dogs; the sage is ruthless, and treats the people as straw dogs.”
One might accuse this of falling afoul of the appeal to nature, but that would assume a fact not in evidence, to wit, that Annoyance’s motivations resemble that of a typical LW poster (to the extent that such a beast exists).
Voted down because your realization is flawed. Achieving anything does not require you to be rational, as evidenced by this post.
Your strategy of dealing with people is also flawed: does the Master of the Way always defect? If you were a skilled exploiter, you wouldn’t give obvious signals that you are an exploiter. Instead, you seem to be signaling “Vote me off the island!” to society, and this community. You may want to reconsider that position.
Wanting to accomplish thing X, and being able to expect it to occur as a result of actions I take, requires rationality.
Your objection is incorrect.
Your understanding of my strategy is incorrect, as evidenced by your question.
Annoyance, you’re still dodging the question. Joe didn’t ask whether or not in your opinion everyone is a useless prole, he asked why it’s useful to make people feel like that. Your notion that “social cohesion is the enemy of rationality” was best debunked, I think by pjeby’s point here:
http://lesswrong.com/lw/za/a_social_norm_against_unjustified_opinions/rrk
more flies with honey and all that.
I don’t want to catch flies.
Annoyance, your argument has devolved into inanity. If you don’t want to popularly cultivate rationality then you disagree with one of the core tenets of this community. It’s in the second paragraph of the “about” page:
“Less Wrong is devoted to refining the art of human rationality—the art of thinking. The new math and science deserves to be applied to our daily lives, and heard in our public voices.”
Your circular word games do no good for this community.