It may come as a shock, but in my case, being rational is not my highest priority. I haven’t actually come up with a proper wording for my highest priority yet, but one of my major goals in pursuing that priority is to facilitate a universal ability for people to pursue their own goals (with the normal caveats about not harming or overly interfering with other people, of course). One of the primary reasons I pursue rationality is to support that goal.
Once I realized that achieving anything, no matter what, required my being rational, I quickly bumped “being rational” to the top of my to-do list.
Is it impossible to be an x-rationalist and still value people?
‘People’ do not lend themselves to any particular utility. The Master of the Way treats people as straw dogs.
It may come as a shock, but in my case, being rational is not my highest priority. I haven’t actually come up with a proper wording for my highest priority yet, but one of my major goals in pursuing that priority is to facilitate a universal ability for people to pursue their own goals (with the normal caveats about not harming or overly interfering with other people, of course). One of the primary reasons I pursue rationality is to support that goal.
Once I realized that achieving anything, no matter what, required my being rational, I quickly bumped “being rational” to the top of my to-do list.
Yes, I see that you did that. Why would I want to do that, given my current utility function? I appear to be accomplishing things reasonably well as is, and it looks like if I made that change, I wouldn’t wind up accomplishing things that my current utility function values at all.
Is it impossible to be an x-rationalist and still value people?
‘People’ do not lend themselves to any particular utility. The Master of the Way treats people as straw dogs.
Why would I want to do that, given my current utility function?
What’s the function you use to evaluate your utility function?
And what function do I use to evaluate that, and on to infinity. Right. Or, I can just accept that my core utility function is not actually rational, examine it to make sure it’s something that’s not actually impossible, and get on with my life.
Or does Eliezer have a truly-rational reason behind the kind of altruism that’s leading him to devote his life to FAI that I’m not aware of?
Persuasiveness: You fail at it.
Persuasiveness: what I was not aiming for.
Oh, silly me for assuming that you were trying to raise the rationality level around here. It’s only the entire point of the blog, after all.
So if you’re not actually trying to convince me that being more rational would be a good thing, what have you been doing? Self-signaling? Making pointless appeals to your own non-existent authority? Performing some bizarre experiment regarding your karma score?
Sets of terminal values can be coherent. Logical specifications for computing terminal values can be consistent. What would it mean for one to be rational?
Or, I can just accept that my core utility function is not actually rational,
If there isn’t a tiny grain of rationality at the core of that infinite regression, you’re in great trouble.
The ability to anticipate how reality will react to something you do depends entirely on the ability to update your mental models to match data derived from reality. That’s rationality right there.
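That updating process can be made concrete. Here is a minimal sketch (the numbers are my own toy assumptions, not anything from this thread) of a model being updated to match evidence, in the Bayesian sense:

```python
# Minimal sketch of "updating a mental model to match data from reality":
# a Bayesian update of the probability that a hypothesis is true,
# given one piece of evidence. All numbers are illustrative assumptions.

def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Return P(H | E) from P(H) and the two likelihoods."""
    numerator = p_evidence_given_h * prior
    denominator = numerator + p_evidence_given_not_h * (1.0 - prior)
    return numerator / denominator

# Start fairly unsure the model is right...
belief = 0.5
# ...then observe three pieces of evidence the model predicted strongly.
for _ in range(3):
    belief = bayes_update(belief, p_evidence_given_h=0.9,
                          p_evidence_given_not_h=0.3)

print(round(belief, 3))
```

The point being that the anticipation-ability described above is exactly this loop: prediction, observation, revision.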
If there’s even a tiny spark, it can be fanned into flame. But if there’s no spark there’s nothing to build on. I strongly suspect that some degree of rationality is present in your utility function, but if not, your case is hopeless.
Oh, silly me for assuming that you were trying to raise the rationality level around here.
Why would I try to do that? Nothing I do can cause the rationality level to go up. Only the people here can do that. If I could ‘make’ people be rational, I would. But there’s no spoon, there.
All I can do is point to the sky and hope that people will choose to pay less attention to the finger than what it indicates.
If there’s even a tiny spark, it can be fanned into flame. But if there’s no spark there’s nothing to build on. I strongly suspect that some degree of rationality is present in your utility function, but if not, your case is hopeless.
Out of curiosity, can someone who does not have a grain of rationality in them ever become more rational? In other words, can someone be so far gone that they literally can never be rational?
I am honestly having trouble picturing such a person. Perhaps that is because I never thought about it that way before.
Out of curiosity, can someone who does not have a grain of rationality in them ever become more rational?
They may stumble across rationality as life causes their core functions to randomly vary. As far as I can tell, that’s how explicit and self-referential standards of thought first arose—they seem to have occurred in societies where there were many different ideas and claims being made about everything, and people needed a way to sift through the rich bed of assertions.
So complex and mutually-incompatible cultural fluxes seem to not only be necessary to produce the first correct standards, but encourage them to be developed as well. That argument applies more to societies than individuals, but I think a similar one holds there too.
Understood. I guess the followup question is about where the general human being starts. Do we start with any rationality in us? My guess is that it is somewhat random. Some do; some do not.
The opposite of rational is “wrong” or “ineffective”. A person can’t be wrong or ineffective about everything; that’s senseless. I think all the confusion has arisen from Annoyance claiming that terminal values must have some spark of rationality, but Eliezer explained that he might have meant they must be coherent. So if I may paraphrase your question (which interests me as well), the question is: how may terminal values be incoherent?
You need to be more careful with the problem statement; it seems too confused. For example, taboo “rational” (to distinguish irrational people from rocks), and taboo “never” (to distinguish the deep properties of the phenomenon from limitations created by life span and available cultural environment).
Yeah, I would agree. I meant it as a specific response to what Annoyance wrote and figured I could just reuse the term. I didn’t expect so many people to jump in. :)
“Never” as in “This scenario is impossible and cannot happen.”
“Become more rational” can be restated “gain more rationality.”
Rewording the entire question:
Can someone who has no rationality in them ever gain more rationality?
The tricky clause is now “rationality in them.” Any more defining of terms brings this into a bigger topic. It would probably make a good top-level post, if anyone is interested.
I’d like to see a top post on this. My example of cats having a degree of rationality may be useful:
Even animals can be slightly rational—cats for example are well known for learning that the sound of a can opener is an accurate sign that they may be fed in the near future, even if they aren’t rational enough to make stronger predictions about which instances of that sound signal mealtime.
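To be concrete about what I mean by that kind of learning, here is a toy sketch (the event log is entirely invented for illustration) of estimating P(fed | can-opener sound) by simple counting:

```python
from collections import Counter

# Toy model of the cat example: estimate P(fed soon | can-opener sound)
# by counting co-occurrences. The event log is invented for illustration.
events = [
    ("opener", True), ("opener", True), ("opener", False),
    ("opener", True), ("no_sound", False), ("no_sound", False),
    ("no_sound", True), ("opener", True),
]

counts = Counter(events)
opener_total = counts[("opener", True)] + counts[("opener", False)]
p_fed_given_opener = counts[("opener", True)] / opener_total

print(f"P(fed | opener) = {p_fed_given_opener:.2f}")
```

Nothing more than frequency tracking is needed for the cat’s level of prediction, which is why I’d call it a minimal form of the same skill.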
(Warning) This is a huge mind-dump created while on lunch break. By all means pick it apart, but I am not planning on defending it in any way. Take it with all the salt in the world.
Personally, I find the concept of animal rationality to be more of a distraction. For some reason, my linguistic matrix prefers the word “intelligent” to describe cats responding to a can opener. Animals are very smart. Humans are very smart. But smart does not imply rational, and a smart human is not necessarily a rational one.
I tend to reserve rationality for describing the next “level” of intelligence. Rationality is the form or method of increasing intelligence. An analogy is speed versus acceleration. Acceleration increases speed; rationality increases intelligence. This is more of a rough, instinctive definition, however, and one of my personal reasons for being here at Less Wrong is to learn more about rationality. My analogy does not seem accurate in application, though. Rationality seems connected to intelligence, but to say that rationality implies a change in intelligence does not fit with its reverse: irrationality does not decrease intelligence.
I am missing something, but it seems that whatever I am looking for in my definitions is not found in cats. But, as you may have meant, if cats have no rationality and cannot have rationality, is it because they have no rationality?
If this were the case, and rationality builds on itself, where does our initial rationality come from? If I claim to be rational, should I be able to point to a sequence of events in my life and say, “There it started”? It seems that fully understanding rationality implies knowing its limits; its beginning and ending. To further our rationality we should be able to know what helps or hinders our rationality.
Annoyance claims that the first instances of rationality may be caused by chance. If this were true, could we remove the chance? Could we learn what events chanced our own rationality and inflict similar events on other people?
Annoyance also seems to claim that rationality begets rationality. But something else must produce that first spark in us. That spark is worth studying. That spark is annoyingly difficult to define and observe. How do we stop and examine ourselves to know if we have the spark? If two people walk before us claiming rationality yet one is lying, how do we test and observe the truth?
Right now, we do so by their actions. But if the liar knows the rational actions and mimics them without believing in their validity or truth, how would we know? Would such a liar really be lying? Does the liar’s beliefs matter? Does rationality imply more than correct actions?
To make this more extreme, if I build a machine to mimic rationality, is it rational? This is a classic question with many forms. If I make a machine that acts human, is it human? I claim that “rationality” cannot be measured in a cat. Could it be measured in a machine? A program? Why am I so fixated on humanity? Is this bias?
Rationality is a label attached to a behavior, but I believe it will eventually be reattached to a particular source of the behavior. I do not think that rational behavior is impossible to fake. Pragmatically, a Liar that acts rational is not much different from a rational person. If the Liar penetrates our community and suddenly goes ape, then the lies are obvious. How do we predict the Liars before they reveal themselves? What if the Liars believe their own lies?
I do not mean “believe” as in “having convinced themselves”. What if they are not rational but believe they are? The lie is not conscious; it is a desire to be rational but not possessing the Way. How do we spot the fake rationalists? More importantly, how do I know that I, myself, have rationality?
Does this question have a reasonable answer? What if the answer is “No”? If I examine myself and find myself to be irrational, what do I do? What if I desire to be rational? Is it possible for me to become rational? Am I denied the Way?
I think much of the confusion comes from the inability to define rationality. We cannot offer a rationality test or exam. We can only describe behavior. I believe this is currently necessary, but I believe it will change. I think the path to this change has to do with finding the causes behind rationality and developing a finer measuring stick for determining rational behavior. I see this as the primary goal of Less Wrong.
Once we gather more information about the causes of our own rationality, we can begin developing methods for causing rationality in others, along with drastically increasing our own rationality. I see this as the secondary goal of Less Wrong.
This is why I do not think Annoyance’s answer was sufficient. “Chance” may be how we describe our fortune, but it is an inoculative answer. During Eliezer’s comments on vitalism he says this:
I call theories such as vitalism mysterious answers to mysterious questions. These are the signs of mysterious answers: First, the explanation acts as a curiosity-stopper rather than an anticipation-controller. Second, the hypothesis has no moving parts—the model is not a specific complex mechanism, but a blankly solid substance or force. The mysterious substance or mysterious force may be said to be here or there, to do this or that; but the reason why the mysterious force behaves thus is wrapped in a blank unity. Third, those who proffer the explanation cherish their ignorance; they speak proudly of how the phenomenon defeats ordinary science or is unlike merely mundane phenomena. Fourth, even after the answer is given, the phenomenon is still a mystery and possesses the same quality of sacred inexplicability that it had at the start.
(Emphasis original. You will have to search for the paragraph, it is about three-quarters down the page.)
“Chance” hits 3 of 4, giving Annoyance the benefit of the doubt and assuming there is no cherished ignorance. So, “chance” works for now because we have no better words to describe the beginning of rationality, but there is a true cause out there flipping the light bulbs on inside of heads and producing the behavior we have labeled “rationality.” Let’s go find it.
(PS) Annoyance, this wasn’t meant to pick on what you said, it just happened to be in my mind and relevant to the discussion. You were answering a very specific question and the answer satisfied what was asked at the time.
My point was that some animals do appear to be able to be rational, to a degree. (I’m defining ‘rational’ as something like ‘able to create accurate representations of how the world works, which can be used to make accurate predictions’.)
I can even come up with examples of some animals being able to be more rational than some humans. I used to work in a nursing home, and one of the residents there was mentally retarded as part of her condition, and never did figure out that the cats could not understand her when she talked to them, and sometimes seemed to actually expect them to talk. On the other hand, most animals that have been raised around humans seem to have a pretty reasonable grasp on what we can and can’t understand of their forms of communication. Unfortunately, most of my data for the last assertion there is personal observation. The bias against even considering that animals could communicate intentionally is strong enough in modern society that it’s rarely studied at all, as far as I know. Still, consider the behavior of not-formally-trained domesticated animals that you’ve known, compared to feral examples of the same species.
Basic prediction-ability seems like such a universally useful skill that I’d be pretty surprised if we didn’t find it in at least a minimal form in any creature with a brain. It may not look like it does in humans, in those cases, but then, given what’s been discussed about possible minds, that shouldn’t be too much of a problem.
The bias against even considering that animals could communicate intentionally is strong enough in modern society that it’s rarely studied at all, as far as I know.
Animals obviously communicate with one another. The last I heard, there was a lot of studying being done on dolphins and whales. Anyone who has trained a dog in anything can tell you that dogs can “learn” English words. The record I remember hearing about was a Border Collie with a vocabulary of over 100 words. (No reference, sorry. It was in a trivia book.)
As for your point, I understand and acknowledge it. I think of rationality as something different, I guess. I do not know how useful continuing the cat analogy is when we seem to think of “rational” differently.
Hmm, maybe you could define ‘intelligence’ as you use it here:
Rationality is the form or method of increasing intelligence.
I define intelligence as the ability to know how to do things (talk, add, read, write, do calculus, convince a person of something—yes, there are different forms of intelligence) and rationality as the ability to know which things to do in a given situation to get what you want out of that situation, which involves knowing what things can be gotten out of a given situation in the first place.
Well, the mind dump from earlier was mostly food for thought, not a staking out of claims or definitions. I guess my rough definition of intelligence fits what I find in the dictionary:
The ability to acquire and apply knowledge and skills
The same dictionary, however, defines rationality as a form of the word rational:
Based on or in accordance with reason or logic
I take intelligence to mean, “the ability to accomplish stuff,” and rationality to mean, “how to get intelligence.” Abstracted, rationality more or less becomes, “how to get the ability to accomplish stuff.” This is contrasted with “learning” which is:
Gaining or acquiring knowledge of or skill in (something) by study, experience, or being taught
I am not proposing this definition of rationality is what anyone else should use. Rather, it is a placeholder concept until I feel comfortable sitting down and tackling the problem as a whole. Right now I am still in aggregation mode which is essentially collecting other people’s thoughts on the subject.
Honestly, all of this discussion is interesting but it may not be helpful. I think Eliezer’s concept of the nameless virtue is good to keep in mind during these kinds of discussions:
You may try to name the highest principle with names such as “the map that reflects the territory” or “experience of success and failure” or “Bayesian decision theory”. But perhaps you describe incorrectly the nameless virtue. How will you discover your mistake? Not by comparing your description to itself, but by comparing it to that which you did not name.
Further information: The person I mentioned was able to do some intelligence-based things that I would not expect cats to do, like read and write (though not well). She may also have been able to understand that cats don’t speak English if someone actually explained it to her—I don’t think anyone ever actually did. Even so, nobody sits cats or dogs down and explains our limitations to them, either, so I think the playing field is pretty level in that respect.
Seriously, doing this in non-silly manner is highly nontrivial.
Oh, no joke. But we have to start somewhere. :)
Honestly, until we have a better word/definition than “rationality,” we get to play with fuzzy words. I am happy with that for now but it is a dull future.
I made more causal comments on this subject in a different comment and would appreciate your thoughts. It is kind of long, however, so no worries if you would rather not. :)
You’ve never thought about it that way before because it’s completely silly. How on earth does Annoyance make these judgments? I’m not nearly prideful enough to think I can know others’ minds to the extent Annoyance can, or, in other words, I imagine there are circumstances which could change most people in profound ways, both for ill and good. So the only thing judging people in this manner does is reinforce one’s social prejudices. Writing off people who seem resistant to reason only encourages their ignorance, and remedying their condition is both an exercise and example of reason’s power, which, incidentally, is why I’m trying so hard with Annoyance!
If there isn’t a tiny grain of rationality at the core of that infinite regression, you’re in great trouble.
You did catch that I’m talking about a terminal value, right? It’s the nature of those that you want them because you want them, not because they lead to something else that you want. I want everybody to be happy. That’s a terminal value. If you ask me why I want that, I’m going to have some serious trouble answering, because there is no answer. I just want it, and there’s nothing that I know of that I want more, or that I would consider a good reason to give up that goal.
All I can do is point to the sky and hope that people will choose to pay less attention to the finger than what it indicates.
Right now, it’s pointing at “don’t make this mistake”, which I was unlikely to do anyway, but now I have the opportunity to point the mistake out to you, so you can (if you choose to; I can’t force you) stop making it, which would raise the rationality around here, which seems like a good thing to me. Or, I can not point it out, and you keep doing what you’re doing. It’s like one of those lottery problems, and I concluded that the chance of one or both of us becoming more rational was worth the cost of having this discussion. (And, it paid off at least somewhat—I think I have enough insight into that particular mistake to be able to avoid it without avoiding the situation entirely, now.)
“Heaven and earth are ruthless, and treat the myriad creatures as straw dogs; the sage is ruthless, and treats the people as straw dogs.”
One might accuse this of falling afoul of the appeal to nature, but that would assume a fact not in evidence, to wit, that Annoyance’s motivations resemble those of a typical LW poster (to the extent that such a beast exists).
Once I realized that achieving anything, no matter what, required my being rational, I quickly bumped “being rational” to the top of my to-do list.
Voted down because your realization is flawed. Achieving anything does not require you to be rational, as evidenced by this post.
The Master of the Way treats people as straw dogs.
Your strategy of dealing with people is also flawed: does the Master of the Way always defect? If you were a skilled exploiter, you wouldn’t give obvious signals that you are an exploiter. Instead, you seem to be signaling “Vote me off the island!” to society, and this community. You may want to reconsider that position.
Sets of terminal values can be coherent. Logical specifications for computing terminal values can be consistent. What would it mean for one to be rational?
I have no idea.
As far as I can tell, my terminal values are not rational in the same sense that blue is not greater than three.
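That comparison is essentially a type error, and the analogy can be made literal. A quick sketch in Python (my own illustration, not anything anyone here claimed):

```python
# "Blue is not greater than three": asking whether a terminal value is
# rational is a category error, like ordering values of unrelated types.
try:
    result = "blue" > 3  # Python 3 refuses to order str against int
except TypeError as e:
    result = None
    print(f"TypeError: {e}")
```

The question isn’t answered “no”; it simply fails to type-check.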
All I can do is point to the sky and hope that people will choose to pay less attention to the finger than what it indicates.
It’s usually more effective if you don’t use your middle finger to do the pointing.
Out of curiosity, can someone who does not have a grain of rationality in them ever become more rational? In other words, can someone be so far gone that they literally can never be rational?
Short answer: only by chance, I think.
Understood. I guess the followup question is about where the general human being starts. Do we start with any rationality in us? My guess is that it is somewhat random. Some do; some do not.
The opposite of rational is “wrong” or “ineffective”. A person can’t be wrong or ineffective about everything, that’s senseless. I think all the confusion has arisen from Annoyance claiming that terminal values must have some spark of rationality, but Eliezer explained that he might have meant they must be coherent. So if I may paraphrase your question (which interests me as well), the question is: how may terminal values be incoherent?
You need to be more careful with problem statement, it seems too confused. For example, taboo “rational” (to distinguish irrational people from rocks), taboo “never” (to distinguish the deep properties of the phenomenon from limitations created by life span and available cultural environment).
Yeah, I would agree. I meant it as a specific response to what Annoyance wrote and figured I could just reuse the term. I didn’t expect so many people to jump in. :)
“Never” as in “This scenario is impossible and cannot happen.” “Become more rational” can be restated “gain more rationality.”
Rewording the entire question:
The tricky clause is now “rationality in them.” Any more defining of terms brings this into a bigger topic. It would probably make a good top-level post, if anyone is interested.
I’d like to see a top post on this. My example of cats having a degree of rationality may be useful:
(Warning) This is a huge mind-dump created while on lunch break. By all means pick it apart, but I am not planning on defending it in any way. Take it with all the salt in the world.
Personally, I find the concept of animal rationality to be more of a distraction. For some reason, my linguistic matrix finds the word “intelligent” to describe cats responded to a can opener. Animals are very smart. Humans are very smart. But smart does not imply rational and a smart human is not necessarily imply rationality.
I tend to reserve rationality for describing the next “level” of intelligence. Rationality is the form or method of increase intelligence. An analogy is speed versus acceleration. Acceleration increases speed; rationality increases intelligence. This is more of a rough, instinctive definition, however, and one of my personal reasons for being here at Less Wrong is to learn more about rationality. My analogy does not seem accurate in application. Rationality seems connected to intelligence but to say that rationality implies change in intelligence does not fit with its reverse: irrationality does not decrease intelligence.
I am missing something, but it seems that whatever I am looking for in my definitions is not found in cats. But, as you may have meant, if cats have no rationality and cannot have rationality, is it because they have no rationality?
If this were the case, and rationality builds on itself, where does our initial rationality come from? If I claim to be rational, should I be able to point to a sequence of events in my life and say, “There it started”? It seems that fully understanding rationality implies knowing its limits; its beginning and ending. To further our rationality we should be able to know what helps or hinders our rationality.
Annoyance claims that the first instances of rationality may be caused by chance. If this were true, could we remove the chance? Could we learn what events chanced our own rationality and inflict similar events on other people?
Annoyance also seems to claim that rationality begets rationality. But something else must produce that first spark in us. That spark is worth studying. That spark is annoyingly difficult to define and observe. How do we stop and examine ourselves to know if we have the spark? If two people walk before us claiming rationality yet one is lying, how do we test and observe the truth?
Right now, we do so by their actions. But if the liar knows the rational actions and mimics them without believing in their validity or truth, how would we know? Would such a liar really be lying? Does the liar’s beliefs matter? Does rationality imply more than correct actions?
To make this more extreme, if I build a machine to mimic rationality, is it rational? This is a classic question with many forms. If I make a machine that acts human, is it human? I claim that “rationality” cannot be measured in a cat. Could it be measured in a machine? A program? Why am I so fixated on humanity? Is this bias?
Rationality is a label attached to a behavior but I believe it will eventually be reattached to a particular source of the behavior. I do not think that rational behavior is impossible to fake. Pragmatically, a Liar that acts rational is not much different from a rational person. If the Liar penetrates our community and suddenly goes ape than the lies are obvious. How do we predict the Liars before they reveal themselves? What if the Liars believe their own lies?
I do not mean “believe” as in “having convinced themselves”. What if they are not rational but believe they are? The lie is not conscious; it is a desire to be rational but not possessing the Way. How do we spot the fake rationalists? More importantly, how do I know that I, myself, have rationality?
Does this question have a reasonable answer? What if the answer is “No”? If I examine myself and find myself to be irrational, what do I do? What if I desire to be rational? Is it possible for me to become rational? Am I denied the Way?
I think much of the confusion comes from the inability to define rationality. We cannot offer a rationality test or exam. We can only describe behavior. I believe this currently necessary but I believe it will change. I think the path to this change has to do with finding the causations behind rationality and developing a finer measuring stick for determining rational behavior. I see this as the primary goal of Less Wrong.
Once we gather more information about the causes of our own rationality we can begin development methods for causing rationality in others along with drastically increasing our own rationality. I see this as the secondary goal of Less Wrong.
This is why I do not think Annoyance’s answer was sufficient. “Chance” may be how we describe our fortune but this is inoculative answer. During Eliezer’s comments on vitalism he says this:
(Emphasis original. You will have to search for the paragraph, it is about three-quarters down the page.)
“Chance” hits 3 of 4, giving Annoyance the benefit of the doubt and assuming there is no cherished ignorance. So, “chance” works for now because we have no better words to describe the beginning of rationality, but there is a true cause out there flipping the light bulbs on inside of heads and producing the behavior we have labeled “rationality.” Let’s go find it.
(PS) Annoyance, this wasn’t meant to pick on what you said; it just happened to be on my mind and relevant to the discussion. You were answering a very specific question, and the answer satisfied what was asked at the time.
Rationality-as-acceleration seems to match the semi-serious label of x-rationality.
My point was that some animals do appear to be able to be rational, to a degree. (I’m defining ‘rational’ as something like ‘able to create accurate representations of how the world works, which can be used to make accurate predictions.’)
I can even come up with examples of some animals being able to be more rational than some humans. I used to work in a nursing home, and one of the residents there was mentally retarded as part of her condition, and never did figure out that the cats could not understand her when she talked to them, and sometimes seemed to actually expect them to talk. On the other hand, most animals that have been raised around humans seem to have a pretty reasonable grasp on what we can and can’t understand of their forms of communication. Unfortunately, most of my data for the last assertion there is personal observation. The bias against even considering that animals could communicate intentionally is strong enough in modern society that it’s rarely studied at all, as far as I know. Still, consider the behavior of not-formally-trained domesticated animals that you’ve known, compared to feral examples of the same species.
Basic prediction-ability seems like such a universally useful skill that I’d be pretty surprised if we didn’t find it in at least a minimal form in any creature with a brain. It may not look like it does in humans, in those cases, but then, given what’s been discussed about possible minds, that shouldn’t be too much of a problem.
Animals obviously communicate with one another. The last I heard, there was a lot of studying being done on dolphins and whales. Anyone who has trained a dog in anything can tell you that dogs can “learn” English words. The record I remember hearing about was a Border Collie with a vocabulary of over 100 words. (No reference, sorry. It was in a trivia book.)
As for your point, I understand and acknowledge it. I think of rationality as something different, I guess. I do not know how useful continuing the cat analogy is when we seem to think of “rational” differently.
Hmm, maybe you could define ‘intelligence’ as you use it here:
I define intelligence as the ability to know how to do things (talk, add, read, write, do calculus, convince a person of something—yes, there are different forms of intelligence) and rationality as the ability to know which things to do in a given situation to get what you want out of that situation, which involves knowing what things can be gotten out of a given situation in the first place.
Well, the mind dump from earlier was mostly food for thought, not a staking out of claims or definitions. I guess my rough definition of intelligence fits what I find in the dictionary:
The same dictionary, however, defines rationality as a form of the word rational:
I take intelligence to mean, “the ability to accomplish stuff,” and rationality to mean, “how to get intelligence.” Abstracted, rationality more or less becomes, “how to get the ability to accomplish stuff.” This is contrasted with “learning” which is:
I am not proposing this definition of rationality is what anyone else should use. Rather, it is a placeholder concept until I feel comfortable sitting down and tackling the problem as a whole. Right now I am still in aggregation mode which is essentially collecting other people’s thoughts on the subject.
Honestly, all of this discussion is interesting but it may not be helpful. I think Eliezer’s concept of the nameless virtue is good to keep in mind during these kinds of discussions:
Further information: The person I mentioned was able to do some intelligence-based things that I would not expect cats to do, like read and write (though not well). She may also have been able to understand that cats don’t speak English if someone actually explained it to her—I don’t think anyone ever actually did. Even so, nobody sits cats or dogs down and explains our limitations to them, either, so I think the playing field is pretty level in that respect.
If you can develop it well.
Yeah. If I were to do it I would probably start from the question of defining someone’s level of rationality. The topic itself assumes:
“Rationality” is not boolean. People can be more or less rational on a scale.
People can be completely irrational in the sense that they score a 0 on the scale.
The question becomes: Can such a person increase their level on the scale?
Further thoughts:
How does one increase their level on the scale?
Does it require rationality to get more rationality?
Is there an upper bound? If the lower bound is 0...
If there is an upper bound, can this upper bound be achieved?
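One narrow way to put numbers on such a scale is calibration scoring of probability estimates. This is my own illustrative suggestion, not anything established in the thread: a hypothetical sketch using the Brier score, mapped so that 0 is the bottom of the scale and 1 is perfect calibration.

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between stated probabilities and what happened.

    0.0 = perfectly calibrated and confident, 1.0 = maximally wrong.
    """
    assert len(forecasts) == len(outcomes)
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)


def rationality_scale(forecasts, outcomes):
    """Map the Brier score onto the 0-to-1 scale discussed above:
    0 = maximally wrong predictions, 1 = perfect predictions."""
    return 1.0 - brier_score(forecasts, outcomes)


# A forecaster who says 0.9 on things that happen and 0.1 on things that don't:
print(rationality_scale([0.9, 0.1, 0.8], [1, 0, 1]))  # 0.98
```

This obviously measures only one facet (predictive calibration), but it does make the scale, the lower bound, and the upper bound concrete.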
...and then you prove that the level of rationality and operations on it correspond to Bayesian probability up to isomorphism. ;-)
Seriously, doing this in non-silly manner is highly nontrivial.
Oh, no joke. But we have to start somewhere. :)
Honestly, until we have a better word/definition than “rationality,” we get to play with fuzzy words. I am happy with that for now but it is a dull future.
I made more casual comments on this subject in a different comment and would appreciate your thoughts. It is kind of long, however, so no worries if you would rather not. :)
You’ve never thought about it that way before because it’s completely silly. How on earth does Annoyance make these judgments? I’m not nearly prideful enough to think I can know others’ minds to the extent Annoyance can, or, in other words, I imagine there are circumstances which could change most people in profound ways, both for ill and good. So the only thing judging people in this manner does is reinforce one’s social prejudices. Writing off people who seem resistant to reason only encourages their ignorance, and remedying their condition is both an exercise and example of reason’s power, which, incidentally, is why I’m trying so hard with Annoyance!
You did catch that I’m talking about a terminal value, right? It’s the nature of those that you want them because you want them, not because they lead to something else that you want. I want everybody to be happy. That’s a terminal value. If you ask me why I want that, I’m going to have some serious trouble answering, because there is no answer. I just want it, and there’s nothing that I know of that I want more, or that I would consider a good reason to give up that goal.
Right now, it’s pointing at “don’t make this mistake”, which I was unlikely to do anyway, but now I have the opportunity to point the mistake out to you, so you can (if you choose to; I can’t force you) stop making it, which would raise the rationality around here, which seems like a good thing to me. Or, I can not point it out, and you keep doing what you’re doing. It’s like one of those lottery problems, and I concluded that the chance of one or both of us becoming more rational was worth the cost of having this discussion. (And, it paid off at least somewhat—I think I have enough insight into that particular mistake to be able to avoid it without avoiding the situation entirely, now.)
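The “lottery problem” reasoning above is just an expected-value comparison. A minimal sketch, with made-up probabilities and payoffs (none of these numbers come from the comment; they are purely illustrative):

```python
def expected_value(outcomes):
    """Sum of probability * payoff over mutually exclusive outcomes."""
    return sum(p * payoff for p, payoff in outcomes)


# Hypothetical numbers: a small chance the discussion makes one of us
# more rational (large payoff) versus the certain cost of having it.
cost_of_discussion = -1.0
ev = expected_value([
    (0.10, 20.0),  # one or both of us becomes more rational
    (0.90, 0.0),   # nothing changes
]) + cost_of_discussion

print(ev)  # 1.0 -- positive, so worth having under these assumptions
```

Whether the argument goes through depends entirely on the numbers one plugs in, which is exactly where two commenters can disagree.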
What are you aiming for?
Could you elucidate what you intend with this gem?
“The Master of the Way treats people as straw dogs.”
It’s from the Tao Te Ching:
“Heaven and earth are ruthless, and treat the myriad creatures as straw dogs; the sage is ruthless, and treats the people as straw dogs.”
One might accuse this of falling afoul of the appeal to nature, but that would assume a fact not in evidence, to wit, that Annoyance’s motivations resemble that of a typical LW poster (to the extent that such a beast exists).
Voted down because your realization is flawed. Achieving anything does not require you to be rational, as evidenced by this post.
Your strategy of dealing with people is also flawed: does the Master of the Way always defect? If you were a skilled exploiter, you wouldn’t give obvious signals that you are an exploiter. Instead, you seem to be signaling “Vote me off the island!” to society, and this community. You may want to reconsider that position.
Wanting to accomplish thing X, and being able to expect it to occur as a result of actions I take, requires rationality.
Your objection is incorrect.
Your understanding of my strategy is incorrect, as evidenced by your question.