What follows may be a bit long, and maybe a little dramatic. I’m sorry if that is discourteous, but I feel the following needs saying early on. Please bear with me.
I’m a recently be-bachelored sociologist from the Netherlands, male and in my early twenties. I consider myself a jack of several trades – among them writing, drawing, cooking and some musical hobbies – but master of none. However, I do entertain the hope that these various interests and skills add up to something effective: to my becoming someone in a position to help people who need it. I intend to take action to approach this end.
I found Less Wrong through the intriguing Harry Potter fanfiction ‘The Methods of Rationality.’ The story entertains me greatly, and its more abstract themes stimulate me; I find myself wishing to enter discussions regarding these matters. Rather than bothering the author of the story, I decided to have a look here instead. Please note that I write this before having read any of the Sequences and only a few smaller articles. I intend to get to them soon, but as introductions go I feel it is better to present myself first. I hope you will forgive any offense the following section may give.
My relation to rationalism is quite strained. I am more often in the position of attacking theories concerned with rationality than of defending them. Often I find that arguments which assume people to be rational and to make informed choices are classist and uncritical of the way people are shaped by society, and vice versa. Often the desirable outcome of an action or ‘strategy’ is taken to have been the goal that the actor deliberately attempted to attain. Often this is done at the cost of more likely explanations that make fewer unfounded assumptions.
I do not at all mean that Less Wrong is implicated in this; in fact, I hope I am right to believe that quite the opposite is being attempted here. My point is that I am more used to denying people’s rationality in arguments than to invoking it as a way to explain social life.
That is not to say I deny that people can engage in rational thought. Rather, it appears to me that human beings are emotional, situationally defined social animals much more than they are rational actors. Rational thought, as I see it, is something that occurs only in certain relatively rare circumstances, and when it occurs it is always bound to people’s social, emotional and physical lives. Often it is group membership and identification, rather than an objective calculation of merit, that defines the outcome of a deliberation, when a deliberation even takes place at all.
So then why am I here? For one thing, I would like to discuss these ideas with people who are knowledgeable about them, but who are also tolerant enough of dissent that they’ll do so in a relaxed and, well, rational way.
For another, I believe that more rationality, as truly rational as we can make it, will help our species get through the ages and improve the fate of its members and of the other beings it dominates. The Methods of Rationality and what little I’ve seen of this community have led me to believe that, despite having a perspective that differs from mine, people at Less Wrong are aware of some of the ways in which people are inherently not rational: that rationality is something that needs to be promoted and created, not something that is already the dominant cause of human action.
For a third, I cannot deny that I am a person who engages in a lot of thinking. Despite our differing perspectives, I believe this community may be able to help me develop. My ‘story of origin’, if I am to present myself to you as a rationalist, involves a change in my views regarding the false or harmful style of rationalism I mentioned earlier in this post. I once struggled with the idea that rationalism itself is to blame for the perceived injustices and failings of modernity, but at some point I came to the conclusion that this is not the case. At fault is not a human rationality that will forever remain at odds with our emotions, or with those people who were not sufficiently introduced to rationality. People should be able to deliberate rationally while understanding that most of their being is disinclined to yield to abstract models and lofty humanist ideals.
At fault is not rationalism or the imperfection of our brains, but an incomplete and erroneous rationalism that is employed to serve people who have no need or appreciation for a critical eye cast upon themselves.
I think the community of Less Wrong is very right to consider human rationality an art.
Orthonormal, thank you for suggesting the Straw Vulcan talk to me. It was a fairly interesting talk; I was encouraged to see rationality defined through various examples in a way that is useful, accepts emotionality and works with it. I did not myself have a Straw Vulcan view of rationality, far from it, but I do recognise a few of its flawed features in rationalistic social theories.
However, even this speaker seemed to overstate people’s rationality. An example is given of teenagers doing dangerous things despite stating they consider the risks. The taking of the risk is attributed to flawed reasoning, miscalculation of risks and the like. From my perspective, it is much more likely that the teenagers considered the risks because they had been warned against the behaviour and realised that their peer group was about to do something their parents, guardians, etc. would disagree with; they were somewhat anxious because they were aware of a moral conflict. However, their bond with the peer group, the emotional dynamic of the situation, was not disrupted by the doubt, nor was the doubt strong enough for them to exclude themselves from the situation (to leave), and so they took whatever risk they had pondered. I wouldn’t attribute this to flawed thinking; as I see it, the thinking was fairly irrelevant to the situation, as it seems to me it is to most situations.
Consider this (and this related thread) from the genes’ point of view. It may be worth having all of your carriers do risky things, if the few that die of them are more than made up for by the ones who survive and learn something from the experience (such as how to kill big fierce animals without dying).
For a gene, there’s nothing reckless about having your carriers act recklessly at a stage in their lives when their reproductive survival depends on learning how to do dangerous things.
An example is given of teenagers doing dangerous things despite stating they consider the risks.
It seems to me that there is a systematic bias in teenage thinking, especially of the male sex; many teenagers I know or have known place a much higher weight on peers’ opinions than on parents’ opinions, and a considerably higher weight on ‘coolness’ than on ‘safeness.’ Cool actions are often either unsafe or disapproved of by the parents’ generation. I’ve started to wonder whether there might be a good evolutionary reason for teenagers to act this way. After all, being liked and accepted by peers is more important to finding a mate than being accepted by the older generation. In an ancestral environment, young males’ ability to confidently take risks (e.g. in hunting) would have been important to success, and thus a factor in attractiveness to girls. Depending on just how risky the ‘cool’ things to do are, and how tough the competition for mates, the boys who ignored their parents’ warnings and took risks with their peer group might have had more children than those who were more cautious...and thus their actions would be instrumentally rational. If this hypothesis were true, the ‘thinking’ that leads modern teenagers to do dangerous things would be an implicit battle of popularity-vs-safety, with popularity usually winning because of an innate weighting.
This is a testable, falsifiable hypothesis, if I can find some way of testing it.
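The “innate weighting” idea can be phrased as a toy decision model. This is only an illustrative sketch, not a claim about actual cognition; all the action names, payoffs and weights below are invented. The point is that an agent who accurately perceives the risk can still choose the risky option, simply because popularity carries a larger innate weight in its utility sum.

```python
# Toy model: each action is scored as w_pop * popularity + w_safe * safety.
# All numbers are invented for illustration; nothing here is empirical.

def choose(actions, w_pop, w_safe):
    """Return the action maximizing the weighted sum of its payoffs."""
    return max(actions, key=lambda a: w_pop * a[1] + w_safe * a[2])

# (name, popularity payoff, safety payoff) -- hypothetical values
actions = [
    ("risky stunt with friends", 0.9, 0.2),
    ("staying home",             0.1, 1.0),
]

# A 'teenager-like' agent weighting peers' approval far above safety...
teen_choice = choose(actions, w_pop=3.0, w_safe=1.0)
# ...versus a 'parent-like' agent with the weights reversed.
parent_choice = choose(actions, w_pop=1.0, w_safe=3.0)

print(teen_choice[0])    # risky stunt with friends
print(parent_choice[0])  # staying home
```

Note that the teen agent here has the same (accurate) safety estimates as the parent agent; only the weighting differs. That matches the observation above that teenagers do state they have considered the risks.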
If it is in the genetic interests of the children to perform actions with such-and-such a risk level relative to the reward in social recognition, why is it not in the genetic interests of the parent to promote that precise risk level in the child?
No idea, actually. The following is possible stuff that my brain has produced, i.e. pure invented bullshit.
It could be that this discrepancy used to be less of a problem, when society was more constant from one generation to the next and most ‘risky’ behaviours were obviously rewarding to both teens and adults. Based on anecdotal conversations with my parents, it seems that some things considered ‘cool’ by most of my own peer group were considered ‘just stupid’ by the people my parents hung out with when they were teenagers.
There’s also the factor that in the modern environment, as compared to the ancestral environment, most people don’t keep the same group of friends in their twenties and thirties as in their teens. The same person can be unpopular in high school, when “coolness” is more correlated with risk taking, and yet be popular in a different group later when they have a $100 000-a-year job and an enormous house with a pool in it, and nobody remembers that back in high school they had no friends. Parents who have survived this phase may consider it okay for their children to be less popular as teenagers in order to prepare for later “success” as they define it, but to a teenager actually living through it day by day, the adaptation-executers (http://lesswrong.com/lw/l0/adaptationexecuters_not_fitnessmaximizers/) in their brain will still rate their peers’ approval as far more important than safety, and adjust their pleasure and pain in different situations accordingly...since, in an ancestral environment of small groups that stayed together, impressing people at age 14 would have a much greater effect on your later success as an adult.
Thanks for the welcome! But I really can’t agree with your statement.
Irrationality, which I would for now define as all human action and awareness that isn’t rational thinking or that doesn’t follow a rationally defined course of action, is not a ‘bug’; rather, it comprises most of the features that make us up and allow our continued existence. These make up a much greater part of what we are than the faculties, moments or situations that we might call rational, and most of them deserve more respect than being called bugs. Especially from an evolutionary perspective, most of these traits and processes should definitely be considered features to which we owe our continued existence. Often they conflict with a rationality we hope to attain, but I think that at other times they are necessary prerequisites to it.
Emotions can be qualified, or ‘legitimated’ by reflexive rational thought, and we can try to purge emotions we deem to be personal hurdles, but still most of our lives take place outside the realm of rationality.
Rationality should be used to improve the rest of our lives and to improve the way humankind is organised, how it organises its sphere of influence. I think it’s a mistake to think rationality could, or should, be everything we are.
Irrationality, which I would for now define as all human action and awareness that isn’t rational thinking or that doesn’t follow a rationally defined course of action
Some of the disagreement is definitional. We define rationality as achieving your goals. Rationality should win. Any act or [ETA: mental] process that helps with achieving goals is rational.
There’s a follow-up assertion in this community that believing true things helps with achieving goals. Although not all people in history have believed that, it’s hard to deny that human thinking patterns are not well calibrated for discovering and believing true things (although they are better than anything else we’ve come across).
If ‘effective’, in the very loosest sense, is drawn into what is called rational, doesn’t that confuse the term?
I mean, to my mind, having a dietician for a parent (leading to fortuitous fortitude which assists in the achievement of certain goals) is not rational, because it is not in any way tied to the ‘ratio’. This thing that helps you achieve goals is simply convenient, or a privilege, not rational at all.
If I have a choice of parents, and a dietician is the most useful parent to have for achieving my goals, then yes, choosing a dietician for a parent is a rational choice. Of course, most of us don’t have a choice of parents.
If I believe that children of dieticians do better at achieving their goals than anyone else, then choosing to become a dietician if I’m going to have children is a rational choice. (So, more complicatedly, is choosing not to have children if I’m not a dietician.)
Of course, both of those are examples of decisions related to the state of affairs you describe.
Talking about whether a state of affairs that doesn’t involve any decisions is a rational state of affairs is confusing. People do talk this way sometimes, but I generally understand them to be saying that it is symptomatic of irrationality in whoever made the decisions that led to that state of affairs.
Talking about whether a state of affairs that doesn’t involve any decisions is a rational state of affairs is confusing. People do talk this way sometimes, but I generally understand them to be saying that it is symptomatic of irrationality in whoever made the decisions that led to that state of affairs.
What do you mean? Whose irrationality? Isn’t it more straightforward (it’s there among the ‘virtues of rationality’ no?) to just not call things ‘rational’ if they do not involve thinking?
Isn’t it more straightforward (it’s there among the ‘virtues of rationality’ no?) to just not call things ‘rational’ if they do not involve thinking?
I don’t think so, since that would be a trivial property that doesn’t indicate anything, for there is no alternative available. Decisions can be made either correctly or not, and it’s useful to be able to discern that, but the world is always what it actually is.
It varies, and I might not even know. For example, if the arrangement of signs on a particular street intersection causes unnecessary traffic congestion, I might call it an irrational arrangement. In doing so I’d be presuming that whoever chose that arrangement intended to minimize traffic congestion, or at least asserting that they ought to have intended that. But I might have no idea who chose the arrangement. (I might also be wrong, but that’s beside the point.)
But that said, and speaking very roughly: irrationality on the part of the most proximal agent(s) who was (were) capable of making a different choice.
Isn’t it more straightforward (it’s there among the ‘virtues of rationality’ no?) to just not call things ‘rational’ if they do not involve thinking?
Yes, it is.
For example, what I just described above is a form of metonymy… describing the streetsign arrangement as irrational, when what I really mean is that some unspecified agent somewhere in the causal history of the streetsign was irrational. Metonymy is common among humans, and I find it entertaining, and in many cases efficient, and those are also virtues I endorse. But it isn’t a straightforward form of communication, you’re right.
Incidentally, I suspect that most uses of ‘rationality’ on this site (as well as ‘intelligence’) could be replaced by ‘optimization’ without losing much content. Feel free to use the terms that best achieve your goals.
You use an invalid argument to argue for a correct conclusion. It doesn’t generally follow that something that can’t be improved is not worth “worrying about”, at least in the sense of being a useful piece of knowledge to pay attention to.
What do you mean? Whose irrationality? Isn’t it more straightforward (it’s there among the ‘virtues of rationality’ no?) to just not call things ‘rational’ if they do not involve thinking?
It’s a definitional dispute, mostly caused by my original failure to specify that I meant mental processes in this comment.
It’s all irrelevant to my point, which is a self-contained criticism of a particular argument you’ve made in this comment and doesn’t depend on the purpose of that argument.
(Your quoting someone else’s writing without clarification, in a reply to my comment, is unnecessarily confusing...)
I don’t think so, since that would be a trivial property that doesn’t indicate anything....
I think it would indicate that not every action is being thought over; that some things a person does which lead to the achievement of a goal may not have been planned for or acknowledged. By calling all things that are useful in this way ‘rational’, I think you’d be confusing the term, making it into a generic substitute for ‘good’ or ‘decent’.
To me, that seems harmful to an agenda of improving people’s rational thinking.
…, for there is no alternative available.
I would like to propose the alternatives of ‘beneficial’ and ‘useful’. Otherwise we could consider ‘involvement in causality’ or something like that.
I think the word rationality could use protection against too much emotional attachment to it. It should retain a specific meaning instead of becoming ‘everything that’s useful’.
I think the word rationality could use protection against too much emotional attachment to it. It should retain a specific meaning instead of becoming ‘everything that’s useful’.
I’m not in love with using the word “rationality” for what this community means by rationality. But (1) I can’t come up with a better word, (2) there’s no point in fighting to the death for a definition, and (3) thanks to the strength of various cognitive biases, it’s quite hard to figure out how to be rational and worth the effort to try.
I think various forms of “optimization” would probably fit the bill. That is, pretty much everything this site endorses about “rationalists” it would also endorse about “efficient optimizers.”
But the costs associated with such a terminology shift don’t seem remotely worth the payoff.
I mean, to my mind, having a dietician for a parent … is not rational
Assuming for the moment that having a dietician for a parent really does help one achieve one’s goals, yes it is rational, to the extent that it can be described as an act or process. That is, if you can influence what sorts of parents you have, then you should have a dietician.
Similarly, it would be rational for me to spend 20 minutes making a billion dollars, even though that’s something I can’t actually do.
A dietician parent could help you achieve all kinds of goals. Generally you’d be likely to have good health, and you’d be less likely to be obese. Healthy, well-fed people tend to be taller, and a dietician could use diet changes to reduce acne problems and whatnot. It is generally accepted that healthy, tall, good-looking people have better chances at achieving all sorts of goals. Also, dieticians are relatively wealthy, highly educated people. A child of a dietician is a child of privilege, upper middle class!
Anyway, my point is exactly that nobody can choose their parents. TimS said:
Any act or process that helps with achieving goals is rational.
I would consider parenthood a process. But having a certain set of parents instead of another has little to do with rationality, despite most parents being ‘useful’. In the same way, I would not consider it rational to like singing, even though the acquired skills of breathing and voice manipulation might help you convey a higher status or help with public speaking.
To decide to take singing lessons, if you want to become a public speaker, might be rational. But to simply enjoy singing shouldn’t be considered so, even if it does help with your public speaking. Because no rational thought is involved.
At a certain level, instrumental rationality is a method of making better choices, so applying it where there doesn’t appear to be a choice is not very coherent. Instrumental rationality doesn’t have anything to say about whether you should like singing. But if you want skill at singing, instrumental rationality suggests music lessons.
As an empirical matter, I suggest there are lots of people who would like to be able to sing better who do not take music lessons for various reasons. We can divide those reasons into two patterns: (1) “I want something else more than singing skill and I lack the time/money/etc to do both,” or (2) “Nothing material prevents me from taking singing lessons, but I do not because of anxiety/embarrassment/social norms.”
Again, I assert that a substantial number of people decide not to take singing lessons based solely on type 2 reasons. This community thinks that this pattern of behavior is sub-optimal and would like to figure out how to change it.
Here I agree almost fully!
My problem is that people aren’t fully rational beings. That some people might want to take lessons on some level but don’t can’t be attributed only to their thoughts; it also stems from their emotional environment. A person’s thoughts need to be mobilised into action for something to take place. Sometimes a person needs more basic confidence; sometimes a person needs their thoughts mirrored back at them and confirmed, as in speaking with a friend who’ll encourage them. Thinking alone isn’t enough.
I admire the community’s mission to try and change people. But by the same line of argument I use above I think focusing only on how people think and how they might think better is not going to be enough.
I think rationality should also be viewed as a social construct.
I admire the community’s mission to try and change people. But by the same line of argument I use above I think focusing only on how people think and how they might think better is not going to be enough.
One level up, consider who does the focusing, and how. The goal may be to build a bridge, to tune an emotion, or to correct the thinking in your own mind. One way of attaining that goal is by figuring out which interventions lead to which consequences, and finding a plan that wins.
That’s what we’ve been saying. Not all of a person’s thoughts are rational. And I certainly don’t assert someone can easily think themselves out of being depressed or anxious.
I think rationality should also be viewed as a social construct.
I think that the goals people set are socially constructed. Thus, the ends rationality seeks to achieve are socially constructed. Once that is established, what further insight is contained in the assertion that rationality itself is socially constructed? To put it slightly differently, I don’t think mathematics is socially constructed, but it’s pretty obvious to me that what we choose to add together is socially constructed.
That’s what we’ve been saying. Not all of a person’s thoughts are rational. And I certainly don’t assert someone can easily think themselves out of being depressed or anxious.
My point there wasn’t that people’s thoughts aren’t all rational, though I agree with that. My point was that not all human actions are tied to thoughts or intentions. There are habits, twitches, there is emotional momentum driving people to do things they’d never dream of and may regret for the rest of their lives. People often don’t think in the first place.
Once that is established, what further insight is contained in the assertion that rationality itself is socially constructed?
I think that, when one’s goal is to improve and spread rationality, an elementary question should be: when, and under which circumstances, does a person think? How does a social situation affect your thinking?
So instead of just asking ‘how do we think and how do we improve that?’, it could also be useful to ask ‘when do we think and how do we improve that?’
At some point in the future we could then inform people of the kind of social environment they might build to help them better formulate and achieve goals. Just as people with anger problems are taught to ‘stop! And count to ten’, other people might be taught to think at certain recognisable critical moments that they currently tend to walk past without realising.
Yes, at this point we’re just disputing definitions. But I think we’re in agreement on all the relevant empirical facts: if you were able to choose your parents, then it would be rational to choose good ones. Also, one is not usually able to choose one’s parents.
Thanks for your quick replies. Yes, we are agreed on those two points.
I’m going to try something that may come off as a little crude, but here goes:
Point 1: If every act or process that helps me is to be called rational, then having a dietician for a parent is rational.
Point 2: The term rational implies involvement of the ‘ratio’, of thinking.
Point 3: No rational thinking, or any thinking at all, is involved in acquiring one’s parents. Even adoptive parents tend to acquire their child, not the other way around.
Conclusion: something is wrong with saying that everything that leads to the attainment of a goal is rational.
Perhaps another term should be used for things that help achieve goals but do not involve thinking, let alone rational or logically sound thinking. This is important because thought is often overstated, both in the prevalence with which it occurs and in the causal weight that is attached to it. Thought is not omnipresent, and it is often of minor importance in accurately explaining a social phenomenon.
“Rationality/irrationality” in the sense used on LW is a property of someone’s decisions or actions (including the way one forms beliefs). The concept doesn’t apply to the helpful/unhelpful things not of that person’s devising.
I’d prefer to reject point 2. Arguments from etymology are not particularly strong. We’re using the term in a way that has been standard here since the site’s inception, and that is in accordance with the standard usage in economics, game theory, and artificial intelligence.
You may be right that the argument comes more from a concern with how a broader public relates to the term ‘rational’ than with how it is used in the mentioned disciplines.
On the other hand, I feel that the broader public is relevant here. LessWrong isn’t that small a community, and I suspect people have quite some emotional attachment to this place, as they use it as a guide to alter their thinking.
By calling all things that are useful in this way ‘rational’, I think you’d be confusing the term. It could lead to rationality turning into a generic substitute for ‘good’ or ‘decent’. To me, that seems harmful to an agenda of improving people’s rational thinking.
Summary: “Epistemic rationality” is having beliefs that correspond to reality. “Instrumental rationality” is being able to actualize your values, or achieve your goals.
Irrationality, then, is having beliefs that do not correspond to reality, or being unable to achieve your goals. And to the extent that humans are hard-wired to be likely irrational, that certainly is a bug that should be fixed.
By that definition you might say that, but it still leaves the problem I tend to address: that rationality (and, by the supplied definition, also irrationality) is ascribed to people and actions where thinking quite likely did not take place or was not the deciding factor in what action came about in the end.
It falsely divides human experience into ‘rational’ and ‘erroneously rational/irrational’. Thinking is not all that goes on among humans.
Often the desirable outcome of an action or ‘strategy’ is taken to have been the goal that the actor deliberately attempted to attain.
Diamond in a box:
Suppose you’re faced with a choice between two boxes, A and B. One and only one of the boxes contains a diamond. You guess that the box which contains the diamond is box A. It turns out that the diamond is in box B. Your decision will be to take box A. I now apply the term volition to describe the sense in which you may be said to want box B, even though your guess leads you to pick box A.
Let’s say that Fred wants a diamond, and Fred asks me to give him box A. I know that Fred wants a diamond, and I know that the diamond is in box B, and I want to be helpful. I could advise Fred to ask for box B instead; open up the boxes and let Fred look inside; hand box B to Fred; destroy box A with a flamethrower; quietly take the diamond out of box B and put it into box A; or let Fred make his own mistakes, to teach Fred care in choosing future boxes.
But I do not simply say: “Well, Fred chose box A, and he got box A, so I fail to see why there is a problem.” There are several ways of stating my perceived problem:
Fred was disappointed on opening box A, and would have been happier on opening box B.
It is possible to predict that if Fred chooses box A, Fred will look back and wish he had chosen box B instead; while if Fred chooses box B, Fred will be satisfied with his choice.
Fred wanted “the box containing the diamond”, not “box A”, and chose box A only because he guessed that box A contained the diamond.
If Fred had known the correct answer to the question of simple fact, “Which box contains the diamond?”, Fred would have chosen box B.
Hence my intuitive sense that giving Fred box A, as he literally requested, is not actually helping Fred.
If you find a genie bottle that gives you three wishes, it’s probably a good idea to seal the genie bottle in a locked safety box under your bed, unless the genie pays attention to your volition, not just your decision.
Thanks for your reply! I’m not quite sure how useful that second quote you sent is. But if I ever do find a genie, I’ll be sure to ask it whether it pays attention to my volition, or even to make it my first wish that the genie pay attention to my volition when fulfilling my other wishes ;)
My point in the section you quoted at the end of your post was not that there is a standard of rationality that people are deviating from. Closer to my view is that a standard of rationality is created, which deviates from people.
Your critique of “rationalism” as you currently understand it is, I think, valid. The goal of LessWrong, as I understand it (though I’m no authority, I just read here sometimes), is to help people become more rational themselves. As thomblake has already pointed out, we tend to believe with you in the general irrationality of humans. We also believe that this is a sort of problem to be fixed.
However, I also think you’re being unfair to people who use the Rationality Assumption in economics, biology or elsewhere. You say that:
Often the desirable outcome of an action or ‘strategy’ is taken to have been the goal that the actor deliberately attempted to attain.
That’s not an assumption that the theory requires. The Rationality Assumption only requires us to interpret the actions of an agent in terms of how well it appears to help it fulfill its goals. It needn’t be conscious of such “goals”. This type of goal is usually referred to as a revealed preference. Robin Hanson at Overcoming Bias, a blog that’s quite related to LessWrong, also loves pointing out and discussing the particular problem that you’ve raised. He usually refers to it as the “Homo hypocritus hypothesis”. You might enjoy reading some related entries on his blog. The gist of the distinction I’m trying to point to is actually pretty well-summarized by Joe Biden:
My dad used to have an expression: “Don’t tell me what you value. Show me your budget, and I’ll tell you what you value.”
It’s my own humble opinion that economists occasionally make the naive jump from talking about revealed preferences to talking about “actual” preferences (whatever those may be). In such cases, I agree with you that a disposition toward “rationalism” could be dangerous. But again, that’s not the accepted meaning of the word here. I also think it might be just as naive to take peoples’ stated preferences (whether stated to themselves or others) to be their “actual” preferences.
There have been attempts on LW to model the apparent conflict between the stated preferences and revealed preferences of agents, my favourite of which was “Conflicts Between Mental Subagents: Expanding Wei Dai’s Master-Slave Model”. If I were to taboo the word “rationality” in explaining the goal of this site, I’d say the goal is to help people bring these two mental sub-agents into agreement; to avoid being a Homo hypocritus; to help one better synchronize their stated preferences with their revealed preferences.
Clearly, the meanings of the word “rationality” that you have, and that this community has, are related. But they’re not the same. My goal in linking to the several articles above is to help you understand what is meant by that word here. Good luck, and I hope you find the discourse you’re looking for!
I’m replying to you now before reading your suggestions; I’ve not had the time so far. They’re on my list, but for now I’d like to address your reply either way.
The Joe Biden quote is very effective, and I agree with the general sentiment. But not with how it relates to questions of rationality. I tend to use rationality to mean any thinking at all. Illogical thinking may be bad rationality, but it is still rationality.
My objection to assuming rationality isn’t that you shouldn’t look at how these or those actions may serve some sort of function. My criticism is that when you do observe that a certain function is served, you shouldn’t impose rationality upon the people involved. In my experience, as a bachelor of sociology and as a human being with a habit of self-reflection, people don’t act upon their thoughts, but much more upon their knowledge of how to act in certain situations, on their social ‘programming’ and emotions, on their various loyalties.
We tend to define mankind as a being capable of thinking. I think we are wrong in this in the same way we would be wrong to define a scorpion as a being capable of making a venomous sting. The statement isn’t false, but most of the time the scorpion isn’t stinging anything. It’s just walking, sitting, eating, grabbing something with its claws. The stinging isn’t everything that’s going on; it’s not nearly even most of what’s going on.
Thanks again for the reply. I’ll be looking around, and I’ll try to add something where I think it is fruitful.
I’m having trouble figuring out whether we agree or disagree. So, you tell me this:
My criticism is that when you do observe that a certain function is served, you shouldn’t impose rationality upon the people involved.
and I agree that’s an excellent assumption for the goal of doing good sociology (and several other explanatory pursuits). I think (hope!) it will become clearer to you as you read the things I linked you to that this attitude is both (1) a very good one to take in many, many instances, and (2) not in conflict with the goal of becoming more rational.
I snuck a key word by in that last sentence: assumption. When thinking about humans and societies, it’s become a very common and useful assumption to say that they don’t deliberate or make rational decisions; they’re products of their environments and they interact with those environments. At LessWrong, we usually call this the “outside view” because we’re viewing ourselves or others as though from the outside.
Note that while this is a good way to look at the world, we also have real, first-hand experiences. I don’t live my personal life as a bucket of atoms careening into other atoms, nor as an organism interacting with its environment; I live my day-to-day life as a person making decisions. These are three different non-wrong ways of conceptualizing myself. The last one, where I’m a person making decisions, is where the notion of rationality that we’re interested in comes in; we sometimes call this the “inside view”. At those other levels of explanation, the concept of rationality truly doesn’t make sense.
I also can’t resist adding that you point out very rightly that most people don’t act on their thoughts and pursue their goals, opting instead to execute their social-biological programming. Many people here are genuinely interested in getting these two realms (goals and actions) to synch up and are doing some amazing theorizing as to how they can accomplish this goal.
Hello Less Wrong community, I am Kouran.
That is not to say I deny that people can engage in rational thought. Rather, it appears to me that human beings are emotional, situationally defined social animals, much more than they are rational actors. Rational thought, as I see it, is something that occurs in certain relatively rare circumstances. And when it occurs, it is always bound to people’s social, emotional, physical lives. Often it is group membership and identification, rather than an objective calculation of merit, that defines the outcome of a deliberation, when a deliberation even takes place at all.
So then why am I here? For one thing, I would like to discuss these ideas with people who are knowledgeable about them, but who are also tolerant enough of dissidence that they’ll do so in a relaxed and, well, rational way. For another, I believe that more rationality, as truly rational as we can make it, will help our species get through the ages and improve the fate of its members and the other beings it dominates. The Methods of Rationality and what little I’ve seen of this community have led me to believe that, despite having a perspective that differs from mine, people at Less Wrong are aware of some of the ways in which people are inherently not rational; that rationality is something that needs to be promoted and created, not something that is already the dominant cause of human action. For a third, I cannot deny that I am a person who engages in a lot of thinking. Despite our differing perspectives, I believe this community may be able to help me develop. My ‘story of origin’, if I am to present myself to you as a rationalist, involves a change in my views regarding the false or harmful style of rationalism I mentioned earlier in this post. I once struggled with the idea that rationalism itself is to blame for perceived injustice and the failings of modernity. But at some point I came to the conclusion that this is not the case. At fault is not a human rationality that will forever remain at odds with our emotions, and with those people who were not sufficiently introduced to rationality. People should be able to deliberate rationally while understanding that most of their being is disinclined to yield to abstract models and lofty humanist ideals. At fault is not rationalism or the imperfection of our brains, but incomplete and erroneous rationalism that is employed to serve people who have no need or appreciation for a critical eye cast upon themselves.
I think the community of Less Wrong is very right to consider human rationality an art.
I thank you for your patience,
– Kouran.
It sounds like the Straw Vulcan talk might be relevant to some of your thoughts on rationality and emotion...
Orthonormal, thank you for suggesting the Straw Vulcan talk to me. It was a fairly interesting talk; I was encouraged to see rationality defined, through various examples, in a way that is useful, accepts emotionality and works with it. I did not myself have a Straw Vulcan view of rationality, far from it, but I do recognise a few of its flawed features in rationalistic social theories.
However, even this speaker seemed to overstate people’s rationality. An example is given of teenagers doing dangerous things despite stating that they consider the risks. The taking of the risk is attributed to flawed reasoning, miscalculation of risks and the like. From my perspective, it is much more likely that the teenagers considered the risks because they were warned against the behaviour and realised that their peer group was about to do something their parents, guardians, etc. would disagree with; they were somewhat anxious because they were aware of a moral conflict. However, their bond with the peer group, the emotional dynamic of the situation, was not disrupted by the doubt, nor was the doubt strong enough for them to exclude themselves from the situation (to leave), and so they took whatever risk they had pondered. I wouldn’t attribute this to flawed thinking; as I see it, the thinking was fairly irrelevant to the situation, as it seems to me it is to most situations.
Consider this (and this related thread) from the genes’ point of view. It may be worth having all of your carriers do risky things, if the few that die of them are more than made up for by the ones who survive and learn something from the experience (such as how to kill big fierce animals without dying).
For a gene, there’s nothing reckless about having your carriers act recklessly at a stage in their lives when their reproductive survival depends on learning how to do dangerous things.
It seems to me that there is a systematic bias in teenage thinking, especially of the male sex; many teenagers I know/have known in the past place a much higher weight on peers’ opinions than on parents’ opinions, and a considerably higher weight on ‘coolness’ than on ‘safeness.’ Cool actions are often either unsafe or disapproved of by the parents’ generation. I’ve started to wonder whether there might be a good evolutionary reason for teenagers to act this way. After all, being liked and accepted by peers is more important to finding a mate than being accepted by the older generation. In an ancestral environment, young males’ ability to confidently take risks (i.e. in hunting) would have been important to success, and thus a factor in attractiveness to girls. Depending on just how risky the ‘cool’ things to do are, and how tough the competition for mates, the boys who ignored their parents’ warnings and took risks with their peer group might have had more children than those who were more cautious...and thus their actions would be instrumentally rational. If this hypothesis were true, the ‘thinking’ that leads modern teenagers to do dangerous things would be an implicit battle of popularity-vs-safety, with popularity usually winning because of an innate weighting.
This is a testable, falsifiable hypothesis, if I can find some way of testing it.
First step is to see if that’s consistent across cultures. Any anthropologists?
If it is in the genetic interests of the children to perform actions with such-and-such a risk level relative to the reward in social recognition, why is it not in the genetic interests of the parent to promote that precise risk level in the child?
No idea, actually. The following is possible stuff that my brain has produced, i.e. pure invented bullshit.
It could be that this discrepancy used to be less of a problem, when society was more constant from one generation to the next and most ‘risky’ behaviours were obviously rewarding to both teens and adults. Based on anecdotal conversations with my parents, it seems that some things considered ‘cool’ by most of my own peer group were considered ‘just stupid’ by the people my parents hung out with when they were teenagers.
There’s also the factor that in the modern environment, as compared to the ancestral environment, most people don’t keep the same group of friends in their twenties and thirties as in their teens. The same person can be unpopular in high school, when “coolness” is more correlated with risk-taking, and yet be popular in a different group later, when they have a $100,000-a-year job and an enormous house with a pool, and nobody remembers that back in high school they had no friends. Parents who have survived this phase may consider it okay for their children to be less popular as teenagers in order to prepare for later “success” as they define it, but to a teenager actually living through it day by day, the adaptation-executers (http://lesswrong.com/lw/l0/adaptationexecuters_not_fitnessmaximizers/) in their brain will still rate peers’ approval as far more important than safety, and adjust their pleasure and pain in different situations accordingly...since, in an ancestral environment of small groups that stayed together, impressing people at age 14 would have had a much greater effect on your later success as an adult.
You may find the article in http://lesswrong.com/lw/jx/we_change_our_minds_less_often_than_we_think/5lkb well worth your time.
Neat. Thanks.
That’s just about right. Humans are massively irrational; but we tend to regard that as a bug and work to fix it in ourselves.
Hello Thomblake,
Thanks for the welcome! But I really can’t agree with your statement.
Irrationality, which I would for now define as all human action and awareness that isn’t rational thinking or that doesn’t follow a rationally defined course of action, is not a ‘bug’; rather, it’s most of the features that make us up and allow our continued existence. These make up a much greater part of what we are than those faculties, moments or situations that we might call rational, and most of them deserve more respect than being called bugs. Especially from an evolutionary perspective, most of these traits and processes should definitely be considered features to which we owe our continued existence. Often these things conflict with a rationality we hope to attain, but I think that at other times they are necessary prerequisites to it. Emotions can be qualified, or ‘legitimated’, by reflexive rational thought, and we can try to purge emotions we deem to be personal hurdles, but still most of our lives take place outside the realm of rationality. Rationality should be used to improve the rest of our lives and to improve the way humankind is organised, how it organises its sphere of influence. I think it’s a mistake to think rationality could, or should, be everything we are.
Some of the disagreement is definitional. We define rationality as achieving your goals. Rationality should win. Any act or [ETA: mental] process that helps with achieving goals is rational.
There’s a followup assertion in this community that believing true things helps with achieving goals. Although not all people in history have believed that, it’s hard to deny that human thinking patterns are not well calibrated for discovering and believing true things. (Although they are better than anything else we’ve come across.)
If ‘effective’, in the very loosest sense, is drawn into what is called rational, doesn’t that confuse the term?
I mean, to my mind, having a dietician for a parent (leading to fortuitous fortitude which assists in the achievement of certain goals) is not rational, because it is not in any way tied to the ‘ratio’. This thing that helps you achieve goals is simply convenient, or a privilege, not rational at all.
If I have a choice of parents, and a dietician is the most useful parent to have for achieving my goals, then yes, choosing a dietician for a parent is a rational choice. Of course, most of us don’t have a choice of parents.
If I believe that children of dieticians do better at achieving their goals than anyone else, then choosing to become a dietician if I’m going to have children is a rational choice. (So, more complicatedly, is choosing not to have children if I’m not a dietician.)
Of course, both of those are examples of decisions related to the state of affairs you describe.
Talking about whether a state of affairs that doesn’t involve any decisions is a rational state of affairs is confusing. People do talk this way sometimes, but I generally understand them to be saying that it is symptomatic of irrationality in whoever made the decisions that led to that state of affairs.
What do you mean? Whose irrationality? Isn’t it more straightforward (it’s there among the ‘virtues of rationality’ no?) to just not call things ‘rational’ if they do not involve thinking?
Incidentally, you’ve caused me to change my mind.
http://lesswrong.com/r/discussion/lw/96n/meta_rational_vs_optimized/
Wow… I’m surprised and glad. Thanks for being open to criticism.
I don’t think so, since that would be a trivial property that doesn’t indicate anything, for there is no alternative available. Decisions can be made either correctly or not, and it’s useful to be able to discern that, but the world is always what it actually is.
It varies, and I might not even know. For example, if the arrangement of signs on a particular street intersection causes unnecessary traffic congestion, I might call it an irrational arrangement. In doing so I’d be presuming that whoever chose that arrangement intended to minimize traffic congestion, or at least asserting that they ought to have intended that. But I might have no idea who chose the arrangement. (I might also be wrong, but that’s beside the point.)
But that said, and speaking very roughly: irrationality on the part of the most proximal agent(s) who was (were) capable of making a different choice.
Yes, it is.
For example, what I just described above is a form of metonymy… describing the streetsign arrangement as irrational, when what I really mean is that some unspecified agent somewhere in the causal history of the streetsign was irrational. Metonymy is a common one among humans, and I find it entertaining, and in many cases efficient, and those are also virtues I endorse. But it isn’t a straightforward form of communication, you’re right.
Incidentally, I suspect that most uses of ‘rationality’ on this site (as well as ‘intelligence’) could be replaced by ‘optimization’ without losing much content. Feel free to use the terms that best achieve your goals.
If there is no alternative, there doesn’t seem to be a possibility of improvement. If improvement is impossible, what exactly are we worrying about?
It’s useful to know some things that are unchangeable.
Sure, but asking what the rational decision is when there is literally no decision to make is not a well-formed question.
You use an invalid argument to argue for a correct conclusion. It doesn’t generally follow that something that can’t be improved is not worth “worrying about”, at least in the sense of being a useful piece of knowledge to pay attention to.
It’s a definitional dispute, mostly caused by my original failure to specify that I meant mental processes in this comment.
It’s all irrelevant to my point, which is a self-contained criticism of a particular argument you’ve made in this comment and doesn’t depend on the purpose of that argument.
(Your quoting someone else’s writing without clarification, in a reply to my comment, is unnecessarily confusing...)
I think it would indicate that not every action is being thought over. That some things a person does which lead to the achievement of a goal may not have been planned for or acknowledged. By calling all things that are useful in this way ‘rational’, I think you’d be confusing the term, making it into a generic substitute for ‘good’ or ‘decent’. To me, that seems harmful to an agenda of improving people’s rational thinking.
…for there is no alternative available.
I would like to propose the alternatives of ‘beneficial’ and ‘useful’. Otherwise we could consider ‘involvement in causality’ or something like that.
I think the word rationality could use protection against too much emotional attachment to it. It should retain a specific meaning instead of becoming ‘everything that’s useful’.
I’m not in love with using the word “rationality” for what this community means by rationality. But (1) I can’t come up with a better word, (2) there’s no point in fighting to the death for a definition, and (3) thanks to the strength of various cognitive biases, it’s quite hard to figure out how to be rational and worth the effort to try.
I think various forms of “optimization” would probably fit the bill. That is, pretty much everything this site endorses about “rationalists” it would also endorse about “efficient optimizers.”
But the costs associated with such a terminology shift don’t seem remotely worth the payoff.
Assuming for the moment that having a dietitian for a parent really does help one achieve one’s goals, yes it is rational, to the extent that it can be described as an act or process. That is, if you can influence what sorts of parents you have, then you should have a dietitian.
Similarly, it would be rational for me to spend 20 minutes making a billion dollars, even though that’s something I can’t actually do.
Having a dietician for a parent could help you achieve all kinds of goals. Generally you’d be likely to have good health, and you’re less likely to be obese. Healthy, well-fed people tend to be taller, and a dietician could use diet changes to reduce acne problems and whatnot. It is generally accepted that healthy, tall, good-looking people have better chances at achieving all sorts of goals. Also, dieticians are relatively wealthy, highly-educated people. A child of a dietician is a child of privilege, upper middle class!
Anyway, my point is exactly that nobody can choose their parents.
TimS said:
I would consider parenthood a process. But having a certain set of parents instead of another has little to do with rationality, despite most parents being ‘useful’. In the same way, I would not consider it rational to like singing, even though the acquired skills of breathing and voice manipulation might help you convey a higher status or help with public speaking. Deciding to take singing lessons, if you want to become a public speaker, might be rational. But simply enjoying singing shouldn’t be considered so, even if it does help with your public speaking, because no rational thought is involved.
Ha, you caught me using loose language.
At a certain level, instrumental rationality is a method of making better choices, so applying it where there doesn’t appear to be a choice is not very coherent. Instrumental rationality doesn’t have anything to say about whether you should like singing. But if want skill at singing, instrumental rationality suggests music lessons.
As an empirical matter, I suggest there are lots of people who would like to be able to sing better who do not take music lessons for various reasons. We can divide those reasons into two patterns: (1) “I want something else more than singing skill and I lack the time/money/etc to do both,” or (2) “Nothing material prevents me from taking singing lessons, but I do not because of anxiety/embarrassment/social norms.”
Again, I assert that a substantial number of people decide not to take singing lessons based solely on type 2 reasons. This community thinks that this pattern of behavior is sub-optimal and would like to figure out how to change it.
Here I agree almost fully! My problem is that people aren’t fully rational beings. That some people might want to take lessons on some level but don’t can’t be attributed only to their thoughts; it must also be attributed to their emotional environment. A person’s thoughts need to be mobilised into action for something to take place. Sometimes a person needs more basic confidence; sometimes a person needs their thoughts mirrored back at them and confirmed, as in speaking with a friend who’ll encourage them. Thinking alone isn’t enough.
I admire the community’s mission to try and change people. But by the same line of argument I use above I think focusing only on how people think and how they might think better is not going to be enough. I think rationality should also be viewed as a social construct.
One level up, consider who does the focusing, and how. The goal may be to build a bridge, tune an emotion, or correct the thinking in your own mind. One way of attaining that goal is figuring out what interventions lead to what consequences, and finding a plan that wins.
That’s what we’ve been saying. Not all of a person’s thoughts are rational. And I certainly don’t assert someone can easily think themselves out of being depressed or anxious.
I think that the goals people set are socially constructed. Thus, the ends rationality seeks to achieve are socially constructed. Once that is established, what further insight is contained in the assertion that rationality itself is socially constructed?
To put it slightly differently, I don’t think mathematics is socially constructed, but it’s pretty obvious to me that what we choose to add together is socially constructed.
My point there wasn’t that people’s thoughts aren’t all rational, though I agree with that. My point was that not all human actions are tied to thoughts or intentions. There are habits, twitches, there is emotional momentum driving people to do things they’d never dream of and may regret for the rest of their lives. People often don’t think in the first place.
I think that, when one’s goal is to improve and spread rationality, an elementary question should be: when, and under which circumstances, does a person think? How does a social situation affect your thinking? So instead of just asking ‘how do we think, and how do we improve that?’, it could also be useful to ask ‘when do we think, and how do we improve that?’
At some point in the future we could then inform people of the kind of social environment they might build to help them better formulate and achieve goals. Like people with anger problems being taught to ‘stop! And count to ten’ other people might be taught to think at certain recognisable critical moments they currently tend to walk past without realising.
Yes, at this point we’re just disputing definitions. But I think we’re in agreement with all the relevant empirical facts; if you were able to chose your parents, then it would be rational to choose good ones. Also, one is not usually able to choose one’s parents.
Thanks for your quick replies. Yes we are agreed in those two points. I’m going to try something that may come off as a little crude, but here goes:
Point 1: If every act or process that helps me is to be called rational, then having a dietician for a parent is rational. Point 2: The term rational implies involvement of the ‘ratio’, of thinking. Point 3: No rational thinking, or any thinking at all, is involved in acquiring one’s parents. Even adoptive parents tend to acquire their child, not the other way around. Conclusion: Something is wrong with saying that everything that leads to the attainment of a goal is rational.
Perhaps another term should be used for things that help achieve goals but that do not involve thinking, let alone rational or logically sound thinking. This is important because the prevalence of thought is often overstated, as is the causal weight attached to it. Thought is not omnipresent, and it is often of minor importance in accurately explaining a social phenomenon.
“Rationality/irrationality” in the sense used on LW is a property of someone’s decisions or actions (including the way one forms beliefs). The concept doesn’t apply to the helpful/unhelpful things not of that person’s devising.
I’d prefer to reject point 2. Arguments from etymology are not particularly strong. We’re using the term in a way that has been standard here since the site’s inception, and that is in accordance with the standard usage in economics, game theory, and artificial intelligence.
You may be right that the argument comes more from a concern with how a broader public relates to the term ‘rational’ than with how it is used in the mentioned disciplines.
On the other hand, I feel that the broader public is relevant here. LessWrong isn’t that small a community, and I suspect people have quite some emotional attachment to this place, as they use it as a guide to alter their thinking. By calling all things that are useful in this way ‘rational’, I think you’d be confusing the term. It could lead to rationality turning into a generic substitute for ‘good’ or ‘decent’. To me, that seems harmful to an agenda of improving people’s rational thinking.
If I have a choice of whether to enjoy singing or not, and I’ve chosen to take singing lessons, I ought to choose to enjoy singing.
See What Do We Mean By “Rationality”.
Summary: “Epistemic rationality” is having beliefs that correspond to reality. “Instrumental rationality” is being able to actualize your values, or achieve your goals.
Irrationality, then, is having beliefs that do not correspond to reality, or being unable to achieve your goals. And to the extent that humans are hard-wired to be likely irrational, that certainly is a bug that should be fixed.
By that definition you might say that, but that still leaves the problem I tend to address: that rationality (and, by the supplied definition, also irrationality) is ascribed to people and actions where thinking quite likely did not take place, or was not the deciding factor in what action came about in the end. It falsely divides human experience into ‘rational’ and ‘erroneously rational/irrational’. Thinking is not all that goes on among humans.
Uncontroversial, as far as that goes.
Bias
Diamond in a box:
--CEV
You imply that there is a standard of rationality people are deviating from. Yes?
Lessdazed,
Thanks for your reply! I’m not quite sure how useful that second quote you sent is. But if I ever do find a genie, I’ll be sure to ask it whether it pays attention to my volition, or even to make it my first wish that the genie pays attention to my volition when fulfilling my other wishes ;)
My point in the section you quoted at the end of your post was not that there is a standard of rationality that people are deviating from. Closer to my views is that a standard of rationality is created, which deviates from people.
Hi Kouran, and welcome.
Your critique of “rationalism” as you currently understand it is, I think, valid. The goal of LessWrong, as I understand it (though I’m no authority, I just read here sometimes), is to help people become more rational themselves. As thomblake has already pointed out, we tend to believe with you in the general irrationality of humans. We also believe that this is a sort of problem to be fixed.
However, I also think you’re being unfair to people who use the Rationality Assumption in economics, biology or elsewhere. You say that:
That’s not an assumption that the theory requires. The Rationality Assumption only requires us to interpret the actions of an agent in terms of how well it appears to help it fulfill its goals. It needn’t be conscious of such “goals”. This type of goal is usually referred to as a revealed preference. Robin Hanson at Overcoming Bias, a blog that’s quite related to LessWrong, also loves pointing out and discussing the particular problem that you’ve raised. He usually refers to it as the “Homo hypocritus hypothesis”. You might enjoy reading some related entries on his blog. The gist of the distinction I’m trying to point to is actually pretty well-summarized by Joe Biden:
It’s my own humble opinion that economists occasionally make the naive jump from talking about revealed preferences to talking about “actual” preferences (whatever those may be). In such cases, I agree with you that a disposition toward “rationalism” could be dangerous. But again, that’s not the accepted meaning of the word here. I also think it might be just as naive to take people’s stated preferences (whether stated to themselves or others) to be their “actual” preferences.
There have been attempts on LW to model the apparent conflict between the stated preferences and revealed preferences of agents, my favourite of which was “Conflicts Between Mental Subagents: Expanding Wei Dai’s Master-Slave Model”. If I were to taboo the word “rationality” in explaining the goal of this site, I’d say the goal is to help people bring these two mental sub-agents into agreement; to avoid being a Homo hypocritus; to help one better synchronize their stated preferences with their revealed preferences.
Clearly, the meanings of the word “rationality” that you have, and that this community has, are related. But they’re not the same. My goal in linking to the several articles in the above text is to help you understand what is meant by that word here. Good luck, and I hope you find the discourse you’re looking for!
Fburnaby, thank you for the long reply.
I’m replying to you now before reading your suggestions; I’ve not had the time so far. They’re on my list, but for now I’d like to address your reply either way.
The Joe Biden quote is very effective, and I agree with the general sentiment, but not with how that relates to questions of rationality. I tend to use ‘rationality’ to mean any thinking at all. Illogical thinking may be bad rationality, but it is still rationality. My objection to assuming rationality isn’t that you shouldn’t look at how these or those actions may serve some sort of function. My criticism is that, when you do observe that a certain function is served, you shouldn’t impose rationality upon the people involved. In my experience, as a bachelor of sociology and as a human being with a habit of self-reflection, people don’t act upon their thoughts so much as upon their knowledge of how to act in certain situations, upon their social ‘programming’ and emotions, upon their various loyalties.
We tend to define mankind as a being capable of thinking. I think we are wrong in this in the same way we would be wrong to define a scorpion as a being capable of making a venomous sting. The statement isn’t false, but most of the time the scorpion isn’t stinging anything. It’s just walking, sitting, eating, grabbing something with its claws. The stinging isn’t everything that’s going on; it’s not even nearly most of what’s going on.
Thanks again for the reply. I’ll be looking around, and I’ll try to add something where I think it is fruitful.
-Kouran
Hey Kouran,
I’m having trouble figuring out whether we agree or disagree. So, you tell me this:
and I agree that’s an excellent assumption for the goal of doing good sociology (and several other explanatory pursuits). I think (hope!) it will become clearer to you as your read the things I linked you to that this attitude is both (1) a very good one to take in many many instances, and (2) not in conflict with the goal of becoming more rational.
I snuck a key word by in that last sentence: assumption. When thinking about humans and societies, it’s become a very common and useful assumption to say that they don’t deliberate or make rational decisions; they’re products of their environments and they interact with those environments. At LessWrong, we usually call this the “outside view” because we’re viewing ourselves or others as though from the outside.
Note that while this is a good way to look at the world, we also have real, first-hand experiences. I don’t live my personal life as a bucket of atoms careening into other atoms, nor as an organism interacting with its environment; I live my day-to-day life as a person making decisions. These are three different non-wrong ways of conceptualizing myself. The last one, where I’m a person making decisions, is where the use of this notion of rationality that we’re interested in comes along and we sometimes call this the “inside view”. At those other levels of explanation, the concept of rationality truly doesn’t make sense.
I also can’t resist adding that you point out very rightly that most people don’t act on their thoughts and pursue their goals, opting instead to execute their social-biological programming. Many people here are genuinely interested in getting these two realms (goals and actions) to synch up and are doing some amazing theorizing as to how they can accomplish this goal.