Reasons for being rational
When I found Less Wrong and started reading, when I made my first post, when I went to my first meetup….
It was a little like coming home.
And mostly it wasn’t. Mostly I felt a lot more out of place than I have in, say, church youth groups. It was hard to pinpoint the difference, but as far as I can tell, it comes down to this: a significant proportion of the LW posters are contrarians in some sense. And I’m a conformist, even if I would prefer not to be, even if that’s a part of my personality that I’m working hard to change. I’m much more comfortable as a follower than as a leader. I like pre-existing tradition, the reassuring structure of it. I like situations that allow me to be helpful and generous and hardworking, so that I can feel like a good person. Emotionally, I don’t like disagreeing with others, and the last thing I have to work hard to do is tolerate others’ intolerance.
And, as evidenced by the fact that I attend church youth groups, I don’t have the strong allergy that many in the community seem to have against religion. This is possibly because I have easily triggered mystical experiences when, for example, I sing in a group, especially when we are singing traditional ‘sacred’ music. In a previous century, I would probably have been an extremely happy nun.
Someone once expressed surprise that I was able to become a rationalist in spite of this neurological quirk. I’ve asked myself this a few times. My answer is that I don’t think I deserve the credit. If anything, I ended up on the circuitous path towards reading Less Wrong because I love science, and I love science because, as a child, reading about something as beautiful as general relativity gave me the same kind of euphoric experience as singing about Jesus does now. My inability to actually believe in any religion comes from a time before I was making my own decisions about that kind of thing.
I was raised by atheist parents, not anti-theist so much as indifferent. We attended a Unitarian Universalist church for a while, which meant I was learning about Jesus and Buddha and Native American spirituality all mixed together, all the memes watered down to the point that they lost their power. I was fourteen when I really encountered Christianity, still in the mild form of the Anglican Church of Canada. I was eighteen when I first encountered the ‘Jesus myth’ in its full, meme-honed-to-maximum-virulence form, and the story arc captivated me for a full six months. I still cry during every Good Friday service. But I must have missed some critical threshold, because I can’t actually believe in that story. I’m not even sure what it would mean to believe in a story. What does that feel like?
I was raised by scientists. My father did his PhD in physical chemistry, my mother in plant biology. I grew up reading SF and pop science, and occasionally my mother’s or my father’s old textbooks. I remember my mother’s awe at the beautiful electron-microscope images in my high school textbooks, and how she sat patiently while I fumblingly talked about quantum mechanics, having read the entire tiny physics section of our high school library. My parents responded to my interest in science with pride and enthusiasm, and to my interest in religion with indulgent condescension. That was my structure, my tradition. And yes, that has everything to do with why I call myself an atheist. I wouldn’t have had the willpower to disagree with my parents in the long run.
Ultimately, I have an awfully long way to go if I want to be rational, as opposed to being someone who’s just interested in reading about math and science. Way too much of my motivation for ‘having true beliefs’ breaks down to ‘maybe then they’ll like me.’ This is one of the annoying things about my personality, just as annoying as my sensitivity to religious memes and my inability to say no to anyone. Luckily, my personality also comes with the ability to get along with just about anyone, and in a forum of mature adults, no one is going to make fun of me because I’m wearing tie-dye overalls. No one here has yet made fun of me for my interest in religion, even though I expect most people disagree with it.
And there’s one last conclusion I can draw, albeit from a sample size of one. Not everyone can be a contrarian rationalist. Not everyone can rebel against their parents’ religion. Not everyone can disagree with their friends and family and not feel guilty. But everyone can be rational if they are raised that way.
Way too much of everyone’s motivation for anything breaks down to “maybe then group X will have/stop having attitude Y towards me”. And the vast majority of the time, we’re completely unaware of it.
So actually, you’ve got a leg up over all the people who are doing the same thing, but have a different X and Y than you and are unaware of it. (AFAICT, people who orient on “true beliefs” tend to be more about respect/status rather than affiliation, but apart from motivating slightly different behaviors, it might as well be the same thing. Affiliation-based motivation often results in “nicer” behaviors though, so that’s actually a plus for you.)
I really think that you should be less dismissive of the possibility that some people really are trying to form their beliefs in order to act on the world, rather than on other people.
In which case, their desire to act on the world typically stems from a need to influence other people by way of their world-acting-on.
Or are you talking about people whose autism is so severe that they’ve never formed a bond with any other human being, including their parents?
If that’s not who you’re talking about, you should probably reconsider.
Barring such extreme cases, people generally learned to do whatever they do—including any desire to “act on the world”—as a consequence of their interactions (or lack thereof, in some cases) with other people.
As children, we tend to choose our values based on our attempts to get love and/or attention from our parents, and those early decisions tend to shape later ones. I got rewarded with attention for being the “smart one” in my family, which led me to value learning and knowledge.
For most of my life, I assumed that this valuing was independent of any such early circumstances. Instead, I merely felt like it was the right thing to value knowledge, to seek the truth, etc., and that people who didn’t value these things were not as worthy of respect or attention.
And all the while, I never realized that this was just a lens I viewed the world through… a lens that I put on to better manipulate my parents when I was about 2 or 3 years old.
Now, you might say, “hey, what difference does it make how you get your values, as long as you ended up with good ones?” Unfortunately, it makes quite a bit of difference.
For example, my particular way of learning that value led to me:
being dismissive and impatient with people
assuming I was (and ought to be) the smartest person in any given room (and thus becoming upset or even depressed when I was not), and
valuing the “knowing” of things and knowing the “right” (respectable, reward-worthy) ways of doing things, in preference to actually doing things...
And those are just a few of the particularly awful side-effects I wound up with, off the top of my head.
Now, I’ve been able to shed these issues to some degree recently, but that’s not the same thing as undoing or avoiding the damage in the first place. And if I’d been any more dismissive of the idea that my underlying motives were based on influencing people, there’s no way I’d have spotted the problems!
IOW, I think it’s delusional near-insanity to assume that your value system is not rooted in these types of covert and implicit motivations. Even if you claim to have hardware differences between yourself and other humans, it’s still not a safe assumption to make without actually checking the sources of your values and beliefs.
IOW, believing you’re not human won’t make it so. Litany of Tarski: if my motivations are impure, then I want to know that they are impure.
You seem to be confusing the causes of people’s preferences with their preferences. The fact that we want sugar because of evolution doesn’t mean that we don’t really want sugar.
Also, I’m not at all sure of what exactly you mean by ‘a bond’.
Also, not everyone needs to do anything to get adequate love and attention. Some people do in fact grow up as only children, in large families, or otherwise unconditionally attended to.
I do actually think there are some important hardware differences between myself and most people, but they aren’t nearly as important as the points above as responses to yours. Relatedly, I don’t think I want to not be human (though I have a strong desire to somehow blend characteristics of adult and immature humans, which may not be compatible). If anything, that’s what UFAI enthusiasts want. I also don’t think I believe in purity, to a fairly anomalous degree, though that’s probably less relevant.
That depends on your definition of “want”. My point is that the causes of preferences can’t really be untangled from the preferences, because they have causal influence over how you will attempt to fulfill them, and most of that influence is subconscious or completely unconscious.
IOW, I’m focusing on the link between the cause of preferences, and how you end up behaving, thereby bypassing the difficult problem of pinning down an adequate definition of “want”. ;-)
And those people still get their values shaped by that attention, just differently. So I’m not clear on what you’re getting at there.
My girlfriend was a philosophy/English major and I was an engineering major. I was studying some mathy stuff, and she brought up that she feels bad for not being able to discuss this particular interest with me. I told her that’s fine: I’m getting lots out of this study as it is, and she and I have good conversations about other things.
Her response: “Why would you want to learn something if you’re not going to talk about it?” I’m perfectly okay with the idea that I can’t talk about math to impress someone. I’ll be able to use the knowledge to impact the physical world in useful ways.
Of course, on a higher level, my goal in learning this math is still based on impressing people, but it’s for impressing them with physical results (or to help me raise my status, which will ultimately impress people) as opposed to having an impressive conversation.
Massively agreed.
Break down almost any human effort and at the bottom of it you’ll usually find a struggle for social status, which is/was directly conducive to reproductive success (especially for males; for females it’s more about looks when it comes to attracting a partner, but social status of course still plays a critical role for surviving and thriving in a social group).
I seriously doubt any one of us can outrun our nature without cognitive engineering, so my preferred way of dealing with this side of human nature is to look at it as a “serious game” not terribly different from competitive poker. Win some, lose some—take it seriously, but don’t obsess over it to the point where you make it the core and center of your very existence. It’s not “meaningful” enough, or indeed meaningful at all, given a transhuman perspective.
If we could upload and re-engineer our minds tomorrow, I’d probably strongly advocate cutting this “social status” nonsense from our cognitive make-up. By now it has outlived its purpose and, as far as I’m concerned, its welcome; it has only brought untold misery upon humans, and there are much more worthy things to be motivated by.
Hell, social status is even a significant roadblock for discussions among rationalists. Almost any possible communication between humans has an undercurrent that carries information about social status. So arguing and disagreeing were never ways to arrive at rational conclusions to begin with; they are actually ways to impose your will and influence and dominance onto others. So when we level a criticism or disagreement even at a fellow rationalist around here, we often feel the need to first perform a little linguistic dance of appeasement to ensure our fellow apes don’t take our disagreement as an assassination attempt on their social status. Especially not while everyone’s watching from their desktops and treetops.
And sometimes, if you’re really lucky it actually works.
Do you have any recommendations on how to combat this? Obviously, mixing with groups that reward behaviour you wish to cultivate would be a good first step, but what other steps can one take? Do you think making a conscious effort to identify more with/feel friendlier towards people whose behaviour you consider laudable would help? This would be a step much more readily made for most people than changing their actual social group.
Combat what, precisely? Being human? ;-)
(Honestly, though, I’m not clear from your questions what it is that you’re trying to accomplish.)
I’m not!
Me too!
This is why we’re all so weird, right?
Schhh!
What exactly is your benchmark for being “rational”? If you mean becoming more openly critical in situations where your agreeableness prevents you from it, you should be aware that there are topics much more dangerous than religion where an uncompromising quest for truth might lead you to clash with the respectable opinion, both public and private. With this in mind, and considering the rest of what you wrote, it seems to me that for you, becoming hostile towards religion would not mean becoming more rational in any meaningful sense of the term. It would merely be a way to signal to a certain sort of people, without increasing the accuracy of your beliefs in any way, and distracting from topics that are far more difficult and dangerous, and thus a more critical test of rationality.
That is not what I meant. I have no intention of becoming more hostile to religion; I’m having fun with it at the moment. My main issues are a) right now it doesn’t feel like it hurts much to change my mind, meaning I’m probably not really changing my mind and I need to learn how to do it better, and b) my current schema for living life is deeply flawed. I care way too much about what others think of me, I’ve unthinkingly absorbed a ton of social conventions that I wish I hadn’t, and my inability to say no means that I’m overbooked and exhausted all the time. Also, I behave very irrationally when it comes to romantic relationships, but that’s another story. And I’m a workaholic, though I would really be happier, and get more of the things I care about done, if I were able to work a little bit less.
Sure, rationality obviously has nothing to do with how mainstream or contrarian a belief is—but that should be so obvious to even a fairly new lesswronger that we can be quite sure her goal isn’t to alienate people just for the sake of it.
Also, you’re right about the celestial: in the Western world, religion has become quite low-hanging fruit by now—so low it practically touches the ground. “The God Delusion” and “God is Not Great” apparently had quite an impact on the US and the UK over the last few years, and smart people have begun to dole out stomach punches to Jesus left and right. By now one can easily wear his or her criticism of religion on a T-shirt and still have the lion’s share of the academically educated world on one’s side.
Try a slogan like “democracy is retarded” on the other hand and you’ll have butchered the holy cow of practically everyone.
In most cases, when I see that someone is a particularly passionate and dedicated atheist (in the sense of the “New Atheists” etc.), lacking other information, I take it as strong evidence against their rationality. For someone living in the contemporary Western world who wants to fight against widespread and dangerous irrational beliefs, focusing on traditional religion indicates extreme bias and total blindness towards various delusions that are nowadays infinitely more pernicious and malignant than anything coming out of any traditional religion. (The same goes for those “skeptics” who relentlessly campaign against low-status folk superstition like UFOs or crystal healing, but would never dare mutter anything against all sorts of horrendous delusions that enjoy high status and academic approval.)
I like to compare such people with someone on board the Titanic who loses sleep over petty folk superstitions of the passengers, while at the same time being blissfully happy with the captain’s opinions about navigation. (And, to extend the analogy, often even attacking those who question the captain’s competence as dangerous crackpots.)
Why? Just because they spend their time in a perhaps less than optimal manner (compared to existential risks) doesn’t automatically mean that passionate atheists and skeptics are somehow highly irrational people, does it? I suspect a lot of them would be potential lesswrong readers; they just haven’t encountered these ideas yet.
Most people first had to become “regular” rationalists before they became lesswrongers. If I had stumbled upon this website a looong time ago when I was still something along the lines of a New Ager, I strongly suspect the Bayesian rationality meme simply would not have fallen on fertile ground. Cleaning out the superstitious garbage from your mind seems to be quite an important step for many people. It certainly was for me.
I do not agree with your viewpoint that these people are entirely wasting their time. Not every man, woman and child can participate directly or indirectly in the development of friendly AGI—and I’ve seen much worse use of time and effort than conversion attempts by the “New Atheist” movement. After all, something we may want to keep in mind is that the success or failure of many futuristic things we discuss here on lesswrong may somewhat depend on public opinion and perception (think stem cells)—and I’d much rather face at least somewhat rational atheists than a bunch of deluded theists and esotericists. The difference between 10 and 20% atheists may be all the difference it takes to achieve more positive outcomes in certain scenarios.
Furthermore, if lesswrongian thought has any kind of easily identifiable target group that would be worth “advertising” to, you’d probably find it among skeptics and atheists.
I didn’t say it was conclusive evidence, only that it is strong evidence.
Moreover, the present neglect of technology-related existential (and other) risk is only one example where the respectable opinion is nowadays remote from reality. There are many other questions where the prevailing views of academic experts, intellectuals, and other high-status shapers of public opinion, are, in my opinion, completely delusional. Some of these are just theoretical questions without much practical bearing on anything, but others have real ugly consequences on a large scale, up to and including mass death and destruction, or seriously threaten such consequences in the future. Many of them also make the world more ugly and dysfunctional, and life more burdensome and joyless, in countless little ways; others are presented as enlightened wisdom on how to live your life but are in fact a recipe for disaster for most people who might believe and try to apply them.
In this situation, if someone focuses on traditional religion as a supposedly especially prominent source of false beliefs and irrationality, it is likely that this is due to ideological reasons, which in turn means that they also swallow much of the above mentioned respectable delusions. Again, there are exceptions, which is why I wrote “lacking other information.” But this is really true in most cases.
Also, another devilish intellectual hack that boosts many modern respectable delusions is the very notion of separating “religious” beliefs and opinions from others. Many modern ideological beliefs that are no less metaphysical and irrational than anything found in traditional religions can nevertheless be advertised as rational and objective—and in turn backed and enforced by governments and other powerful institutions without violating the “separation of church and state”—just because they don’t fall under the standard definition of “religion.” In my experience, and again with a few honorable exceptions, those who advocate against traditional religion are often at the same time entirely OK with such enforcement of state-backed ideology, even though there is no rational reason to see it as essentially different from the old-fashioned establishment of religion.
Name three?
edit: I find that he has already named three, and two heuristics for determining whether an academic field is full of bunk or not, here. I commend him on this article. While I remain unconvinced on the general strategy outlined, I now understand the sort of field he is discussing and find that, on the specifics, I tentatively agree.
I strongly recommend reading Robin Hanson’s answer here.
Same challenge.
edit: I would still like to hear these.
Wow answering that challenge might seriously kill some minds.
I suggest you two PM it out.
Well, as I pointed out in my other comments, unless I answered your challenges with essays of enormous length, my answer would consist of multiple assertions without supporting evidence that sound outlandish on the face of it. Remember that we are talking about delusions that are presently shared by the experts and/or respectable high-status people.
Note that you should accept my point even if we completely disagree on what these high-status delusions are, as long as we agree that there are some, whatever they might be. Try to focus on the main point in the abstract: if delusion X is low-status and rejected by experts and high-status people (even if it might be fairly widespread among the common folk), while delusion Y is instead accepted by them, so much that by asserting non-Y you risk coming off as a crackpot, should we be more worried about X or Y, in terms of both the idealistic pursuit of truth and the practical problems that follow?
Y, of course. Perhaps I should have started out by saying that while I agree that what you say is possible, I don’t know if it describes the real world. Your assertion was that there are many high status delusions, but without evidence of that, all I can say is that I agree that supposed experts are not guaranteed to be correct on every point, and that it is extremely possible that they will reinforce delusions within their community.
OOC - some examples would be nice :)
I think a lot of the people that fall into this camp (at least those that I know of) are people that have just recently deconverted—they’ve just been through a major life-change involving religion and therefore are understandably entranced with the whole process as it is particularly meaningful to them.
Alternatively, they are reacting against some heavy prejudice that they have had to suffer through—or have some loved ones that are particularly “afflicted” and want to see something done to prevent it happening to others.
Sure, there are other big, important things out there… but one man’s meat is another’s poison, and all that.
I think it’s easy enough to say that there are bigger problems out there… when we are looking at it from the perspective of having been atheist for a long time. But some people have just had their world cave in—everything has been upturned. They no longer have that huge big safety net underneath them that tells them that everything is going to be alright in the afterlife. Maybe they’ve just discovered that they’ve been wasting one seventh of their life in church when they could have been out there exploring this beautiful world that we live in or spending quality time with their kids… it may seem like nothing important to you, but it’s a Big Thing to some people.
PS—I am also inclined to agree with you that there are better things the time could be spent on… but that’s “better from my perspective” and it’s not mine that counts.
Well, whenever I open this topic, giving concrete examples is problematic, since these are by definition respectable and high-status delusions, so it’s difficult or impossible to contradict them without sounding like a crackpot or extremist.
There are, however, a few topics where prominent LW participants have run into such instances of respectable opinion being dogmatic and immune to rational argument. One example is the already mentioned neglect of technology-related existential risks—as well as other non-existential but still scary threats that upcoming advances in technology might open up—and the tendency to dismiss people who ask such questions as crackpots. Another is the academic and medical establishment’s official party line against cryonics, which is completely impervious to any argument. (I have no interest in cryonics myself, but the dogmatic character of the official line is clear, as is its lack of solid foundation.)
This, however, is just the tip of the iceberg. Unfortunately, listing other examples typically means opening ideologically charged topics that are probably best left alone. One example that shouldn’t be too controversial is economics. We have people in power to regulate and manage things, with enough power and influence to wreak havoc if they don’t know what they’re doing, whose supposed expertise however appears, on independent examination, to consist mostly of cargo-cult science and ideological delusions, even though they bear the most prestigious official titles and accreditations. Just this particular observation should be enough to justify my Titanic allegory.
Why ever not?
On the other hand elsewhere you write
which suggests that you think that the things that you’re avoiding writing about are very important. If they’re so important then why not pay the price of being considered a crackpot/extremist by some in order to fight against the delusional views? Is the key issue self-preservation of the type that you mentioned in response to Komponisto?
Or is the point that you think that there’s not much hope for changing people’s views on the questions that you have in mind so that it’s futile to try?
Well, there are several reasons why I’m not incessantly shouting all my contrarian views from the rooftops.
For a start, yes, obviously I am concerned with the possible reputational consequences. But even ignoring that, the problem is that arguing for contrarian views may well have the effect of making them even more disreputable and strengthening the mainstream consensus, if it’s done in a way that signals low status, eccentricity, immorality, etc., or otherwise enables the mainstream advocates to score a rhetorical victory in the ensuing debate (regardless of the substance of the arguments). Thus, even judging purely by how much you’re likely to move people’s opinions closer to or further from the truth, you should avoid arguing for contrarian views unless the situation seems especially favorable, in the sense that you’ll be able to present your case competently and in front of a suitable audience.
Moreover, there is always the problem of whether you can trust your own contrarian opinions. After all, even if you take the least favorable view of the respectable opinion and the academic mainstream, it is still the case that most contrarians are deluded in even crazier ways. So how do you know that you haven’t in fact become a crackpot yourself? This is why rather than making a piecemeal catalog of delusional mainstream views, I would prefer to have a more general framework for estimating how reliable the mainstream opinion is likely to be on a particular subject given various factors and circumstances, and what general social, economic, political, and other mechanisms have practical influence in this regard. Effort spent on obtaining such insight is, in my opinion, far more useful than attacking seemingly wrong mainstream opinions one by one.
These latter questions should, in my opinion, be very high (if not on the top) of the list of priorities of people who are concerned with overcoming bias and increasing their rationality and the accuracy of their beliefs, and one of my major disappointments with LW is that attempts to open discussion about these matters invariably fall flat. (This despite the fact that such discussions could be productive even without opening any especially dangerous and charged topics, and despite the fact that on LW one regularly hears frustrated accounts of the mainstream being impervious to argument on topics such as existential risk or cryonics. I find it especially puzzling that smart people who are concerned about the latter have no interest in investigating the underlying more general and systematic problems.)
Doesn’t this suggest that holding contrarian views is useless except as a personal hobby? If so, why argue against mainstream delusional views at all (even as a collection, without specifying what they are)? Is the point of your comment that you think it’s possible to make progress by highlighting broad phenomena about the reliability of mainstream views, so that people can work out the implications on their own without there being a need for explicit public discussion?
A natural method to avoid becoming a crackpot is to reveal one’s views for possible critique in a gradual and carefully argued fashion, adjusting them as people point out weaknesses. Of course it might not be a good idea to reveal one’s views regardless (self-preservation; opportunity cost of time) but I don’t think that danger of being a crackpot is a good reason.
I’m not sure what you have in mind here. Your post titled Some Heuristics for Evaluating the Soundness of the Academic Mainstream in Unfamiliar Fields was highly upvoted and I myself would be happy to read more along similar lines. Are there examples that you’d point to of attempts to open discussion about these matters falling flat?
Basically, I believe that exploring the general questions about how mainstream views are generated in practice and what are the implications for their reliability is by far the most fruitful direction for people interested in increasing the accuracy of their beliefs across the board. Of course, if you have a particular interest in some question, you have to grapple with the concrete issues involved, and also a general exploration must be based on concrete case studies. But attacking particular mainstream views head-on may well be counterproductive in every sense, as I noted above.
That’s assuming you have discussion partners who are knowledgeable, open-minded, and patient enough. However, such people are the most difficult to find exactly in those areas where you’re faced with the Scylla of a deeply flawed mainstream and the Charybdis of even worse crackpot contrarians.
(Please also see my reply to Nick Tarleton, who asked a similar question as the rest of your comment.)
This is fair; you’ve made your position clear, thanks.
Agree in general. How about Less Wrong in particular?
Well, LW is great for discussing a concrete problem if you manage to elicit some interest in it, both because of people’s high general intellectual skills and because of low propensity to emotionally driven reactions that are apt to derail the discussion, even in fairly charged topics (well, except for gender-related ones, I guess). So, yes, LW is very good for this sort of reality-checking if you manage to find people interested in your topic.
What’s an example? (I mostly ask so as to have some more specific idea of what topics you’re referring to.)
You can take any topic where it’s impossible to make sense of the existing academic literature (and other influential high-status sources), or where the respectable mainstream consensus seems to clash with reality. When discussions about such topics are opened on LW, often the logical next step would be to ask about the more general underlying problems that give rise to these situations, instead of just focusing on the arguments about particular problems in isolation. (And even without a concrete motivation, such questions should directly follow from LW’s mission statement.) Yet I see few, if any, attempts to ask such general questions on LW, and my occasional attempts to open discussion along these lines, even when highly upvoted, don’t elicit much in terms of interesting arguments and insight.
As an illustration, we can take an innocent, mainstream, but problematic topic like the health questions of lifestyle: nutrition, exercise, etc. These topics have been discussed on LW many times, and it seems evident that the mainstream academic literature is a complete mess, with potential gems of useful insight buried under mountains of nonsense work, and authoritative statements of expert opinion given without proper justification. Yet I see no attempt to ask a straightforward follow-up question: since these areas operate under the official bureaucratic system that’s supposed to guarantee valid science, what exactly went wrong? And what implications does it have for other areas where we take the official output of this same bureaucratic system as ironclad evidence?
Of course, when it comes to topics that are more dangerous and ideologically charged, the underlying problems are likely to be different and more severe. One can reasonably argue that such topics are best avoided on a forum like LW, both because they’re likely to stir up bad blood and because of the potential bad signaling and reputational consequences for the forum as an institution. But even if we take the most restrictive attitude towards such topics, there are still many others that can be used as case studies for gaining insight about the systematic underlying problems.
Your own points have struck me as on the mark, but I haven’t had much to add.
There are some interesting general comments that I could make based on my experience in the mathematical community in particular. I guess here I have some tendency toward self-preservation myself; I don’t want to offend acquaintances who might be cast in a negative light by my analysis. (I would be happy to share my views privately if you’re interested, though.) I guess my attitude here is that there’s little upside to making my remarks public. The behaviors that I perceive to be dysfunctional are sufficiently deeply entrenched that whatever I would say would have little expected value.
The main upside would be helping others attain intellectual enlightenment, but although I myself greatly enjoy the satisfaction of intellectual enlightenment, I’m not sure that it is very valuable from a global perspective. Being right is of little use without being influential. In general, the percentage of people who are right (or interested in being right) on a given topic where a contrarian position is right is sufficiently small that the critical mass it would take to change things isn’t there, nor would an incremental change in this percentage make a difference.
The reason why the above point has so much weight in my mind is that despite my very high interest in learning about a variety of things and in forming accurate views on a variety of subjects, I haven’t achieved very much. It’s not clear whether having accurate views of the world has been more helpful or harmful to me in achieving my goals. The jury is still very much out and things may change; but the very fact that it’s possible for me to have this attitude is a strong indication that knowledge and accurate views on a variety of things can be useless on their own.
Regarding:
I made a comment that you may find relevant here; I would characterize nutrition/exercise/etc. as fields that are obviously important and which therefore attract many researchers/corporations/hobbyists/etc., having the effect of driving high-quality researchers out of the field on account of bad associations.
Another factor may be absence of low hanging fruit (which you reference in your top level post); it could be that the diversity of humans is sufficiently great so that it’s difficult to make general statements about what’s healthy/unhealthy.
I agree with what you said about mainstream fields being diluted, but offer an interesting corollary to that. Economic motives compel various gurus and nutritionists to make claims to the average Joe, and the average Joe, or even the educated Joe, cannot sort through them. However, if one looks in narrower fields, one can obtain more specific answers without so much trash. For example, powerlifting. This is not a huge market, nor one you can benefit financially from that much. If one is trying to sell something or get something published, he can’t just say “I pretty much agree with X”; he needs to somehow distinguish himself. But when that motive is eliminated you can get more consistency in recommendations and have a greater chance to actually hit upon what works.
While you might not be interested in powerlifting, reading in more niche areas can help filter out profit/status seeking charlatans, and can allow one to see the similarities across disciplines. So while I’ve read about bodybuilding, powerlifting, and endurance sports, and their associated nutritional advice, I would never read a book about “being fit.”
As an aside, I recently had this horrible moment of realization. Much of the fitness advice given out is just so incredibly wrong, and I am able to realize that because I have a strong background in that subject. But I realized that 90% of the stuff I read about is in areas where I don’t have a great background. I could be accepting facts in other areas that are just as wrong as the nutritional claims I scoff at, and I would never learn of my error.
That does seem to be a useful heuristic. DOOM mongers are usually selling something. They typically make exaggerated and biased claims. The SIAI and FHI do not seem to be significant exceptions to this—though their attempts to be scientific and rational certainly help.
These types of organisation form naturally from those with the largest p(DOOM) estimates. That is not necessarily the best way to obtain an unbiased estimate. If you run into organisations who are trying to convince you that the end of the world is nigh—and that you should donate to help them save it—you should at least be aware that this pattern is an ancient one with a dubious pedigree.
I am inclined to ask for references. As far as I understand it, there is a real science, cryobiology, which goes out of its way to distance itself from its more questionable cousin (cryonics), which has a confusingly similar name. Much as psychology tries to distinguish itself from psychiatry. Is there much more than that going on here?
From what I understand, the professional learned society of cryobiologists has an official policy that bans any engagement with cryonics by its members on pain of expulsion (which penalty would presumably have disastrous career implications). Therefore, cryobiologists are officially mandated to uphold this party line and condemn cryonics, if they are to speak on the subject at all. From what I’ve seen, cryonics people have repeatedly challenged this position with reasonable arguments, but they haven’t received anything like satisfactory rebuttals that would justify the official position. (See more details in this post, whose author has spent considerable effort searching for such a rebuttal.)
Now, for all I know, it may well be that the claims of cryonicists are complete bunk after all. The important point is that here we see a clear and unambiguous instance of the official academic mainstream upholding an official line that is impervious to rational argument, and attempts to challenge this official line elicit sneering and stonewalling rather than any valid response. One of my claims in this discussion is that this is far from being the only such example (although the official positions and the condemnations of dissenters are rarely spelled out so explicitly), and LW people familiar with this example should take it as a significant piece of evidence against trusting the academic mainstream consensus in general.
This seems to be the relevant bit:
It says they are not allowed to perform or promote freezing of “deceased persons”—citing concerns over ethical and scientific standards, and its own reputation. They probably want to avoid themselves and their members being associated with cryonics scandals and lawsuits.
As I said, I don’t have a dog in this particular fight, and for all I know, the cryobiologists’ rejection of cryonics might in fact be justified, for both reasons of science and pragmatist political considerations. However, the important point is that if you ask, in a polite, reasonable, and upfront manner, for a scientific assessment of cryonics and what exactly are the problems with it, it is not possible to get a full, honest, and scientifically sound answer, as demonstrated by that article to which I linked above. Contrast this with what happens if you ask, say, physicists what is wrong with some crackpot theory of physics—they will spell out a detailed argument showing what exactly is wrong, and they will be able to answer any further questions you might have and fully clarify any confusion, as long as you’re not being impervious to argument.
Regardless of any particular concern about cryonics, the conclusion to draw from this is that a strong mainstream academic consensus sometimes rests on a rock-solid foundation that can be readily examined if you just invest some effort, but sometimes this is not the case, at the very least because for some questions there is no way to even get a clear and detailed statement on what exactly this foundation is supposed to be. From this, it is reasonable to conclude that mainstream academic consensus should not be taken as conclusive evidence for anything—and in turn, contrarian opinions should not be automatically discarded just because mainstream academics reject them—unless you have some reliable criteria for evaluating how solid its foundation is in a particular area. The case of cryonics is relevant for my argument only insofar as this is a question where lots of LW people have run into a strong mainstream consensus for which it’s impossible to get a solid justification, thus providing one concrete example that shouldn’t be too controversial here.
I think most parties involved agree that cryonic revival is a bit of a long shot. It is hard to say exactly how much of a long shot, since that depends on speculative far-future things like whether an advanced civilization will be sufficiently interested in us to revive us. Scientists can’t say too much about that—except that there are quite a few unknown unknowns—and so one should have wide confidence intervals.
Definitely, and thank you for sharing your list :)
This comment seems to be influenced by an association fallacy.
The fact that someone has suffered doesn’t imply that they are rational or irrational.
If you acknowledge evidence that someone is being irrational it doesn’t mean you have to deny they have any positive qualities or be unsympathetic about their problems.
I made no claim that they were rational—or that they had no nice qualities.
I was trying to point out that “I think that other people should not spend time on X because it is an unimportant topic to me” is failing to understand that to these people, X is not an unimportant topic.
“some examples would be nice”
Some examples would agitate a thundering herd of heretic hunters and witch finders who would vote down every post that Vladimir has ever made or will ever make.
This doesn’t seem overwhelmingly likely to be false. It’s not a very nice thing to say, but why should that matter?
Why so many downvotes? In any case, since I’m Charlie Sheen, I don’t care if I’m downvoted, since I’m WINNING no matter what they say (wow, memetic brainfreeze).
It does to me. Less Wrong isn’t so popular that I’d expect a herd of people to bother downvoting each of Vladimir_M’s dozens of posts, then wait around for him to make more posts to downvote, just because of a few examples. The fact that Vladimir_M gave two or three examples but still has non-negative scores for his recent comments is more evidence that sam0345 was wrong.
Quite a few LW users see niceness as a useful norm and may have voted accordingly. At any rate, I don’t think it was a lack of niceness that provoked people to downvote that comment; I’d guess it was because they read it as an unhelpful overstatement.
I’m inclined to agree, though I suspect we’ve got different lists of what the real problems are.
If you have more to say as to your list of what the real problems are I’d be interested in reading.
I believe that a lot of what’s wrong with the world comes from taking governments too seriously. The historical argument for atheism—the damage done by religion—applies at least as strongly to governments.
This doesn’t mean I think it necessarily makes sense for individuals to conspicuously ignore a government which is dangerous to them. To put it mildly, there are group effects.
Taking governments too seriously in what sense? Adopting values implicit in government rhetoric? Following laws? Give some examples if you’d like.
Also, are you considering the counterfactual here? Without religion there’s atheism. What happens when people don’t take governments too seriously? It’s actually unclear to me that religion does more harm than good; I would guess that the harm apparently done by religion is largely due to general human nature, and that there are upsides of organic community, so that on balance it’s a wash.
I noticed that, while there used to be religious wars, for the most part these days, what gets people to die for no good reason is nationalism.
I’m not sure what the best attitude is—I don’t think we can dispense with government these days, but on the other hand, I don’t think law-abidingness and patriotism should be put very high on the list of virtues.
Don’t both religion and nationalism fall under the broader umbrella of tribalism? It’s plausible to me that without either one there would be some other sort of tribalism with adverse effects on global welfare. People might not die as a result, but I don’t think that there’s reason to think that the aggregate negative effect would be smaller.
Now, if what replaced them was some sort of ideology of the type “equal consideration for all” that filled the vacuum left by politics/religion, then that would be different. I have little sense of how likely this would be.
What sort of law-abidingness do you have in mind here? Obeying military drafts?
It’s actually unclear to me that religion does more harm than good
For quick and dirty empirical evidence, look at the latest European poll. Do countries at the top of the table, with the least belief in God, spirit, or life force, behave more rationally?
As someone who is into both the skeptics movement and the atheist movement, I’m not sure what skeptics “wouldn’t dare mutter” about. It seems to me that skeptics and atheists just have an interest in those things and want to stop the harm caused by them.
Also, I must be ignorant about all these other horrible delusions you are talking about.
Further, you must be talking about instrumental rationality, because I’m not sure how this is evidence against epistemic rationality.
I may have been too harsh on the skeptics, some of whom occasionally do attack nonsense in a way that riles up not just crackpots, but also some highly respectable and even academically accredited ideologues and charlatans. However, the problem I see is the main thrust of the movement, which assumes that dangerous nonsense that should be attacked and debunked is practically always purveyed and followed by people outside the official, respectable, accredited mainstream, with the implicit assumption that the latter is maybe imperfect, but still without any deep and horrendous flaws, and in matters where it produces strong consensus, we have nothing much to worry about.
This is where my Titanic analogy comes in. When I read about skeptics debunking people like, say, Uri Geller or Erich von Daeniken, clearly I have no objection to the substance of their work—on the contrary. However, if such people are left unchecked, it’s not like they will tomorrow be awarded high places in government and academia, and be given the power to propagandize their views with high official authority, both in classrooms and in mass media that would cite them as authorities, to write laws and regulations based on their delusions, to promote (and aggressively impose) their views through international institutions and foreign policy, etc., etc., with all the disastrous consequences that may follow from that. Therefore, shouldn’t a rational person be more concerned with the possible delusions of people who do have such power and authority? They are the ones presently in charge of steering the ship, after all, and it’s not like there aren’t any icebergs around.
Of course, if you believe that the official institutions that produce academic consensus and respectable mainstream public opinion are generally OK and not causing any ongoing (or potential future) disasters, clearly these concerns are baseless. But are you really so sure that this optimism is based on a realistic appraisal of the situation?
As I mentioned elsewhere in this thread, I recommend this post by Quirinus_Quirrell. The list there is by no means comprehensive, but it should give you an idea of what people are talking about.
Edit: Two more good articles to read are this one by Paul Graham, and this post by Vladimir_M.
Sure, but that’s because slogans aren’t about convincing people; they’re about signaling group affiliation. Wear a T-shirt with “democracy is retarded” on it and you’re effectively saying that you belong to a group that no one has ever heard of and is apparently openly opposed to one of the major shared tenets of practically every active political faction out there. Not a good way to win friends.
On the other hand, I’d be willing to bet that writing a series of blog posts, or even a book, on why democracy is retarded (ideally not in those words) wouldn’t paint you as anything more than, at worst, mildly crankish. Very little is actually unthinkable in the educated world—but if you’re going to voice opinions outside the Overton window you’d better voice them in terms of actual arguments. By definition, you can’t expect your audience to be familiar with the existing arguments for them.
Actually, wearing a t-shirt that says “Democracy is retarded” signals a double affiliation. One is opposition to democracy, and the other is willingness to gratuitously insult retarded people.
Maybe a triple affiliation, because it’s possible to put some content about what you don’t like about democracy in a short slogan, and you didn’t bother. From my point of view, you’ve just affiliated with boring trolls.
That depends on how far you are outside the Overton window, and also in what direction. On some particularly charged topics where the respectable opinion is remote from reality (or at the very least lacking firm justification and open to serious doubt), people are aware that there are plausible-sounding arguments against the respectable opinion, but believe that this is just seductive propaganda by crackpots or villains that has been decidedly debunked by the respectable authorities. (Even though that’s not the case, and the existing attempts at debunking are in fact severely flawed.) So even if you make a perfectly calm, logical, and scholarly argument against the respectable opinion, you’ll just trigger people’s alarms, without being able to make them listen.
Anecdotally, the recent essay advocating whipping (“In Defense of Flogging”) in place of jail sentences has been reprinted everywhere from the CBC to my local free paper.
Not seeing reprints of Nei’s paper “The Root of the Phylogenetic Tree of Human Populations”
It’s available freely online.
Overton window.
“Very little is actually unthinkable in the educated world”
Tell that to Andrew Bolt, a journalist currently being subjected to a show trial for heresy.
“Very little is actually unthinkable in the educated world”
A great deal, however is unspeakable.
Masatoshi Nei and Naoko Takezaki measured the genetic distance between one human race and another, and between those races and apes, treating chimpanzees as if they were another human race (“The Root of the Phylogenetic Tree of Human Populations”). They found that the distance between races was quite large, typically around half the human-chimpanzee distance. They also found that some races had considerably less genetic distance to the hypothetical common ancestor of humans and chimpanzees than other races did.
Like Galileo, they were asked to repent and recant, and did so.
From 1996 to 2003 that opinion was officially unspeakable, and may well still be unspeakable. Although in 2003 there was a substantial expansion of what is speakable on race, Nei’s heresy does not appear to have been repeated.
So if the Medieval Catholic Church was explicitly theocratic, then so is Harvard. Catholics are required to believe what the Church teaches, whatever that may be, and the Church is the final arbiter of what it teaches.
At the time that they published, it was permissible to believe that genetic differences between races were real and substantial, that races were diverse, but equal. The reaction to the proposition that some were more closely related to the common ancestor of man and ape was so hostile that the existence of genetic differences between races was also prohibited. Rather suddenly, the official truth became that humans were diverse culturally but not genetically, that races were just labels for continent of origin, so that Persians are “Asians” and Chinese are also “Asians”, so Chinese are supposedly the same race as Persians, which doctrine was quite suddenly imposed not just in academia, but on everyone in the English-speaking world, and I expect most of the rest of the world also.

The main finding of their paper abruptly became impermissible after it was published. So everyone is required to believe what Harvard teaches, whatever it may be, even though it changes from time to time. Because this paper pissed off our elite, Muslims in England are “Asians”, and the English have trouble figuring out what to call Chinese, since they are supposed to call them “Asians” also, but many Englishmen have trouble using a single word for two rather different categories. If only Nei had not attempted to estimate the distance between races and the hypothetical common ancestor of man and ape, Britons would probably still be calling Muslims Muslims and Pakistanis Pakistanis.
This seems very confused. Here’s the paper you are referring to.
The study is trying to decide between two different theories of the origin of H. sapiens: the out of Africa theory and the multiregional theory. If the out of Africa theory is true, you can expect to find more divergence between sub-Saharan Africans and the rest of the human population, because “the most divergent and first-established population is likely to stay in the place of origin and new populations would be formed when they move out of the original place.” I don’t quite see any implication of racial inequality here, or anything to be upset about (although I can imagine some particularly juvenile racial taunts along the lines of “your race is more closely related to the chimps than mine!”). Wikipedia also refers to these conclusions without hinting at any controversy attached to them:
“Nei and Roychoudhury then estimated that Europeans and Asians diverged about 55,000 years ago and these two populations diverged from Africans about 115,000 years ago.[15] This conclusion was supported by many later studies using larger numbers of genes and populations, and the estimates are still widely used. This study was a forerunner of the out of Africa theory of human origin by Allan Wilson.”
Also, what do Britons do when they need to refer to Pakistani Muslims? :)
Do you have a link to the study?
Not only does the idea that different races would have different genetic distances from the common ancestor of humans and apes not strike me as repugnant, it seems fairly obvious on examination. The further a population is from its ancestral environment (in terms of selection pressures, of course, not geographically speaking), the faster its genes are going to drift, so to the extent that the environments human populations developed in don't all equally resemble that of our common ancestor, we should expect different genetic distances from that common ancestor.
But as for the magnitude of genetic difference between races, I can't help but think: seriously, half the human–chimpanzee difference? That's way more than I would have predicted.
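For anyone wondering what a "genetic distance" figure like that even measures, here is a minimal sketch of Nei's standard genetic distance, computed from allele frequencies. The allele frequencies below are invented for illustration; they are not the data from Nei and Takezaki's paper.

```python
# Minimal sketch of Nei's standard genetic distance D = -ln(I).
# Allele frequencies are made up for illustration only.
import math

def nei_distance(pop_x, pop_y):
    """Nei's standard genetic distance across loci.

    pop_x, pop_y: lists of loci, each locus a list of allele
    frequencies (summing to 1) for that population.
    """
    jx = jy = jxy = 0.0
    for locus_x, locus_y in zip(pop_x, pop_y):
        jx += sum(p * p for p in locus_x)                   # homozygosity in X
        jy += sum(q * q for q in locus_y)                   # homozygosity in Y
        jxy += sum(p * q for p, q in zip(locus_x, locus_y)) # shared identity
    n = len(pop_x)
    identity = (jxy / n) / math.sqrt((jx / n) * (jy / n))
    return -math.log(identity)

# Two hypothetical populations scored at three two-allele loci:
pop_a = [[0.9, 0.1], [0.5, 0.5], [0.7, 0.3]]
pop_b = [[0.6, 0.4], [0.4, 0.6], [0.8, 0.2]]
print(nei_distance(pop_a, pop_b))  # about 0.06 for these made-up numbers
```

One thing the sketch makes obvious is that the number you get depends heavily on which loci you happen to sample, which is one reason headline magnitudes like "half the human–chimpanzee distance" are sensitive to methodology.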
Of course Persians and Pakistanis are Asians, since they live in Asia. The term ‘Asian’ appears to me to be a geographic term, not a racial one.
I can’t speak for the UK, but here in the English-speaking USA, we tend to think of Persians and Chinese as different races. (The various systems of classification differ on Pakistanis.)
You will often hear ‘Asian’ instead of ‘East Asian’ for the race that Chinese belong to. I would criticise this as a poor way to use the word. In any case, anybody who thinks that Persians and Chinese are the same race since both live in Asia is probably just being confused by this usage; nobody believes that in the English-speaking USA (neglecting people who only classify into their own group and everybody else and people who refuse to classify at all).
“Very little is actually unthinkable in the educated world”
That Marie Curie is famous for doing work that was completely routine when males did it demonstrates that women are incapable of doing science. If women could do science, you would have a more plausible poster girl than Marie Curie. If women could do science, no one would make a big fuss over very routine scientific accomplishments by women.
Marie Curie was the least important person on the three-person team that discovered radium, yet no one remembers the other members of the team, nor does anyone remember the team that discovered radon, a similar discovery made at about the same time that was far more important, because it revealed that radioactivity was a manifestation of one element decaying into another.
Marie Curie is remembered in the same way that two-headed goats are remembered: a woman doing even rather ordinary science is as extraordinary as a two-headed goat.
You are James A. Donald and I claim my five pounds.
I disagree. The phrase “democracy is retarded” is so far from what most people, at least in the West, believe that saying it will simply make you look like a harmless eccentric.
As Paul Graham pointed out here:
You're unlikely to convince many people by saying "democracy is retarded", so there's no reason to attack you. As for ideas that will actually gore sacred cows, I recommend looking at this comment by Quirinus_Quirrell.
Agreed. Having the slogan on a T-shirt wouldn’t warrant a fear of strong backlash or status loss. The trouble would only begin if I started to actually advocate such a position with valid arguments.
Seeing how, for most people, any political system that is not democratic is automatically evil, one would expect quite a reaction. "You know, I don't think every person should have the right to influence government policy with his or her opinion" will be quite unpopular, since virtually all people delude themselves into believing that they actually know stuff and can make rational decisions, when what they are really doing is finding rationalizations for their gut reactions.
The problem with governments is that they’re composed of people, and people tend to be stupid, corrupt, or both.
I think what you meant to say was: the problem with democracies is that they're often run by the majority opinion of people, and people tend to be stupid, corrupt, or both.
We could build a government that wasn’t composed of just people, or even of people who fit some criteria of non-stupid and non-corrupt, and it would still be a government.
Stupidity and corruption are problems with dictatorship and its variants as well.
And "democracy + dictatorship and its variants" is obviously the set of all possible workable forms of government?
Until SIAI finishes their main project I think we’re stuck with using people.
This is extremely hard, as Goodhart's law tends to make whatever proxy you use less reliable very quickly.
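For anyone who hasn't seen Goodhart's law in action, here is a toy simulation (my own sketch, not anything from the thread or from SIAI): a proxy that tracks quality when measured passively stops tracking it as soon as candidates can optimize the proxy directly.

```python
# Toy illustration of Goodhart's law: selecting on a proxy works until
# the proxy itself becomes the target of optimization.
import random

random.seed(0)
N = 10_000

# Passive world: proxy = quality + measurement noise.
passive = [(q + random.gauss(0, 1), q)
           for q in (random.gauss(0, 1) for _ in range(N))]
top = sorted(passive, reverse=True)[: N // 100]
print("mean quality of top 1% (passive):",
      sum(q for _, q in top) / len(top))   # clearly positive

# Gamed world: candidates also spend effort inflating the proxy
# directly, and that effort is unrelated to their actual quality.
gamed = [(q + random.gauss(0, 1) + g, q)
         for q, g in ((random.gauss(0, 1), random.expovariate(0.2))
                      for _ in range(N))]
top = sorted(gamed, reverse=True)[: N // 100]
print("mean quality of top 1% (gamed):  ",
      sum(q for _, q in top) / len(top))   # close to zero
```

In the passive world the top percentile by proxy has high average quality; in the gamed world the top of the proxy distribution is dominated by whoever spent the most effort inflating it.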
HUFFLEPUFF!
As much as this is a joke, it would be interesting to see if LWers fell into any obvious groupings based on their motives for wanting to be here.
It’s been said before that, to first approximation, everyone here is Ravenclaw. But you might say that the second term in my Taylor polynomial is Hufflepuff. I have a habit of seeking out contrarian clusters before taking my contrarian beliefs seriously.
Your choice of metaphor wildly confirming that you are primarily Ravenclaw, of course.
I guess that makes me more Ravenclaw than Ravenclaw.
Genius.
It has also been said that to a first approximation everyone here (at the Rationality Boot Camp) is a Hufflepuff. (I deny it!)
I'm fairly certain I'm an even mix of Ravenclaw and Slytherin. I tend to agree with Quirrell's logic on occasion, and I do wish to know, but I believe the best way to ensure the best possible world for all is to move into a position of power. This is because I honestly believe I could do better than the current incumbents, but it is still a result, at least in part, of my own ambition.
Would have been funnier had you posted that as the sorting hat.
Or whatever the thing used in that silly book series that I clearly haven’t read. ;) Since all you nerds are gathered here I might as well ask you: Is Orlando Bloom reprising his role as Aslan or has G.R.R.M. decided against including him in the sequel to the Last Airbender?
Sounds like you need to undergo reverse rejection therapy. Ask some people at the next meetup to ask you for things/favors/anything, and make it your goal to say 'no' to all of them. Maybe everyone can be a target and all of you can rotate.
Good idea! If you ask people to ask you to do things you should definitely not do, it may get you into the habit of saying 'no'. Examples, 'cause it's funny:
“Hey would you come over and cook dinner for my cat?”
“Would you give me 5 bucks?”
“Would you tie my shoe for me?”
“Go over there and kiss that stranger.”
“Give me a piggy back ride.”
You can ask people to make their requests more or less ridiculous so you feel like you want to say yes against your better judgment.
I suggested "Learning to say 'No'" as an addition to Swimmer963's goal list at one of our first meetups. This sounds like a good way to implement it (should she so desire, of course).
Ok. Next meetup (that I can make it to), let’s do it!
On the agenda!
N = 1 agrees. If this is something you’re serious about changing, this would be a cheap way to try it.
You’re the negative mirror image of myself.
I'm very lucky to live in this day and age. If I had been born any earlier (or in some other place), I'd probably have been killed by some aggravated superstitious mob or by the officials, that is, if I were indeed stupid enough to talk as I please. Contrariety and a need to oppose run incredibly deep within me.
I am majorly disgusted by religion and have real trouble sitting through a church service. I'm just counting down the minutes while mentally cringing at virtually every single stupid thing the pathetically deluded pastor drivels from the pulpit, until at last the primitive medieval circus grinds to a halt and I feel like I can finally take a deep breath of sanity once I've left the building.
I despise human hypocrisy intensely and am somewhat nauseated every time I detect it in others and myself. But ultimately, I believe at the very bottom of my behavior lies a simple mixture of genetic components and what I picked up from my father (who was also somewhat of a rebel in the former Soviet Union, but without any real opportunity to vent). Maybe it runs even deeper than that and I’m just the human archetype of the “male rogue” that can be encountered in many species—not dominant or agreeable enough to gain the status of an alpha male, but way too pretentious and certainly not submissive enough to suck up to anyone or cut back on my directness. I’m not quite as disagreeable as Dr. House fortunately, but he could serve as an exaggerated caricature of myself if I felt miserable and took any pleasure in cynicism.
In many ways I wish I were more like you; it would certainly make my life, and enduring (some) human company, so much easier. People only want to hear what they like to hear, and I almost can't help but provoke and startle, given the opportunity.
At any rate, you've got nothing to apologize for. If anything, your attitude is much better suited than mine if your goal is to make other people think more rationally. I'm sure you can spin things in such a way that you'll convert people into rationalists much more easily and effectively than any confrontational hardliner could hope to. As long as you don't allow yourself (or others) to sucker you out of your rational frame of mind, your "skill" of hyper-agreeableness is much more of an asset than you seem to realize. Agreeableness seems to be a rather rare social skill around these parts of the internet and in the overall rationalist community.
We need more people like you around, especially for PR-purposes.
Addendum: By the way, I should clarify that my disgust with religiosity doesn't extend to the people themselves. My attitude often gets misinterpreted as me somehow despising religious people, which is not the case at all. An intuitive explanation of how I feel about religion: imagine you're the parent of a "disconnected" Scientologist. You can hate every single bit of the meme with every fiber of your body and still love the person "infected". I suspect many rationalists feel similarly. Religion is a grim reflection of just about everything that's terribly wrong with human cognition and morality, and instead of actively opposing and ironing it out, our society indulges in superstition, dresses it up with nice colors and hats, and even lets it run the show. It just utterly crushes my spirit when I hear something like "representatives of the church attend a meeting with politicians and industry to discuss the future of nuclear energy", as if a pastor or bishop actually knows anything about anything. (Let alone something about morality and ethics).
I’ve known people like that. From what I’ve seen, for many people it’s a game to make social interaction more interesting. I play it very poorly (for example, I almost never get sarcasm unless it’s pointed out to me, and even if I do, I’m usually too lazy to come up with something sarcastic to say in return, so I just ignore it, which is awfully boring for the person being sarcastic.) Is this why you do this?
I am very good at engaging in dialogue with just about anybody and presenting my points in such a way that it’s natural for them to agree. I think the most important component is making it obvious to people that “I don’t dislike you because you disagree with me; if anything, I like the fact that we disagree, because maybe I can learn something new from you.” Even confrontational people usually respond well to that kind of attitude, and it’s a win-win situation because I get to engage in the discussion that I want.
I disagree. After spending some very formative years of my adolescence singing in the church choir, I've found that the ministers do seem to... well, maybe "know more about morality" isn't the right phrase, but they've thought about it more. A large percentage of the population never thinks about morality: some because they just live their lives without really questioning anything (like at least 50% of my fellow pool staff), some because they base their values on selfishness and don't want to have to change that. Church morality has its flaws, for example the implicit biblical attitudes towards sex before marriage, women, homosexuality, etc. But in the Anglican church anyway, and even in the more conservative Pentecostal church, I know hardly any actual Christians who believe that someone is inherently bad for being homosexual. There are a lot of "good" memes in the Christian morality complex, ideas of being radically generous and loving your enemies. I have seen some incredible acts of generosity in the Pentecostal church especially.
There’s also a sample bias in the kind of people who become ministers, especially Anglican ministers (this branch of Christianity is already extremely liberal; they do gay marriages and everything.) They tend to be fairly intellectual, i.e. introspective and likely to meditate on moral principles, and they tend to already like people and want to help them. And they spend years studying the material. Thus, compared to Joe Smith who works at the movie theater, I think most pastors do know more about morality and ethics. Of course, someone who’s put in the same amount of time thinking about it but isn’t limited to agreeing with a book written two thousand years ago is still more likely to be right, but I don’t know that many people like that firsthand.
Partially perhaps, but it's hardly the main reason. Language nearly always carries with it a frequency that conveys social status, and a lot of talk and argument isn't much more than a renegotiation or affirmation of the social contract between people. So quite a lot of the actual content of any typical conversation you're likely to hear is quite braindead and only superficially appears to be civilized. That kind of small talk is boring if it's transparent to you, and controversy certainly spices things up, so yes, there may be something to it...
But I think the ultimate reason for being provocative is that "the truth" simply is quite provoking and startling by itself, given the typical nonrational worldviews people hold. If people were rational by nature and roughly on the same page as most lesswrongers, I certainly wouldn't feel like making an effort to provoke or piss people off just for the sake of disagreement. I simply care a lot about the truth, and I care comparatively less about what people think (in general, and also about me), so I'm often not terribly concerned about sounding agreeable. Sometimes I make an effort if I find it important to actually convince someone, but naturally I feel like censoring my opinions as little as necessary. (Which is not to say that my approach is in any way commendable; it just feels natural to me. It's my mental path of least resistance and conscious effort.)
I’m not doing it all the time of course, I can be quite agreeable when I happen to feel like it—but overall it’s just not my regular state of being.
"...as if a pastor or bishop actually knows anything about anything. (Let alone something about morality and ethics)."
You can’t be serious, how dare you trample on my beliefs and hurt my feelings like that? ;)
Sure, and conspiracy theorists think a lot about 9/11 as well. The amount of thought people spend on any conceivable subject is at best very dimly (and usually not at all) correlated with the quality/truthfulness of their conclusions, if the "mental algorithm" by which they structure their thoughts is semi-worthless by virtue of being irrational (i.e., out of step with reality).
Trying to think about morality without the concept that morality must exclusively relate to the neurological makeup of conscious brains is damn close to a waste of time. It's like trying to wrap your head around biology without the concept of evolution: it cannot be done. You may learn certain things nonetheless, but whatever model you come up with will be a completely confused mess. Whatever theology may come up with on the subject of morality is at best right by accident, and frequently enough it's positively primitive, wrong, and harmful. Either way, it's a complete waste of time and thought given the rational alternatives (neurology, psychology) we can employ to discover true concepts about morality.
What religion has to say about morality is in the same category as what science and philosophy had to say about life and biology before Darwin and Wallace came along—which in retrospect amounts pretty much to “next to nothing of interest”.
So are all those Anglican priests nice and moral people? Sure, whatever. But do they have any real competence whatsoever to make decisions about moral issues (let alone things like nuclear power)? Hell no.
That’s like saying that the job of a sports coach is a waste of time because he is clueless about physics. If it were impossible to gain useful insights and intuitions about the world without reducing everything to first principles, nothing would ever get done. On the contrary, in the overwhelming majority of cases where humans successfully grapple with the real world, from the most basic everyday actions to the most complex technological achievements, it’s done using models and intuitions that are, as the saying goes, wrong but useful.
So, if you're looking for concrete answers to the basic questions of how to live, it's a bad idea to discard wisdom from the past just because it's based on models of the world to which we now have fundamentally more accurate alternatives. A model that captures fundamental reality more closely doesn't automatically translate to superior practical insight. Otherwise people who want to learn to play tennis would be hiring physicists to teach them.
Friendly-HI didn’t want to suggest that you actually have to perform the reduction to be any good. Just that you keep in mind that there’s nothing fundamentally irreducible there. I was about to add more details but Friendly-HI already did.
^ what he said
You can quote a paragraph by preceding it with > (or multiple angle brackets to nest quotes deeper).
thx. Old habits die hard.
This seems mistaken, especially considering that we’re just getting started on the neurology.
I’d say that trying to think about morality without careful observation of what changes people can make and how is a waste of time.
I already thought I should have made my position clearer to prevent confusion. Lesson learned: I really should have made the effort.
The key word in my sentence is "concept". I didn't say the only source of learning things about morality is scanning the brain and understanding neurology. What I meant to convey is the vitally important *concept* that morality relates to something tangible in the real world (brains), instead of something mystical or metaphysical, or some "law of nature" that is somehow separate from biological reality. If people aren't aware that morality is a concept that solely applies to cognitive brains, their ideas simply will not be congruent.
Psychology studies people's behavior at a different "resolution" than neurology, but I'm certainly not saying that observation of human behavior is negligible when it comes to morality; quite the opposite. I meant to say that our model of morality must be based on the true premise that morality applies to brains and neurology, not that neurology is the only valid tool in the toolbox for rationally figuring out what is moral and what is not. I hope you catch my drift.
This is incorrect in at least two ways.
First, models can be useful in practice even if they don’t incorporate reductionism even in principle. In fact, many useful models make explicit non-reductionist assumptions (as well as other assumptions that are known to be false from more exact and fundamental physical theories). Again, this is true for everything from the most mundane manual work to the most sophisticated technical work. Similarly, ideas about morality given by models that use various metaphysical fictions may well give you better answers on how to live in practice than any alternative model. You may disagree that this happens in practice, but you can’t demonstrate this just by dismissing them based on the fact that they make use of metaphysical fictions.
Second, it’s not at all clear whether a workable moral system for interactions between people is possible that doesn’t use metaphysical fictions. (By “workable moral system” I mean a model capable of giving practical answers to the questions, both public and private, on what to do and how to live.) You can dress these fictions in modern fashionable language so as to make them more difficult to pinpoint, but this only makes the arguments more confused and their fallacies more seductive. Personally, I’ll take honest and upfront talk about God’s commands and natural law any day over underhanded smuggling of metaphysical fictions by invoking, say, human rights or interpersonally comparable utilities. (And in fact, I have yet to see any sound argument that the latter, nowadays more fashionable sorts of models produce better answers in practice than those of the former, old-fashioned sort.)
True, but are such models really *more* useful, especially in the long run? If I'm a philosopher of morality and am not aware that morality only applies to certain kinds of minds, which arise from certain kinds of brains... then my work would be akin to building a sky-castle and obsessing about the color of the wallpaper, while being oblivious that the whole thing isn't firmly grounded in reality but floats in midair. Of course that doesn't mean that all of my concepts would be wrong, since perfectly normal common sense can carry someone a long way when it comes to moral behavior... but I may still be very susceptible to getting other kinds of important questions dead wrong, like stem cells or abortion.
So while of course you're right when you say that models can be very useful even if they are non-reductionist, I would maintain that there is a limit to the usefulness such simplistic models can reach, and that they can be surpassed by models that are better grounded in reality. In 50 years we may have to answer questions like "is a simulated mind a real person to whom we must apply our morality?" or "how should we treat this new genetically engineered species of animal?" I would predict that answering such questions could be simple, although not easily achieved by today's standards: look at their minds, see how they process pain and pleasure and how these emotions relate to various other things going on in there, and you'll have your practical answer, without the need for pointless armchair-philosophy battles based on false premises. We may encounter many moral issues of similar sorts in the upcoming years, and we'll be terribly unequipped to deal with them if we don't realize that they are reducible to tangible neural networks.
PS: Also, I'm not sure how human rights are any more of a metaphysical fiction than, say, tax law is. How is a social contract or convention metaphysical if you'll find its content inside the brains of people or written down on artifacts? But I highly suspect that's not the kind of human rights you're talking about, nor the kind most people are talking about when they use this term. So you probably rightly accuse them of treating human rights as if they were some kind of metaphysical concept.
Also, I find it curious that you would prefer god-talk morality over certain philosophical concepts of morality, seeing how the latter would in principle be much more susceptible to our line of reasoning than the former. I prefer as little god-talk as possible.
Of course they are more useful. You have only finite computational power, and often any models that are tractable must be simplified at the expense of capturing fundamental reality. Even if that’s not an issue, insisting on a more exact model beyond what’s good enough in practice only introduces additional cost and error-proneness.
Now, you are of course right that problems that may await us in the future, such as e.g. the moral status of artificial minds, are hopelessly beyond the scope of any traditional moral/ethical intuitions and models, and require getting down to the fundamentals if we are to get any sensible answers at all. However, in this discussion, I have in mind much more mundane everyday practical questions of how to live your life and deal with people. When it comes to these, traditional models and intuitions that have evolved naturally (in both the biological and cultural sense) normally beat any attempts at second-guessing them. That’s at least from my experience and observations.
Fundamentally, they aren’t. The normal human modus operandi for resolving disputes is to postulate some metaphysical entities about whose nature everyone largely agrees, and use the recognized characteristics of these metaphysical entities as Schelling points for agreement. This gives a great practical flexibility to norms, since a disagreement about them can be (hopefully) channeled into a metaphysical debate about these entities, and the outcome of this debate is then used as the conclusive Schelling point, avoiding violent conflict.
From this perspective, there is no essential difference between ancient religious debates over what God’s will is in some dispute and the modern debates over what is compatible with “human rights”—or any legal procedure beyond fact-finding, for that matter. All of these can be seen as rhetorical contests in metaphysical debates aimed at establishing and stabilizing more concrete Schelling points within some existing general metaphysical framework. (As for utilitarianism, here we get to another important criticism of it: conclusions of utilitarian arguments typically make for very poor Schelling points in practice, for all sorts of reasons.)
Of course, these systems can work better or worse in practice, and they can break down in all sorts of nasty ways. The important point is that human disputes will be resolved either violently or by such metaphysical debates, and the existing frameworks for these debates should be judged on the practical quality of the network of Schelling points they provide—not on how convincingly they obfuscate the unavoidable metaphysical nature of the entities they postulate. From this perspective, you might well prefer God-talk in some situations for purely practical reasons.
I’m with you most of the way. On the rational alternatives though, I’m not sure what you suggest works in the way we might imagine.
Neurology and psychology can provide a factual/ontological description of how humans manifest morality. They don’t give a description of what morality should be.
There’s a deontological kernel to morality, it’s about what we think people should do, not what they do do.
Psychology etc. can give great insights into choosing morals that go with the human grain. But those choices are primarily motivated by pragmatism rather than virtue. The virtue you've chosen is to be pragmatic…
Happy to be proven wrong here, but in terms of what virtues we place value on, I think there’s going to be an element of arbitrariness in their choice.
The question “what do we think people should do?” is a question about what we think. Thus the relevance of psychology. Note that this is different from “what should people do?” being itself about what we think. But if you want to find out “what should people do?” half the work is pretty much done for you if you can figure out where this “should” idea in your brain is coming from, and what it means.
Can you clarify this statement? As phrased, it doesn’t quite mesh with the rest of your self-description. If you truly did not care about what other people thought, it wouldn’t bother you that they think untrue things. A more precise formulation would be that you assign little or no value to untrue beliefs. Furthermore, you assign very little value to any emotions that for the person are bound up in their holding that belief.
The untrue belief and the attached emotions are not the same thing, though they are obviously related. It does not follow from “untrue beliefs deserve little respect” that “emotions attached to untrue beliefs deserve little respect”. The emotions are real after all.
vs.
You’re right about the emotions part, but I’m certainly not bashing people as hard as Dr. House and I’m also not gonna take nice delusions of heaven away from poor old granny. Yes, of cause I too care about the emotions of people, depending on the person and the specific circumstances.
I'm also usually not the one to open up a conversation on the kind of topics we discuss here, but if people share their opinions I'll often throw my weight in and voice my unusual opinions without too much concern about tiptoeing around the sensibilities of, say, the political, religious, or new-age types.
Of cause I'm not claiming to be a total hardliner; deep within my brain there is such a thing as a calculation taking place about whether or not giving my real opinion to persons X, Y, and Z will result in too much damage for me, others, or our relationship… it's just that I'm less inclined to be agreeable in comparison with others. I'm not claiming to be brain-damaged after all; of cause I too care to some (considerably less than average) extent about social repercussions.
Addendum: Agreeableness is also known to rise with progressing age, so it's likely that I will become more agreeable over time, seeing how I'm still just 23. Another factor in agreeableness is impulsiveness, which thankfully diminishes with age, and I'm a fairly impulsive person. Agreeableness isn't just composed of "one thing"; it's the result of several interactions.
I’m 19, and I’m already one of the most agreeable and least impulsive people I know. I’m fucked...
Maybe you should consider a career in politics where having a spine is optional :P
EDIT: Wait, what am I saying… it’s of cause not optional but actually prohibitively costly.
No way! There’s a possibility I wouldn’t be able to keep everyone happy all the time! There’s a possibility people would dislike me for policies I implemented! It would be WAY too stressful!
Second time I catch this, so it may not be a mere typo. Did you mean “of c_our_se”, in the sense of “obviously”?
English is my 3rd language, so unfortunately it wasn’t really just a typo. Now that you pointed it out of course the mistake is obvious to me.
For an ironically religious meme comparison: "Love the sinner; hate the sin."
I’m reminded of the discussions in BDSM on how to be submissive in a healthy D/s relationship without being a doormat or being vulnerable to abuse.
Being more comfortable as a follower than as a leader is not necessarily a bug. Just make sure you can pick the right leader to follow.
As a contrarian rationalist, I can assure you that my attitudes are the results of my personality & upbringing, not some bold brave conscious decision. I was always different, enough that conforming wouldn’t have worked, so finding true & interesting & positive-attention-capturing ways to be different was my best path. The result is that I’m biased towards contrarian theses, which I think is useful for improving group rationality in most cases, but still isn’t rational.
First, I feel like I can partly explain why you’re finding many contrarian rationalists here. If rationality is a technology, it’s a technology that enables you to be right in situations where people are normally wrong. Since rationalists should win, it should also enable us to be right where others are right. But there’s nothing to discuss about that on here! In those cases, we can just learn from our surrounding cultures and don’t need a specialized blog.
I’ve always thought of this place as essentially “a place to learn how to go against the grain in order to be right”. Because all of our discussions are of that flavour, it probably paints contrarianism as a central feature of rationality. But I think that’s just a small part of the big picture. Eliezer did a good job of cautioning us against forming that image of rationality by discussing Spock as a bad rationalist and warning against reversed stupidity.
I think contrarianism is an occasional effect of being rational. But I don’t see why you should feel like you’re any less of a rationalist if you find yourself wanting to please people you care about. If you value keeping your family happy and bonding with your friends and feeling religious feelings, it sounds like you’re probably doing everything right.
I think that as rationalists, we are the ultimate conformists. It just happens that, in this day and age, the majority of people are somehow wrong about reality. But the fact that we agree on the existence of an ultimate truth, and that we cannot rationally agree to disagree, should make us the most uniform group in history. Biases and tendencies to conform to or contrast with indexical social groups just get in the way of reaching consensus.
By the way, there's nothing wrong with being moved or mystically exalted by some piece of literature/music/etc. I was moved by the ending of Stephen King's It, but that doesn't mean I started to believe in an interdimensional giant spider. You probably just fall on the orchid's side of the genetic variance.
Interesting article. But as more agreeable, wouldn’t she fall on the other side?
I’m sorry for the duplicated post. Apparently Firefox is a little at odds with the new layout.
(ok ok, I was also wrong to press "Comment" a second and third time when the first click didn't yield any result)
While the comment deletion feature is unavailable, you can ask a moderator to ban your comments that you want deleted. (Removed the duplicates.)
Nice self-introduction. I’m not sure exactly what you mean by “mystical experience of music,” but if you mean spine-tingling transportation, I think that’s pretty common, even among us’ns. For example, here’s a good performance of a piece my own group (not this one) sang a year ago—turn it up :)
Thank you, that was beautiful. Here is a link to a neat piece my choir did this past Christmas. (Fast-forward to about the halfway point if you want to skip the massively long introduction.) I am actually standing in the second row on the far right, just behind the choir director’s head when there’s a close up of him.
What does it mean to disagree with an interest? That sounds like it means that most LWers either disapprove of your involvement with religion, do not share your interest, or expect that you could increase your utility by decreasing your involvement with religion. I’m not sure which of these, if any, you meant, but as to the first, I don’t think most of us do disapprove of it. As to the second, that’s not disagreement. And as to the third, I’m not sure what most LWers think, but I think that you are in a much better position to judge that than I am.
In general, from the posts I've seen, the LessWrong attitude to religion is one of derision. (This may represent Eliezer's point of view more than it represents the average point of view.) But that derision has never been directed at me personally. A number of posts use religion as an example of "widespread irrationality that is bad for individuals and for the world", but no one has accused me directly of propagating an irrational or damaging meme.
As for your second point, yeah, lack of interest is not disapproval or disagreement. I have very little interest in finance and investment; that doesn’t mean I disagree with the premise of it. In fact, I have a lot of respect for people who can put up with studying something that seems so tedious to me. And I doubt that no one on LessWrong is interested in religion in the abstract, since it is one of the more surreal and bizarre aspects of human behaviour.
I bet there are several people on LessWrong interested in religion in the abstract, since it is one of the more surreal and bizarre aspects of human behaviour.
Also, there’s a selection effect, the people who have strong opinions about religion will tend to be the ones to talk about it. I suspect there’s a significant number of LWers who just don’t pay much attention to religion and tend to think religion is not useful but not harmful in many contexts.
Disapproval of religious interest isn’t really a disapproval of an interest. I doubt there is much disapproval of scientific investigation of the evo-psych reasons for religious belief.
Interest in religion is just a fancy way of saying flirtation with the beliefs/worldview offered by religious faiths. Thus the derision is no different from the average person's derision of people who take seriously the idea that the world might be flat or that dogs can talk. The difference is simply that religion is thrust into your face and life substantially more frequently than most people encounter nutjobs insisting the world might be flat (in a topological sense: no teleportation in weird coordinates).
Well, and a generous sprinkle of the superiority complex all persecuted/socially rejected groups develop to salvage their pride. I mean, psychologically it's very difficult to resist the easy option of going along with the flow without some form of personal satisfaction ("I'm smarter than them") derived from maintaining that difference.
No it isn’t. I’m interested in religion. It’s a fascinatingly complex aspect of human culture. Even if it’s obviously wrong, I have a hard time relating to a mental state of finding it uninteresting. How should a person express this if not by saying that they’re interested in religion?
Sounds like you’d score very high on tests for the personality trait “agreeableness”.
Yes, I score high on Agreeableness and Conscientiousness.
You are very likely to be cheated on apparently.
[Citation needed]
We're probably talking about that blogger who ran a regression on the MIDUS survey results?
http://inductivist.blogspot.com/2011/06/predictors-of-getting-cheated-on.html
Note that "very likely to be cheated on" is not an accurate summary, but Agreeableness did turn out to correlate with reporting that your wife had been unfaithful. (Although if you're a woman the odds tilt the other way: no correlation for Agreeableness, and a correlation between reported husband fidelity and Conscientiousness.)
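For concreteness, this is roughly the shape of analysis the linked post describes: a regression of reported infidelity on personality scores. Everything below is synthetic; the variable names and the built-in 0.3 coefficient are my own invention for illustration, not the MIDUS data or the blogger's numbers.

```python
# Hypothetical sketch of a logistic regression of "reported spouse
# infidelity" on personality scores. All data is synthetic.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 2_000
agreeableness = rng.normal(0, 1, n)
conscientiousness = rng.normal(0, 1, n)

# Build in a mild positive effect of agreeableness, purely for
# illustration of how such a correlation would show up.
logit = -1.5 + 0.3 * agreeableness
p = 1 / (1 + np.exp(-logit))
cheated_on = rng.binomial(1, p)

X = sm.add_constant(np.column_stack([agreeableness, conscientiousness]))
result = sm.Logit(cheated_on, X).fit(disp=0)
print(result.summary())  # the agreeableness coefficient recovers ~0.3
```

The point being: a statistically significant coefficient like this tells you the direction of an association in the sample, not that any given agreeable person is "very likely" to be cheated on.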
If you’re looking to practice disagreeing with people, then you could do worse than joining Hacker News, a community which very strongly rewards (well-cited, technical) disagreement.
See this post, where the top-scoring comment (written by me), with 40 points, disagrees with the linked article and has another dozen comments disagreeing with it.
Do beware being too negative too early, however. If your first comment is downvoted into the negatives, then your account will be hellbanned.
Part of me wants to write: “You’re a brave and forthright person, and I admire you for it.”
Another part of me, which I think is motivated by your honesty, reads that and says I should write: “I just wrote that because I want you to like me, and it reads like it might get an upvote (after LW acceptance subprocess runs consciously), proving someone else likes me, too.”
When I’m alone, alert and unoccupied, those two parts (there may be more, I don’t know) are always bickering. Thing 1 decides some feeling or idea is good, or correct, or sincere, and Thing 2 almost always has to come back and say why my conclusion is based entirely in bias or rationalization. I think this is why I try not to be alone, alert and unoccupied very often.
When I’m around other people, Thing 2 mostly shuts up, only butting in if Thing 1 is getting carried away with pleasing people, or bragging, or lying (i.e. making the truth sound more exciting), etc. I like Thing 2 quite a lot at those times.
When I’m tired or have a drink, Thing 1 and Thing 2 both go to sleep before the rest of my cognition does.
When I’m occupied, there is sometimes some bickering if I’m occupied at a game, or a blog, or something that’s not useful, but it’s not too bad. It sometimes gets to be enough that I’ll do something useful to stop the conflict.
So, that’s my Usual Live Life subroutine. It’s kind of bleak because Thing 2 insisted I write it this way, but I do manage to be happy, entertained, challenged, or deeply thoughtful most of the time.
So, why write this in response to the OP? Because my first internal response to the OP was “That’s a lot like me!” And then I read Friendly-HI’s response and I thought “That’s a lot like me!” And this bugged me. So, I thought I’d try to describe from an internal, process-oriented perspective how my days go by, and see whether that clicks more with one of you than the other (or anyone else who wants to chime in).
I didn't catch your comment for a long time, because it wasn't in response to my own and thus didn't light up the red message symbol. I just stumbled over it by accident, so here's my response a mere 1.5 months later:
I feel next to no conflict or friction between my rational and my emotional self, whether I’m on my own or with company. I radically adhere and submit to the guiding principle that “if it is true, I want to believe it and if it is false, I want to reject it”. So if I happen to have some kind of innate feeling or intuition about some objective topic, I immediately catch it and just kill it off as best I can (usually pretty good) in favor of a rational analysis. But these days I usually don’t have many of these “emotional preconceptions” left anyway. Over the years I buried so many of my favorite emotional preconceptions about every imaginable topic in favor of what appears to be “the truth”, that the act of giving up some idiotic emotion about a serious topic in favor of a better model hardly stings at all anymore. It feels quite good to let go actually, it’s a kind of progress I thoroughly welcome. Often I really don’t have any discernible emotion one way or another, even towards highly contentious and controversial topics.
Now if I am in the company of other non-Bayesian people (especially women, with whom the whole point of interaction usually isn't information-related but purely emotional anyway), I put my rational machinery to rest and just let my instincts flow, without paying too much attention to how rational everything I (or they) say is. That's because enjoying human company is first and foremost about exchanging emotion, not information or rational argument. (Although I have to admit that it always feels like a shocking slap in the face if it suddenly turns out that she believes in astrology et al.; a brain failure of that magnitude kills my libido faster than the kick of a horse.) So yes, my red "light bulb" that says "irrational/unproven belief" still gets triggered a lot in typical conversations with the average Joe and Joy, but not every instance justifies breaking rapport in favor of starting an argument. Actually, I realize that I tend to argue much more often with guys (maybe because arguing can be a way to establish social status) than with girls, where I often just skip over the logical loopholes and inconsistencies in favor of maintaining rapport.
Come to think of it, that is actually a rather rational strategy, given my heterosexual utility-function ;)
If I am interested in improving or expanding my mental model of reality on the other hand, I crank up my “bias & rationality” machinery and have a careful in-depth conversation with someone who is up to the task.
If I'm doing something irrational, like procrastinating or playing a game instead of furthering my goals, then often the rationality module kicks in and says I'm a bum wasting my precious (though hopefully unlimited) lifetime. Often I can't (or rather don't want to) stop having fun, however, so I just gently smother the rational voice in my head with a pillow and score a new record time in Dirt 3 instead. I suppose that's roughly the highest peak of conflict between my emotional needs and rational goals. But unfortunately, especially when it comes to hedonistic procrastination, the rational component doesn't put up much of a fight, which is certainly less than optimal.
Actually, I’m procrastinating right now instead of studying Psychology, so farewell.
In conclusion: it seems we aren't all that different, except that for some reason you seem to have some kind of problem with the "conflict" between your rationality and your emotions, which is something I don't really worry about. The important thing is that I can use my rationality when I actually need it, not that I constantly use my rationality to smother every single possibly irrational emotion at every given opportunity. So where exactly is your particular problem, and why is any of this important again?
This evidently didn’t bother me a few years ago when I wrote this post, but I want to say that if all of your interactions with women are like this, you are doing something wrong. It may be that the society around you is the main culprit for doing stereotypes wrong, but as a woman I still find this attitude frustrating.
EDIT: This comment was unclearly and unhelpfully worded; I was having fun being indignant at the expense of being specific. Will add more specificity when I’m not trying to run out the door to work.
I’m tempted to inject a ‘speak for yourself!’ here, or at least a caveat that the (subjectively asserted) mistake must include “or you are choosing to interact with the wrong women” in it somewhere.
Some people of a certain kind of social disposition (yes, more female than male from what I can tell) do mostly have interactions that would be classified as emotional rather than informational, according to the inferred intent of the labels. Having that preference and style works well for them, and others declaring that they are doing it 'wrong' is invasive and irrelevant. Similarly, when you interact with someone in the style that works best for that person, someone else declaring that you are doing it wrong is out of place.
I also note that much of what is labelled (and sometimes dismissed) as ‘emotional’ is itself information. Just information in a different, insufficiently nerdy, format.
Me too, but then I thought that “interacting with the wrong women” is one possible case of “doing it wrong”, if the latter is to be interpreted at all charitably.
Noted. I was not being very specific/using sufficient disclaimers in this discussion.
Disclaimer: if you are interacting this way with women on LW or interested in rationality, I am >90% sure that you are missing out on some valuable interesting/intellectual conversations.
Hypothesis: if you are interacting this way with women who aren't interested in rationality (or who you don't think are interested), it may be contributing to a self-fulfilling prophecy that women aren't interested in rationality. (Disclaimer: I'm probably guilty of this for both genders, in that I don't introduce enough of my potentially interested friends to LW ideas, period.)
I say this with no small amount of cynicism and bitterness: according to the appropriate roles and goals spelled out by our current society, he is doing it exactly right.
And remember that rejecting those roles and goals takes a LOT of effort, which means it takes a lot of motivation. Some people can find that motivation internally (they see a better way for their lives to be), others find it externally (they just aren’t equipped to fit into the roles their culture wants to assign them), but most people don’t find it at all.
It may be unpleasant to realize that most people don't particularly care about your thoughts, feelings, capability, or even your well-being except instrumentally, but it is true. And even with all the strides that feminism and gender equality have made against the stereotype you quoted, it's still entrenched enough that merely saying "you are doing something wrong" is inadequate. You have to explain to people why they should see you as a human being, and what seeing you as a human being actually looks like, or they will simultaneously fail to understand why they should, and fail to understand how they are not doing so already.
As a "young female with higher-than-average physical attractiveness" (if I remember your self-description accurately), you may be used to not having to spell that out in face-to-face interactions. Susceptible men will likely tend to implicitly understand "you are doing something wrong" as "you will not unlock the puzzle-box that has ownership of me as the prize". But here, you have the advantage of not being able to rely on that misunderstanding; I would strongly recommend that you practice using it.
Do you mean that this is true of how people interact with other people in general, or specifically how men interact with women?
They should because it’s self-evident that I am a human being? To me, at least. I spend a lot of time in a male-dominant community (atheists/skeptics/rationalists cluster), and even more time in a female-dominated domain (nursing), and my conversations among females are no more dominated by emotion than those among males. We have conversations to share useful information and ask for practical advice, to tell morbid anecdotes that we all find hilarious, to share personal goals, to point out new discoveries in medicine that we think are fascinating and exciting, etc etc etc. It’s so freaking obvious to me that this whole gender thing just Does. Not. Matter.
Most people in my immediate social circle already do this right, including the local Less Wrong group. It’s all the more jarring when I’m reminded that right, this whole feminism thing isn’t a moot point yet after all.
If that is someone’s goal in talking to women in general, they are doing it wrong, no matter the content and tone of their discussion.
I get that some women’s revealed preferences seem to indicate that they expect and want to be treated this way. This is deeply confusing to me. Anyone who wants ownership of me as a prize for their interesting conversation is going to be disappointed, because that prize is not on the table.
In my case at least, I think it’s more precise to say that gender is screened off by context: a randomly chosen conversation between me and a male is more likely to be about physics and less likely to be about emotions than a randomly chosen conversation between me and a female, but once you specify whether or not they’re a colleague of mine, whether or not we are on a walk together, etc., knowing their gender doesn’t provide much more evidence either way; it’s just that I have more male colleagues than female ones, take more walks with females than with males, etc.
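To spell out the "screened off" point with made-up numbers: if gender only influences topic through context, then conditioning on context leaves gender with essentially no predictive power. All probabilities below are invented for the example.

```python
# Made-up numerical illustration of "screened off" (conditional
# independence): gender predicts topic only through context.

# P(context | gender): more male colleagues, more female walking
# companions. These numbers are invented.
p_context_given_gender = {
    ("colleague", "male"): 0.8, ("walk", "male"): 0.2,
    ("colleague", "female"): 0.3, ("walk", "female"): 0.7,
}
# P(topic = physics | context) does NOT depend on gender:
p_physics_given_context = {"colleague": 0.6, "walk": 0.1}

def p_physics_given_gender(gender):
    # Marginalize over context: sum_c P(c | gender) * P(physics | c)
    return sum(p_context_given_gender[(c, gender)] * p_physics_given_context[c]
               for c in ("colleague", "walk"))

print(p_physics_given_gender("male"))    # 0.8*0.6 + 0.2*0.1 = 0.50
print(p_physics_given_gender("female"))  # 0.3*0.6 + 0.7*0.1 = 0.25
# Unconditionally, gender matters (0.50 vs 0.25). Conditioned on
# context it doesn't: P(physics | colleague) = 0.6 for either gender.
```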
I explicitly mean both at once.
But it is not self-evident that they are required to treat you as one, or even that they will gain a net benefit from doing so. Perhaps I should have avoided the rhetorical device “like a human being” and used the more precise “like a person”, instead. Let me reframe in that way:
It is obvious that you are a human being—that is, a member of the species Homo sapiens.
It is less obvious that you are a person—that is, a being that they must treat with the same level of rights and respect with which they expected to be treated.
This statement:
indicates that you are a tool / resource for achieving specific instrumental goals, and that those instrumental goals are different when they involve you than when they involve men. Your own preferences about which instrumental goals you would like to be utilized for are irrelevant; your average heterosexual male cares exactly as much for your preferences as your average Congressman or your average corporate marketing team: your preferences are useful for determining how to manipulate you towards instrumental ends, but other than that they don't really enter into the equation.
Perhaps you want to be treated as an equal—seen as a terminal value rather than as an instrumental tool or resource? That’s an admirable goal, and one you will find that most people share. And thus, a good deal of social manipulation involves providing the appearance that we care about each other as terminal values, while behaving very clearly as if we really only see each other instrumentally. It is ironic that in this specific case, a significant portion of the motivation for unlocking your puzzle-box is so that the male can believe that you see him as a terminal value. Our cultural narrative strongly reinforces to men that the only way they will ever get someone to see them as a terminal value is by finding a woman and unlocking her puzzle-box—for many men (especially the so-called “Nice Guys”), this is actually a more powerful motivation even than sex.
That is irrelevant. The fact that you are in front of them, and they can imagine you fulfilling the role, means (from their perspective) that that prize is on the table. The fact that you do not wish to reward it will not dissuade them; at most it just means (again, from their perspective) that they need to give the puzzle-box one or two more twists before it opens.
(EDIT: Re-reading this post, the picture I was painting of human nature is perhaps unnecessarily bleak. I think it is more accurate to say that people do not naturally treat each other as terminal values unless they are given explicit reason to, and that family, friendship etc. are all the normal reasons that they are given explicit reason to. Humans CAN be taught to treat all other humans as terminal values by default, but this is not particularly common. Unfortunately, it is far more common for people to learn to PRETEND to treat all other humans as terminal values—and to pretend to themselves just as much as they pretend to each other. Breaking through that to teach people how to truly love each other is something that mystics and visionaries strive towards every generation; you can look around you to get a rough estimate of their success rates.)
This may be a little on the pedantic side, but people are not values. They may factor into values, terminal or otherwise, in some way—you might for example want to maximize their happiness or their preference satisfaction—but if you say “Alice is a terminal value to me” or “Bob is an instrumental value”, you haven’t actually said anything well-defined about how to optimize your behavior. You haven’t even said anything about how they relate to other people in your value system: you can weight values differently, and it’s entirely consistent to treat Alice and Bob’s happiness as (separate) terminal values while weighting Alice’s needs over Bob’s in every situation where they come into conflict.
I find your interlocutors’ comments to be very insensitive, and think that they’re being hyperbolic.
I think that their descriptive characterizations of the world are true to some degree, but that this is highly contingent on culture. Our culture places very high emphasis on women’s physical appearance to the exclusion of most other things. I don’t think that this is biologically engrained. I think that men have small genetically rooted tendencies to view women in a more sexualized way than they view men, and that these tendencies have been greatly exacerbated by self-reinforcing runaway cultural feedback loops. I think that your interlocutors have (whether knowingly or unknowingly) reinforced these with their comments.
Their comments contain a valid overarching point which isn’t specific to gender relations at all: people greatly exaggerate their own and others’ prosocial motivations, deluding themselves into believing that they play a greater role than they do. The things that superficially appear to be altruistic often turn out not to be upon further investigation. People have some concern for others, but when they have conflicting motivations, they’ll generally succumb to them. I do believe that it’s possible to overcome these tendencies to a substantial degree, but most people aren’t sufficiently self-aware to recognize that there’s an issue that needs to be corrected, or interested enough to put effort into it.
What? It takes me more effort to follow them than to go my own way. YMMV, but remember not to generalize from one example.
EDIT: The “others find it externally (they just aren’t equipped to fit into the roles their culture wants to assign them)” suggests you did already understand that. (Still a weird way to put it IMO—“refraining from smoking is hard, but certain people are motivated to do that because they don’t like tobacco”? -- but still.)
Sorry, I have trouble phrasing things normally. It’s one of the reasons I often fall back on metaphor.
Hi again.
I thought it was about time I replied to this topic. I saw the response(s) earlier but didn't feel like responding at the time and unfortunately forgot all about it afterwards, up until now.
It seems to me there is a major point I should make.
According to this definition of “stereotype” (http://en.wikipedia.org/wiki/Stereotype) I would claim they are unavoidable and useful cognitive tools for categorizing and streamlining our internal map of the world, including other people. They are not to be confused with “prejudices”, which include an affective judgement.
So my belief that most Italians like spaghetti and eat it more often than people of other nationalities or origins is a stereotype. For me this is not an affective judgement, because I couldn't care less about spaghetti or whether someone is Italian or not. I would, however, be more surprised if an Italian told me he does not like spaghetti than if a Russian told me likewise. Furthermore, this stereotype may or may not be true, as in principle it is a claim about what reality is like—in this case, the average food preferences of a certain group.
A prejudice, on the other hand, might be for example the view that Americans are on average less rational and less well educated than average central Europeans. If this view carried an affective judgement it would be a prejudice, which is essentially a hybrid of a stereotype and an attached affective judgement. Personally I actually do believe this to be the case, but I do not know whether it is a prejudice or a stereotype on my part, since I don't really "feel" traces of affective judgement wrapped into this belief. For me it is simply a simplified model of a huge group.
I admit to having this stereotype, and as far as I can tell it mainly results from occasionally watching American news programs (several of which would be unthinkable on this side of the pond, although standards seem to be falling) and TV programs like the Colbert Report or, many years ago, "Real Time with Bill Maher". I have also read several statistics (like the percentage of atheists, or the prevalence of certain irrational non-religious beliefs, etc.) that roughly confirm my internal model of what an "average American" (whatever that is exactly) believes, behaves like and thinks like.
Personally I’m not even sure this belief qualifies as a prejudice on my part, since it may be nothing more than a simple stereotype, since I cannot discern a “negative affective sting”. For me this view is simply consistent with the data I know of and the things I experienced through the media it may or may not be true, but I certainly do not “hate” Americans and I sure don’t waste time on ranting about “those impossible Americans”.
If I know absolutely nothing more about a person than the fact that he or she is American, what happens in my brain is that I adjust the probability that said person is less well educated, more religious, and has "republican" views upwards, because of some data I am aware of. Again, this may or may not be true.
On the other hand, I happen to know some statistical data on the religious views of Swedes as well, which is probably not accurate, because it places the number of atheists at roughly 60-80% (I would rather estimate something like 40% atheists, with another 30-40% "believing in some metaphysical notions").
If you just grant me the axiom that Americans are more religious than Swedes, we can play through this hypothetical situation: if you set up an experiment where you tell me I have to spend an hour conversing with a) a completely random American or b) a random Swede, that is an easy decision for me. However, that of course does not mean that I indiscriminately dislike every American I meet for no other reason than their country of origin, which would be ridiculous. Americans also don't have to "prove themselves" more than Swedes do.
I’m perfectly aware that not every American -and in fact not even a single one- fits my stereotype of “the average American”. And of course I’m also perfectly aware of a multitude brilliant people and inventions that are of “US-origin”. Maybe it is just a case of the worst parts being the most salient.
So why write all this? It's obviously an analogy to my stereotypes of women and my internal model of what "they" like to converse about. In spite of what I wrote, it doesn't actually matter to me whether someone is American or not, because I -tada- update on incoming evidence, and once I have an actual person in front of me who happens to be American, he or she gets taken out of the drawer labelled "what I think an average American is like" and gets "promoted" into the category labelled "things I know and believe about James Smith", which includes a free and nearly effortless upgrade to a more complex and custom model of who that person is.
The same goes for women: I start out from my stereotype -or Bayesian prior- (where else should I start from?) and update on the "evidence" as it rolls in. Not every conversation with every woman I meet is about the fluffy emotional stuff; if I pick up on signals that indicate she is interested in talking about "heavy" stuff, then that's where I'll go. If I met you in real life, my prior/stereotype of you, a.k.a. "Swimmer963", would look different from the grossly oversimplified one that only says "women" on the drawer.
It’s still a crude stereotype but hey you gotta start somewhere, right?
He said “usually”, not “always”, but still...
I agree, it seems we’re pretty similar in this arena. I think maybe I just feel more negative emotion about, as you put it, hedonistic procrastination than you do. Those are the times I feel the most unpleasant conflict. I should just stop procrastinating, I guess. I’m working on that, getting better about it. Anyway, I don’t need to go into too much detail on this side topic. Thanks for the reply.
Maybe Thing 1, Thing 2, and you (if there’s anything left) could agree on a new way of doing things that leaves all parties better off.
Though I do tend to be contrarian, I've always thought that acting independently from others is the correct stance. Does everyone agree that contrarianism and conformity are both forms of bias to be avoided? I think that at best they can be seen as very weak, indirect reasons to believe or do something, and only relative to your context. (You need to pick your battles as a contrarian, and you need to break from conforming with the wrong people as a conformist.)
This is an interesting question. I definitely agree that being a contrarian and being a conformist can both be forms of bias. However, I would add one example which suggests that conformity can in some cases be a positive instinct.
I have never studied general relativity in depth. My belief that “general relativity is right” is based on the heuristics, “most scientists believe in general relativity,” and “things that most scientists believe are usually right.” In part I think it’s also based on the fact that I know that evidence and arguments are available which everybody claims to be very strong.
To show that most of my belief in general relativity comes from popularity-based heuristics, consider the following scenario. Somebody proposes a unified field theory (UFT-1). They claim that evidence and arguments are available which would convince me that the theory is right. Furthermore, they are the only person who believes in UFT-1. To eliminate further confounding variables, let us suppose that UFT-1 has existed for 35 years and has been examined in detail by 200 qualified physicists.
The main difference between general relativity and UFT-1, from my perspective, having never examined the arguments for either, is that most scientists believe in general relativity, and most scientists do not believe in UFT-1. Yet, I believe that general relativity is almost definitely right, I believe that UFT-1 is almost definitely wrong, and I believe that these are rational judgments.
Furthermore, these rational judgments are based almost entirely on a popularity-based heuristic: that is, the heuristic that popular beliefs are more likely to be true. To review, from the information I have, the main difference between general relativity and UFT-1 is that a lot of people believe in general relativity, and few people believe in UFT-1. Otherwise they are quite similar: both of them have been around for a while, both of them have received significant exposure, and both of them claim to have sound arguments in their favor. (The differences between these arguments cannot enter into my evaluation of the two theories, because I have not examined the arguments for either.)
This example suggests that popularity-based heuristics, telling us that popular beliefs are more likely to be true, rightly have a place in rational people’s judgments.
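Here's one way to make that heuristic explicit, as a toy Bayesian model (my construction, not the commenter's, with arbitrary reliability numbers): treat each qualified reviewer's verdict as a noisy, independent signal about whether the theory is correct. Real expert opinions are of course highly correlated, so this overstates the strength of the evidence, but it shows the mechanism.

```python
# Toy model: each qualified reviewer endorses a true theory with
# probability 0.8 and a false theory with probability 0.1 (both
# arbitrary), and verdicts are treated as independent.

def posterior(prior, n_accept, n_reject, p_if_true=0.8, p_if_false=0.1):
    """P(theory true | n_accept endorsements, n_reject rejections)."""
    like_true = (p_if_true ** n_accept) * ((1 - p_if_true) ** n_reject)
    like_false = (p_if_false ** n_accept) * ((1 - p_if_false) ** n_reject)
    num = prior * like_true
    return num / (num + (1 - prior) * like_false)

# General relativity: essentially all of 200 qualified examiners endorse it.
print(posterior(prior=0.01, n_accept=200, n_reject=0))    # ~1.0

# UFT-1: of 200 qualified examiners, only its author endorses it.
print(posterior(prior=0.01, n_accept=1, n_reject=199))    # ~0.0
```

Under these (generous) assumptions the popularity gap alone pushes one theory to near-certainty and the other to near-zero, matching the "almost definitely right" / "almost definitely wrong" judgments above.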
This makes sense. The amount of thinking that the human race as a whole has done vastly exceeds the amount of thinking that I will ever do. It would make sense for me to rely on this vast repository of intelligence in choosing my own beliefs. This is related to the idea of “the wisdom of crowds.”
On the other hand, popularity-based heuristics often lead us to the wrong answer. Religion is an obvious example. So we have to be careful in applying them. I’m not sure what general principles would result in our popularity-based heuristics excluding religious beliefs, but including popular scientific theories which we have not evaluated for ourselves. What do you guys think?
The strength of others’ beliefs as evidence depends on what you know about how they arrived at those beliefs. If you know that scientists have a general process for establishing accepted truths which involves repeated testing with attempts to falsify their hypotheses and find alternative explanations, then you can take established consensus as evidence proportional to your trust in that process. Likewise, if you know that people tend to come to religious consensuses due to early indoctrination and community reinforcement, you should take religious consensuses as evidence proportional to your confidence that those processes will tend to produce true beliefs.
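A compact way to state that: consensus is evidence exactly in proportion to the likelihood ratio of the process that produced it. A quick odds-form sketch, with illustrative numbers of my own choosing:

```python
# Posterior odds = prior odds * P(consensus | true) / P(consensus | false).
# The likelihood ratio encodes how truth-tracking the consensus-forming
# process is; all numbers below are illustrative.

def odds_update(prior_odds, p_consensus_if_true, p_consensus_if_false):
    return prior_odds * (p_consensus_if_true / p_consensus_if_false)

prior_odds = 1.0  # start at even odds

# Truth-tracking process (repeated testing, attempted falsification):
# consensus is far more likely around true beliefs than false ones.
print(odds_update(prior_odds, 0.90, 0.05))   # 18:1 in favor

# Indoctrination plus community reinforcement: consensus forms about as
# easily around false beliefs as true ones, so it is nearly uninformative.
print(odds_update(prior_odds, 0.90, 0.85))   # ~1.06:1
```

The same observed consensus carries radically different evidential weight depending on what you believe about the process behind it, which is one answer to the earlier question about general principles.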
That’s how scientific beliefs become consensus too. It becomes a question of how the doctrine was originally chosen and on what criterion the culture most rewards oneupmanship attempts. ie. You can put more trust in science based indoctrination because you believe that if the powers that be were indoctrinating you with beliefs that can be contradicted via science rituals another power would have an excuse to ridicule them.
The beliefs of other people are evidence of a sort. In some cases (e.g. scientific consensus), a belief being widely held is a very strong signal of correctness. In other cases (e.g. religion), less so.
Of course, our social instinct to conform does not take into account the reliability of the beliefs of the group one is part of—although it does take into account whether you identify yourself as part of that group, which gives one some control (only identify yourself with groups that have a good track record of correctness).
I’d be hesitant to classify being either contrarian or conformist as being examples of bias per se. For something to be a bias, it must influence ones beliefs in such a way that is not rationally justified. Being contrarian regarding e.g. the religious beliefs and beliefs stemming from religious beliefs of your parents is, probably, rational; conforming to the beliefs of people with more experience than you working in a field that strongly rewards and punishes success or failure (e.g. stock trading) is, again, probably rational.
Of course, being conformist can be considered to bring great gains in instrumental rationality. A large proportion of the beliefs people hold do not change the way they lead their lives in any significant way, but they do carry a large signalling value—that one is part of a group, and not some insane, socially inept geek who believes in crazy things such as the singularity. Fortunately, it is possible to get almost all of the benefits of actual conformity by simply pretending to conform; normally one does not even need to lie, as just holding one's tongue is often enough. The only advantage I can see to actually conforming is that it may make it easier to empathize with and predict the behaviour of others in that group, but I don't think that is normally much of an advantage.
One of the reasons this post is of interest is that it likely represents the feelings of some (or many) would-be rationalists and the struggles they face. The reasons this person has for continuing their current mode of living cut across many different lines. How many people choose not to come out of the closet, don't admit to being childfree, or refuse to be the sexual libertines they wish they could be for fear of being ostracized (and losing their social and economic support networks)? Thought experiment:
In a theoretical future society where the following conditions are true:
New people are "grown", or simply do not know their parents. A highly advanced AI raises everyone. This means that there are no familial attachments. All attachments are to others with whom you voluntarily enter into relationships (friends, sexual partners, mentors, whatever). The modern analogue would be "raised by the state" (and not necessarily in underfunded orphanages).
The link between work and survival has been completely severed. Robots do all the work, and all the basics are provided. You can work if you want to, but it’s not required, and you’re in no greater danger of starving, being homeless, being involved in a violent situation, etc. if you don’t. This means the economic reasons for maintaining links to others are also severed. Modern equivalent could be generous welfare states with universal job systems.
Finding people you feel you'd want to associate with has become trivial. A system exists that can very quickly find others who share your interests, and due to sophisticated "intent"-reading technology (meaning that it's impossible to lie to or deceive said system), there's no question that those you are connected with are honest about their intentions for wanting to associate with you. No modern equivalent.
To sum up the above, it’s a society of free associations, no economic dependence, and total transparency with regards to interpersonal connections.
In this society, how many people would be afraid to be rationalists (or irreligious, childfree, libertines, take your pick)? What does the data say about societies which tend more in these directions than the US? Here’s one interesting datapoint: http://t.co/E2WEIxR
Bottom line for this comment: I would speculate that the ability to be an open rationalist is likely heavily influenced by which society you live in, though obviously some real data would be helpful here. Using both educational attainment and level of religiosity as a proxy for open rationalism, are countries which score high on those ranks more accepting of open rationality? Top fits would be places like the Czech Republic, Finland, Sweden, and maybe Germany. It would be interesting to know.
Depends. The countries in question come with their own ideological quirks. Scandinavia especially is in a way a very conformist culture: conformist to liberal sensibilities, but basically filled with people who have internalized a very low regard for any deviation from those sensibilities (I have heard Sweden mockingly referred to as the Saudi Arabia of feminism).
The Czech Republic seems a good bet. Free of PC and free of conservative religious and other silliness. Perhaps the older generation carries some pro-Communist nostalgia, which makes the anti-free market bias a bit stronger but their historical legacy probably softens that too.
Also, the beautiful young ladies there were a riot last time I visited. Thumbs up.
Societies like that would leave room for a wide range of eccentricity. There wouldn’t just be more rationalists.
One of the things I’ve been exploring is where my sense of self is and what it’s doing. I’ve found that I spend a lot of time imagining myself as someone looking at me and disapproving.
Obviously, sometimes I do need to check my behavior, but what I’m seeing seems more like an emotional habit.
I don’t know whether you’ve got the same habit (even if you’re got something similar, it’s probably shaded differently), but I do recommend gentle exploration of what’s going on in your stream of consciousness.
Ah! Thank you. I find that describes precisely what's most wrong (currently) with my own sense of self. Earlier I would go for days thinking about something I said or did that probably nobody else even noticed, and feeling bad about it. One way I've recently started managing it is by actually forcing myself to notice similar mistakes made by people I like and respect, and realizing that nobody thinks less of them for those slip-ups.
You may want to borrow from the concept of meditation, where you simply observe the thoughts that pop into your head without any judgement—as if you’re an independent nonjudgmental observer.
On the other hand I don’t know if not being disapproving as you observe yourself from “the outside” will trigger your desired behavior, so it’s a recommendation with some reservations. You may want to try it—or maybe not. I’ve switched about a year ago and I’m certainly happier now, although not quite as productive (but I don’t know if that actually has anything to do with my new perspective, it may have entirely unrelated causes—and I can easily think of several).
If I understand you correctly, you’re bringing up something that’s a subtle problem for me—I’ve been doing a lot of self-observation, but didn’t realize how much judgement (not always negative) was mixed in.
Lately, I’ve been working with Kenny Werner’s methods—he’s very good about defusing the desire to get things “right”.
I didn’t really need any program or epiphany to change. The change came gradually as my general sense of self-acceptance grew from the knowledge that there is no free will and that I am who I am—a product of my genes and my surroundings and nothing more nor less.
There’s nothing worth feeling guilty about. If I think about myself and try to understand why I’m doing what I’m doing, I simply try to identify the causes of my behavior, instead of finding ways to guilt myself into my desired behaviors (which works hardly at all). I think the key to changing oneself is to non-judgementally observe yourself and to try to understand the reasons for your behavior and your thoughts—but it takes a lot of knowledge about psychology and evolutionary psychology to make introspection a worthwhile endeavor. The second step would then be to find working remedies for your identified problems, which is a problem all on its own, as one must locate non-BS instructions which can sometimes be hard to come by.
From what I’ve read from lukeprog so far, he’s quite into “self-help” (the non-BS type), so it may be worth looking into what he has written on certain subjects. He usually adds a ton of references that can be a goldmine.
This strikes me as excellent.
This strikes me as dubious, but I’m curious about how it’s worked out for you.
In my case, I find it’s more useful to work on accurate observation of what I’m feeling and thinking, and thinking about whether the methods I’m using are getting me what I want.
For example, I used to try to stabilize negative emotions so that I could work on them. This was a bad strategy—I was spending more time in negative emotions than I needed to, and a stabilized-from-memory emotion probably isn’t the same thing as a spontaneous emotion.
I’m not sure which part is the dubious one in your eyes—that evolutionary psychology is needed to understand one’s own behavior, or is it my opinion that for the vast majority of people introspection is at best a waste of time and at worst can be a real drawback for their mental health?
It’s pretty clear from what is known about psychology, that people who think a lot about themselves aren’t very happy in general. That’s because they don’t actually think in a rational manner and thus won’t succeed in identifying and addressing their problems. Instead, they actually do what is known as ruminating—which is more akin to an endlessly looping pattern of thought, that rarely yields any real insights, let alone tangible changes for the better. For a ruminating person it usually feels like they are thinking, but they are really not… it’s just one messy out-of-control thought-stream that is endlessly looping. But even without rumination-loops… if you took a hundred people and looked at what they come up with, when they are tasked with some kind of introspection you’d probably get mostly deluded nonsense out of them.
As far as evolutionary psychology goes, let me give you an example of how it can be useful. I'm a young male who is understandably very status-driven at this stage of life, but unlike many people I've been aware of this for a long time, and way back I framed it to myself as having the character trait of "a huge ego". I was very aware of how practically everything that I (and others) said carried an undercurrent that was really all about social status. So I had the futile idea that I should somehow extinguish this character trait, which is of course nonsense, because it simply cannot be done. People don't have that kind of malleable access to "character traits" that were deeply ingrained and hardwired by evolution. I felt bad about being status-driven, because it seemed such a silly and unworthy thing, and so I was completely wasting my time with the idea that it could somehow be outgrown. And there is a whole array of other evolutionarily hardwired things that people regularly misidentify as something undesirable they should try to outgrow, when in reality they most certainly can't. Someone wants to stop worrying about social status? He or she might as well try to disable their breathing reflex.
I think evolutionary psychology includes a lot of guesswork.
Your distinction between rumination and thinking is excellent.
As for your specific example, I don’t think evolutionary psychology is needed to realize that concern about status is a common preoccupation, and that there aren’t many people (if any) who don’t care about it at all, and that therefore, it doesn’t make sense to expect oneself to be free of it.
I’m interested in why such an inhuman standard is so popular. I’ve got two possible angles. I’ve heard that Wilhelm Reich thought having rules about sex that people can’t follow is a very convenient tool for controlling them. I’ve extended the theory to include the more modern issue of having morality and status very entangled with what people eat.
Karen Horney (an early psychoanalyst) thought that if a child is abused, neglected, or had early development much interfered with (I think being pushed to walk early would be an example), they conclude that being a human being isn’t good enough, and invent inhuman standards (always right, always virtuous, always victorious, etc.) and attempt to live by them. I don’t know whether she looked at the implications of cultures where such standards become dominant.
I don’t really believe that superhuman standards have anything to do with a faulty upbringing. Let’s get back to the social status thing… there are people in this world who are perceived by many (if not most) as humans who do not worry or tend to their own status at all, which is but one component that makes up their irresistibly charismatic pull.
Think Buddha, Jesus, Gandhi, even Einstein: these people are generally not perceived as being concerned about lowly things like social status, but as hardliners for their high causes. Or just look at some gurus who are still alive and revered. Even self-help guys like Tony Robbins are widely perceived that way. For anyone with a streak of perfectionism, or simply high standards, it would only be natural to try to emulate the "best". But of course such people do not actually exist, and evolutionary psychology tells you exactly why they can't. (Sure, you don't need EvPsy to tell you that, but it's one way of becoming aware of the kinds of tricks your mind and perception can play on you.)
There are still a lot of people out there who hold onto the ludicrous "blank slate" model of malleable personality, because they simply can't bear the thought that life is severely impacted by our genetic make-up. They are stuck in what is called the "just-world fallacy"—the delusion that we all somehow have equal chances from birth, that life is a fair race, and that you can become anything. It's basically a flattering model of psychology that plays into the fantasy of "The American Dream". Some people invent karma or the afterlife to satisfy their deep need for a sense of fairness, and others simply deny that there is a problem to begin with and start believing in a highly delusional version of the American Dream. They think they can shape themselves into whatever person they desire.
In retrospect, my example was poorly chosen, because you are perfectly right that being aware of our neediness for social status doesn't require an understanding of EvPsy at all. A better example, where an understanding of EvPsy is much more useful for making sense of our own thoughts, may be the problem of rationality and becoming aware of our numerous psychological biases.
I like Venkat Rao’s cat/dog personality distinction, which he describes here. According to Venkat, you’d be more of a dog.
Perhaps it is sometimes rational to prefer agreeing with your friends over being rational :-) ?
I understand. I -absolutely- love the Gnostic narrative, having stumbled upon it through the books of Philip K. Dick. That’s a really cool story, and I’d love it to be true, and Eliezer’s schticks can’t quite explain how PKD saved his child on 2-3-74… but I’d never rely upon it in any expectations, so it’s not a belief in the strict sense. It’s just a science fiction/fantasy story I really love.
I suspect the rest of us are more like you than we know, and you've done well to notice your own compulsion to conform. That said, you do make it sound like you're an extreme case, but conforming to LessWrong norms is probably not a bad thing.
Sounds like you need to undergo reverse rejection therapy. Ask some people at the next meetup to ask you for things/favors/anything, and make it your goal to say 'no' to all of them. Maybe everyone can be a target and all of you can rotate.
As I have argued elsewhere, I think one big problem here is that, while it surely makes sense to desire to believe more true things and fewer false ones, being rational isn't even a coherent notion.
Of course, it could be that the Greek gods really run the universe on their whims, and the apparent success of physics is merely a fashion trend popular up on Mount Olympus (hidden from our view). To be meaningful, rationality has to exclude simply waking up tomorrow, deciding you don't like the priors you had the day before, and assigning prior probability 1 to the Greek gods on a whim (unless, of course, you want to say that we are somehow constrained by our past selves from being both rational and evaluating the world afresh, no matter how young we were when we first formed our worldview).
This demonstrates that rationality and truth maximization must diverge. Obviously the truth-maximizing strategy is to simply dogmatically believe true propositions and dogmatically deny false ones. Moreover, this is clearly a well-defined way to form and update beliefs that obeys probabilistic updating. So either rationality is about luckily picking your dogmatism to be correct, or it diverges from the truth-maximizing strategy. But if rationality doesn't make you believe the most true things in the actual world, and we lack anything like a coherent, principled notion of a measure on the set of possible worlds, it seems there isn't any room left for a non-trivial notion of rationality.
Really, Less Wrong isn't about being rational, since there is no such concept. It's a social club for people of a certain technical/philosophical/social bent. In other words, it's like a book club for people who want to BS about AI, physics, philosophy, and the lack of sexual and social desirability these skills offer men. Frankly, I think that's a much more important goal, since I'm more interested in people being happy than rational.