Reflections on rationality a year out
Edited for concreteness.
Exactly one year ago, LessWrong helped me change my mind about something important.
Since then, my life has been changing very rapidly, as a direct result of the rationalist community. I got in touch with other rationalists in person, which made my social life vastly more interesting (not to say surreal). My plans for the future have definitely shifted a bit. I began a deliberate habit of trying new things and learning new skills, and facing up to my flaws, often with advice from LessWrongers or IRL rationalist friends.
A few examples: I improved my diet (paleo), tried yoga, took up cognitive behavioral therapy to work on some chronic insecurities, moved Python from the “wish I knew” box to the “have a detailed plan to learn” box, dared to publish some popular-science articles under my real name, learned to do Fermi calculations in my head. I also noticed that my habits of thought have been changing: for one thing, I’m getting better calibrated about probabilities—I’m better at estimating how I did on schoolwork. For another thing, I’m getting better at not reflexively dismissing non-standard ideas: the first time someone mentioned to me that a good statistician could make a lot of money in car insurance by finding new correlations to monetize, I thought “Car insurance? Hmph, low status.” The second time I heard that suggestion, about five months later, I thought “Hey, that’s a decent idea.” Some of these changes have begun to show results—the time-management habits* I came up with have started to improve my academic performance, and I notice I’m far less inhibited about taking the initiative to work on projects (I have a couple of interesting balls in the air now, including a business idea and some volunteer work for SIAI, whereas I used to be very reluctant to volunteer for things). I’ve become much more open to cold-emailing people who work on interesting things (on one occasion I got a job offer out of an AI researcher); I’m more comfortable viewing myself as a junior member of the Interesting-People Club. I made a unilateral decision to be happier, and though I hate to jinx it, I think it’s working.
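(For readers who haven’t seen one, here is a minimal sketch of a Fermi calculation written out in Python, using the classic “how many piano tuners are in Chicago?” puzzle. Every input below is a rough order-of-magnitude guess, not a measured fact; the point is that a chain of defensible guesses usually lands within a factor of a few of the truth.)

```python
# Fermi estimate: how many piano tuners are there in Chicago?
# Every input is an order-of-magnitude guess, not a measured fact.

population = 3_000_000            # people in Chicago, roughly
people_per_household = 2          # rough average household size
households_with_piano = 1 / 20    # guess: 1 in 20 households owns a piano
tunings_per_piano_per_year = 1    # pianos get tuned about once a year
tunings_per_tuner_per_year = 2 * 5 * 50  # 2 a day, 5 days a week, 50 weeks

pianos = population / people_per_household * households_with_piano
tuners = pianos * tunings_per_piano_per_year / tunings_per_tuner_per_year
print(round(tuners))  # ~150, within a factor of a few of the real number
```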
I say this just to offer evidence that something about “rationality” works. I’m not sure what it is; many of the components of LessWrong-style rationality exist elsewhere (cognitive biases are fairly common knowledge; self-improvement hacks aren’t unique to LessWrong; Bayesian statistics wasn’t news to me when I got here). If anything, it’s the sense that rationality can be an art, a superpower, a movement. It’s the very fact of consolidating and giving a name and culture to the ideas surrounding how humans can think clearly. I’m never sure how much of that is a subjective primate in-group thing, but I’m hesitant to be too suspicious—I don’t want to blow out the spark before the fire has even started. My point is, there’s something here that’s worthwhile. It’s not just social hour for nerds (not that we can’t enjoy that aspect); it actually is possible to reach out to people and make a difference in how they live and see the world.
Once upon a time—it seems like ages ago—I used to envy a certain kind of person. The kind who has confidence that he can make a decent stab at ethical behavior without the threat of divine wrath. The kind who thinks that human beings have something to be proud of, that we’re getting better at understanding the world and fitfully reducing suffering and injustice. The kind who thinks that he, personally, has some chance to make a valuable contribution. The kind who’s audacious, who won’t let anybody tell him what to think. The kind who whistles as he wins. Bertrand Russell seemed to be like that; also Robert Heinlein, and a couple of close friends of mine. That attitude, to me, seemed like a world of cloudless blue sky—what a pity that I couldn’t go there!
Ah, folly. Thing is, none of that attitude, strictly speaking, is rationality—it might be what comes before rationality. It might be what makes rationality seem worthwhile. It might simply be the way you think if you read a lot of science fiction in your youth. But I’ve never seen it encouraged so well as here. When people ask me “What’s a rationalist, anyway?” I tell them it’s living the empirical life: treating everything as science, not just what happens in the lab—trying different things and seeing what works, trying to actually learn from everything you observe.
I’m grateful for all this. While it’s probably for the best that we don’t pat ourselves on the back too much, I’m convinced that we should notice and appreciate what works. I used to be uncomfortable with evangelism, but now I tend to refer people to LessWrong when they mention a related idea (like complaining about incoherent arguments in debates). I think more visibility for us would be a good thing. I have plans to make a “rationality toy” of sorts, and I know other people have projects in that vein; the more things we can create beyond the blog, the more alternate channels people have to learn about these ideas. And the more we can inspire the less confident among us that yes, you can do something, you can contribute.
*My anti-procrastination tactics are goal tracking via Joe’s Goals and selective internet blocking via Self Control. Also posting my weekly goals to the New York Less Wrong mailing list. My problem up until now has really been spending too few hours on work—in the bad old days I would frequently spend only 5 hours working on a weekday or 3 hours on a Saturday and the rest fooling around on the internet. I was really hooked on the intermittent stimulation of certain message boards, which I’m mostly glad to have given up. Now I’m aiming for 60-hour weeks. One thing that works in my favor is that I’ve almost completely stopped motivating myself by the ideal of being a “good girl” who receives approval; the reason I’m trying to get more work done is so that I can get credentials and preparation for the life I actually want to lead. I’m trying to be strategic, not ascetic. I don’t know if what I’ve done is enough—there’s always someone who works harder or longer and seems to never need a break. But it’s definitely better than nothing.
Rationality working is one possible explanation of this, but it’s not the only one or even the most likely.
There are all sorts of interesting sociological differences between actively religious people and the nonreligious, usually to the advantage of theists. They live longer, report greater happiness, are healthier by most measures of health, and, I think, have some protection against mental illness. Most studies investigating these advantages find they have nothing to do with the content of the religion and everything to do with the religion providing easy access to a religious community, a friendly and supportive social group to which other believers have an automatic “in”.
I have a feeling this works in more subtle ways than just the obvious; it’s not just about going to church and seeing people, but about slowly absorbing these people’s norms (which are usually pretty positive in practice even when the theory behind them is repulsive) and internalizing their conception of you as a genuinely okay person because you’re part of the in-group.
A lot of what you’re talking about sounds potentially mediated by the same factors. You are part of a large and active RL community of rationalists and may have internalized the idea of fellow rationalists as your in-group, which means you’re adjusting your behavior to conform to rationalist norms and values rather than the norms and values of whatever was your in-group before.
This is not to devalue the importance of the material—most of us would not fit into a religious community no matter how hard we tried and so the material deserves a lot of credit as the attractor around which a community of interesting non-religious people can form—but I think the value of the material is indirect rather than direct.
FWIW, I just spent the last two years in a church in hopes of achieving such benefits. A few weeks ago I classified the experiment as a failure—I was more connected to others and generally happier in the one month I spent with the NYC rationalist community than at any time with the religious group, with which I had spent more time.
I’m impressed that you did the experiment.
Thanks. I thought I would be metaphorically smacked in the head with a trout by most people here for trying something like that after all I’d read on OB/LW.
Edit: Part of why I had joined was that it’s the easiest way to get a social group in Waco. Now I don’t know whether to try to relocate my life; try to find other, more appropriate groups to join here; or go for broke and try to get a stable Waco LW meetup going.
I’ll note that creating a stable Waco LW meetup group would have positive externalities.
Thanks for helping me understand the term “externality” by providing a comprehensible example of its use.
Just to re-confuse you, jsalvatier would also say that, in the present environment, it would create positive externalities for me to counterfeit money and use it to buy junk that I don’t want.
Your comment reduces my confidence that I understand the term “externality”. Until I read it, I tentatively believed that “X has positive externalities” means that X is an action taken voluntarily by a person (or firm) and has positive expected global utility. Most economic discourse assumes that all voluntary actions taken by a person (firm) have positive expected personal (organizational) utility. But in the present environment, counterfeiting money has, according to my models, negative global expected utility, by reducing (by a small amount) the value of every asset denominated in the currency being counterfeited (e.g., cash and loans). (Counterfeiting is a member of the class of diffuse harms, which, by the way, do not seem to get the attention they deserve here on Less Wrong.)
(Buying junk I do not want has negative global expected utility, too, under my models.)
The textbook definition of “externality” is an effect (whether positive or negative) that some activity has on people who are neither party to that activity nor in a contractual relationship with those who are.
So, creating a meetup group that other people will enjoy has a positive externality, but note that if SilasBarta had been hired by those people to create that group, there would be no externality (unless it also benefited some people who hadn’t hired him).
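(A toy illustration of the bookkeeping, with made-up utility numbers:)

```python
# Toy illustration of an externality, with made-up utility numbers.
# An externality is the part of an activity's effect that falls on
# people who are neither party to it nor in a contract with its parties.

organizer_benefit = 5       # what the organizer gets out of running a meetup
organizer_cost = 3          # time and effort spent organizing
attendee_benefit = 4 * 2    # four attendees, each gaining 2, none of whom paid

private_net = organizer_benefit - organizer_cost  # what the organizer weighs
externality = attendee_benefit                    # falls on third parties
social_net = private_net + externality

print(private_net, externality, social_net)  # 2 8 10
# If the attendees had hired the organizer, their benefit would be part of
# the transaction and would no longer count as an externality.
```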
As for the reference to counterfeiting, that I believe is (based on previous discussions with SilasBarta) a sly reference to Keynesian economics, and you should probably leave it to one side if you’re still trying to get your head around externalities.
Thanks.
Happy to help; I like to contribute my economics knowledge to the group when it’s germane.
In the present environment, at least in the US and most of Europe, it is conceivable that counterfeiting money has positive externalities. There is a very high unemployment rate, and low capacity utilization across most sectors of the economy. There is a fairly broad school of economists who believe that this is the result of a shortage of aggregate demand brought on by poor macroeconomic management due to an irrational fear of inflation—that the central bank can and should do more than it is doing to stimulate the economy, and failing that, central governments not facing high or rising borrowing costs should be willing to run large short-term deficits. If this bunch of economists is correct, then these policies would be good for the global economy. Since counterfeiting money is essentially equivalent to monetary stimulus, it too would have positive externalities. It would be much more likely to put some resources back to work and have little or no effect on the value of assets denominated in that currency.
If all economic actors are perfectly rational, and none suffer from money illusion, hyperbolic discounting, or other such effects, then you would be right at all times, not just in normal times of close-to-optimal Fed policy and near-full labor and capital usage. That would also mean that the economists to whom I refer would be wrong about the current state of events.
I agree, though, that buying junk you do not want would destroy most of any utility gained by counterfeiting. It would be far better to buy things you do want, or failing that, to simply give the money away.
The disagreement here isn’t about the term “externality”, it’s about the consequences of counterfeiting.
Right now, the U.S. economy is in such a screwed-up state that injecting more currency into the economy (regardless of whether it’s done legally by the Federal Reserve or illegally by counterfeiters) may indeed have net positive effects instead of net negative effects.
According to my preferred expert, the best macroeconomic model for our current situation is that of a demand shock brought on by the recent financial crisis: people lost a lot of money, which has led to a fall in aggregate demand (people are buying less stuff), which has led to a drop in output (people are making less stuff), which has led to higher unemployment (you don’t need as many employees when you’re making less stuff), which has led to a fall in aggregate demand (newly unemployed people no longer have the money to buy stuff)...
We don’t seem to be in a downward spiral any more (unemployment stabilized at around 10%), but business investment is extremely low; corporations are sitting on cash instead of spending it to expand production because nobody is buying. Right now, the bottleneck to economic growth in the United States isn’t productive capacity, but people’s desire and ability to purchase finished products. We’re at the point where having the government hire people to dig ditches and fill them up again, or even dropping cash from helicopters, would actually improve the economy.
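(If it helps, here is a minimal toy model of that spiral, assuming people spend a fixed fraction of whatever income they received in the previous round; the numbers are illustrative, not calibrated to any real economy:)

```python
# Toy model of the demand spiral: each round, people spend a fixed
# fraction (the marginal propensity to consume) of the income they
# received last round. All numbers are illustrative, not calibrated.

autonomous_spending = 100.0  # spending that happens regardless of income
mpc = 0.75                   # fraction of income spent rather than saved

income = 0.0
for round_ in range(50):
    income = autonomous_spending + mpc * income

print(round(income))  # converges to 100 / (1 - 0.75) = 400

# A demand shock that cuts autonomous spending by 10 therefore cuts
# equilibrium output by 10 / (1 - 0.75) = 40: the initial loss is
# amplified as each person's reduced spending becomes someone else's
# reduced income. Stimulus runs the same multiplier in reverse.
```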
(Note that in spite of the 2009 stimulus bill, government spending in the United States has actually decreased because spending by state and local governments has dropped more than federal spending has increased.)
On the bright side, at least the developing world is indeed continuing to develop, in spite of the mess the developed world has gotten itself into.
It costs you almost nothing to post a meetup for a Waco group up here, and only an afternoon of reading or laptop time if the meetup fails. Just because a course of action has a very high payout doesn’t mean that trying it has a high cost. The universe isn’t fair, and sometimes that’s a good thing.
True; I was referring to the full cost of getting a stable one going, which is not the same as making one attempt of that type.
Getting a stable meetup in Waco doesn’t sound like more work than moving. Am I missing something?
There’s a Unitarian church in Waco which might be worth a look.
Just visited the UU church in Waco and went to their three-hour intro. Looks to be compatible with me, something I don’t have to put a mask on for.
Upvoted for Just Trying It.
Didn’t know that was upvoteworthy now. A reference to me not having tried a Waco meetup yet?
I’m guessing that this is in reference to willingness to try low-risk activities which have a reasonable chance of paying off.
To compare to moving, you would need to factor in the benefits of each as well, and I’m iffy on the upside … Waco isn’t a very intellectual town. Still planning to do it, just saying.
You’re welcome.
What religion or denomination was it?
It was the non-denominational Antioch Community Church, a pretty large one, especially given the metro area’s size.
Offhand, I’d think that the only mainstream American religions which could be compatible for most LessWrongians would be Unitarianism and the Quakers.
I’m a member of my local Unitarian Universalist church (in El Paso, just down the street from Waco by SW standards), and it is very friendly to atheists and skeptics—I would say 15% to 20% of the membership would identify as “agnostic” or more skeptical. However, it is also friendly to an array of other, much less evidence-based views. I’d say a UU church would definitely be worth a look, and would almost certainly be a better fit for a LW denizen than a “non-denominational Christian” one. But one might need to be tolerant of some rather silly beliefs. OTOH, I’m starting to take it as an opportunity to learn to “evangelize” (gently).
Naturalistic Neopaganism (HT Nick Tarleton)
I said “mainstream” because I’m assuming that the statistical good effects from religion require a social infrastructure that neo-paganism doesn’t tend to have.
Having been to a pagan convention in San Jose, this seems most likely false. I’d have to attend some local, routine meetups to be sure, but I get the feeling there’s an excellent social infrastructure in place.
Mainstream religions have people get together every week, with stuff going on between the major services. I don’t know of pagan groups which have that much going on.
If there were pagan groups that have that much going on, would you know about it?
Maybe. Do you know of any?
I am not familiar with any pagan groups at all. I was just wondering how much evidence against a thing existing your non-observance of that thing is.
Fair enough—I’ve had some involvement with Neo-pagan groups in Philadelphia and Delaware, though I’m not expert even for those regions, and I recently saw some discussion of Philadelphia being a dead spot for Neo-paganism compared to other regions.
Most worthwhile thing I’ve read by someone (ESR) who has written a lot of worthwhile things, even though I will concede that he is a little full of himself sometimes.
ADDED. Actually, “Sex tips for geeks” is also in the running for most worthwhile thing written by ESR.
That was a very interesting article; I had not previously encountered such a perspective on the subject.
I don’t agree with all of it, though:
Needs more “joy in the merely real”, and maybe some “how an algorithm feels from the inside”. But still, very interesting.
When I first read that, it seemed slightly odd that he would place so much trust (provisionally?) in this particular psychological explanation. Later I read The Jung Cult, which includes a persuasive argument against the validity of the evidence for a collective unconscious. (And I guess the author had to fill the rest of the book somehow.) You’ll have to decide if you think the prior probability suffices.
Mind you, I doubt this argument would make all the phenomena go away.
Quakers? What about the God and mysticism stuff? (I was going to mention technology, but I may be incorrectly equating them with the Amish.)
Edit: Also, don’t forget the Church of Bayes.
My grandparents were Quakers. I’ve been to a few of their meetings. A Quaker meeting consists of everyone in the congregation sitting silently in a room, with individuals standing up to speak at irregular and unplanned intervals. In my experience, when people stand up to speak, they talk about the things that are important in their “spiritual” lives, which, in practice, means their emotional/moral lives. God was mentioned only in passing, and, aside from these mentions of God, I don’t remember anything mystical.
Quakers run the gamut from very conservative to explicitly atheist.
Thanks for the information—I just assumed that the inner light could be interpreted as a neurologically based reward of meditation.
As with Unitarians, there are apparently some groups of Quakers that have relinquished belief in God.
I’d say that, depending on the congregation, Reconstructionist Judaism is quite compatible with LW-rationality. Granted, Reconstructionist Jews are a tiny minority of a tiny minority, but it still qualifies as a mainstream religion in the way that term is usually employed. I’d likely belong to a congregation if there were actually one located closer to me.
Could that be in part due to your inability to buy into the church’s claims, rather than the NYC rationalist community being that much more awesome?
When my father manages to get me to go to church I can never shake the feeling that I don’t belong, no matter how nice they are.
I’d be interested in hearing about the details of your experiment as well as exploring the reasons for the failure of the experiment.
How about an article on the matter?
That would be great!
It’s possible, but I worry that our friendly local countersignalers are underestimating the power of being sane.
Most people stumble into their friendships. Your friends are the people you happen to sit next to on the first day of class, people who work in the same office as you, people who belong to the same clubs as you, people who go to the same bars as you. This is usually local, because as the search radius increases, the amount of new data you have to deal with (people to filter out) becomes excessive.
It takes a strong sense of purpose to travel an hour and a half by train to meet up with strangers at an apartment in order to find a community, all based on the fact that you read the same blog. That is a very small part of the search space.
There are many things that are claimed to give people large amounts of happiness. Most don’t work, and many that work won’t work for a given person. Quickly identifying what works for you, and making a beeline towards it, is one of the largest benefits rationality can give a typical person. People see this and focus on the “it” (in this case finding a community) and say “of course that made you happy.” This feels like hindsight bias. If you had met SarahC a year ago, would you have said to her “Oh, you obviously need to meet up with these really awesome rationalists in NYC”? Finding that option is where the rationality comes in.
Generally, people who try to lose weight don’t actually lose weight, and when they do lose some weight, they put it back on later (yo-yo dieting). Zvi, a NYC rationalist, recently posted about how he lost weight using TDT-style thinking. He lost a considerable amount, and has kept it off for many years. He is not alone in the NYC group. Many of us have done this relatively simple task, and kept the weight off for years. We all used different methods to change our behavior, but we each picked one that worked for our specific problems.
Rationality helps you CHOOSE one option out of many. The option you choose isn’t “rational” in any special sense, but in some cases the choice would be unlikely. Maybe as unlikely as traveling 63 miles to hang out in a stranger’s apartment. Noticing that the option exists is a superpower, even if taking it is obvious afterwards.
Some of it is just community belonging, but not everything.
Some of the changes I’ve made are explicitly related to becoming more rational: getting better at probability estimates, being more likely to make decisions based on evidence instead of convention, getting in the habit of making changes to my behavior in response to real-world results.
Some of what’s changed about me is just the effect of being friends with people who aren’t students or academics—I knew very little about life as a 20- or 30-something with a private-sector job. And some of it is just the techie counterculture (which I enjoy quite a bit.) You know, vibrams/ancap/science fiction/burning man/crossfittery/self-quantification/hacker ethos.
I agree with this. I sought out the NYC group for precisely this purpose. But there’s definitely a benefit to having such a community that DOESN’T come with other norms that make you believe wrong things (sometimes to the detriment of either yourself or society).
Something that occurred to me, inspired by many of the details of your story, was that actively seeking to cultivate rationality may internalize one’s locus of control.
Locus of control is a measurable psychological trait that ranges from “internal” to “external”, where an internal locus roughly indicates that you think events in your life are primarily affected by your self, your plans, your choices, and your skills. You can measure it generally or for specific domains, and an internal locus of control is associated with more interest and participation in politics, and better management of diabetes.
My initial hypothesis for any particular person (reversing the fundamental attribution error and out of considerations of inferential distance) is generally that their personal locus of control is a basically accurate assessment of their abilities within the larger context of their life. If someone lives in a violent and corrupt country and lacks money, guns, or muscles then an external locus of control is probably a cognitive aspect of their honest and effective strategy for surviving by “keeping their head down”. When I imagine trying to change someone’s locus of control with this background assumption, the critical thing seems likely to be changing their circumstances so that they are objectively less subject to random environmental stresses with things like corruption-reducing political reform, or creating protected opportunities to work and keep the fruits of their labor, or something else that directly and materially changes their personal prospects for success.
I’d always thought that locus of control had obvious connections to rationality, in that it seemed that a justifiably external locus of control would make it rational to not bother cultivating rationality. Significant efforts or careful planning are pointless if success and failure in life will be dominated by unpredictable factors that swoop in from “out there” to manipulate outcomes in unforeseen ways. If your ship’s destination will be determined by random winds that can tear your sails to shreds or speed you swiftly to a surprise destination, why bother making a map? The choice is pretty much just whether to get in the ship at all, and it’s probably a bad idea unless your current conditions are abysmal.
Your story makes me wonder about connections in the other direction, from rationality to locus of control. It seems plausible that cultivated rationality might teach people to notice patterns, to find points of leverage, and to see the ways that they can affect the things that matter to them. Rationality education might be a personal intervention that could internalize a person’s locus of control on the cheap, even without having substantial political influence or resources to direct their way.
More pragmatically, this makes me wonder if it would be useful to measure people’s locus of control before and a while after an intervention designed to improve rationality? I guess an alternative hypothesis is that you’ve been involved in meetups and your social environment might have improved? Perhaps any group of reasonably non-evil people could have helped just as well? I can’t think of any simple way off the top of my head to measure something that might help control for this factor...
It seems like it would be nice if “rationality itself” was the secret sauce, but “proving it for real” and then maybe optimizing based on the post-proof insights feels like something demanded by full thematic consistency :-)
I would definitely say yes. There are people who have a tendency to think that if there’s any major component of randomness involved in something, then it’s pointless to try to make plans relating to that thing. Simply grokking expected utility and some very basic probability theory would help these people tremendously, while also shifting their locus of control inwards.
That’s really helpful. I can see that tendency even in my own attempt to explain what an external locus of control would feel like from the inside in emotionally compelling terms, where I wrote: “The choice is pretty much just whether to get in the ship at all, and it’s probably a bad idea unless your current conditions are abysmal.”
To be less dramatic and more balanced I should have said that the choice is whether to get in the ship at all, comparing the expected value of travel versus the expected value of one’s present circumstances, perhaps with a risk of ruin calculation to handle the different variances and valid risk aversion. My first wording revealed strong risk aversion and no implication of comparative calculation.
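(A sketch of that comparison with invented numbers, to make the structure concrete: staying home yields a certain modest outcome, while boarding the ship is a gamble that includes some probability of ruin.)

```python
import math

# Toy comparison of "stay" versus "board the ship"; all numbers invented.
stay_value = 10.0
p_ruin, p_windfall, p_ordinary = 0.2, 0.3, 0.5  # must sum to 1

# Raw expected value favors sailing:
voyage_ev = p_ruin * 0.0 + p_windfall * 50.0 + p_ordinary * 10.0
print(voyage_ev, ">", stay_value)  # 20.0 > 10.0

# But a risk-averse agent (log utility, with a small floor standing in
# for ruin, since log(0) is undefined) rationally stays home:
voyage_eu = (p_ruin * math.log(0.1)
             + p_windfall * math.log(50.0)
             + p_ordinary * math.log(10.0))
print(voyage_eu, "<", math.log(stay_value))  # ~1.86 < ~2.30
```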
...Of course, now that I think about it, even that specific analogy suggests historical examples. People have literally been forced onto ships with little opportunity to research it or calculate expected values, when they were to be sold as slaves, or serve in the navy, or fight in the jungle. I can easily imagine that many of these people updated in the direction of an external locus of control, and later would “rationally expect” that cultivated rationality wouldn’t be that useful. By the same token, in those specific circumstances, cultivated rationality might have helped them avoid situations where they were likely to be press-ganged?
But now we’re getting into “blaming the victim” territory, with all the confusions inherent to politics. It makes me wonder if a strong desire to be sympathetic, translated into controversial political questions like these, limits a person’s likely appreciation for cultivated rationality? Maybe the (Gendlin-ignoring) logic would run: “If I believed people could have predicted and avoided their current tragic circumstances, then it would be harder for me to be sympathetic, but I want to be sympathetic, so I should not believe that people could have predicted and avoided their tragedy.”
Perhaps some kind of “active sympathy” techniques could make rationality training more useful and resilient in adverse social circumstances? I would guess that the heart of the trick would be to reverse the latent fear (rather than simply reduce it) and show that irrationality actually tends to reduce effective sympathy, while cultivated rationality tends to increase it. Googling around, I find empathic concern as a keyword, with measures being developed in the late 1970s, and intervention efficacy work appearing by 2007 for things like couples therapy.
I think it is better to be sympathetic regardless of whether the “people could have predicted and avoided their current tragic circumstances” (whatever the counterfactual means, maybe that a more rational person facing the same problem would have predicted and avoided the problem?).
Like Eliezer says:
“We should blame and stigmatize people for conditions where blame and stigma are the most useful methods for curing or preventing the condition...”
I am going to go ahead and push that moral line out to cover paralyzing loss of autonomy.
A little knowledge is a dangerous thing. Assume for a second the hypothesis is true: Slaves became slaves because Africa wasn’t rational enough. If we are sympathetic based on false beliefs, then we will not be able to offer them a true solution. We might offer them our sympathies, or be more willing to donate to their cause (even if it’s irrational), but we won’t be able to stop it from happening again.
If we believe that people could have avoided these tragedies through rationality (assuming this is true), then we automatically have the solution for avoiding these tragedies in the future. Just add rationality! It doesn’t matter how sympathetic we are if all we do with our sympathy is wander around, looking for the answer we’ve blinded ourselves to.
Sympathy is more than just feeling bad for the victim while you let them get exploited again and again. Sympathy is understanding the victim and having a desire to help. Clear, truthful understanding of all causes of victimization is a prerequisite for both of these to occur. You cannot understand a victim until you understand how they truly came to be a victim. You cannot provide meaningful help until you understand their role in the problem. Sympathy without rationality is just worthless pity.
I didn’t grok this much. Are you saying that rationality might not help people who will have an external locus of control regardless, or that you used to think this, or something different?
I’m saying that if someone really doesn’t have the ability to influence outcomes of personal interest, then it might really be senseless to make plans or worry about acting coherently. Someone might have an internal locus of control with respect to a slot machine, believing that their timing and bar-pulling technique actually matter, and try to do statistically significant studies on which technique is best.
Maybe the person would find a broken slot machine and discover how to game it? It’s possible. But mostly they would just be crazy.
Wow. I wanted to say something like that, but this is waaaay better.
I think that the shift probably has to do with framing things as you deciding to take actions which are linked to specific utilities, rather than things happening to you.
There seems to be an emphasis in lots of older philosophies (most monotheistic religions, Norse mythology, Stoicism, Daoism) on external loci of control. I wonder how much of that is because they’re right, because they’re memetically infective, or just because people didn’t know how to control things well.
Hmm. I wonder if it’s worthwhile to make a distinction between external locus of control and absence of a locus of control; Stoic-style fatalism seems subtly different from Calvinist-style predestination, and somewhat more clearly distinguished from limited self-determination within a motivational landscape defined mainly by forces outside your control.
Yeah. I winced a bit when I clumped them together like that.
It seems to me that Stoicism asserts that your locus of control over external events is external, but that you can control yourself, and that by going along with Nature you can eliminate your suffering.
The whole thing was good, but I particularly liked this bit. Humble-but-awesome is tough to pull off!
Great post! Could you change your comment link to point HERE? Not knowing your story, I found the comment you linked to unclear. I had to keep clicking to show more comments to understand what you were discussing changing your mind/having belief in belief about.
How is the superstition working out for you? ;)
It could be interpreted in non-superstitious ways. I’ve certainly developed anxiety before by thinking about a process instead of merely implementing it.
Yes, because thinking about process X isn’t the same as implementing process X, just as the quotation is not the referent.
Agree, but see no connection between your comment and mine.
Why not post this on the main site?
Agreed. Add more details, post it on the main site!
More on those PLEASE! (Or link if you already wrote about it.)
To my memory, I’ve been following along with this idea since before this blog, or Overcoming Bias, existed. I can’t really say that it’s done me all that much good.
I think that the majority of what helps is having a community of people you genuinely care about and who care about you (not necessarily in the real world, although that helps). I think any given community I joined might have helped me, but I think that joining a community based on rationality in particular gave me the perspective to make a significant change to my goals.
Specifically, it has to do with how much of my money will go towards charity and what kinds of charities it goes to. This doesn’t benefit me directly, but it reassures me that I really am the kind of person I want to be. Rationality enabled me to think critically about how much money is necessary for personal happiness and what kinds of good things I can be accomplishing with the rest of it, which in turn gives me real confidence that I am doing the right thing.
That may not be motivating to most people, but it was valuable to me.
Permission requested to move to main section, promote.
Note also that this could be transformed from “good” to “timeless classic” by going through, taking all paragraphs about something abstract, and inserting at least one concrete example into each of them.
I for one would like to applaud the 20 members of the LessWrong community who just applauded Eliezer for applauding SarahC for applauding the LessWrong community.
I applaud you, Steven. *clap clap clap*
I voted Eliezer’s comment up because I agreed with the suggestion.
I voted the comment up because I agreed with the suggestion.
But you can’t just expect me to fairly represent people’s motives. I’d lose rhetorical force!
Nice article.
One thing that I’ve found interesting is that rationality doesn’t seem to make people happier by cleaning up their beliefs, so much as it does by inspiring more self-confidence.
Some quick ideas:
I find that rationality is much more attractive when you emphasize its ability to help you do things that you care about, rather than its ability to shoot down other ideas.
Rationality has greatly increased the rate at which my life changes, and made me a lot more comfortable in areas I wasn’t before.
*Not sure if this is true.
This is true. What I get out of LessWrong is little bits and pieces that mean I do things very slightly better and do very slightly less useless things, and these increments feel like they’re adding up. I don’t have numbers to point at to show improvement, but my life hasn’t disimproved and feels much lower-friction, so that’s a win already.
This is inspiring. It is taking me a little longer (I discovered rationality two years ago); but I feel like I am on the brink of the next level of awesome. Thank you.
Woo-hoo! (Edit: that’s me patting you on the back.)
FWIW, I am inclined to think that “rationality” is a bad brand identification for a good thing. Rationality conjures up “Spock” (the Star Trek character), not “Spock” (the compassionate and wise child-rearing guru). It puts the emphasis on a very inhuman part of the kind of human being you feel you are becoming.
Whatever it means in your context, as a brand to evangelize to others about its benefits, it is lacking. Better, in the sense of offering a positive vision, perhaps, than “atheism” or “secularism”, but still not grounded and humane enough. I like “naturalist” better, although it is loaded with the connotation of bird watching, and also “humanist”, although the term, without the modifier “secular”, can mean little more than someone who gives a damn. “Enlightened” (as in the Enlightenment era) might be a good term if it weren’t so damned arrogant in the modern vernacular.
The sense that I think you are trying to capture is something like the sense conveyed by the title of Carl Sagan’s book “The Demon-Haunted World.” You want to convey the joys of having exorcised the demons and opened yourself to seeing the world more clearly. But, to sell it to others, I think it is necessary to find a better marketing plan.
On the “Spock” front, I dislike the identification of “rational” with “inhuman”. These, too, are human qualities! However, I certainly agree that many people do see this negatively.
There’s an interesting tension in marketing plans—how far can we go in using marketing, which is normally about exploiting irrational responses, in pushing rationality?
If people see rationalists using irrational arguments to push rationality, does it blow our credibility?
The local jargon term appears to be “dark arts”.
The tricky thing is that it’s hard to effectively interact with the typical not-particularly-rational human in a manner that someone, somewhere, couldn’t conceivably interpret as dark arts.
I tend to resolve this by doing something that seems to have a reasonable chance of working, not actively seeking to deceive and seeking a win-win outcome. Would the subject feel socially ripped-off? If no, then fine. (This heuristic is somewhat inchoate and may not stand up to detailed examination, which I would welcome.)
Dunno about detailed examination, but will you settle for equally inchoate thoughts?
If I think about how N independent perfectly rational AI agents might communicate about the world, if they all had the intention of cooperating in a shared enterprise of learning as much as they can about it… one approach is for each agent to upload all their observations to a well-indexed central repository, and for each agent to periodically download all novel observations and then update on that.
They might also upload their inferences, in order to save one another the trouble of computing them… basically a performance optimization.
And they might have a mechanism for calibrating their inference engines… that is, agents A1 and A2 might periodically ensure that they are drawing the same conclusions from the same data, and engage in some diagnostic/repair work if not.
So that’s more or less my understanding of communication on the “light side of the Force”: share well-indexed data, avoid double-counting evidence, share the results of computationally expensive inferences (clearly labeled as such), and compare the inference process and point out discrepancies to support self-diagnostics and repair.
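(A minimal sketch of that ideal, assuming for concreteness that the agents are jointly estimating the bias of a coin; the repository’s unique indexing is what prevents double-counting of evidence:)

```python
import random

# Minimal sketch of the "light side" protocol: agents estimating a coin's
# bias share raw observations through a well-indexed central repository.
# Unique observation ids are what prevent double-counting of evidence.

random.seed(0)
TRUE_BIAS = 0.7
repository = {}  # observation id -> outcome (True = heads)

class Agent:
    def __init__(self, name):
        self.name = name
        self.seen = set()            # ids already incorporated
        self.heads = self.tails = 0  # sufficient statistics of a Beta posterior

    def observe_and_upload(self, n):
        # Flip the coin n times and upload each result under a unique id.
        for i in range(n):
            repository[(self.name, i)] = random.random() < TRUE_BIAS

    def sync(self):
        # Download only novel observations, so no flip is counted twice
        # even if sync() is called repeatedly.
        for obs_id, heads in repository.items():
            if obs_id not in self.seen:
                self.seen.add(obs_id)
                self.heads += heads
                self.tails += not heads

    def estimate(self):
        # Posterior mean of Beta(1 + heads, 1 + tails).
        return (1 + self.heads) / (2 + self.heads + self.tails)

agents = [Agent("A1"), Agent("A2"), Agent("A3")]
for a in agents:
    a.observe_and_upload(100)
for a in agents:
    a.sync()
    a.sync()  # a second sync changes nothing: evidence isn't double-counted

print([round(a.estimate(), 3) for a in agents])  # identical estimates, near 0.7
```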
Humans don’t come anywhere near being able to do that, of course. But we can treat that as an ideal, and ask how well we are approximating it.
One obvious divergence from that ideal is that we’re dealing with other humans, who are not only just as flawed as we are, but are sometimes not even playing the same game: they may be actively distorting their transmissions in order to manipulate our behavior in various ways.
So right away, one thing I have to do is build models of other agents and estimate how they are likely to distort their output, and then apply correction algorithms to my human-generated inputs accordingly. And since they’re all doing the same thing, I have to model their likely models of me, and adjust my output to compensate for their distortions (aka corrections) of it.
So before either of us even opens our mouths, we are already two levels deep into a duel of the dark arts. The question is, how far am I willing to go?
In general, I draw my lines based on goals, not tactics.
What am I trying to accomplish? If I’m trying to understand someone, or be understood, or make progress towards a goal they value, or act in their interests, I’m generally cool with that. If I’m acting against their interests, I’m not so cool with that. If I’m trying to protect myself from damage (including social damage) or advance my own interests, I’m generally cool with that. These factors are sometimes in mutual opposition.
And then multiply that pairwise computation by the mutual interactions of all the other people we know, plus some dogs I really like, and approximate ruthlessly because I don’t have a hope of doing that matrix computation.
One doesn’t have to use irrational arguments to push rationality, but one of the lessons we draw from how people make decisions is that people simply do not make decisions about how to view and understand the world, even a decision to do so rationally, in an entirely rational way. The emotional connection matters as well.
Rational ideas proffered without an emotional counterpart wither. The political landscape is full of people who advanced good, rational programs or policy ideas or views about science that crashed and burned for long periods of time because the audience didn’t respond.
Look at the argument of SarahC’s original post itself. It isn’t a philosophical proof with Boolean logic; it is a testimonial about the emotional benefits of this kind of outlook. This is perfectly valid evidence, even if it is not obtained by a “reasoning process” of deduction. In the same way, I took particular pride when my non-superstitiously raised daughter won the highest good-character award in her elementary school, because it showed that rational thinking isn’t inconsistent with good moral character.
While one doesn’t want to undermine one’s own credibility with the approach one uses to make an argument, it is also important to defuse false inferences in arguments opposing rationality. One of the false inferences is that rational is synonymous with amoral. Another is that rational is synonymous with emotionally vacant and unfulfilling. A third is the sense that rationality implies using individual reason alone, without the benefit of a social network and context, because that is the character of a lot of activities (e.g. math homework or tax return preparation or logic problems) that are commonly characterized as “rational.” Simple anecdote can show that these stereotypes aren’t always present. Evidence from a variety of sources can show that these stereotypes are usually inapt.
When one looks at the worldview one chooses for oneself, it isn’t enough to argue that rationality gives correct answers; one must establish that it gives answers in a way that allows you to feel good about how you are living your life. Without testimonials and other emotional evidence, you don’t establish that there are no hidden costs being withheld from your audience.
Moreover, marketing, in the sense I am using the word, is not about “exploiting irrational responses.” It is about something much more basic—using words that will convey to the intended audience the message that you actually intend to convey. Care in one’s use of words, so as to avoid confusion in one’s audience, is quintessentially consistent with the good practice of someone seeking to apply a rational method in philosophy.
I think Sam Harris gets it mostly right.
I’d like to bring up a comparison with a similar term that isn’t used much any more: “abolitionist”. It’s very rare to find anyone these days who wouldn’t agree with those in the pre-Civil War United States who called themselves abolitionists. We don’t need the term today, but we did need it back then...
“Reason” and “evidence based” are both quite nice words to convey the idea.
Thanks for this. That talk was an informative read.
NYLW has done some preliminary testing, asking people what they think of when they hear the word “rational”. So far the results have been positive.
So far as I know I’ve been the one doing most of the asking, and I don’t have a large enough sample size to declare anything, just seven people. The results have been mostly neutral, with one enthusiastically positive and two slightly negative. If I were to extrapolate from this, I’d say that enough people are at least neutral to the word that it won’t harm us to use it.
If our goal was to find an optimum marketing word, I’d wait until we’d done much more substantial testing. But I think there’s benefit to changing the Spock Perception, so as long as people are mostly neutral towards the word, it’s worth using. (I’d still want more than seven responses before committing to it)
The specific question I’ve been asking people is:
“I’m just curious, if someone were to describe themselves as a Rationalist to you, what stereotypes would come to your mind about that person?”
(The first time, I started by saying “What thoughts and feelings come to your mind if I say ‘Rationality’?” That prompted some questions and confusion that I don’t have time for in the typical elevator ride, which is where I do the asking. By the third query I had narrowed it down to the phrasing above, because it seemed to cut to the heart of the matter. People might be okay with “rationality”, but are they okay with people who strongly “identify” with rationality?)
When I hit some arbitrary milestone that triggers warm fuzzies, I’ll post the results so far and invite analysis. (I’m thinking 20’s a decent number to start with.)
I’ve been using the word “Luminous” to explicitly refer to “LessWrong rationality” (as opposed to “Spock rationality”). It’s a bit of a kludge, but the concept has always felt central to what I get out of LessWrong. I’m not sure how true this is for others.
Tongue-in-cheek, I’d also suggest “Illuminati” ;)
Luminosity is already a technical term for a subset of rationality skills. If it’s the subset you usually have cause to talk about, there’s nothing wrong with that, but calling the entire thing that seems just mistaken.
*nods* I am aware it’s a subset, thus calling it a kludge.
Certainly, I’m open to a better term, but I happen to deal with a lot of “Spock” rationalists, as have many of the people I talk to, so having some way of distinguishing “no I don’t mean that idiocy” is important to me, and this is the best-fit that I’ve found so far.
The chain of thought, if you’re curious: On a non-verbal/intuitive level, I feel like the sub-skill of Luminosity is a lot of what distinguishes “LessWrong” rational from “Spock” rational. Since “LessWrong Rationality” is itself a fairly awkward phrase (referring as it does to a single specific community), I substituted “Luminous rationality”, and that eventually got short-handed back to just “Luminous”. English allows for all sorts of weird confusing things where a word refers to both the set and a specific subset (frex, “man” referring to both “humans” and “humans who are male”), so while it’s kludgy, it works for me.
I can completely understand this word not working well for others :)
Have you heard of The Brights movement?
It was kind of inspired by the gay movement: an attempt to find a word for atheism that was more socially acceptable, i.e. without all the negative baggage, and to embrace and popularise it.
I have heard of it.
I think it’s an awful name, exactly on the grounds of having huge negative baggage. For me, at least, it has strong associations of smug, superior, condescending, and other such qualities.
Yep—correlating it with “being intelligent” seems to be a bit of a PR disaster… which the Brights have tried to counter by calling non-brights “supers”.
Not sure if that’s worked at all… I keep occasional tabs on what’s happening in that community but don’t really consider myself an active member. I think the heart is in the right place—especially in the US, where religiosity is at a much more fervent level. But I’m not sure it’s really proven effective yet… and then, I might be able to say similar things about this community :)
I’ve never been particularly fond of it; it always struck me as too self-aggrandizing. It particularly upset me when my sister started identifying me as a Bright to other people without my permission.
I proposed a new logo for the Brights in 2007 :-)
The images don’t seem to work there?
When I open the image in a separate window, I get a message that I don’t have permission to access it.
It took me looking at Brights on Wikipedia then a moment’s imagination to work out what he would have come up with.
I have, and even started to mention it, but figured that I was going too far afield. I think the problem there is that the established meaning of “Bright” as intelligent, overshadows the secondary meaning that is sought. I think “light” as a metaphor is promising, but the word “Bright” in particular, is inapt.
I think this is a good occasion to point out a terrible bug:
After a post has been moved from discussion to main LW, the score is reset to 0, but one’s “Vote up” button remains greyed out. Right now I can choose between giving this post 0, −1, or −2 points, while it should be +1, 0, or −1.