Personal Benefits from Rationality
I saw this and realised something:
“Hey, wait, where have I seen other people talk about specific benefits from Rationality?”
And then I realised I hadn’t. I looked around the site some; nothing there.
This is a place to fix that. The idea of this page is to post specific things that you personally have found helpful, things you learned from your studies of Bayescraft. This way we can find some that seem to work for a large number of people, so that when new people start to become interested in Rationality we can “make it rain” and show them the benefits that come with being less wrong.
For commenters:
If someone posted something already that also worked for you, mention that. If every tactic is apparently used by only a single person, then it is harder for us as a community to figure out what we should recommend to tyros.
List of N Things:
Understanding that my high school history class has more to do with real science than my chemistry class does helped me understand how I should approach problems. History, viewed the right way, lets you look at what happened and ask “Why did this happen?”
Reading up on cognitive neuroscience taught me that I could use the placebo effect on myself. I have missed one day of school due to illness in my life.
Learning to not propose solutions for a minimum of five minutes, by the clock, has honestly been the most effective thing I have yet learned for personal application at Less Wrong.
May we all share many useful things, for our own benefit and as a place to point tyros towards.
Not proposing solutions for five minutes is something I do every couple of days. I literally have a timer on my watch that is, by default, set to five minutes, and if I am wrestling with a difficult problem, I just sit down, start the timer, close my eyes, and think. In a more general sense, I use techniques like original seeing all the time.
Knowing about positive bias is one of the most powerful tools I have for determining the truth.
In a general sense, I can’t quite stress enough how much happier I’ve consistently become since realizing that when I notice I’m unhappy, or bored, or whatever, I can just ask myself “What could I be doing right now, instead of what I am doing, that I would enjoy more?” This has led me to several impromptu road trips, a number of hiking and biking excursions, and several really good new books over the last couple of months alone.
Yeah, I think that trying new things is another thing most rationalists should do. I find myself defaulting toward action a lot more often now—like last night I signed up for a free improv lesson in my area because I thought improv would be useful, and just looked for what I could do about that.
I want to do that as well. A friend of mine took a class on stand-up comedy and really loved it. We were going to take it together, but the schedule didn’t work for me.
Most of what I’ve learned here has become background ideas that I can’t quite point to real-world direct instances of; most of the thinking I do is ‘intuitive’, or rather, not reflected upon and poorly remembered. That said:
“Multiplication.” That is, the notion that you actually can compute the expected value of real-world choices. Most blandly, I won an iPod in a raffle after a very rough computation showing that buying tickets was actually worthwhile compared to buying the item at retail, whereas I had previously been ‘irrationally’ opposed to anything resembling ‘gambling’. (A rough sketch of that kind of computation follows this list.)
Recognizing the pattern of disputes over definitions has helped me avoid disputing definitions, or caring about the outcome of such disputes, when they are not actually useful.
It is clear from my experience that working hurts less than procrastinating; I have a poor record of actually applying this knowledge, however.
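For concreteness, here is a minimal sketch of that kind of multiplication in Python. Every number is invented for illustration; the original comment doesn’t give the actual figures.

```python
# Expected value of buying one raffle ticket; all numbers are made up.
ticket_price = 5.00        # cost of one ticket (assumed)
prize_value = 150.00       # retail price of the prize (assumed)
estimated_entries = 25     # rough guess at total tickets sold (assumed)

win_probability = 1 / estimated_entries
expected_value = prize_value * win_probability - ticket_price
print(f"EV per ticket: {expected_value:+.2f} dollars")  # +1.00: worth buying
```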
This. One time my grad school department wanted to have the most donations to charity through the university’s program, and so they arranged that a donation of any size (even $1) would constitute an entry into a department-organized raffle-y thing. I don’t know where the prizes came from, but instead of dismissing the prospect out of hand I did a little arithmetic, got an estimate of how many people were entering from the school secretary, and determined that the cash prizes alone (let alone the gift basket and whatnot) constituted positive expected value. So I gave Planned Parenthood a dollar. (I lost, though.)
Upvoted for distinguishing the anecdotal outcome (a loss) from the expected outcome (positive). In other words, anecdotes are only good data when everyone reports on their outcome.
BTW, as long as I could give to a charity I considered worthwhile, this would almost always be a slam dunk regardless of the expected gain from the raffle itself. For example, if I valued the good produced by giving $1 to the charity at $0.95, then $0.05 in expected winnings would be my break-even point, not $1.00
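Spelled out, with the illustrative numbers from that comment:

```python
# Break-even expected winnings when the donation itself buys some good.
donation = 1.00            # dollars given to the charity
value_of_good_done = 0.95  # how much you value the charity's use of it

# The raffle only has to cover the gap between what you paid and the
# good the donation already did, not the whole dollar.
break_even_winnings = donation - value_of_good_done
print(f"break-even expected winnings: {break_even_winnings:.2f}")  # 0.05
```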
Why would you? Charities differ in effectiveness by far more than 5%.
I meant to imply by “a charity I considered worthwhile” that you’re near the utility break-even point already. Obviously, if you think the charity only does a tiny amount of good, then you need a bigger expected win to make the raffle worth playing.
Since becoming a rationalist, I’ve become (on average) happier, more adventurous, more tolerant of other people, more comfortable in a variety of situations, more motivated, more intentional, more understanding, less moody, less nihilistic, less contentious, and lots of other fun things. (Feel free to ask for evidence for these claims, I just didn’t want to write all of that out now and make this reply even longer.)
I think the main ideas that the rest of that sort of latches onto are:
Don’t be afraid of the truth, and don’t look for answers that confirm what you think
You need to understand how things work in order to accomplish your goals
The universe is allowed to be terrible by human standards
And the more specific useful details that actually had changes in my life are:
You’re allowed to do things to accomplish your goals other than your cached thoughts
Other people are different from me, and I should think from their perspective rather than extrapolating from myself
Working in groups is really, really helpful for accomplishing things
How to dissolve the question
Take other people’s advice
Hold off on proposing solutions
Don’t listen to Bruce
It was addressed earlier here.
I’m fine with talking about it again, because a lot of that thread focused on Xixidu, and I’m all up for hearing more upbeat responses
There was one of these at the start of April:
http://lesswrong.com/lw/52n/q_what_has_rationality_done_for_you/
Want to declare this May’s thread?
My April answer still holds. I will add: LessWrong has taught me to separate the process of thinking rationally from believing I have to set goals that pass a test of rationality. “Is it rational to want that?” only makes sense for instrumental values, not terminal ones, and can lead to trouble when you haven’t fully resolved which of your values or goals is which (whether you think you have or not).
My housemate has almost completely hacked my brain (liberally apply computer programming and Gödel, Escher, Bach to your mind) to think in isomorphisms, efficient algorithms, and the like. This has caused improvements like using a queue instead of a stack for scheduling chores (one bad chore in a stack will cause me to look for other, easier chores to stack on top of it), which means my weekly chores get done in an afternoon instead of a week, and a general attitude of thinking about problems instead of solving them. Usually, a bit of thought will reveal some underlying pattern that has an optimal solution ready and waiting.
Rationality gave me this because it told me, at one point, about behavioral hacks. So I looked for my smartest, most effective, and most awesome friend, and made them my housemate.
Dijkstra said that computer science is as much about computers as astronomy is about telescopes, so it shouldn’t be surprising that things like algorithms and data structures have relevance even to mundane reality. One way I look at myself is as an extremely small and limited computer. On the fly, my brain is slow at performing operations, I have a hard time recalling information, and I do so with limited accuracy. Sometimes I make mistakes while performing operations.
So what are we doing, when we try to organize ourselves and make plans, but trying to compile a program for these very far-from-optimal circumstances? Obviously, if I make plenty of mistakes, I need to write in plenty of redundancy; and I have to employ “tricks” in order to achieve meta-cognition at the right times (something that goes beyond the computer analogy, I know).
This involves, as I see it, a further way of looking at yourself. You see yourself as both the machine executing instructions, and the programmer writing those instructions (as well as the compiler, trying to translate the program to machine language). Nietzsche wrote that we have to develop as both commanders and obeyers. I thought this was hogwash, but I’ve learned that there is a lot of truth to that.
I am not a native English speaker, and so usually saw these as synonymous. Could you explain a bit what the difference is?
It’s not about English:
http://en.wikipedia.org/wiki/Queue_(data_structure)
http://en.wikipedia.org/wiki/Stack_(data_structure)
Ahh, yes of course. LIFO vs. FIFO. Thank you for explaining.
A quick google tells me “A stack is generally First In, Last Out, and a queue is First In First Out.”
Just to be clear, you mean that now you will be doing one task, then see another needing doing, and do it instead? Whereas before you would continue doing the current task, with the intention of doing the other when it was completed?
[Sorry if that’s blindingly obvious; my computer science knowledge is fairly sparse.]
Say I needed to vacuum the house and complete an essay. If I stack vacuum on top of essay, I’ll be vacuuming first, and then going to do my essay. But if, while I’m vacuuming, I realise the dishes need cleaning and I need to post a letter, I’ll put those on the stack as well, and they’ll get done before the essay because they’re on top. And as long as I can come up with more tasks, I can stack them on top of the essay, and never get around to it.
But with a queue, I do the vacuuming, realise the dishes need doing, and queue that up behind the essay. The essay gets done before the dishes, removing the temptation to generate mindless busywork for myself.
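A minimal sketch of that difference in Python, using the task names from the example above (the code is just an illustration, not from the original comment):

```python
from collections import deque

new_chores = ["dishes", "post letter"]  # noticed while vacuuming

# Stack (LIFO): vacuum sits on top of the essay; chores noticed
# mid-vacuum pile on top too, so the essay keeps sinking.
stack = ["essay", "vacuum"]
print("doing:", stack.pop())          # vacuum
stack.extend(new_chores)
while stack:
    print("doing:", stack.pop())      # post letter, dishes, essay last

# Queue (FIFO): the same chores line up behind the essay instead.
queue = deque(["vacuum", "essay"])
print("doing:", queue.popleft())      # vacuum
queue.extend(new_chores)
while queue:
    print("doing:", queue.popleft())  # essay next, then dishes, then letter
```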
Ah I see, the task currently being done is not part of the stack.
I can see this working with tasks of similar length and difficulty. But what about when one task is significantly shorter than another and partly time-dependent? E.g. in this case, while your essay is more important, it might take several hours to do well, during which the dishes will moulder and annoy your flatmate, whereas the essay will not be altered by that length of time. I acknowledge that this is a possible way to rationalise procrastination, but there would be cases where it was true.
It’s possible, but I’ve never encountered such a situation.
I have fond memories of implementing Priority Queues, back in the day. The algorithm is rather elegant.
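For anyone who wants to play with one, Python’s standard heapq module provides the underlying binary heap; here is a tiny sketch with invented (priority, task) pairs:

```python
import heapq

# Lower number = more urgent. heappush/heappop maintain the heap
# invariant, so the most urgent task is always popped first.
chores = []
heapq.heappush(chores, (2, "dishes"))
heapq.heappush(chores, (1, "essay deadline"))
heapq.heappush(chores, (3, "post letter"))

while chores:
    priority, task = heapq.heappop(chores)
    print(priority, task)  # 1 essay deadline, 2 dishes, 3 post letter
```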
Threads on this crop up occasionally. Unfortunately the options for directing new attention to an old topic are very limited.
Most direct parallel is here: http://lesswrong.com/lw/6t/the_benefits_of_rationality/
Not sure if it’s specific enough, but mental-illness-type problems can be cancelled out by it.
Could you elaborate? There seems to be some anecdotal evidence (and more specifically the theory of “depressive realism”) that seems to show that (misapplied) rationalism can be detrimental to one’s mental health.
Keyword: misapplied. Someone who has actually read the Sequences, understands them, and applies them as recommended won’t misapply them.
I’m not sure if reading the sequences is sufficient to ensure correct application.
But regardless can you give an example of rationality cancelling out mental illness issues?
Edit: Apologies if doing so goes into personal information you would rather not discuss, if that is the case please consider my request withdrawn.
It sort of does. If you’re really interested you can go through my posting history, though I think I posted something about it a while ago.
I often (weekly, if not more frequently) use the technique “don’t evaluate before brainstorming” in my day-to-day work. I consciously look for further alternatives before considering which is best of the ones I’ve already listed—that is, I sometimes catch myself in the act, and say “hang on, are those really all the options?” Several times this has helped me hit on an alternative superior to my first thoughts.
It’s hard to articulate all the benefits it’s had in my life, but I’ll name some that I’ve really noticed:
Self-evaluation and development: Many of the posts on bias and human behaviour have helped me understand myself as a very primal creature, and to distinguish between the rational logic in my head and the very human part of me that exists day-to-day. Because I have the tools to understand those subjective parts of me, I can essentially ‘manipulate’ myself for my own benefit. For example, Luke’s article on The Good News of Situational Psychology helped me understand how influenced I am by my situation, and that I need to proactively place myself in situations that will encourage me to make better decisions. Also, the ability to be objective about myself and my emotions allows me to do a cost-benefit analysis on which parts of me need the most work.
Social interaction: Many of the same tools have given me a stronger understanding of humans as social creatures, with many behaviours and mannerisms similar to those of other animals, and being able to see things in that light has allowed me to take advantage of these social norms, even in terms of encouraging positive behaviours in my friends and family and optimizing our lives.
Those are the main two, but they sort of underlie everything else that goes on in my life as well.
My main goal at the moment is to utilize “rationality” as a way to map how individuals come to a decision or belief (what to do, what to say, what to believe, etc.). I am not assuming that these individuals ARE rational, but it’s a useful (and quick) tool for me to retrospectively “map out” an individual’s thought process. Like all tools, though, it will have limitations.
I only expect that my communication with others will be easier—if and when they become more rational.
Nothing else.
That sounds like a net negative.
I don’t see this reply as a rational one.
The OP asked about benefits from rationality. You gave something that looks like a negative effect, and mentioned that it is the only thing you expect. Hence you seem to experience a net loss from being rational rather than not.
You know that rationality is not just about being right. It is also about achieving the things you set out to do; winning and such. If you do not win, something is wrong.
I don’t see that as a net negative. There may be a lot of people who are “rational”, and possibly those who are already “winning”. Knowing how to communicate with them is indeed a net plus, since this gives you an exclusive network other people won’t have (letting you “win” as well).
Internal rationality I take for granted. But any cooperation with not-so-rational people is more difficult than with those (much) more rational.
Rationality increases the communication bandwidth.
What else would you expect, if rationality goes up with time, except a better information exchange with others?
What else?
I recommend you re-read the original posting and the other comments. There seems to be a difference between how you interpret the question and how everyone else does.
Seeing the thing just as everybody else does, or at least as the local majority? What do you call it? Confirmation bias?