Can anyone attest to getting real instrumental benefit from SI/LW rationality training, whether from SI bootcamps or just from reading LessWrong?
I don’t just mean “feeling better about myself,” but identifiable and definite improvements, like getting a good job in one week after two years without success.
At the moment, LW has provided negative benefit to my life. I recently quit my job to start learning positive psychology. My initial goal was to blog about positive psychology, and eventually use my blog as a platform to sell a book.
LW has made me deeply uncertain of the accuracy of the research I read, the words I write on my blog, and the advice I am writing in the book I intend to sell. Long-term, the uncertainty will probably help me by making me more knowledgeable than my peers, but in the short term it demotivates me (e.g. if I were sure that what I was learning was correct, I would enthusiastically proselytize, which is a much more effective blogging strategy).
Still, I read on, because I’ve passed the point of ignorance.
I also think that LW has provided negative benefit to my life. Since I decided that I wanted my beliefs to be true, rather than pleasing to me, I’ve felt less connected to my friendship group. I used to have certain political views that a lot of my friends approved of. Now, I think I was wrong about many things (not totally wrong, but I’m far less confident of the views that I continue to hold). Overall, I’d rather believe true things, but I think so far it’s made me less happy.
Why would you rather believe true things?
1. I would just rather know the right answer!
2. I think believing true things has better consequences than the reverse, for many people. I’m not sure if it will for me.
3. It’s too late. I can’t decide to go back to believing things that aren’t true to make me feel better, because I’d know that’s what I was doing.
Would you not prefer to believe true things?
No, I would not not-prefer to believe true things.
That said I also don’t experience believing true things as making me unhappy the way you describe.
It’s the combination of those statements that intrigues me: X makes you unhappy and you would rather do X. So I was curious as to why you would rather do it.
I have to admit, though, your answers leave me even more puzzled.
Here are a couple of other reasons:
4. So, I suppose in some ways, feeling that my beliefs are more accurate has given me some sort of satisfaction. I don’t know if it outweighs feeling disconnected socially, though.
5. Altruism. I used to put a lot of energy into UK politics. I gained moral satisfaction and approval from my friends for this, but I’ve come to think that it’s really not a very effective way of improving the world. I would rather learn about more effective ways of making the world better (e.g., donating to efficient charities).
Does that make sense? If you did feel that believing true things made you unhappy, would you try to make yourself believe not-true but satisfying things?
Altruism makes some sense to me as an answer… if you’re choosing to sacrifice your own happiness in order to be more effective at improving the world, and believing true things makes you more effective at improving the world, then that’s coherent.
Unrelatedly, if the problem is social alienation, one approach is to find a community in which the things you want to do (including believing true things) are socially acceptable.
There are areas in which I focus my attention on useful and probably false beliefs, like “I can make a significant difference in the world if I choose to take action.” It’s not clear to me that I believe those things, though. It’s also not clear to me that it matters whether I believe them or not, if they are motivating my behavior just the same.
That’s how I felt for the first few months after discovering that Jesus wasn’t magic after all. At that moment, all I could see was that (1) my life up to that point had largely been wasted on meaningless things, (2) my current life plans were pointless, (3) my closest relationships were now strained, and (4) much of my “expertise” was useless.
Things got better after a while.
I’m tempted to conclude that your current accumulated utility given LW is lower than it would be in the counterfactual no-LW case, but that in compensation your future expected utility has risen considerably, by an unknown margin but with relatively high confidence.
Is this an incorrect interpretation of the subtext? Am I reading too much into it?
That interpretation is correct.
I’ve noticed that I don’t even need to be knowledgeable to gain utility—there is a strong correlation between signaling ‘knowledgeableness’ and post popularity—my most popular post had the largest number of references (38), and so on. When writing a post, I just hide the fact that I did so much research because of my uncertainty :)
Absence of evidence is evidence of absence :-) Most of us don’t seem to get such benefits from reading LW, so learning about an individual case of benefit shouldn’t influence your decisions much. It will probably be for spurious reasons anyway. Not sure about the camps, but my hopes aren’t high.
Can anyone attest to getting real instrumental rationality benefits from reading Wikipedia? (As a control question; everyone seems to think that Wikipedia is obviously useful and beneficial, so is anyone getting “real instrumental rationality benefits” from it?)
I suspect that the “success equation”, as it were, is something like expected_success = drive × intelligence × rationality, and for most people the limiting factor is drive, or maybe intelligence. Also, I suspect that changes in your “success equation” parameters can take years to manifest as substantial levels of success, where people regard you as “successful” and not just “promising”. And I don’t think anyone is going to respond to a question like this with “reading Less Wrong made me more promising” because that would be dopey, so there’s an absence of data. (And promising folks may also surf the internet, and LW, less.)
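As a rough sketch of that multiplicative reading (my own toy numbers, not anything from the comment), one reason the smallest factor acts as the limiting one is that a fixed unit of improvement buys the largest proportional gain there:

```python
# Toy multiplicative "success equation"; all numbers below are illustrative assumptions.
def expected_success(drive, intelligence, rationality):
    return drive * intelligence * rationality

base = expected_success(2, 8, 8)             # 128, with drive as the weakest factor
print(expected_success(3, 8, 8) / base)      # 1.5   -> +1 to drive is a 50% gain
print(expected_success(2, 8, 9) / base)      # 1.125 -> +1 to rationality is only 12.5%
```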
It’s worth differentiating between these two questions, IMO: “does reading LW foster mental habits that make you better at figuring out what’s true?” and “does being better at figuring out what’s true make you significantly more successful?” I tend to assign more credence to the first than the second.
John, Wikipedia is generally valued for epistemic benefit, i.e., it teaches you facts. Only rarely does it give you practically useful facts, like the fact that lottery tickets are a bad buy. I agree that LW-rationality gives epistemic benefits.
And as for “years to manifest”: Diets can make you thinner in months. Likewise, PUA lessons get you laid, weightlifting makes you a bit stronger, bicycle repair workshops get you fixing your bike, and Tim Ferriss makes you much better at everything, in months—if each of these is all it’s cracked up to be.
Some changes do take years, but note also that LW-style rationality has been around for years, so at least some people should be reporting major instrumental improvements.
One point is that if a specific diet helps you, it’s easy to give credit to that diet. But if LW changes your thinking style, and you make a decision differently years later, it’s hard to know what decision you would have made if you hadn’t found LW.
Another point is that rationality should be most useful for domains where there are long feedback cycles—where there are shorter feedback cycles, you can just futz around and get feedback, and people who study rationality won’t have as much of an advantage.
I think I’ve gotten substantial instrumental benefits from reading LW. It makes me kind of uncomfortable to share personal details, but I guess I’ll share one example: When I was younger, I was very driven and ambitious. I wanted to spend my time teaching myself programming, etc., but in actuality I would spend my time reading reddit and feeling extremely guilty that I wasn’t teaching myself programming. At a certain point I started to realize that my feeling of guilt was counterproductive, and if I actually wanted to accomplish my goals then I should figure out what emotions would be useful for accomplishing my goals and try to feel those. I think it’s likely that if I didn’t read LW I wouldn’t have had this realization, or would’ve had this realization but not taken it seriously. And this realization, along with others in the same vein, seems to have been useful for helping me get more stuff done.
I was at the July rationality minicamp, and in addition to many “epiphanies”, one idea that seems to work for me is this, very simplified: forget the mysterious “willpower” and use self-reductionism. Instead of speaking in far mode about what you should and want to do, observe in near mode the little (irrational) causes that really make you do things. Then design your environment to contain more of those causes which make you do things you want to do. And then, if the theory is correct, you find yourself doing more of what you want to do, without having to suffer the internal conflict traditionally called “willpower”.
Today it’s almost one month since the minicamp, and here are the results so far. I list the areas where I wanted to improve myself, and assign a score from 0 to 5, where 5 means “works like a miracle; awesome” and 0 means “no change at all”. (I started to work on all these goals in parallel, which may be a good or a bad idea. The bad part is that there is probably no chance of succeeding at all of them at once. The good part is that if there is success in any part, then there is a success.)
(5) avoiding sugar and soda
(4) sleeping regularly, avoiding sleep deprivation
(2) spending less time procrastinating online
(2) exercising regularly
(2) going to sleep early, waking up early
(1) following my long-term plans
(1) spending more time with friends
(1) being organized, planning, self-reflecting
(0) writing on blog, improving web page
(0) learning a new language
(0) being more successful at work
(0) improving social skills and expanding comfort zone
(0) spending more time outside
So far it seems like a benefit, although of course I would be happier with greater/faster improvement. The mere fact that I’m measuring (not very exactly) my progress is surprising enough. I’m curious about the long-term trends: will those changes gradually increase (as parts of my life get fixed and turn into habits) or decrease (as happened with my previous attempts at self-improvement)? Expect a more detailed report at the end of December 2012.
How exactly did I achieve this? (Note: This is strongly tailored to my personality; it may not work for other people.) Gamification—I have designed a set of rules that allow me to earn “points” during the day. There are points e.g. for: having enough sleep, having an afternoon nap, meeting a friend, exercising (a specific amount), publishing a blog article, spending a day without consuming sugar, spending a day without browsing the web, etc. Most of these rules allow only one point of a given type per day, to avoid failure modes like “I don’t feel like getting this point now, but I can get two of these points tomorrow”. So each day I collect my earned points, literally small squares of colored paper (this makes them feel more real), and glue them onto a paper form, which is always on my desk and gives me quick visual feedback on how “good” the previous days were. It’s like a computer game (exact rules, quick visual feedback), which is exactly why I like it.
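A minimal sketch of those point rules (my own reconstruction for illustration; the names and the particular rule set are assumptions, not code from the comment):

```python
from collections import defaultdict

# Illustrative point types based on the examples above.
POINT_TYPES = {"enough_sleep", "afternoon_nap", "met_friend", "exercised",
               "published_blog_post", "no_sugar_day", "no_web_day"}

points_by_day = defaultdict(set)  # maps a date string to the point types earned that day

def award_point(day, point_type):
    """Record a point, enforcing the one-point-of-each-type-per-day rule."""
    if point_type not in POINT_TYPES:
        raise ValueError(f"unknown point type: {point_type}")
    if point_type in points_by_day[day]:
        return False  # already earned today; no stockpiling points for tomorrow
    points_by_day[day].add(point_type)
    return True

award_point("2012-08-20", "exercised")
award_point("2012-08-20", "exercised")      # a second attempt the same day is rejected
print(len(points_by_day["2012-08-20"]))     # 1 -- the analogue of one paper square that day
```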
This specific form of gamification was not literally taught at Minicamp, and I had been considering something similar for years. Yet I never did it, mostly because I was stopped by my monkey-tribe-belonging instincts. Doing something that other people don’t do is weird. I tried to convince some friends to join me in doing this, but all my attempts failed; now I guess it’s because admitting that you need some kind of help is low status, while speaking about willpower in far mode is high status. Being with people at Minicamp messed with my tribe instincts; meeting a community of people with a social norm of doing “weird” things reduced my elephant’s opposition to doing a weird thing. Sigh, I’m just a monkey, and I’m scared of doing things that other monkeys never do; even if it means being rational or winning.
Hm, I’ve been trying to get rid of one particular habit (drinking while sitting at my computer) for a long time. Recently I’ve considered the possibility of giving myself a reward every time I go to the kitchen to get a beer and come back with something else instead. The problem was that I couldn’t think of a suitable reward (there’s not much that I like). I hadn’t thought of just making something up, like pieces of paper. Thanks for the inspiration!
I was a July minicamp attendee. I did the big read-through of the Sequences when lukeprog was doing it at Common Sense Atheism, so I’d say fewer of the benefits were rationality level-ups and more were life hacking. Post-minicamp I am:
doing sit-ups, push-ups, and squats every day (using the apps from the 200 situps guy), up from not doing this at all
doing martial arts training four times a week (aikido and krav), again up from not doing this at all
using RTM to manage tasks, which means:
dropping way fewer small tasks
breaking tasks down into steps more efficiently
knocked off about three lagging tasks (not time-bound, so I was making no progress on them) in the month since I got back
stopped using my inbox as a task manager, so I can keep only emails I’m actually replying to in there
using beeminder to get down to inbox zero (currently three)
working in pomodoros has sped up my writing to the point where:
I miss doing a daily post to my blog more rarely (one miss over two weeks, compared to 0-2 misses a week) and have had more double-post days than previously (which translates into higher page views and more money for me)
Less time writing left me more time for leisure reading
I should add that I had a bit of a crestfallen feeling for the first few days of minicamp, since being more efficient and organized feels like a really lame superpower. I expected a bit more of it to be about choosing awesome goals. But then I realized that I’d always be grateful for a genie that magically gave me an extra hour, and I shouldn’t look a gift genie in the mouth, just because it wasn’t magic.
So, now that I’ve got more time, it’s up to me to do superheroic things with it. Once I finish my Halloween costume.
This. Holy cow, I worried I was the only one who felt a bit of a letdown during minicamp and then started noticing afterwards that my ways of dealing with problems had suddenly become more effective.
OK, those count as benefits. We shouldn’t just give all the credit to the lifehacking community, since LW/SI successfully got you to implement lifehacking techniques.
Of course, anything can be called instrumentally rational if it works, but I wonder how other approaches compare to explicit rationality in successfully convincing oneself to lifehack. For example, the sort of motivational techniques used for salespeople.
I’m not sure. One thing that worked pretty well for me at minicamp was that the instructors were pretty meticulous about describing levels of confidence in different hacks. Everything from “Here are some well-regarded, peer reviewed studies you can look at” to “It’s worked pretty well for us, and most of the people who’ve tried, and here’s how we think it fits into what we know about the brain” to “we don’t know why this works, but it has for most people, so we think it’s worth trying out, so make sure you tell us if you try and get bupkis so we’re hearing about negative data” to “this is something that worked for me that you might find useful.”
I think this is a pretty audience-specific selling point, but it did a great job of mitigating the suspicious-seeming levels of enthusiasm most lifehackers open with.
How are you both posting more to your blog, and spending less time writing?
I’m writing faster when I work in pomodoros and when I write on the train on the long schlep to aikido.
Where I just broke my toe. Oh no, negative utility alert!
This topic has been raised dozens of times before, but the stories are scattered. Here’s a sampling:
Louie’s What I’ve Learned from Less Wrong
cousin_it on how LW helps him notice bullshit
FrankAdamek on gains from LW
cata’s group rationality diary thread contains lots of stories of people benefiting from applying the lessons learned in rationality camps
A couple people have posted about how LW deconverted them from their religions, but I can’t recall where
But also see this comment from Carl Shulman.
That comment of mine was from 2010 and I disagree with it now. My current opinion is better expressed in the “Epiphany addiction” post and comments.
Are you saying you now don’t think LW is “useful for noticing bullshit and cutting it away from my thoughts”, or that the value of doing this isn’t as high as you thought?
Looking back today, the improvement seems smaller than I thought then, and LW seems to have played a smaller role in it.
I used to be very skeptical of Eliezer’s ideas about improving rationality when he was posting the Sequences, but one result that’s hard to deny is that all of a sudden there is a community of people who I can discuss my decision theory ideas with, whereas before that I seemingly couldn’t get them across to anyone except maybe one or two people, even though I had my own highly active mailing list.
I’d say that being able to achieve this kind of subtle collective improvement in philosophical ability is already quite impressive, even if the effect is not very dramatic in any given individual. (Of course ultimately the improvement has to be graded against what’s needed to solve FAI and not against my expectations, and it seems to still fall far short of that.)
It’s indeed nice to have a community that discusses decision-theoretic ideas, but a simpler explanation is that Eliezer’s writings attracted many smart folks and also happened to make these ideas salient, not that Eliezer’s writings improved people’s philosophical ability.
Attracting many smart folks and making some particular ideas salient to them is no mean feat in itself. But do you think that’s really all it took? That any group of smart people, if they get together and become interested in some philosophical topic, could likely make progress instead of getting trapped in a number of possible ways?
I think it’s always helpful when a community has a vernacular and a common library of references. It’s better if the references are unusually accurate, but even bland ones might still speed up progress on projects.
Eliezer’s writings were certainly the focus of my own philosophical development. The current me didn’t exist before processing them, and was historically caused by them, even though it might have formed on its own a few years later.
Hmm. Thanks for that update.
Earlier today I had been reflecting that, since I started reading LessWrong, I’ve noticed a considerable increase in my ability to spot and pick apart bullshit and flawed arguments—without paying much attention to whether I was really asking myself the right questions, since I favored other things I considered more important to think about.
Reading this made me realize that I’ve drawn a conclusion too early. Perhaps I should re-read those “epiphany addiction” posts with this in mind.
Thanks. In most of those links, the author says that he gained some useful mental tools, and maybe that he feels better. That’s good. But no one said that rationality helped them achieve any goal other than the goal of being rational.
For example:
Launch a successful startup
Get a prestigious job
Break out of a long-term abusive relationship.
Lose weight (Diets are discussed, but I don’t see that a discussion driven by LW/SI-rationality is any more successful in this area than any random discussion of diets.)
Get lucky in love (and from what I can tell, the PUAs do have testimonials for their techniques)
Avoid akrasia (The techniques discussed are gathered from elsewhere; so to the extent that rationality means “reading up on the material,” the few successes attested in this area can count as confirmation.)
Break an addiction to drugs/gambling.
… and so on.
Religious deconversion doesn’t count for the purpose of my query unless the testimonial describes some instrumental benefit.
Carl’s comment about the need for an experiment is good; but if someone can just give a testimonial, that would be a good start!
There’s also Zvi losing weight with TDT. :)
Losing weight is a core human value?
Thanks, I edited it.
I think LW-style thinking may have helped me persist better at going to the gym (which has been quite beneficial for me) than I otherwise would have, but obviously it’s hard to know for sure.
Or even better:
“I used to buy lottery tickets every day but now I understand the negative expectation of the gamble and the diminishing marginal utility of the ticket, so I don’t.”
A doctor says “I now realize that I was giving my patients terrible advice about what it meant when a test showed positive for a disease. Now that I have been inducted into the Secret Order of Bayes, my advice on that is much better.”
… etc.
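To make the arithmetic behind those two hypothetical testimonials concrete, here is a minimal sketch; every number in it (ticket price, jackpot, odds, prevalence, test accuracy) is an assumption chosen for illustration, not a figure from the thread:

```python
import math

# 1. Lottery tickets: negative expectation, plus diminishing marginal utility of a win.
ticket_price, jackpot, p_win = 2.00, 1_000_000, 1e-7    # assumed odds: 1 in 10 million
print(p_win * jackpot - ticket_price)                   # expected value ~ -$1.90 per ticket
wealth = 50_000                                         # with log utility, a jackpot is worth far
gain = p_win * (math.log(wealth + jackpot) - math.log(wealth))   # less than its dollar size suggests
loss = math.log(wealth) - math.log(wealth - ticket_price)
print(gain < loss)                                      # True: buying the ticket lowers expected utility

# 2. The doctor's positive test: P(disease | positive) via Bayes' theorem.
prevalence, sensitivity, false_positive = 0.01, 0.90, 0.09
p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)
print(sensitivity * prevalence / p_positive)            # ~0.09, not 0.90
```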
July minicamper here. My own life has had enough variance over the past few months across many variables (location, job, romantic status), with too many exogenous factors, for me to be very confident about the effect of minicamp, aside from a few things (far fewer totally wasted days than I used to suffer from what I saw as inescapable moodiness).
But I’ve gained an identifiable superpower in the realm of talking helpfully to other people by modeling their internal conflicts more accurately, by steering them toward “making deals with themselves” rather than ridiculous memes like “using willpower”, and by noticing confusion and getting to the root of it via brainstorming and thought experiments. And the results have absolutely floored people, in three different cases. If you’re worried about epiphany addiction, then I suppose you might label me a “carrier” (although there’s the anomalous fact that friends have followed through on my advice to them after talking to me).
Great, I’d love to have that superpower!
I think the probability of my having got my current job without LW etc. is under 20%.
Subjectively I feel happier and more effective, but there’s not reliable external evidence for this. I’ve gotten better at talking to people and interacting in positive ways thanks to using metacognition, and my views have become more consistent. Timeless thinking has helped me adopt a diet and stick to it, as well as made me start wearing my seatbelt.