I’m disappointed at how few of these comments, particularly the highly-voted ones, are about proposed solutions, or at least proposed areas for research. My general concern about the LW community is that it seems much more interested in the fun of debating and analyzing biases than in the boring, repetitive trial and error of correcting them.
Anna’s post lays out a particular piece of poor performance which is of core strategic value to pretty much everyone—how to identify and achieve your goals—and which, according to me and many other people and authors, can be greatly improved through study and practice. So I’m very frustrated by all the comments about the fact that we’re just barely intelligent, and by the debates about the intelligence of the average person. It’s like if Eliezer posted about the potential for AI to kill us all and people debated how the AI would choose to kill us instead of how to stop it from happening.
Sorry, folks, but compared to the self-help/self-development community, Less Wrong is currently UTTERLY LOSING at self-improvement and life optimization. Go spend an hour reading Merlin Mann’s site and you’ll learn way more instrumental rationality than you do here. Or take a GTD class, or read a top-rated time-management book on Amazon.
Talking about biases is fun, working on them is hard. Do Less Wrongers want to have fun, or become super-powerful and take over (or at least save) the world? So far, as far as I can tell, LW is much worse than the Quantified Self & time/attention-management communities (Merlin Mann, Zen Habits, GTD) at practical self-improvement. Which is why I don’t read it very often. When it becomes a rationality dojo instead of a club for people who like to geek out about biases, I’m in.
I’ve been disappointed in LessWrong too, and it’s caused me to come here more and more infrequently. I’m even talking about the lurking: I used to come here every other day, then every week, then it dropped to once a month.
I get the impression many people either didn’t give a shit, or despaired so much about their own ability to function better through any reasonable effort that they dismissed everything that came along. It used to make me really mad, or sad. Probably I took it a little too personally too, because I read a lot of EY’s classic posts as inspiration not to fucking despair about what seemed like a permanently ruined future. “tsuyoku naritai” and “isshou kenmei” and “do the impossible” and all that said: look, people out there are working on much harder problems—there’s probably a way up and out for you too. The sadness came from wanting other people to get at least that; the anger, from a lot of LessWrongers not seeming to get the point.
On the other hand, I’m pleased with our OvercomingBias/LessWrong meetup group in NYC. I think we do a good job of helping other members, in person, with practical solutions to problems—with how we can all become really successful. Maybe it’s because a lot of our members have integrated ideas from QS, Paleo, CrossFit, Seth Roberts, and PJ Eby. We’ve counseled members on employment opportunities, how to deal with crushing student and consumer debt, how to make money, and nutrition. By now we all tend to look down on the kind of despairing analysis that’s frequently upvoted here on LW. We talk about FAI sparingly these days, unless someone has a particular insight we think would be valuable. Instead, the sentiment is more, “Shit, none of us can do much about it directly. How ’bout we all get freaking rich and successful first!”
I suspect the empathy formed from face-to-face contact can be a really great motivator. You hear someone’s story from their own mouth and think, “Shit man, you’re cool, but you’re in bad shape right now. Can we all figure out how to help you out?” Little by little people relate, even the successful ones—we’ve all been there in small ways. This eventually moves towards, “Can we think about how to help all of us out?” It’s not about delivering a nice tight set of paragraphs with appropriate references and terminology. When we see each other again, we care that our proposed solutions and ideas are going somewhere because we care about the people. All the EvPsych speculation and calibration admonitions can go to hell if it doesn’t fucking help. But if it does, use it, use it to help people, use it to help yourself, use it to help the future light cone of the human world.
Yet if we’re intentional about it, I think we can keep it real here too. We can give a shit. Okay, maybe I don’t know that. Maybe it takes looking for and rewarding the useful insights, and then coming back later and talking about how the insights were useful. Maybe it takes getting a little more personal. Maybe I and my suggestions are full of shit but, hell, I want to figure this out. I used to talk about LessWrong with pride and urge people to come check it out, because the posts were great, the commenting scheme was great, it was a shining example of what the rest of the intellectually discursive interwebs could be like. And, man, I’d like it to be that way again.

So damn, what do y’all think?
If there are (relative to LW) many good self-help sites and no good sites about rationality as such, that suggests to me that LW should focus on rationality as such and leave self-help to the self-help sites. This is compatible with LW’s members spending a lot of time on self-help sites that they recommend to each other in open threads.
My impression is that there are two good reasons to incorporate productivity techniques into LW, instead of aiming for a separate community specialized in epistemic rationality that complements self-help communities.
1. Our future depends on producing people who can both see what needs doing (wrt existential risk, and any other high-stakes issues), and actually do things. This seems far more probable than “our future depends on creating an FAI team”, or than “our future depends on plan X” for any other specific plan X. A single community that teaches both, and that also discusses high-impact philanthropy, may help.
2. There seems to be a synergy between epistemic and instrumental rationality, in the sense that techniques for each give boosts to the other. Many self-help books, for example, spend much time discussing how to think through painful subjects instead of walling them off (instead of allowing ugh fields to clutter up your to-do list, or rationalized “it’s all your fault” reactions to clutter up your interpersonal relations). It would be nice to have a community that could see the whole picture here.
Instrumental rationality and productivity techniques and self-help are three different though overlapping things, though the exact difference is hard to pinpoint. In many cases it can be rational to learn to be more productive or more charismatic, but productivity and charisma don’t thereby become kinds of rationality. Your original post probably counts as instrumental rationality in that it’s about how to implement better general decision algorithms. In general, LW will probably have much more of an advantage relative to other sites in self-help that’s inspired by the basic logic/math of optimal behavior than in other kinds of self-help.
Re: 1, obviously one needs both of those things, but the question is which is more useful at the margin. The average LWer will go through life with some degree of productivity/success/etc even if such topics never get discussed again, and it seems a lot easier to get someone to allocate 2% rather than 1% of their effort to “what needs doing” than to double their general productivity.
I feel like noting that none of the ten most recent posts are about epistemic rationality; there’s nothing that I could use to get better at determining, just to name some random examples, whether nanotech will happen in the next 50 years, or whether egoism makes more philosophical sense than altruism.
On the other hand, I think a strong argument for having self-help content is that it draws people here.
But part of my point is that LW isn’t “focusing on rationality”, or rather, it is focusing on fun theoretical discussions of rationality rather than practical exercises that are hard work to implement but actually make you more rational. The self-help / life hacking / personal development community is actually better (in my opinion) at helping people become more rational than this site ostensibly devoted to rationality.
The self-help / life hacking / personal development community is actually better (in my opinion) at helping people become more rational than this site ostensibly devoted to rationality.
Hmm. The self-help / life hacking / personal development community may well be better than LW at focusing on practice, on concrete life-improvements, and on eliciting deep-seated motivation. But AFAICT these communities are not aiming at epistemic rationality in our sense, and are consequently not hitting it even as well as we are. LW, for all its faults, has had fair success at teaching folks how to think usefully about abstract, tricky subjects on which human discussions often tend rapidly toward nonsense (e.g. existential risk, optimal philanthropy, or ethics). It has done so by teaching such subskills as:
Never attempting to prove empirical facts from definitions;
Never saying or implying “but decent people shouldn’t believe X, so X is false”;
Being curious; participating in conversations with intent to update opinions, rather than merely to defend one’s prior beliefs;
Asking what potential evidence would move you, or would move the other person;
Not expecting all sides of a policy discussion to line up;
Aspiring to have true beliefs, rather than to make up rationalizations that back the group’s notions of virtue.
By all means, let’s copy the more effective, doing-oriented aspects of life hacking communities. But let’s do so while continuing to distinguish epistemic rationality as one of our key goals, since, as Steven notes, this goal seems almost unique to LW, is achieved here more than elsewhere, and is necessary for tackling e.g. existential risk reduction.
The self-help / life hacking / personal development community is actually better (in my opinion) at helping people become more rational than this site ostensibly devoted to rationality.
Could you elaborate on what you mean by that claim, or why you believe it?
I love most of your recent comments, but on this point my impression differs. Yes, folks often learn more from practice, exercises, and deep-seated motivation than from having fun discussions. Yes, some self-help communities are better than LW at focusing on practice and life-improvement. But, AFAICT: no, that doesn’t mean these communities do more to boost their participants’ epistemic rationality. LW tries to teach folks skills for thinking usefully about abstract, tricky subjects on which human discussions often tend rapidly toward nonsense (e.g. existential risk, optimal philanthropy, or ethics). And LW, for all its flaws, seems to have had a fair amount of success in teaching its longer-term members (judging from my discussions with many such, in person and online) such skills as:
Never attempting to prove empirical facts from definitions;
Never saying or implying “but decent people shouldn’t believe X, so X is false”;
Being curious; participating in conversations with intent to update opinions, rather than merely to defend one’s prior beliefs;
Asking what potential evidence would move you, or would move the other person;
Not expecting all sides of a policy discussion to line up;
Aspiring to have true beliefs, rather than to make up rationalizations that back the group’s notions of virtue.
Do you mean: (1) self-help sites are more successful than LW at teaching the above, and similar, subskills; (2) the above subskills do not in fact boost folks’ ability to think non-nonsensically about abstract and tricky issues; or (3) LW may better boost folks’ ability to think through abstract issues, but that ability should not be called “rationality”?
I’m surprised that you seem to be saying that LW shouldn’t get more into instrumental rationality! That would seem to imply that you think the good self-help sites are doing enough, and I really don’t agree with that. I think LWers are uniquely suited to add to the discussion. More bright minds taking a serious, critical look at all things and, importantly, urgently looking for solutions have a strong chance of making a significant dent in them.
The major point of the GGP, though, is not about what’s being discussed, but how. He’s bemoaning that when topics related to self-improvement come up, we completely blow it! A lot of ineffectual discussion gets upvoted. I’m guilty of this too, but this little tirade’s convinced me that we can do better, and that it’s worth thinking about how to do better.
Instead, the sentiment is more, “Shit, none of us can do much about it directly. How ’bout we all get freaking rich and successful first!”
Well, I think that’s the rational thing to do for the vast majority of people. Not only because of public-good problems, but because if there’s something bad about the world which affects many people negatively, it’s probably hard to fix, or one of the many sufferers would have fixed it already. Whereas your life might not have been fixed simply because you haven’t tried yet. It’s almost always a better use of your resources. Plus, “money is the unit of caring”, so the optimal way to help a charitable cause is usually to earn your max cash and donate, as opposed to working on it directly.
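To make the “earn and donate” arithmetic concrete, here is a toy sketch in Python; every number in it is hypothetical, and it deliberately ignores taxes, motivation, and skill-building:

```python
# Toy comparison of "work on the cause directly" vs. "earn and donate".
# All numbers are made up for illustration.
wage = 60.0          # $/hour you can earn in your own profession
direct_value = 25.0  # $/hour of value your (non-specialist) direct work
                     # creates for the cause

hours = 100          # hours you are willing to give this year

print(f"direct work delivers ${hours * direct_value:,.0f} of value")
print(f"earning and donating ${hours * wage:,.0f} of value")
# Donating wins whenever wage > direct_value, i.e. whenever the cause can
# buy more impact with your money than you can produce with your hands.
```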
I suspect the empathy formed from face to face contact can be a really great motivator.
Agreed. Not just a motivator to help other people—but f2f contact is more inherently about doing, while web forums are more inherently about talking. In person it is much more natural to ask about someone’s life and how it is going—which is where interventions happen.
Yet if we’re intentional about it I think we can keep it real here too.
Perhaps. I think it will need a lot of intentionality, and a combination of in-person meetups and online discussions. I’ve thought about this as a “practicing life” support group; Eliezer’s term is “rationality dojo”. Either way, the key is to treat rationality and success just like any other skill: you learn by breaking it down into practicable components and then practicing with intention and feedback, ideally in a social group. The net can be used to track the skill exercises, comment on alternative solutions to various problems, rank the leverage of the interventions, and so forth.
But the key, from my perspective, is that the website would be more of a database than an interaction forum: “This is where you go to find your local chapter, and a list of starting exercises / books to work through together / metrics / etc.” A sketch of what such a database might hold follows below.
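As a minimal sketch (in Python, with all names and fields hypothetical), the database idea might amount to little more than records like these:

```python
from dataclasses import dataclass, field

@dataclass
class Exercise:
    name: str    # e.g. a calibration drill
    source: str  # the book, post, or workshop it comes from
    metric: str  # how progress on it is measured

@dataclass
class Chapter:
    city: str
    contact: str  # how to reach the organizers
    curriculum: list[Exercise] = field(default_factory=list)

def find_chapter(chapters: list[Chapter], city: str) -> Chapter | None:
    """A newcomer's query: find my local chapter and its starting exercises."""
    return next((c for c in chapters if c.city.lower() == city.lower()), None)
```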
I’m new here at LW—are there any chapters outside of the New York meetup?
If not, is there a LW mechanism to gather location info from interested participants to start new ones? Top-level post and a Wiki page?
I created a Wiki account to kick things off, but as a newb I think I can’t create an article yet, and quite frankly I’m not confident enough that that’s the right way to go about it to do it even if I could. So if you’ve been here longer and think that’s the right way, please do it and direct LWers to the Wiki page.

http://wiki.lesswrong.com/wiki/LocalChapters
“money is the unit of caring”, so the optimal way to help a charitable cause is usually to earn your max cash and donate, as opposed to working on it directly.
This is false. Giving food directly to starving people (however it is obtained) is much better than throwing financial aid at a nation or institution and hoping that it manages to “trickle down” past all the middlemen and career politicians/activists and is eventually used to purchase food that actually gets to the people who need it. The only reason sayings like the above are so common and accepted is that people assume there are no methods of Direct Action that will directly and immediately alleviate suffering, and so compare “throwing money at it” only to petitioning, marching, and lengthy talks/debates. Yes, in those instances years of political lobbying may do a lot less than using that lobbying money to directly buy necessities for the needy, or donating it to an organization that does (after taking a cut for the cost of functioning, and to pay themselves); but compared to actually getting/taking the necessary goods and services directly to the needy (and teaching them methods for doing so themselves), it doesn’t hold up.

Another way of comparison is to ask: “what if everyone (or even most people) did what people said was best?” If we compare the rule “donate money to institutions you trust (after working up to the point where you feel wealthy enough to do so)” with the rule “directly apply your time and energy in volunteer work and direct action”, the latter would lead to immediate relief and learning for those in need, while the former would be a long-term hope that the money works its way through bureaucracies, survives the continual shaving of funds for institutional overhead and employee payment, and is eventually used to buy the necessities people need (hoping that everything they need can be bought, and that they haven’t starved or died of exposure in the meantime).
Another way of comparison is to ask: “what if everyone (or even most people) did what people said was best?” If we compare the rule “donate money to institutions you trust (after working up to the point where you feel wealthy enough to do so)” with the rule “directly apply your time and energy in volunteer work and direct action”, the latter would lead to immediate relief and learning for those in need, while the former would be a long-term hope that the money works its way through bureaucracies, survives the continual shaving of funds for institutional overhead and employee payment, and is eventually used to buy the necessities people need (hoping that everything they need can be bought, and that they haven’t starved or died of exposure in the meantime).
This rule is an awful way to evaluate prescriptive statements. For example:
Should I become an artist for a living?
Everyone would die of starvation if everyone did this. Your comparative system prohibits every profession but subsistence agriculture. That means I don’t like your moral system and think that it is silly.
Aside from problems like that one, you’ll also run into major problems with game theory, such as collective action problems and the prisoner’s dilemma. It makes no sense at all to think that by extrapolating from individual action to communal action and evaluating the hypothetical results, we will then be able to evaluate which individual actions are good. I don’t know why this belief is so common, but it is.
Just-so story: leaders needed to be able to evaluate things this way, and evolution had no choice but to give everyone this trait so that the leaders would also receive it. Another just-so story: this is a driving force behind social norms, which are useful from an individual perspective because those who violate social norms are outcompeted.
Of course, people who use rules like that to evaluate their actions won’t normally run into those sorts of silly conclusions. But the reason for that isn’t that the rules make sense; it’s that the rules will only be invoked selectively, to support conclusions that are already believed in. It’s a way of making personal preferences and beliefs appear to have objective weight behind them, but it’s really just an extension of your assumptions and an oversimplification of reality.
Come on now; I only recently came out of lurking here because I had found evidence that this site and its visitors welcome dissenting debate, and hold high standards for rational discussion.
Should I become an artist for a living? -- Everyone would die of starvation if everyone did this. Your comparative system prohibits every profession but subsistence agriculture. That means I don’t like your moral system and think that it is silly.
Could you please present some evidence for this? Your claim rests on the assumption that to “do art” or “be an artist” means doing art 24/7, sitting there painting until you starve to death. Everyone can be an artist, just make art; and that doesn’t exclude doing other things as well. Can I be an artist for a living; can everyone? Maybe, but it sure would be a lot more likely if our society put its wealth and technology towards giving everyone subsistence-level comfort (if you believe our current technological state is incapable of this, then you’d need to argue for that, and for why it isn’t worth trying, or at least doing the most we could anyway). The argument is that if individuals and groups in our society actually did some of the direct actions that could have immediate and life-changing results, rather than trying to “amass wealth for charity” or “petition for redress of grievances” alone, we would see much better results, and our understanding of what worlds are possible and within our reach would change as well. One can certainly disagree with or argue against this claim, but changing the subject to surviving on art, or just asserting that such actions could only be done via subsistence agriculture, are claims that need some evidence, or at least some more rationale. And, as really shouldn’t need to be stated, “not liking” something doesn’t make it less likely or untrue, and calling an argument silly is itself silly if you don’t present justification for why you think that is the case.
As for “extrapolating from individual action into communal action”, just because it is not a sure-fire way to certain morality (nothing is) doesn’t mean that such thought experiments aren’t useful for pulling out implications and comparing ideas/methodologies. I certainly wouldn’t claim that such an argument alone should convince anyone of anything; as it says, it is just “another way of comparison” to try and explain a viewpoint and look at another facet of how it interacts with other points of view.
I’m sorry, but I have failed to understand your last paragraph. It reeks of sophistry: claiming that there are a bunch of irrational and bias-based elements to a viewpoint you don’t like, without actually citing any specific examples (and assuming that such a position couldn’t be stated in any way without them). That last sentence is completely unsupported; it assumes its own conclusion, that such claims only “appear to have objective weight” but are “really just an extension of your assumptions and an oversimplification of reality”. Simplified, it states: it is un-objective because of its un-objectivity. Evidence and rationale, please? And please remember that Reverse Stupidity Is Not Intelligence.
Your first paragraph attacks the validity of the art example; I’m willing to drop that for simplicity’s sake.
Your second paragraph concedes that it’s not a good way to approximate morality. You say that nothing is. I interpret that as a reason that we shouldn’t approach moral tradeoffs with hard-and-fast decision rules, rather than as a reason that any one particular sort of flawed framework should be considered acceptable. You say that it’s a useful thought experiment; I fundamentally disagree. It only muddles the issue, because individual actors do not have agency over each other’s actions. I do not see any benefit to using this sort of thought experiment; I only see a risk that the relevance and quality of analysis are degraded.
You might be misunderstanding my last paragraph. I’m saying that the type of thought experiment you use is one that is normally, almost always, only used selectively, which suggests that it’s not the real reason behind whatever position it’s being used to advance. No one considers the implications of what would happen if everyone made the same career choices or if everyone made the same lifestyle choices, and then comes to conclusions about what their own personal lives should be like based on those potential universalizations. For example, in response to my claims about art, you immediately started qualifying exactly how much art would be universal and taken as a profession, and added a variety of caveats. But you didn’t attempt to consider similar exemptions when considering whether we should view charity donations on a universal level as well, which tells me that you’re applying the technique unfairly.
People only ever seem to imagine these scenarios in cases where they’re trying to garner support for individual actions but are having a difficult time justifying their desired conclusion from an individual perspective, so they smuggle in the false assumptions that individuals can control other people and that if an action has good consequences for everyone then it’s rational for each individual to take that action (this is why I mentioned game theory previously). These false assumptions are the reason that I don’t like your thought experiment.
Giving food directly to starving people (however it is obtained) is much better than throwing financial aid at a nation or institution
What’s your estimate of how much money and how much time I would have to spend to deliver $100 of food directly to a starving person? Does that estimate change if 50% of my neighbors are also doing this?
Actually, my point is that questions like that already guide discussion away from alternative solutions which may be capable of making a real impact (outside of needing to “become rich” first, or risking the cause getting lost in bureaucracy and profiteering). Take a group like Food Not Bombs, for instance; they diminish the “money spent” part of the equation by dumpstering and getting food donations. The time involved would of course depend on where you live, how easily you could find corporate food waste (sometimes physically guarded by locks, wire, and even men with guns to enforce artificial scarcity), and on transporting it to the people who need it. More people joining in would of course mean more food must be produced and more area covered in the search for food waste to be reclaimed. A fortunate thing is that the more people pitch in, the less time it takes to do large amounts of labor that benefits everyone; thus the term mutual aid.
I’m not even taking the cost of the food into consideration. I’m assuming there’s this food sitting here... perhaps as donations, perhaps by dumpstering, perhaps by theft, whatever. What I was trying to get a feel for is your estimate of the costs of individuals delivering that food to where it needs to go. But it sounds like you endorse people getting together in groups in order to do this more efficiently, as long as they don’t become bureaucratic institutions in the process, so that addresses my question. Thanks.
To people who go to meetups in other parts of the world: are they all like this? How do they vary in terms of satisfaction and progress in achieving goals?
Interestingly, the people who seem most interested in the topic of instrumental rationality never seem to write a lot of posts here, compared to the people interested in epistemic rationality. Maybe that’s because you’re too busy “doing” to teach (or to ask good open questions), but I’m confident that’s not true of all the I-Rationality crowd.
Of course, as an academic, I’m perfectly happy staying on the E-Rationality side.
Instrumental rationality is one of my primary interests here, but I don’t post much—the standard here is too high. All I have to offer is personal anecdotal evidence about various self-help / anti-akrasia techniques I tried on myself, and I always feel a bit guilty when posting them because unsubstantiated other-optimizing is officially frowned upon here. Attempting to extract any deep wisdom from these anecdotes would be generalizing from one example.
An acceptable way to post self-help on LW would be in the form of properly designed, properly conducted long-term studies of self-help techniques. However, designing and conducting such studies is a full-time job which ideally requires a degree in experimental psychology.
If that’s true, we absolutely need to lower the bar for such posts. Three good sorts of posts that are not terribly difficult are: (1) a review of a good self-help book and what you personally took from it; (2) a few-sentence summary of an academic study on an income-boosting technique, a method for improving your driving safety, or other useful content, with a link to the same; or (3) a description of a self-intervention you tried and tracked the impacts of, quantified-self style.
When someone says they have anecdotes but want data, I hear an opportunity for crowdsourcing.
Perhaps a community blog is the wrong tool for this? What if we had a tool that supported tracking rationalist intervention efficacy? People could post specific interventions and others could report their personal results. Then the tool would allow for sorting interventions by reported aggregate efficacy. Maybe even just a simple voting system?
That seems like it could be a killer app for lowering the bar toward encouraging newcomers and data-poor interventions from getting posted and evaluated.
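As a minimal sketch of that tool’s core (in Python, with the scoring scheme and intervention names invented for illustration), the whole thing could start out as little more than:

```python
from collections import defaultdict
from statistics import mean

# intervention name -> self-reported efficacy scores
# (-1 = made things worse, 0 = no effect, +1 = helped)
reports: dict[str, list[int]] = defaultdict(list)

def report(intervention: str, score: int) -> None:
    reports[intervention].append(score)

def ranked() -> list[tuple[str, float, int]]:
    """Interventions sorted by mean reported efficacy, with sample size."""
    return sorted(
        ((name, mean(scores), len(scores)) for name, scores in reports.items()),
        key=lambda row: row[1],
        reverse=True,
    )

report("pomodoro", 1); report("pomodoro", 1); report("pomodoro", 0)
report("inbox zero", -1); report("inbox zero", 1)
print(ranked())  # pomodoro ranks first: higher mean reported efficacy
```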
I have been thinking that LW really needs a categorization system for top-level posts; this would create a way to post on ‘lighter’ topics without feeling like you’re not matching people’s expectations.
I’ve had this very failure to communicate with Tom McCabe (so the evidence is mounting that the problem is with me, rather than all of you) - Tags are categories, only with more awesome and fewer constraints. If “predefined categories can be used to drive navigation”, then surely Tags can be used to drive navigation, without having to be predefined.
Is the problem just that the commonly used Tags need to be positioned differently in the site layout?
I think xamdam meant that there should be a category of “lighter” posts that people could opt out of (i.e., not see in their feed of new posts), so that they wouldn’t have the right to complain that such posts didn’t live up to their expectations. Promotion means that there are two tiers, but I’m not sure whether people read the front page or the new posts.
Incidentally, I think people are using the tags too much for subject matter and not enough for indicating this kind of weight or type of post. For example, I don’t see a tag for self-experimentation. If the tags were visible in the article editing mode, that would encourage people to reuse the same tags, which is important for making them function (though maybe retagging is the only way to go). If predefined tags were visible in the article editing mode, that would encourage posts on those topics; in particular, it could be used to indicate that some things are acceptable, as in Anna’s list above.
Idea #3 (less easy) is to support saveable searches that include or exclude tags (and rss feeds of those searches) so that users can view the site through that customized lens.
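One minimal shape such a saved search could take (a Python sketch; the tag names are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class SavedSearch:
    name: str
    include: set[str]  # a post must carry at least one of these tags
    exclude: set[str]  # a post must carry none of these tags

    def matches(self, post_tags: set[str]) -> bool:
        return bool(post_tags & self.include) and not (post_tags & self.exclude)

heavy_only = SavedSearch("no lighter posts",
                         include={"epistemic-rationality"},
                         exclude={"self-experimentation", "light"})
print(heavy_only.matches({"epistemic-rationality", "biases"}))  # True
print(heavy_only.matches({"light", "epistemic-rationality"}))   # False
```

An RSS feed of a saved search would then just be this filter applied to new posts as they arrive.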
Maybe that’s because you’re too busy “doing” to teach
I think there is definitely some of that, and I’ve heard that from other LW “fringers” like myself—people who love the concept of rationality and support the philosophy of LW, but have no time to write posts because their lives are full to the brim with awesome projects.
One problem, I think, is that teaching and writing things up well/usefully is work. I spend time reading and writing blogs, and I do that in my “fun time” because it is fun. Careful writing about practical rationality would be work and would come out of my work time, and my work time is very, very full. Which suggests that to advance, we need people whose job it is to do this work. Which is part of what we see in the self-improvement world—people get paid to write books and run workshops, and while there is lots of crap out there, generally the result is higher-quality and more useful material.
I agree 100%. This reminds me about a recent interview with Robin Hanson in which he commented something along the lines of: “If you want to really be rational or scientific you need a process with more teeth, just having a bunch of people who read the same web pages is not enough.”
rationality dojo—group of people practicing together to become more rational, not as an intellectual exercise (“I can rattle off dozens of cognitive biases!”) but by actually becoming more rational themselves. It would spend a lot more time on boring practical things, and less on shiny ideas. The effort would be directed towards irrationalities weighted by their negative impact on the participant’s lives, rather than how interesting they are.
Sure, I will see if I can find the time to write a top-level post on this, thanks for asking.
Go spend an hour reading Merlin Mann’s site and you’ll learn way more instrumental rationality than you do here.
Really? Could you point out some posts you think are particularly helpful? Recent posts? I used to read his site and remember finding it gradually more disappointing and dropping it off my list. I don’t really remember why, though.
Ah, his email theory—I used to think that looked like a message from an alien world. Re-reading it briefly now it still looks completely alien, describing a situation I have never found myself in. I just haven’t ever had the feeling of being overwhelmed by email or having any sort of management problem with email. Still, I’m sure there are people who do have that problem and find Mann’s writings helpful. I remember a guy back in college who swore by this inbox zero stuff. (I also remember having exchanges with him like: “That info you need is in the email I sent you a few days ago.” “Uh, could you resend that? I delete all my email.”)
I’ll see if I can find the time and attention to check out the time and attention video. I would have strongly preferred text, though. Watching 80 minute lectures is not something I can always easily arrange.
I remember a guy back in college who swore by this inbox zero stuff. (I also remember having exchanges with him like: “That info you need is in the email I sent you a few days ago.” “Uh, could you resend that? I delete all my email.”)
Mann (after David Allen) recommends processing your email, then moving it out of your inbox to the place it belongs. He does not recommend deleting emails you have not finished with yet.
Mann has post titles like Inbox Zero: Delete, delete, delete—my friend took that to heart. I’m personally never ‘finished with’ an email in the sense that I’m confident that I’ll never ever want to look at it again. I search through my email archives all the time.
Admittedly, Mann, in that article, says that he archives his mail and doesn’t delete it—but he presents that as a “big chicken” option and a couple of paragraphs up he’s lambasting “holding” folders.
Anyway, I’ve got nothing in particular against Mann—I just don’t find what he’s saying useful or fun (I tried the recommended video but 10 minutes in I turned it off, he didn’t seem to be saying anything interesting I hadn’t heard before) while I do find LessWrong frequently useful or fun.
So now you have a highly-voted comment which contains no solutions to the problem but only a criticism of how many highly-voted comments here contain no solutions but only criticisms?
I implied solutions. Like: people who want to get more rational should go read self-help / life hacking books instead of LW. And if LW wants to be more useful, it should become more like the self-help & life hacking community—focused on practical changes one can make in one’s own life, explicit exercises for increasing rationality, and groups that work together in person to provide feedback, monitor performance, provide social motivation, etc.
I’m disappointed at how few of these comments, particularly the highly-voted ones, are about proposed solutions, or at least proposed areas for research. My general concern about the LW community is that it seems much more interested in the fun of debating and analyzing biases, rather than the boring repetitive trial-and-error of correcting them.
Anna’s post lays out a particular piece of poor performance which is of core strategic value to pretty much everyone—how to identify and achieve your goals—and which, according to me and many people and authors, can be greatly improved through study and practice. So I’m very frustrated by all the comments about the fact that we’re just barely intelligent and debates about the intelligence of the general person. It’s like if Eliezer posted about the potential for AI to kill us all and people debated how they would choose to kill us instead of how to stop it from happening.
Sorry, folks, but compared to the self-help/self-development community, Less Wrong is currently UTTERLY LOSING at self-improvement and life optimization. Go spend an hour reading Merlin Mann’s site and you’ll learn way more instrumental rationality than you do here. Or take a GTD class, or read a top-rated time-management book on Amazon.
Talking about biases is fun, working on them is hard. Do Less Wrongers want to have fun, or become super-powerful and take over (or at least save) the world? So far, as far as I can tell, LW is much worse than the Quantified Self & time/attention-management communities (Merlin Mann, Zen Habits, GTD) at practical self-improvement. Which is why I don’t read it very often. When it becomes a rationality dojo instead of a club for people who like to geek out about biases, I’m in.
I’ve disappointed in LessWrong too, and it’s caused me to come here more and more infrequently. I’m even talking about the lurking. I used to come here every other day, then every week, then it dropped to once a month. This
I get the impression many people either didn’t give a shit or despaired about their own ability to function better through any reasonable effort that they dismissed everything that came along. It used to make me really mad, or sad. Probably I took it a little too personally too, because I read a lot of EY’s classic posts as inspiration not to fucking despair about what seemed like a permanently ruined future. “tsuyoku naritai” and “isshou kenmei” and “do the impossible” and all that said, look, people out there are working on much harder problems—there’s probably a way up and out for you too. The sadness: I wanted other people to get at least that, and the anger—a lot of LessWrongers not seeming to get the point.
On the other hand, I’m pleased with our OvercomingBias/LessWrong meetup group in NYC. I think we do a good job in-person helping other members with practical solutions to problems—how we can all become really successful. Maybe it’s because a lot of our members have integrated ideas from QS, Paleo, and CrossFit, Seth Roberts, and PJ Eby. We’ve counseled members on employment opportunities, how to deal with crushing student and consumer debts, how to make money, and nutrition. By now we all tend to look down on the kind of despairing analysis that’s frequently upvoted on here LW. We talk about FAI sparingly these days, unless someone has a particular insight we think would be valuable. Instead, the sentiment is more, “Shit, none of us can do much about it directly. How ’bout we all get freaking rich and successful first!”
I suspect the empathy formed from face to face contact can be a really great motivator. You hear someone’s story from their own mouth and think, “Shit man, you’re cool, but you’re in bad shape right now. Can we all figure out how to help you out?” Little by little people relate, even the successful ones—we’ve all been there in small ways. This eventually moves towards, “Can we we think about how to help all of us out?” It’s not about delivering a nice tight set of paragraphs with appropriate references and terminology. When we see each other again, we care that our proposed solutions and ideas are going somewhere because we care about the people. All the EvPsych speculation and calibration admonitions can go to hell if doesn’t fucking help. But if it does, use it, use it to help people, use it to help yourself, use it to help the future light cone of the human world.
Yet if we’re intentional about it I think we can keep it real here too. We can give a shit. Okay, maybe I don’t know that. Maybe it takes looking for and rewarding the useful insights and then coming back later and talking about how the insights were useful. Maybe it takes getting a little more personal. Maybe I and my suggestions are full of shit but, hell, I want to figure this out. I used to talk about LessWrong with pride and urge people to come check it out because the posts were great, the commenters /comment scheme is great, it was a shining example of what the rest of the intellectually discursive interwebs could be like. And, man, I’d like it to be that way again.
So damn, what do y’all think?
If there are (relative to LW) many good self-help sites and no good sites about rationality as such, that suggests to me LW should focus on rationality as such and leave self-help to the self-help sites. This is compatible with LW’s members spending a lot of time on self-help sites that they recommend each other in open threads.
My impression is that there are two good reasons to incorporate productivity techniques into LW, instead of aiming for a separate community specialized in epistemic rationality that complements self-help communities.
Our future depends on producing people who can both see what needs doing (wrt existential risk, and any other high-stakes issues), and can actually do things. This seems far higher probability than “our future depends on creating an FAI team” and than “our future depends on plan X” for any other specific plan X. A single community that teaches both, and that also discusses high-impact philanthropy, may help.
There seems to be a synergy between epistemic and instrumental rationality, in the sense that techniques for each give boosts to the other. Many self-help books, for example, spend much time discussing how to think through painful subjects instead of walling them off (instead of allowing ugh fields to clutter up your to do list, or allowing rationalized “it’s all your fault” reactions to clutter up your interpersonal relations). It would be nice to have a community that could see the whole picture here.
Instrumental rationality and productivity techniques and self-help are three different though overlapping things, though the exact difference is hard to pinpoint. In many cases it can be rational to learn to be more productive or more charismatic, but productivity and charisma don’t thereby become kinds of rationality. Your original post probably counts as instrumental rationality in that it’s about how to implement better general decision algorithms. In general, LW will probably have much more of an advantage relative to other sites in self-help that’s inspired by the basic logic/math of optimal behavior than in other kinds of self-help.
Re: 1, obviously one needs both of those things, but the question is which is more useful at the margin. The average LWer will go through life with some degree of productivity/success/etc even if such topics never get discussed again, and it seems a lot easier to get someone to allocate 2% rather than 1% of their effort to “what needs doing” than to double their general productivity.
I feel like noting that none of the ten most recent posts are about epistemic rationality; there’s nothing that I could use to get better at determining, just to name some random examples, whether nanotech will happen in the next 50 years, or whether egoism makes more philosophical sense than altruism.
On the other hand, I think a strong argument for having self-help content is that it draws people here.
But part of my point is that LW isn’t “focusing on rationality”, or rather, it is focusing on fun theoretical discussions of rationality rather than practical exercises that are hard to work implement but actually make you more rational. The self-help / life hacking / personal development community is actually better (in my opinion) at helping people become more rational than this site ostensibly devoted to rationality.
Hmm. The self-help / life hacking / personal development community may well be better than LW at focussing on practice, on concrete life-improvements, and on eliciting deep-seated motivation. But AFAICT these communities are not aiming at epistemic rationality in our sense, and are consequently not hitting it even as well as we are. LW, for all its faults, has had fair success at teaching folks how to thinking usefully about abstract, tricky subjects on which human discussions often tend rapidly toward nonsense (e.g. existential risk, optimal philanthropy, or ethics). It has done so by teaching such subskills as:
Never attempting to prove empirical facts from definitions;
Never saying or implying “but decent people shouldn’t believe X, so X is false”;
Being curious; participating in conversations with intent to update opinions, rather than merely to defend one’s prior beliefs;
Asking what potential evidence would move you, or would move the other person;
Not expecting all sides of a policy discussion to line up;
Aspiring to have true beliefs, rather than to make up rationalizations that back the group’s notions of virtue.
By all means, let’s copy the more effective, doing-oriented aspects of life hacking communities. But let’s do so while continuing to distinguish epistemic rationality as one of our key goals, since, as Steven notes, this goal seems almost unique to LW, is achieved here more than elsewhere, and is necessary for tackling e.g. existential risk reduction.
Could you elaborate on what you mean by that claim, or why you believe it?
I love most of your recent comments, but on this point my impression differs. Yes, folks often learn more from practice, exercises, and deep-seated motivation than from having fun discussions. Yes, some self-help communities are better than LW at focussing on practice and life-improvement. But, AFAICT: no, that doesn’t mean these communities do more to boost their participants’ epistemic rationality. LW tries to teach folks skills for thinking usefully about abstract, tricky subjects on which human discussions often tend rapidly toward nonsense (e.g. existential risk, optimal philanthropy, or ethics). And LW, for all its flaws, seems to have had a fair amount of success in teaching its longer-term members (judging from my discussions with many such, in person and online) such skills as:
Never attempting to prove empirical facts from definitions;
Never saying or implying “but decent people shouldn’t believe X, so X is false”;
Being curious; participating in conversations with intent to update opinions, rather than merely to defend one’s prior beliefs;
Asking what potential evidence would move you, or would move the other person;
Not expecting all sides of a policy discussion to line up;
Aspiring to have true beliefs, rather than to make up rationalizations that back the group’s notions of virtue.
Do you mean: (1) self-help sites are more successful than LW at teaching the above, and similar, subskills; (2) the above subskills do not in fact boost folks’ ability to think non-nonsensically about abstract and tricky issues; or (3) LW may better boost folks’ ability to think through abstract issues, but that ability should not be called “rationality”?
I’m surprised that you seem to be saying that LW shouldn’t getting more into instrumental rationality! That would seem to imply that you think the good self-help sites are doing enough. I really don’t agree with that. I think LWers are uniquely suited to add to the discussion. More bright minds taking a serious, critical look at all thing, and, importantly, urgently looking for solutions contains a strong possibility of making a significant dent in things.
Major point, though, of GGP is not about what’s being discussed, but how. He’s bemoning that when topics related to self-improvement come up that we completely blow it! A lot of ineffectual discussion gets upvoted. I’m guilty of this too, but this little tirade’s convinced me that we can do better, and that it’s worth thinking about how to do better.
Well, I think that’s the rational thing to do for the vast majority of people. Not only due to public good problems, but because if there’s something bad about the world which affects many people negatively, it’s probably hard to fix or one of the many sufferers would have already. Whereas your life might not have been fixed just because you haven’t tried yet. It’s almost always a better use of your resources. Plus “money is the unit of caring”, so the optimal way to help a charitable cause is usually to earn your max cash and donate, as opposed to working on it directly.
Agreed. Not just a motivator to help other people—but f2f contact is more inherently about doing, while web forums are more inherently about talking. In person it is much more natural to ask about someone’s life and how it is going—which is where interventions happen.
Perhaps. I think it will need a lot of intentionality, and a combination of in-person meetups and online discussions. I’ve thought about this as a “practicing life” support group, Eliezer’s term is “rationality dojo”, either way the key is to look at rationality and success just like any other skill—you learn by breaking it down into practiceable components and then practicing with intention and feedback, ideally in a social group. The net can be used to track the skill exercises, comment on alternative solutions for various problems, rank the leverage of the interventions and so forth.
But the key from my perspective is the website would be more of a database rather than an interaction forum. “This is where you go to find your local chapter, and a list of starting exercises / books to work through together / metrics / etc”
I’m new here at LW—are there any chapters outside of the New York meetup?
If not, is there a LW mechanism to gather location info from interested participants to start new ones? Top-level post and a Wiki page?
I created a Wiki to kick things off, but as a newb I think I can’t create an article yet, and quite frankly I’m not confident enough that that’s the right way to go about it to do it even if I could. So if you’ve been here longer and think that’s the right way, please do it and direct LWers to the Wiki page.
http://wiki.lesswrong.com/wiki/LocalChapters
This is false. Giving food directly to starving people (however it is obtained) is much better than throwing financial aid at a nation or institution and hope that it manages to “trickle-down” past all the middle-men and career politicians/activists and eventually is used to purchase food that eventually actually gets to people who need it. The only reason sayings like the above are so common and accepted is because people assume that there are no methods of Direct Action that will directly and immediately alleviate suffering, and are comparing “throwing money at it” to just petitioning, marching, and lengthy talks/debates. Yes, in those instances, years of political lobbying may do a lot less than just using that lobbying money to directly buy necessities for the needy or donating them to an organization who does (after taking a cut for cost of functioning, and to pay themselves), but compared to actually getting/taking the necessary goods and services directly to the needy (and teaching them methods for doing so themselves), it doesn’t hold up. Another way of comparison is to ask “what if everyone (or even most) did what people said was best?” If we compared the rule of “donate money to institutions you trust (after working up to the point where you feel wealthy enough to do so)”, and “directly applying their time and energy in volunteer work and direct action”, one would lead to immediate relief and learning for those in need, and the other would be a long-term hope that the money would work its way through bureaucracies, survive the continual shaving of funds for institutional funding and employee payment, and eventually get used to buy the necessities the people need (hoping that everything they need can be bought, and that they haven’t starved or been exposed to the elements enough to kill them).
This rule is an awful way to evaluate prescriptive statements. For example:
Everyone would die of starvation if everyone did this. Your comparative system prohibits every profession but subsistence agriculture. That means I don’t like your moral system and think that it is silly.
Aside from problems like that one, you’ll also run into major problems with games theory, such as collective action problems and the prisoner’s dilemma. It makes no sense at all to think that by extrapolating from individual action into communal action and evaluating the hypothetical results we will then be able to evaluate which individual actions are good. I don’t know why this belief is so common, but it is.
Just-so story: leaders needed to be able to evaluate things this way, evolution had no choice but to give everyone this trait so that the leaders would also receive it. Another just-so story: this is a driving force behind social norms which are useful from an individual perspective because those who violate social norms are outcompeted.
Of course, people who use rules like that to evaluate their actions won’t normally run into those sort of silly conclusions. But the reason for that isn’t because the rules make sense but because the rules will only be invoked selectively, to support conclusions that are already believed in. It’s a way of making personal preferences and beliefs appear to have objective weight behind them, but it’s really just an extension of your assumptions and an oversimplification of reality.
Come on now; I had only recently come out of lurking here because I have found evidence that this site and its visitors welcome dissident debate, and hold high standards for rational discussion.
Could you please present some evidence for this? You’re claim rests on the assumption that to “do art” or “be an artist” means that you can only do art 24⁄7 and would obviously just sit there painting until you starve to death. Everyone can be an artist, just make art; and that doesn’t exclude doing other things as well. Can I be an artist for a living; can everyone? Maybe, but it sure would be a lot more likely if our society put its wealth and technology towards giving everyone subsistence level comfort (if you disagree that our current technological state is incapable of this, then you’d need to argue for such, and why it isn’t worth trying, or doing the most we could anyways). The argument is that if individuals and groups in our society actually did some of the direct actions that could have immediate and life-changing results, rather than trying to “amass wealth for charity” or “petition for redress of grievances” alone, we would see much better results, and our understanding of what world’s are possible and within our reach would change as well. One can certainly disagree or argue against this claim, but changing the subject to surviving on art, or just asserting that such actions could only be done on subsistence agriculture, are claims that need some evidence, or at least some more rationale. And, as really shouldn’t need stated, “not liking” something doesn’t make it less likely or untrue, and calling an argument silly is itself silly if you don’t present justification for why you think that is the case.
As for “extrapolating from individual action into communal action”, just because it is not a sure-fire way to certain morality (nothing is) doesn’t mean that such thought experiments aren’t useful for pulling out implications and comparing ideas/methodologies. I certainly wouldn’t claim that such an argument alone should convince anyone of anything; as it says, it is just “another way of comparison” to try and explain a viewpoint and look at another facet of how it interacts with other points of view.
I’m sorry, but I have failed to understand your last paragraph. It reeks of sophistry; claiming that there are a bunch of irrational and bias-based elements to a viewpoint you don’t like, without actually citing any specific examples (and assuming that such a position couldn’t be stated in any way without them). That last sentence is a completely unsupported; it assumes its own conclusion, that such claims only “appear to have objective weight” but really “really just an extension of your assumptions and an oversimplification of reality”. Simplified it states: It is un-objective because of its un-objectivity. Evidence and rationale please? Please remember Reverse Stupidity is Not Intelligence
Your first paragraph attacks the validity of the art example; I’m willing to drop that for simplicity’s sake.
Your second paragraph concedes that it’s not a good way to approximate morality. You say that nothing is. I interpret that as a reason that we shouldn’t approach moral tradeoffs with hard-and-fast decision rules, rather than as a reason that any one particular sort of flawed framework should be considered acceptable. You say that it’s a useful thought experiment; I fundamentally disagree. It only muddles the issue, because individual actors do not have agency over each other’s actions. I do not see any benefit to using this sort of thought experiment; I only see a risk that the relevance and quality of the analysis are degraded.
You might be misunderstanding my last paragraph. I’m saying that the type of thought experiment you use is one that is normally, almost always, only used selectively, which suggests that it’s not the real reason behind whatever position it’s being used to advance. No one considers the implications of what would happen if everyone made the same career choices or if everyone made the same lifestyle choices, and then comes to conclusions about what their own personal lives should be like based on those potential universalizations. For example, in response to my claims about art, you immediately started qualifying exactly how much art would be universal and taken as a profession, and added a variety of caveats. But you didn’t attempt to consider similar exemptions when considering whether we should view charity donations on a universal level as well, which tells me that you’re applying the technique unfairly.
People only ever seem to imagine these scenarios in cases where they’re trying to garner support for individual actions but are having a difficult time justifying their desired conclusion from an individual perspective, so they smuggle in the false assumptions that individuals can control other people and that if an action has good consequences for everyone then it’s rational for each individual to take that action (this is why I mentioned game theory previously). These false assumptions are the reason that I don’t like your thought experiment.
What’s your estimate of how much money and how much time I would have to spend to deliver $100 of food directly to a starving person?
Does that estimate change if 50% of my neighbors are also doing this?
Actually, my point is that questions like that already guide discussion away from alternative solutions which may be capable of making a real impact (outside of needing to “become rich” first, or risking the cause getting lost in bureaucracy and profiteering). Take a group like Food Not Bombs, for instance: they diminish the “money spent” part of the equation by dumpstering and getting food donations. The time involved would of course depend on where you live, how easily you could find corporate food waste (sometimes physically guarded by locks, wire, and even men with guns to enforce artificial scarcity), and how far you’d have to transport it to the people who need it. More people joining in would of course mean more food must be produced and more area covered in the search for food waste to be reclaimed. Fortunately, the more people who pitch in, the less time it takes to do large amounts of labor that benefits everyone; thus the term mutual aid.
I’m not even taking the cost of the food into consideration. I’m assuming the food is already sitting there... perhaps as donations, perhaps from dumpstering, perhaps from theft, whatever. What I was trying to get a feel for was your estimate of the costs of individuals delivering that food to where it needs to go. But it sounds like you endorse people getting together in groups in order to do this more efficiently, as long as they don’t become bureaucratic institutions in the process, so that addresses my question. Thanks.
Only hoping I’m parsing this ramble correctly, but I agree if you mean to say:
We have plenty of people asking, “Why” but we need to put a lot more effort asking, “What are we going to do about it?”
To people who go to meetups in other parts of the world: are they all like this? How do they vary in terms of satisfaction and progress in achieving goals?
Interestingly, the people who seem most interested in the topic of instrumental rationality never seem to write many posts here, compared to the people interested in epistemic rationality. Maybe that’s because they’re too busy “doing” to teach (or to ask good open questions), but I’m confident that’s not true of all of the I-Rationality crowd.
Of course, as an academic, I’m perfectly happy staying on the E-Rationality side.
Instrumental rationality is one of my primary interests here, but I don’t post much—the standard here is too high. All I have to offer is personal anecdotal evidence about various self-help / anti-akrasia techniques I tried on myself, and I always feel a bit guilty when posting them because unsubstantiated other-optimizing is officially frowned upon here. Attempting to extract any deep wisdom from these anecdotes would be generalizing from one example.
An acceptable way to post self-help on LW would be in the form of properly designed, properly conducted long-term studies of self-help techniques. However, designing and conducting such studies is a full-time job which ideally requires a degree in experimental psychology.
If that’s true, we absolutely need to lower the bar for such posts. Three good sorts of posts that are not terribly difficult are: (1) a review of a good self-help book and what you personally took from it; (2) a few-sentence summary of an academic study on an income-boosting technique, a method for improving your driving safety, or other useful content, with a link to the same; or (3) a description of a self-intervention you tried and whose impact you tracked, Quantified Self style.
When someone says they have anecdotes but want data, I hear an opportunity for crowdsourcing.
Perhaps a community blog is the wrong tool for this? What if we had a tool that supported tracking rationalist intervention efficacy? People could post specific interventions and others could report their personal results. Then the tool would allow for sorting interventions by reported aggregate efficacy. Maybe even just a simple voting system?
That seems like it could be a killer app for lowering the bar: encouraging newcomers and data-poor interventions to get posted and evaluated.
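To make the idea concrete, here is a minimal sketch of the kind of data model such a tool might need: posted interventions, per-user reported results, and a sort by aggregate efficacy. Every name here is hypothetical; this is a sketch of the proposal above, not any existing LW feature.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class Intervention:
    """A self-help / rationality technique posted for community testing."""
    name: str
    description: str
    # Per-user self-reported efficacy scores, e.g. on a -2..+2 scale.
    reports: list = field(default_factory=list)

    def report(self, score: int) -> None:
        """Record one user's self-reported result for this intervention."""
        self.reports.append(score)

    @property
    def aggregate_efficacy(self) -> float:
        """Naive aggregate: mean of all reports (0.0 if nobody has reported)."""
        return mean(self.reports) if self.reports else 0.0

def ranked(interventions: list) -> list:
    """Sort interventions by reported aggregate efficacy, best first."""
    return sorted(interventions, key=lambda i: i.aggregate_efficacy, reverse=True)

# Example usage with made-up data:
pomodoro = Intervention("Pomodoro", "25-minute focused work blocks")
pomodoro.report(2)
pomodoro.report(1)
inbox_zero = Intervention("Inbox Zero", "Process email to empty daily")
inbox_zero.report(-1)
for i in ranked([pomodoro, inbox_zero]):
    print(f"{i.name}: {i.aggregate_efficacy:+.2f} ({len(i.reports)} reports)")
```

Note that a plain mean of self-selected reports is the crudest possible aggregate; a real version would at least want to surface report counts alongside scores, since two enthusiastic reports are weaker evidence than fifty mixed ones.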
I have been thinking that LW really needs a categorization system for top-level posts; this would create a way to post on ‘lighter’ topics without feeling like you’re failing to match people’s expectations.
Tags
Tags do not affect how the site is read by most people; some predefined categories could be used to drive navigation.
I’ve had this very failure to communicate with Tom McCabe (so the evidence is mounting that the problem is with me rather than with all of you). Tags are categories, only with more awesome and fewer constraints. If “predefined categories can be used to drive navigation”, then surely tags can be used to drive navigation, without having to be predefined.
Is the problem just that the commonly used tags need to be positioned differently in the site layout?
Tags are categories.
I think xamdam meant that there should be a category of “lighter” posts that people could opt out of (i.e., not see them in their feed of new posts), so that they wouldn’t have the right to complain that the posts didn’t live up to their expectations. Promotion means that there are two tiers, but I’m not sure whether people read the front page or the new posts.
Incidentally, I think people are using the tags too much for subject matter and not enough for indicating this kind of weight or type of post. For example, I don’t see a tag for self-experimentation. If the tags were visible in the article editing mode, that would encourage people to reuse the same tags, which is important for making them function (though maybe retagging is the only way to go). If predefined tags were visible in the article editing mode, that would encourage posts on those topics; in particular, it could be used to indicate that some things are acceptable, as in Anna’s list above.
yes
Excellent (it was me).
Ideas in comments below:
Easy change #1 would be to list the most popular tags in the edit interface, just below the tags input box.
Idea #3 (less easy) is to support saveable searches that include or exclude tags (and RSS feeds of those searches) so that users can view the site through that customized lens.
Easy change #2 would be to add categories (or tags) to tags themselves, and to group the tag list by category (a rough sketch follows the list below), like:
Mood: flippant, serious, light, humbly_curious
Subject: standard_biases, etc.
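As a sketch of what grouping tags by category might look like behind the edit interface, assuming a simple predefined mapping (the category names, tag names, and function here are all illustrative, not an actual LW feature):

```python
# Hypothetical predefined grouping of tags by category, used to render
# the tag picker in the article editing interface.
TAG_CATEGORIES = {
    "Mood": ["flippant", "serious", "light", "humbly_curious"],
    "Subject": ["standard_biases", "self_experimentation", "akrasia"],
}

def render_tag_picker(categories: dict) -> str:
    """Render the grouped tag list as plain text, one category per line."""
    return "\n".join(
        f"{category}: {', '.join(tags)}" for category, tags in categories.items()
    )

print(render_tag_picker(TAG_CATEGORIES))
```

The point of the predefined mapping is exactly the one made above: showing these groups at edit time nudges authors toward reusing existing tags and toward marking the weight or mood of a post, not just its subject.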
I think there is definitely some of that, and I’ve heard that from other LW “fringers” like myself—people who love the concept of rationality and support the philosophy of LW, but have no time to write posts because their lives are full to the brim with awesome projects.
One problem, I think, is that teaching and writing things up well and usefully is work. I spend time reading and writing blogs, and I do that in my “fun time” because it is fun. Careful writing about practical rationality would be work and come out of my work time, and my work time is very, very full. Which suggests that to advance, we need people whose job it is to do this work. Which is part of what we see in the self-improvement world: people get paid to write books and run workshops, and while there is lots of crap out there, the result is generally higher-quality, more useful material.
I agree 100%. This reminds me about a recent interview with Robin Hanson in which he commented something along the lines of: “If you want to really be rational or scientific you need a process with more teeth, just having a bunch of people who read the same web pages is not enough.”
What does a “rationality dojo” as you envision it look like?
One thing you could do to help LW become more the kind of forum you’d like it to be is write a top-level post.
Another, if you don’t want to do that, is to comment somewhere with the kind of top-level topics you would like to see addressed.
rationality dojo—a group of people practicing together to become more rational, not as an intellectual exercise (“I can rattle off dozens of cognitive biases!”) but by actually becoming more rational themselves. It would spend a lot more time on boring practical things, and less on shiny ideas. The effort would be directed towards irrationalities weighted by their negative impact on the participants’ lives, rather than by how interesting they are.
Sure, I will see if I can find the time to write a top-level post on this, thanks for asking.
Bump. Do it.
Really? Could you point out some posts you think are particularly helpful? Recent posts? I used to read his site and remember gradually finding it more disappointing and dropping it off my list. I don’t really remember why, though.
I thought his recent “time and attention” talk was excellent, and of course his writing on email is classic.
Ah, his email theory—I used to think that looked like a message from an alien world. Re-reading it briefly now it still looks completely alien, describing a situation I have never found myself in. I just haven’t ever had the feeling of being overwhelmed by email or having any sort of management problem with email. Still, I’m sure there are people who do have that problem and find Mann’s writings helpful. I remember a guy back in college who swore by this inbox zero stuff. (I also remember having exchanges with him like: “That info you need is in the email I sent you a few days ago.” “Uh, could you resend that? I delete all my email.”)
I’ll see if I can find the time and attention to check out the time and attention video. I would have strongly preferred text, though. Watching 80 minute lectures is not something I can always easily arrange.
Mann (after David Allen) recommends processing your email, then moving it out of your inbox to the place it belongs. He does not recommend deleting emails you have not finished with yet.
Mann has post titles like Inbox Zero: Delete, delete, delete—my friend took that to heart. I’m personally never ‘finished with’ an email in the sense that I’m confident that I’ll never ever want to look at it again. I search through my email archives all the time.
Admittedly, Mann, in that article, says that he archives his mail and doesn’t delete it—but he presents that as a “big chicken” option and a couple of paragraphs up he’s lambasting “holding” folders.
Anyway, I’ve got nothing in particular against Mann—I just don’t find what he’s saying useful or fun (I tried the recommended video but 10 minutes in I turned it off, he didn’t seem to be saying anything interesting I hadn’t heard before) while I do find LessWrong frequently useful or fun.
“frustrated by all the comments about the fact that we’re just barely intelligent”
From “frustrated” to hinting at your own take just six words later.
So now you have a highly-voted comment which contains no solutions to the problem but only a criticism of how many highly-voted comments here contain no solutions but only criticisms?
I’m not saying that pointing out that something is wrong without proposing an alternate solution is necessarily a bad idea. In fact, I think it can often be helpful, and I think the specific complaint your comment makes is a good one.
But I also think that your statement isn’t self-consistent. If you only value comments that propose solutions, then propose a solution!
I implied solutions. Like: people who want to get more rational should go read self-help and life-hacking books instead of LW. And, if LW wants to be more useful, it should become more like the self-help and life-hacking community—focused on practical changes one can make in one’s own life, explicit exercises for increasing rationality, and groups that work together in person to provide feedback, monitor performance, provide social motivation, etc.