I’ve been disappointed in LessWrong too, and it’s caused me to come here more and more infrequently. I’m even talking about the lurking. I used to come here every other day, then every week; then it dropped to once a month.
I get the impression many people either didn’t give a shit, or so despaired about their own ability to function better through any reasonable effort that they dismissed everything that came along. It used to make me really mad, or sad. Probably I took it a little too personally, because I read a lot of EY’s classic posts as inspiration not to fucking despair about what seemed like a permanently ruined future. “tsuyoku naritai” and “isshou kenmei” and “do the impossible” and all that said: look, people out there are working on much harder problems—there’s probably a way up and out for you too. The sadness was wanting other people to get at least that; the anger was at a lot of LessWrongers not seeming to get the point.
On the other hand, I’m pleased with our OvercomingBias/LessWrong meetup group in NYC. I think we do a good job of helping members in person with practical solutions to problems—how we can all become really successful. Maybe it’s because a lot of our members have integrated ideas from QS, Paleo, CrossFit, Seth Roberts, and PJ Eby. We’ve counseled members on employment opportunities, how to deal with crushing student and consumer debt, how to make money, and nutrition. By now we all tend to look down on the kind of despairing analysis that’s frequently upvoted here on LW. We talk about FAI sparingly these days, unless someone has a particular insight we think would be valuable. Instead, the sentiment is more, “Shit, none of us can do much about it directly. How ’bout we all get freaking rich and successful first!”
I suspect the empathy formed from face to face contact can be a really great motivator. You hear someone’s story from their own mouth and think, “Shit man, you’re cool, but you’re in bad shape right now. Can we all figure out how to help you out?” Little by little people relate, even the successful ones—we’ve all been there in small ways. This eventually moves towards, “Can we think about how to help all of us out?” It’s not about delivering a nice tight set of paragraphs with appropriate references and terminology. When we see each other again, we care whether our proposed solutions and ideas are going somewhere, because we care about the people. All the EvPsych speculation and calibration admonitions can go to hell if it doesn’t fucking help. But if it does, use it: use it to help people, use it to help yourself, use it to help the future light cone of the human world.
Yet if we’re intentional about it I think we can keep it real here too. We can give a shit. Okay, maybe I don’t know that. Maybe it takes looking for and rewarding the useful insights, then coming back later and talking about how the insights were useful. Maybe it takes getting a little more personal. Maybe I and my suggestions are full of shit but, hell, I want to figure this out. I used to talk about LessWrong with pride and urge people to come check it out because the posts were great, the commenters and comment scheme were great; it was a shining example of what the rest of the intellectually discursive interwebs could be like. And, man, I’d like it to be that way again.

So damn, what do y’all think?
If there are (relative to LW) many good self-help sites and no good sites about rationality as such, that suggests to me LW should focus on rationality as such and leave self-help to the self-help sites. This is compatible with LW’s members spending a lot of time on self-help sites that they recommend to each other in open threads.
My impression is that there are two good reasons to incorporate productivity techniques into LW, instead of aiming for a separate community specialized in epistemic rationality that complements self-help communities.
Our future depends on producing people who can both see what needs doing (wrt existential risk, and any other high-stakes issues) and actually do things. This claim seems far more probable than “our future depends on creating an FAI team”, or than “our future depends on plan X” for any other specific plan X. A single community that teaches both, and that also discusses high-impact philanthropy, may help.
There seems to be a synergy between epistemic and instrumental rationality, in the sense that techniques for each give boosts to the other. Many self-help books, for example, spend much time discussing how to think through painful subjects instead of walling them off (instead of allowing ugh fields to clutter up your to-do list, or rationalized “it’s all your fault” reactions to clutter up your interpersonal relations). It would be nice to have a community that could see the whole picture here.
Instrumental rationality, productivity techniques, and self-help are three different though overlapping things, and the exact differences are hard to pinpoint. In many cases it can be rational to learn to be more productive or more charismatic, but productivity and charisma don’t thereby become kinds of rationality. Your original post probably counts as instrumental rationality in that it’s about how to implement better general decision algorithms. In general, LW will probably have much more of an advantage relative to other sites in self-help that’s inspired by the basic logic/math of optimal behavior than in other kinds of self-help.
Re: 1, obviously one needs both of those things, but the question is which is more useful at the margin. The average LWer will go through life with some degree of productivity/success/etc even if such topics never get discussed again, and it seems a lot easier to get someone to allocate 2% rather than 1% of their effort to “what needs doing” than to double their general productivity.
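To make the margin arithmetic concrete, here is a minimal sketch (the 1%/2% figures are from the comment above; all other numbers are made up for illustration) showing that doubling the allocated fraction buys exactly the same impact as doubling overall productivity:

```python
# Impact on "what needs doing" modeled as: total effort * fraction allocated to it.
# Illustrative numbers only.

baseline_effort = 100.0    # arbitrary units of general productivity
baseline_fraction = 0.01   # 1% of effort aimed at high-stakes causes

baseline_impact = baseline_effort * baseline_fraction  # 1.0

# Option A: double the allocated fraction (1% -> 2%), productivity unchanged.
impact_doubled_fraction = baseline_effort * (2 * baseline_fraction)

# Option B: double general productivity, allocation unchanged.
impact_doubled_effort = (2 * baseline_effort) * baseline_fraction

# Same impact gain either way; the argument is that Option A is far cheaper to achieve.
assert impact_doubled_fraction == impact_doubled_effort
```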
I feel like noting that none of the ten most recent posts are about epistemic rationality; there’s nothing that I could use to get better at determining, just to name some random examples, whether nanotech will happen in the next 50 years, or whether egoism makes more philosophical sense than altruism.
On the other hand, I think a strong argument for having self-help content is that it draws people here.
But part of my point is that LW isn’t “focusing on rationality”, or rather, it is focusing on fun theoretical discussions of rationality rather than practical exercises that are hard work to implement but actually make you more rational. The self-help / life hacking / personal development community is actually better (in my opinion) at helping people become more rational than this site ostensibly devoted to rationality.
The self-help / life hacking / personal development community is actually better (in my opinion) at helping people become more rational than this site ostensibly devoted to rationality.
Hmm. The self-help / life hacking / personal development community may well be better than LW at focussing on practice, on concrete life-improvements, and on eliciting deep-seated motivation. But AFAICT these communities are not aiming at epistemic rationality in our sense, and are consequently not hitting it even as well as we are. LW, for all its faults, has had fair success at teaching folks how to think usefully about abstract, tricky subjects on which human discussions often tend rapidly toward nonsense (e.g. existential risk, optimal philanthropy, or ethics). It has done so by teaching such subskills as:
Never attempting to prove empirical facts from definitions;
Never saying or implying “but decent people shouldn’t believe X, so X is false”;
Being curious; participating in conversations with intent to update opinions, rather than merely to defend one’s prior beliefs;
Asking what potential evidence would move you, or would move the other person;
Not expecting all sides of a policy discussion to line up;
Aspiring to have true beliefs, rather than to make up rationalizations that back the group’s notions of virtue.
By all means, let’s copy the more effective, doing-oriented aspects of life hacking communities. But let’s do so while continuing to distinguish epistemic rationality as one of our key goals, since, as Steven notes, this goal seems almost unique to LW, is achieved here more than elsewhere, and is necessary for tackling e.g. existential risk reduction.
The self-help / life hacking / personal development community is actually better (in my opinion) at helping people become more rational than this site ostensibly devoted to rationality.
Could you elaborate on what you mean by that claim, or why you believe it?
I love most of your recent comments, but on this point my impression differs. Yes, folks often learn more from practice, exercises, and deep-seated motivation than from having fun discussions. Yes, some self-help communities are better than LW at focussing on practice and life-improvement. But, AFAICT: no, that doesn’t mean these communities do more to boost their participants’ epistemic rationality. LW tries to teach folks skills for thinking usefully about abstract, tricky subjects on which human discussions often tend rapidly toward nonsense (e.g. existential risk, optimal philanthropy, or ethics). And LW, for all its flaws, seems to have had a fair amount of success in teaching its longer-term members (judging from my discussions with many such, in person and online) such skills as:
Never attempting to prove empirical facts from definitions;
Never saying or implying “but decent people shouldn’t believe X, so X is false”;
Being curious; participating in conversations with intent to update opinions, rather than merely to defend one’s prior beliefs;
Asking what potential evidence would move you, or would move the other person;
Not expecting all sides of a policy discussion to line up;
Aspiring to have true beliefs, rather than to make up rationalizations that back the group’s notions of virtue.
Do you mean: (1) self-help sites are more successful than LW at teaching the above, and similar, subskills; (2) the above subskills do not in fact boost folks’ ability to think non-nonsensically about abstract and tricky issues; or (3) LW may better boost folks’ ability to think through abstract issues, but that ability should not be called “rationality”?
I’m surprised that you seem to be saying that LW shouldn’t get more into instrumental rationality! That would seem to imply that you think the good self-help sites are doing enough. I really don’t agree with that. I think LWers are uniquely suited to add to the discussion. More bright minds taking a serious, critical look at all this, and, importantly, urgently looking for solutions, have a strong chance of making a significant dent in things.
The major point of the GGP, though, is not about what’s being discussed, but how. He’s bemoaning that when topics related to self-improvement come up, we completely blow it! A lot of ineffectual discussion gets upvoted. I’m guilty of this too, but this little tirade’s convinced me that we can do better, and that it’s worth thinking about how to do better.
Instead, the sentiment is more, “Shit, none of us can do much about it directly. How ’bout we all get freaking rich and successful first!”
Well, I think that’s the rational thing to do for the vast majority of people. Not only due to public-good problems, but because if there’s something bad about the world which affects many people negatively, it’s probably hard to fix, or one of the many sufferers would have fixed it already. Whereas your life might not have been fixed simply because you haven’t tried yet. It’s almost always a better use of your resources. Plus “money is the unit of caring”, so the optimal way to help a charitable cause is usually to earn your max cash and donate, as opposed to working on it directly.
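A back-of-envelope sketch of that earn-and-donate comparison (the wage and labor-value figures below are hypothetical, not from the comment; the conclusion flips if your direct labor is worth more to the cause than your wage):

```python
# Compare donating N hours of wages vs. volunteering those N hours directly.
# All numbers are hypothetical.

hourly_wage = 50.0          # $/hour you could earn and donate
direct_labor_value = 15.0   # $/hour your volunteer labor is worth to the cause
hours = 10

donated = hours * hourly_wage             # $500 the charity can spend as it sees fit
volunteered = hours * direct_labor_value  # $150 of equivalent labor delivered

print(f"Donate: ${donated:.0f} vs. volunteer: ${volunteered:.0f}")
# Under these assumptions, earning and donating delivers over three times the value.
```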
I suspect the empathy formed from face to face contact can be a really great motivator.
Agreed. Not just a motivator to help other people—but f2f contact is more inherently about doing, while web forums are more inherently about talking. In person it is much more natural to ask about someone’s life and how it is going—which is where interventions happen.
Yet if we’re intentional about it I think we can keep it real here too.
Perhaps. I think it will need a lot of intentionality, and a combination of in-person meetups and online discussions. I’ve thought about this as a “practicing life” support group; Eliezer’s term is “rationality dojo”. Either way, the key is to treat rationality and success like any other skill: you learn by breaking it down into practicable components and then practicing with intention and feedback, ideally in a social group. The net can be used to track the skill exercises, comment on alternative solutions for various problems, rank the leverage of the interventions, and so forth.
But the key, from my perspective, is that the website would be more of a database than an interaction forum: “This is where you go to find your local chapter, and a list of starting exercises / books to work through together / metrics / etc.”
I’m new here at LW—are there any chapters outside of the New York meetup?
If not, is there a LW mechanism to gather location info from interested participants to start new ones? Top-level post and a Wiki page?
I created a Wiki account to kick things off, but as a newb I think I can’t create an article yet, and quite frankly I’m not confident enough that that’s the right way to go about it to do it even if I could. So if you’ve been here longer and think that’s the right way, please do it and direct LWers to the Wiki page.

http://wiki.lesswrong.com/wiki/LocalChapters
“money is the unit of caring”, so the optimal way to help a charitable cause is usually to earn your max cash and donate, as opposed to working on it directly.
This is false. Giving food directly to starving people (however it is obtained) is much better than throwing financial aid at a nation or institution and hoping that it manages to “trickle down” past all the middle-men and career politicians/activists and eventually gets used to purchase food that actually reaches the people who need it. The only reason sayings like the above are so common and accepted is that people assume there are no methods of Direct Action that will directly and immediately alleviate suffering, and are comparing “throwing money at it” to just petitioning, marching, and lengthy talks/debates. Yes, in those instances, years of political lobbying may do a lot less than using that lobbying money to directly buy necessities for the needy, or donating it to an organization that does (after it takes a cut for the cost of functioning, and to pay its staff); but compared to actually getting the necessary goods and services directly to the needy (and teaching them methods for doing so themselves), it doesn’t hold up. Another way of comparison is to ask “what if everyone (or even most) did what people said was best?” If we compared the rules “donate money to institutions you trust (after working up to the point where you feel wealthy enough to do so)” and “directly apply your time and energy in volunteer work and direct action”, the latter would lead to immediate relief and learning for those in need, while the former would be a long-term hope that the money would work its way through bureaucracies, survive the continual shaving of funds for institutional overhead and employee payment, and eventually get used to buy the necessities the people need (hoping that everything they need can be bought, and that they haven’t starved or been exposed to the elements enough to kill them).
Another way of comparison is to ask “what if everyone (or even most) did what people said was best?” If we compared the rules “donate money to institutions you trust (after working up to the point where you feel wealthy enough to do so)” and “directly apply your time and energy in volunteer work and direct action”, the latter would lead to immediate relief and learning for those in need, while the former would be a long-term hope that the money would work its way through bureaucracies, survive the continual shaving of funds for institutional overhead and employee payment, and eventually get used to buy the necessities the people need (hoping that everything they need can be bought, and that they haven’t starved or been exposed to the elements enough to kill them).
This rule is an awful way to evaluate prescriptive statements. For example:
Should I become an artist for a living?
Everyone would die of starvation if everyone did this. Your comparative system prohibits every profession but subsistence agriculture. That means I don’t like your moral system and think that it is silly.
Aside from problems like that one, you’ll also run into major problems with game theory, such as collective action problems and the prisoner’s dilemma. It makes no sense at all to think that by extrapolating from individual action to communal action and evaluating the hypothetical results, we will then be able to evaluate which individual actions are good. I don’t know why this belief is so common, but it is.
Just-so story: leaders needed to be able to evaluate things this way, and evolution had no choice but to give everyone this trait so that the leaders would also receive it. Another just-so story: this is a driving force behind social norms, which are useful from an individual perspective because those who violate social norms are outcompeted.
Of course, people who use rules like that to evaluate their actions won’t normally run into those sorts of silly conclusions. But that isn’t because the rules make sense; it’s because the rules are only invoked selectively, to support conclusions that are already believed in. It’s a way of making personal preferences and beliefs appear to have objective weight behind them, but it’s really just an extension of your assumptions and an oversimplification of reality.
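To illustrate the game-theory point above, here is a minimal prisoner’s dilemma sketch (standard textbook payoffs; the numbers are mine, not the commenter’s) showing why universalizing (“what if everyone cooperated?”) doesn’t tell you what is rational for each individual:

```python
# payoffs[(my_move, their_move)] -> my payoff, standard PD ordering T > R > P > S.
payoffs = {
    ("C", "C"): 3,  # reward for mutual cooperation
    ("C", "D"): 0,  # sucker's payoff
    ("D", "C"): 5,  # temptation to defect
    ("D", "D"): 1,  # punishment for mutual defection
}

for their_move in ("C", "D"):
    best_reply = max(("C", "D"), key=lambda mine: payoffs[(mine, their_move)])
    print(f"If the other player plays {their_move}, my best reply is {best_reply}")

# Prints "D" both times: defection dominates for each individual, even though
# universal cooperation (3 each) beats universal defection (1 each).
```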
Come on now; I had only recently come out of lurking here because I had found evidence that this site and its visitors welcome dissenting debate and hold high standards for rational discussion.
Should I become an artist for a living? -- Everyone would die of starvation if everyone did this. Your comparative system prohibits every profession but subsistence agriculture. That means I don’t like your moral system and think that it is silly.
Could you please present some evidence for this? Your claim rests on the assumption that to “do art” or “be an artist” means that you can only do art 24/7 and would obviously just sit there painting until you starve to death. Everyone can be an artist, just make art; and that doesn’t exclude doing other things as well. Can I be an artist for a living; can everyone? Maybe, but it sure would be a lot more likely if our society put its wealth and technology towards giving everyone subsistence-level comfort (if you think our current technological state is incapable of this, then you’d need to argue for that, and for why it isn’t worth trying, or doing the most we could anyway). The argument is that if individuals and groups in our society actually did some of the direct actions that could have immediate and life-changing results, rather than trying to “amass wealth for charity” or “petition for redress of grievances” alone, we would see much better results, and our understanding of what worlds are possible and within our reach would change as well. One can certainly disagree or argue against this claim, but changing the subject to surviving on art, or just asserting that such actions could only be done via subsistence agriculture, are moves that need some evidence, or at least some more rationale. And, as really shouldn’t need to be stated, “not liking” something doesn’t make it less likely or untrue, and calling an argument silly is itself silly if you don’t present justification for why you think that is the case.
As for “extrapolating from individual action into communal action”, just because it is not a sure-fire way to certain morality (nothing is) doesn’t mean that such thought experiments aren’t useful for pulling out implications and comparing ideas/methodologies. I certainly wouldn’t claim that such an argument alone should convince anyone of anything; as it says, it is just “another way of comparison” to try and explain a viewpoint and look at another facet of how it interacts with other points of view.
I’m sorry, but I have failed to understand your last paragraph. It reeks of sophistry: claiming that there are a bunch of irrational and bias-based elements to a viewpoint you don’t like, without actually citing any specific examples (and assuming that such a position couldn’t be stated in any way without them). That last sentence is completely unsupported; it assumes its own conclusion, namely that such claims only “appear to have objective weight” but are “really just an extension of your assumptions and an oversimplification of reality”. Simplified, it states: it is un-objective because of its un-objectivity. Evidence and rationale, please? Please remember that Reverse Stupidity Is Not Intelligence.
Your first paragraph attacks the validity of the art example; I’m willing to drop that for simplicity’s sake.
Your second paragraph concedes that it’s not a good way to approximate morality; you say that nothing is. I interpret that as a reason that we shouldn’t approach moral tradeoffs with hard-and-fast decision rules, rather than as a reason that any one particular sort of flawed framework should be considered acceptable. You say that it’s a useful thought experiment; I fundamentally disagree. It only muddles the issue, because individual actors do not have agency over each other’s actions. I do not see any benefit to using this sort of thought experiment; I only see a risk that the relevance and quality of analysis are degraded.
You might be misunderstanding my last paragraph. I’m saying that the type of thought experiment you use is one that is normally, almost always, only used selectively, which suggests that it’s not the real reason behind whatever position it’s being used to advance. No one considers the implications of what would happen if everyone made the same career choices or if everyone made the same lifestyle choices, and then comes to conclusions about what their own personal lives should be like based on those potential universalizations. For example, in response to my claims about art, you immediately started qualifying exactly how much art would be universal and taken as a profession, and added a variety of caveats. But you didn’t attempt to consider similar exemptions when considering whether we should view charity donations on a universal level as well, which tells me that you’re applying the technique unfairly.
People only ever seem to imagine these scenarios in cases where they’re trying to garner support for individual actions but are having a difficult time justifying their desired conclusion from an individual perspective, so they smuggle in the false assumptions that individuals can control other people and that if an action has good consequences for everyone then it’s rational for each individual to take that action (this is why I mentioned game theory previously). These false assumptions are the reason that I don’t like your thought experiment.
Giving food directly to starving people (however it is obtained) is much better than throwing financial aid at a nation or institution
What’s your estimate of how much money and how much time I would have to spend to deliver $100 of food directly to a starving person? Does that estimate change if 50% of my neighbors are also doing this?
Actually, my point is that questions like that already guide discussion away from alternative solutions which may be capable of making a real impact (outside of needing to “become rich” first, or risking the cause getting lost in bureaucracy and profiteering). Take a group like Food Not Bombs, for instance; they diminish the “money spent” part of the equation through dumpstering and food donations. The time involved would of course depend on where you live, on how easily you could find corporate food waste (sometimes physically guarded by locks, wire, and even men with guns to enforce artificial scarcity), and on transporting it to the people who need it. More people joining in would of course mean more food to be sourced and more area covered in search of food waste to be reclaimed. A fortunate thing is that the more people pitch in, the less time it takes to do large amounts of labor that benefits everyone; thus the term mutual aid.
I’m not even taking the cost of the food into consideration. I’m assuming there’s this food sitting here: perhaps as donations, perhaps by dumpstering, perhaps by theft, whatever. What I was trying to get a feel for is your estimate of the costs of individuals delivering that food to where it needs to go. But it sounds like you endorse people getting together in groups in order to do this more efficiently, as long as they don’t become bureaucratic institutions in the process, so that addresses my question. Thanks.

Only hoping I’m parsing this ramble correctly, but I agree if you mean to say:

We have plenty of people asking, “Why?” but we need to put a lot more effort into asking, “What are we going to do about it?”
To people who go to meetups in other parts of the world: are they all like this? How do they vary in terms of satisfaction and progress in achieving goals?