Think of it as superstimulus to the cool-idea sensor.
Thought exercise: could the whole LW/CFAR-favoured model of epistemic rationality be ineffective, even though it sounds really good and makes sense? What would the world look like in that case? What would you expect to see if LW rationality didn’t actually work, except to convince its fans that it did work? (For a value of “work” that is defined before examining the results.)
Effective at what? I agree with Yvain that:

“I think it may help me succeed in life a little, but I think the correlation between x-rationality and success is probably closer to 0.1 than to 1. Maybe [higher] in some businesses like finance, but people in finance tend to know this and use specially developed x-rationalist techniques on the job already without making it a lifestyle commitment.”
Hard work, intelligence, social skill, attractiveness, risk-taking, need for sleep, height, and enormous amounts of noise go into life success as measured by something like income or occupational status. So unless there were a ludicrously large effect size of hanging around Less Wrong, differences in life success between readers and nonreaders would be overwhelmingly driven by selection effects. Now, in fact those selection effects put the LW population well above average (lots of college students, academics, software engineers, etc) but don’t speak much to positive effects of their reading habits.
To get a good picture of that you would need a randomized experiment, or at least a ‘natural experiment.’ CFAR is going to track some outcomes for the attendees of its minicamps, after using randomized admission among applicants above a certain cutoff. Due to the limited sample size, I think this only has enough power to detect insanely massive intervention effects, i.e. a boost of a large fraction of a standard deviation from a few days at a workshop. So I think it won’t show positive effects there. It does seem plausible to me, however, that there will be positive effects on narrow measures closer to the intervention, e.g. performance on some measures of cognitive bias from the psychology literature.
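To put a rough number on that, here is a minimal power-analysis sketch; the per-arm sample size of 25, the 5% significance level, and the 80%-power convention are my illustrative assumptions, not CFAR’s figures:

```python
# Minimum detectable effect for a small two-arm comparison (admitted vs. waitlisted),
# under illustrative assumptions: 25 people per arm, alpha = 0.05, 80% power.
from statsmodels.stats.power import TTestIndPower

min_effect = TTestIndPower().solve_power(
    effect_size=None, nobs1=25, alpha=0.05, power=0.8,
    ratio=1.0, alternative="two-sided",
)
print(f"Minimum detectable effect: {min_effect:.2f} standard deviations")
# ~0.8 SD: only a very large effect from a few days at a workshop would be detectable.
```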
In the same way, a scheduling system like Getting Things Done will probably not have visibly significant effects on career outcomes within a year on a small sample size, but would be more likely to do so on a measure like “projects delivered on time” or “average time-to-response for emails.”
For someone interested in personal success, a more relevant standard would be whether n hours spent studying or practicing ‘rationality exercises’ would increase income or other success measures more than taking an extra programming class at Udacity, or working out at the gym, or reading up about financial planning and investment. Here, I’m less certain about the outcome, although my intuition is that rationality exercises would come out behind. The educational literature shows that transfer of learning is generally poor, so it’s better to do focused work on the areas of interest, which may include domain-specific heuristics of rational behavior.

And that is for exercises selected to be relatively useful in everyday life. Looking at Eliezer Yudkowsky’s sequences, much of the content is very far from that: meta-ethics, philosophy of mind, much of the sequence on avoiding merely verbal disputes, an account of welfare for future utopias or dystopias, quantum mechanics (the connection to cryonics at the end is dubious, and a small expected benefit that can’t be pinned down today), determinism, and so forth. I wouldn’t expect big improvements in everyday life from those any more than I would from reading pop-science articles or philosophy textbooks.
If there are big effects from exercises on epistemic rationality, I would expect to see them in areas that normally aren’t the subject of much effort, or are the subject of active self-deception, like self-assessments of driving skill, or avoiding asymmetric (“myside”) judgments of media bias, or noticing flaws in one’s theology. That may help improve aggregate outcomes in areas like politics or charity where people more often indulge in epistemic irrationality for pleasure, laziness, or signalling, but won’t be earthshaking on an individual level. But even here, most new lesson plans don’t work well, students don’t retain that much, and the interventions in the academic literature show mostly modest effect sizes. So I would expect these gains to be small-to-moderate.
OK … If someone asked you “So, there’s a million words of these Sequences that you think I should read. What do I get out of reading them?” then what’s the answer to that? You seem to be saying “we don’t think they actually do anything much.” Surely that’s not the case.
Major elements to consider:
Mostly standard arguments, often with nonstandard examples and lively presentation, for a related cluster of philosophical views: physicalism, the appearance of free will as an outgrowth of cognitive algorithms, his brand of metaethics, the Everett interpretation of quantum mechanics, the irrelevance of verbal disputes, etc.
A selective review of the psychology heuristics and biases literature, with entertaining examples and descriptions
A bunch of suggested heuristics, based on personal experience and thought, for debiasing, e.g. leaving a line of retreat to reduce resistance
Some thoughtful exposition of applications of intro probability and Bayes’ theorem, e.g. conservation of expected evidence (see the short numeric sketch after this list)
Interesting reframings and insights into a number of philosophical problems using the Solomonoff Induction framework, and the “how could this intuition emerge from an algorithm?” approach
Debate about AI with Robin, a science fiction story, a bunch of meta posts, and assorted minor elements
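As one small taste of that material, here is a numeric sketch of conservation of expected evidence: the probability-weighted average of the posteriors you might end up with must equal your prior. The prior and likelihoods below are arbitrary numbers chosen purely for illustration.

```python
# Conservation of expected evidence: E[P(H|E)] = P(H).
# Arbitrary illustrative numbers: the prior and likelihoods are made up.
p_h = 0.3              # prior probability of hypothesis H
p_e_given_h = 0.8      # P(evidence | H)
p_e_given_not_h = 0.4  # P(evidence | not H)

p_e = p_h * p_e_given_h + (1 - p_h) * p_e_given_not_h      # P(E) = 0.52
posterior_if_e = p_h * p_e_given_h / p_e                   # ~0.46
posterior_if_not_e = p_h * (1 - p_e_given_h) / (1 - p_e)   # 0.125

# The expected posterior, averaged over whether the evidence arrives, equals the prior.
expected_posterior = p_e * posterior_if_e + (1 - p_e) * posterior_if_not_e
print(expected_posterior)  # 0.3 (up to floating point): exactly the prior
```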
The large chunks that are review of existing psychology and philosophy would be hard to get from one or a few books (as they are extracted from far and wide), although those books would then be less filtered. The sequences may be more enjoyable and addictive than an organized study program, i.e. they read as popular science/philosophy, so folk who wouldn’t undertake an organized study program on their own find themselves doing the reading, and then perhaps also the organized study. This would depend on how they felt about the writing style, their own habits, and so forth. The other elements would have to be evaluated more idiosyncratically.

Now, because of the online forum element of Less Wrong, there is another big benefit in selectively attracting a very unusual audience, and providing common background knowledge and norms that help most (if not all) discussions to be relatively truth-seeking compared to most online fora.

Well, for one thing, I’m speaking only for myself, and I’m more skeptical of the novelty and impact per reader (as mentioned above) than others are.
But I do think that reading the sequences on probability, changing your mind, and other core topics (as opposed to quantum mechanics) causes some improvement in quality of argumentation, readiness to accede to evidence, and similar metrics (as judged by 3rd party raters). I don’t think the effect is enormous, i.e. well-read physics or philosophy grad students will have garnered most of the apparent benefits from other sources, but it’s there. And interest, enjoyment, and accessibility aren’t peanuts either.
Also see one mathematician’s opinion: Yes, a blog.
The Sequences contain a rational world view. Not a comprehensive one, but they still give some idea about how to avoid thinking stupidly and how to communicate with other people who are also trying to find out what’s true and what’s not. They give you words by which you can refer to problems in your world view, meta-standards to evaluate whether whatever you’re doing is working, etc. I think of them as an unofficial manual to my brain and the world that surrounds me. You can just go ahead and figure out for yourself what works, without reading manuals, but reading a manual before you go makes you better prepared.
That’s asserting the very thing the original question asked us to examine: how do we know that this is a genuinely useful manual, rather than something that reads like the manual and makes you think “gosh, this is the manual!” but following it doesn’t actually get you anywhere much? What would the world look like if it were? What would the world look like if it weren’t?

Note that there are plenty of books (particularly in the self-help field) that have been selected by the market for looking like the manual to life, at the expense of actually being the manual to life. This whole thread is about reading something and going “that’s brilliant!” when actually it doesn’t do much good.
It’s an enjoyable read, which helps establish some good community norms? I’d value reading the sequences more than, say, a randomly selected novel (though maybe not a novel randomly selected from those I have actually read).
So, the Sequences and LessWrong in general are purely for entertainment purposes? That’s fine, but that certainly wasn’t the original idea, which was to be practical.
Yeah I am not sure how much practicality the sequences contain (although some other posts here have been practical) but that wouldn’t stop me from recommending them.
For the record, I agree with all this.
What EV calculations do you come up with? Also, the same question as I asked Carl. Finally, what does your social circle think?
EV of what in particular?
Less Wrong sequences and related materials, their content, etc.
That sounds like a very large analysis project. Maybe there’s a simpler question I can answer? What’s the final question you’d like to get at?
Something like, if you were to direct community members to spend their time on activities other than making money for the purpose of donating it, what distribution of activities would you direct them to?
That’s a difficult question, but a potentially valuable one to have answered. Here’s a long list of thoughts I came up with, written not to Michael Vassar but to a regular supporter of SI:
Donations are maximally fungible and require no overhead or supervision. From the outside, you may not see how a $5000 donation to SI changes the world, but I sure as hell do. An extra $5000 means I can print 600 copies of a paperback of the first 17 chapters of HPMoR and ship one copy each to the top 600 most promising young math students (on observable indicators, like USAMO score) in the U.S. (after making contact with them whenever possible). An extra $5000 means I can produce nicely-formatted Kindle and PDF versions of The Sequences, 2006-2009 and Facing the Singularity. An extra $5000 means I can run a nationwide essay contest for the best high school essay on the importance of AI safety (to bring the topic to the minds of AI-interested high schoolers, and to find some good writers who care about AI safety). An extra $5000 means I can afford a bit more than a month of work from a new staff researcher (including salary, health coverage, and taxes).
Remember that a good volunteer is hard to find. After about a year of interacting with people who claim to want to volunteer, I can say this: If somebody approaches me who (1) has obvious skills that can produce value for SI, (2) claims they have 10+ hours/week available, and (3) claims they really want to help out, then I can predict with 60% confidence that they won’t do any valuable volunteer work for SI in the next three months. Because of this, an enormous amount of overhead goes into chunking tasks so that volunteers can do them, then handing one task to Volunteer #1, waiting for them to watch TV instead, then handing that task to Volunteer #2, waiting for them to watch TV instead, then handing that task to Volunteer #3, etc. Of course, this means that if you’re one of the volunteers doing actual work, you are a rare gem and we thank you mucho.
CFAR can generally make better use of volunteers than SI can. My guesses as to why this is the case: (1) CFAR work is more emotionally motivating work because you’re producing visible effects in human lives now rather than very slightly increasing the chances that trillions of future people will have the opportunity to live out happy lives. (2) SI volunteer-doable tasks tend to either be things that (a) anyone could do, or (b) almost nobody can do because of the amount of domain knowledge required. There’s nobody to do tasks of type (b), and few people like to do tasks of type (a) because it doesn’t require their special skills. In contrast, CFAR has many volunteer-doable tasks that can be done by lots of people but not just anyone — i.e. tasks that make use of special skills in a way that is more motivating than others. (3) CFAR has some habits that motivate volunteers that SI hasn’t been able to mimic yet.
People generally become more useful to SI/CFAR when they move to one of the major SI/LW/CFAR hubs of people: the Bay Area or NYC. I suspect this is because (1) regular in-person contact with us reminds people of stuff we’re doing that they care about, is viscerally motivating, and allows for more opportunities to be involved than are available remotely, and because (2) LWers tend to become happier when they move to a place where lots of other aspiring rationalists are doing cool stuff. (See: The Good News of Situationist Psychology.)
Obviously, the most valuable-to-SI activity that someone can do (besides making money and donating it) will vary from person to person. I’ll give some examples below.
Examples of useful SI volunteer activities: help to moderate LW; contribute to the LW wiki; run an LW meetup group; help to translate Facing the Singularity into other languages; join our “document production team” to assist with porting The Sequences and research papers into pretty LaTeX templates; sign up for the Singularity Institute affinity card; sign up to be a Singularity Institute volunteer advisor; help us distribute Singularity Summit flyers at science and technology events in the Bay Area (contact malo@intelligence.org); tell people about the Summit and encourage them to buy their tickets before the August 15th price increase. We are currently building a new “volunteer intake system” so that we can more efficiently direct incoming volunteers to useful tasks they will feel good about helping us with.
Make yourself stronger and gain influence in the world, so as to pivot the world in strategic ways when it becomes much clearer which particular pivots would reduce AI risk. E.g. become prestigious in math, AI, or physics so you can spread x-risk memes. Work toward becoming a policy-maker that would influence the spending of research money for technology projects so that you can assist in differential technological development. Become an editor at important media outlets so that you can help x-risk and rationality content see the light of day. Etc.
If you’re a researcher in math, compsci, or formal philosophy, find ways to take up research projects that both advance your career and are useful for x-risk reduction. So You Want to Save the World can help you think about potential research projects of that type, and Eliezer’s forthcoming sequence on “Open Problems in Friendly AI” will also help.
I could list random thoughts on the subject for hours, but… I doubt I can answer your question. It depends too much on individual details about their skills, experience, availability, other opportunities, etc.
This might be appropriate for promoting CFAR (at least HPMoR talks about rationality), but surely not for promoting SI.
What are these habits?
If we could identify them, we’d be mimicking them already. :)
Carl, you name a lot of factors as going into income and occupational status. What are your estimates for their respective effect sizes and correlations? I’m skeptical of the ‘enormous amounts of noise’ claim remaining the case after your list plus initial socio-economic endowment, health, specific skills, and possibly a few other factors are accounted for. In fact, I’d expect the uncertainty due to noise to be far less than the uncertainty in between-person estimates of occupational status, a variable which different groups would measure quite differently from one another.
Also, estimates of the causal relationships between the factors in success would be nice.
Trivially, look at the wealth of Bill Gates vs Steve Jobs. Most of Peter Thiel’s wealth relative to other past tech CEOs comes from one great hit at Facebook. Even entrepreneurs who have succeeded at past VC-backed startups are only moderately more likely to succeed (acquisition, IPO, large size) than new ones. Financiers vary hugely in lifetime career success based on market conditions on Wall Street when they finished school, on which product groups have ups and downs when, and which risky bets happen to blow up before or after they move on.
Within a given size of social circle and selective filter, happening to have the right friends with the right contacts (Jobs and Wozniak) at the right time is critical. Who else produces a similar startup at the same time and how good are they? Do key patents and lawsuits get decided in one’s favor? What new scientific and technological innovations enhance or destroy the position of one’s company?
At a smaller scale: when do you fall in love and get married? What geographical constraints does that place on you? Do you get hit by a car or infectious disease or cancer, and when? Do you get through noisy hiring processes in tight labor markets, e.g. tenure in academia, getting a first job on Wall Street? Do you click with the person deciding on your medical residency of choice?
We could quibble, but I’d leave it at that.
So Jobs ended up with what, $6.7 billion (http://www.forbes.com/profile/steve-jobs/), putting him at the 99.99999th percentile among Americans, after not bothering to become the 99.999995th percentile by cashing some options. Meanwhile, Gates started at the 98th or 99th percentile instead of the 60th or 80th percentile and, with a much higher IQ and much greater strategic ability but somewhat lesser overall talent, rose by the same factor. The idea that Jobs had unusual luck by having one highly technically skilled friend (given Jobs’ social skills, no less), and by only having to compete head to head with Microsoft, is also faintly amusing. Point mine, I think. Regarding Gates, yes, it’s true that a key lawsuit hurt his net worth, but not his wealth rank-order.
Thiel likewise would only have a billion or two instead of three or four (depending on estimates of his non-public holdings like Palantir) without Facebook, lowering his average annual investment returns during the last decade, from about 50% (vs. nothing for the market) to about 40%. This fits with my general impression, and that of the world, that there’s a significant absolute amount of luck in investment returns, enough to impact expected outcomes by a factor of 2 or so over a decade and thus a factor of 4 over a career. That makes a big impact in a retirement plan, but not in a business career.
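A quick arithmetic check on the factor-of-2-per-decade claim, using the roughly 50% vs. 40% annual figures above (the 20-year career length is my own assumption):

```python
# Compound ~50% vs. ~40% average annual returns and compare.
with_hit = 1.5 ** 10      # ~57.7x over a decade
without_hit = 1.4 ** 10   # ~28.9x over a decade
print(with_hit / without_hit)          # ~2.0: a factor of ~2 per decade
print((with_hit / without_hit) ** 2)   # ~4.0: a factor of ~4 over a ~20-year career
```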
While we’re on VC returns, if I’m not mistaken, the best VC firms often have >50% hit rates, and there aren’t all that many VC firms.
On a smaller scale, luck determines a very large part of life outcomes, but simply put, that’s because most people do nothing with their lives; they simply drift on the wind and let social forces blow them around. If you control for the impact of one’s effort applied to decisions, by not trying to make decisions in any deliberate manner but simply going with social pressure, you unsurprisingly find that random factors, and not effortful decisions, are the major driver of life outcomes.

Yvain’s argument was that “x-rationality” (roughly the sort of thing that’s taught in the Sequences) isn’t practically helpful, not that nothing is. I certainly have read lots of things that have significantly helped me make better decisions and have a better map of the territory. None of them were x-rational. Claiming that x-rationality can’t have big effects because the world is too noisy just seems like another excuse for avoiding reality.
What effect size, assessed how, against what counterfactuals? If it’s just “I read book X, and thought about it when I made decision Y, and I estimate that decision Y was right” we’re in testimonial land, and there are piles of those for both epistemic and practical benefits (although far more on epistemic than practical). Unfortunately, those aren’t very reliable. I was specifically talking about non-testimonials, e.g. aggregate effects vs control groups or reference populations to focus on easily transmissible data.
Imagine that we try to take the best general epistemic heuristics we can find today, and send them back in book form to someone from 10 years ago. What effect size do you think they would have on income or academic productivity? What about 20 years? 50 years? Conditional on someone assembling, with some additions, a good set of heuristics, what’s your distribution of effect sizes?
I’m wondering: how many people have noticed changes in the quality of their interpersonal reactions after becoming ‘more rational’ than they were before learning about applied rationality? How would those changes in quality be judged from both outside and inside views?
(I use quotes, as each person will have a different metric by which they will judge an increase in rationality—and I can’t think of a standard metric everyone can use for purposes of answering this query. To mitigate this variable, please state the metric you’re using.)
I sympathize with the statement, which you may or may not have implied, that that world would look a lot like our world. But maybe we should make the question more concrete. What benefits do people honestly expect from LW rationality? Are they actually getting those benefits?
I’m here because, and for as long as, it’s enjoyable; LW is marked as part of the Internet-as-TV time budget. That said, I feel more rational, I think because I’m paying attention to my thoughts. But e.g. I’m not actually richer and don’t have a string of interesting new achievements under my belt. The outside view shows nothing.
If your answer is “it would look like the world is now”—then what would the world look like if it was effective and did work, for whatever value of “work”? (I’m thinking a value something like “what one would expect trying a new thing like this and wanting to get tangible self-improvement value out of it”, though I’m open to other possible values I haven’t thought of.)
Hard to say. My life would look completely different. I was honestly, for the most part, much happier before getting involved, but I’m certainly more effective now, to the point of not really occupying the same reference class in any useful sense.
Have you written up how you got it to work for you? If not, then please do!
Perhaps you have a metavalue favoring effectiveness over happiness: you value valuing effectiveness. But isn’t happiness the terminal value, effectiveness the instrumental value?
Usually people don’t expressly strive for happiness because doing so tends to defeat the project (as Bertrand Russell pointed out in The Conquest of Happiness), but that doesn’t change the fact that happiness (almost by definition) is what they (and you) ultimately strive for.

In so far as happiness is what we strive for by definition, the statement is vacuous, and what is described as ‘happiness’ doesn’t closely match the natural-language meaning of the word.
I would expect the following things to have failed:
Good community norms (this is a difficult problem that LW has solved admirably)
Using advice from LW to improve my social skills (this is actually sort of a subpoint of “if someone else does it better, LW points you to them”)
Promotion of thinking about pet issues (this one failed by less than the others—the sequences pretty much cleaned out the god-hating but I’m constantly annoyed at people who don’t have good epistemology)
There’s plenty more that is substantially less tangible, and YMMV, but those first two points have created a large amount of value for me personally.
Maybe the world would look something like this.