I agree with almost everything here, with the following caveats:
I. The practical benefits we get from (3) are (I think I’m agreeing with you here) likely to be so small as to be difficult to measure informally; i.e., anyone who claims to have noticed a specific improvement is as likely to be imagining it as really improving. There are probably effects that could be measured in a formal experiment with a very large sample size (see the sketch after these caveats for why the sample would need to be so large), but that is not what we have been doing.
II. (2) shows promise but is not something I see discussed very often on Overcoming Bias or Less Wrong. Using the Boyle metaphor, this would be the technology of rationality, as opposed to the science of it. I’ve seen a few suggestions for “techniques”, but they seem sort of ad hoc (I will admit, in retrospect, that many of the times I was proposing ‘techniques’, it was more an attempt to sound like I was thinking pragmatically than a proposal soundly grounded in good experimental evidence). I’ve tried to apply specific methods to specific decisions, but never gone so far as to set aside a half hour each day for “rationality practice”, nor would I really know what to do with that half hour if I did. I’d like to know more about what you do and what you think has helped.
III. You list a greater appreciation of transhumanism as one of the benefits of x-rationality, but the causal linkage doesn’t impress me. Many of the transhumanists here were transhumanists before they were rationalists, and only came to Overcoming Bias out of interest in reading what transhumanist leaders Eliezer and Robin had to say. I think my “conversion” to transhumanism came about mostly because I started meeting so many extremely intelligent transhumanists that it no longer seemed like a fringe crazy-person belief and my mind felt free to judge it with the algorithms it uses for normal scientific theories rather than the algorithms it uses for random Internet crackpottery. Many other OB readers came to transhumanism just because EY and RH explicitly argued for it and did a good job. Still others probably felt pressure to “convert” as an in-group identification thing. And finally, I think transhumanists and x-rationalists are part of that big atheist/libertarian/sci-fi/et cetera personspace cluster Eliezer’s been talking about: we all had a natural vulnerability to that meme before ever arriving here. AFAIK Kahneman and Tversky are not transhumanists, Aumann certainly isn’t, and I would be surprised if x-rationalists not associated with EY and RH and our group come to transhumanism in numbers greater than their personspace cluster membership predicts.
IV. Given fifty years to improve the Art, I also wouldn’t be surprised with anything from “massive practical help” to “not much help at all”. I don’t know exactly what you mean by “ridiculously stupid decision-making that most people do”, but are you sure it’s something that should be solved with x-rationality as opposed to normal rationality?
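To gesture at why the sample in (I) would need to be “very large”: here is a quick, illustrative power calculation. This is my own back-of-the-envelope sketch, not anything from the discussion above, and the effect sizes are assumptions chosen purely for illustration. For a standard two-group comparison, an effect small enough to be invisible to informal observation takes on the order of a couple thousand subjects to detect:

```python
from scipy.stats import norm

def n_per_group(effect_size_d, alpha=0.05, power=0.8):
    """Approximate per-group sample size for a two-sided, two-sample
    comparison of means, using the normal approximation."""
    z_alpha = norm.ppf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = norm.ppf(power)          # ~0.84 for 80% power
    return 2 * (z_alpha + z_beta) ** 2 / effect_size_d ** 2

# An improvement too small to notice informally (say Cohen's d = 0.1, an
# assumed value) needs roughly 1,570 people per group to detect reliably;
# a medium-sized effect (d = 0.5) needs only about 63.
print(round(n_per_group(0.1)))  # ~1570
print(round(n_per_group(0.5)))  # ~63
```

In other words, informal before-and-after impressions from a few dozen people can’t distinguish a small real improvement from no improvement at all.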
I don’t know exactly what you mean by “ridiculously stupid decision-making that most people do”, but are you sure it’s something that should be solved with x-rationality as opposed to normal rationality?
I’m sure it’s something that could be helped with techniques like The Bottom Line, which most intelligent, science-literate, trying-to-be-“rational” people don’t use nearly enough. It could also be helped by paying attention to which thinking techniques lead to what kinds of results, and learning the better ones. Dojos could totally teach these practices, and help their students actually incorporate them into their day-to-day, reflexive decision-making (at least more than most “intelligent, science-literate” people do now; most people hardly try at all). As to heuristics and biases, and probability theory… I do find those helpful: essential for thinking usefully about existential risk, and helpful but non-essential for day-to-day inference, according to my mental rather than written observations (I’ve been keeping a written record lately, but not for long enough, and not systematically enough). The probability theory in particular may be hard to teach to people who don’t easily think about math, though not impossible. But I don’t think building an art of rationality needs to be solely about the heuristics and biases literature. Certainly much of the rationality improvement I’ve gotten from OB/LW isn’t that.
You list a greater appreciation of transhumanism as one of the benefits of x-rationality, but the causal linkage doesn’t impress me.
The benefit I’m trying to list isn’t “greater appreciation of transhumanism” so much as “directing one’s efforts to ‘make the world a better place’ in directions that actually do efficiently make the world a better place”.
As to the evidence and its significance:
Even if we skip transhumanism, and look fully outside the Eliezer/Robin/Vassar orbit, folks like Holden Karnofsky of GiveWell are impressive, both in their ability to actually analyze the world and in their positive impact. You might say it’s just traditional rationality Holden is using (certainly he didn’t get it from Eliezer), but it’s beyond the level common among “intelligent, science-literate people”, who mostly donate their money in much less effective ways.
Within transhumanism… I agree that the existing correlation between transhumanism and rationality-emphasis will tend to create future correlation, whether or not rationality helps one see merits in transhumanism. And that’s an important point. But it’s also striking that when people show up and say they want to spend their lives reducing AI risks, they’re often people who had put unusual effort into (successfully) becoming better thinkers before they ever heard of Eliezer or Robin, or met anyone else working on this stuff. It’s true that maybe we’re just recognizing “oh, someone who cares about actually getting things right, that means I can relax and believe them” (or, worse, “oh, someone with my brand of tennis shoes, let me join the in-group”). But…
Recognizing that someone else has good epistemic standards and can be believed is rationality working, even without independently deriving the same conclusions (though under the tennis shoe interpretation, not so much);
Many of us (independently, before reading or being in contact with anyone in this orbit) said we were looking for the most efficient use of some time/money, and it’s probably not an accident that trying to become a good thinker, and asking what use of time/money will actually help the world, tend to correlate, and tend to lead to modes of action that actually do help the world.