It would be overconfident for me to say rationality could never become useful. My point is just that we are acting like it’s practically useful right now, without very much evidence for this beyond our hopes and dreams. Thus my last sentence—that “crossing the Pacific” isn’t impossible, but it’s going to take a different level of effort.
If in 1660, Robert Boyle had gone around saying that, now that we knew Boyle’s Law of gas behavior, we should be able to predict the weather, and that that was the only point of discovering Boyle’s Law and that furthermore we should never trust a so-called chemist or physicist except insofar as he successfully predicted the weather—then I think the Royal Society would be making the same mistake we are.
Boyle’s Law is sort of helpful in understanding the weather, sort of. But it’s step one of ten million: used alone, it doesn’t work nearly as well as just eyeballing the weather and looking for patterns, and any attempt to judge applicants to the Royal Society on their weather-prediction abilities would have excluded some excellent scientists. Any attempt to restrict gas physics itself to things that were directly helpful in predicting the weather would have destroyed the science, ironically including the discoveries two hundred years down the road that were helpful in weather prediction.
Summed up: With luck, (some) science can result in good practical technology. But demanding the technology too soon, or restricting science to only the science with technology to back it up, hurts both science and technology.
(There is a difference between verification and technology: Boyle was able to empirically test his gas law, but not to apply it practically. The distinction may be fuzzier for rationality.)
I’m confused about this article. I agree with most of what you’ve said, but I’m not sure exactly what the point is. I thought the entire premise of this community was that more is possible, but that we’re only “less wrong” at the moment. I didn’t think there was any promise of results from the current state of the art. Is this post a warning, or am I overlooking the trend you’re describing?
I agree we shouldn’t see x-rationality as practically useful now. You don’t rule out rationality becoming the superpower Eliezer portrays in his fiction, but that is certainly a long way off. Your analogy between Boyle’s Law and weather prediction is apt. Just trying harder to apply our current knowledge won’t get us very far, but there should be some productive avenues.
I think I’d understand your purpose better if you could answer these questions: In your mind, how likely is it that x-rationality could be practically useful in, say, 50 years? What approaches are most likely to get us to a useful practice of rationality? Or is your point that any advances that are made will be radically different from our current lines of investigation?
Just trying to understand.
The above would be component 1 of my own reply.
Component 2 would be (to say it again) that I developed the particular techniques that are to be found in my essays, in the course of solving my problem. And if you were to try to attack that or a similar problem you would suddenly find many more OB posts to be of immensely greater use and indeed necessity. The Eliezer of 2000 and earlier was not remotely capable of getting his job done.
What you’re seeing here is the backwash of techniques that seem like they ought to have some general applicability (e.g. Crisis of Faith) but which are not really a whole developed rationalist art, nor made for the purpose of optimizing everyday life.
Someone faced with the epic Challenge Of Changing Their Mind may use the full-fledged Crisis of Faith technique once that year. How much benefit is this really? That’s the question, but I’m not sure the cynical answer is the right one.
What I am hoping to see here is others, having been given a piece of the art, taking that art and extending it to cover their own problems, then coming back and describing what they’ve learned in a sufficiently general sense (informed by relevant science) that I can actually absorb it. That which has been developed outside the rationalist line to address e.g. akrasia, I have found myself unable to absorb.
But you’re not a good test case to see whether rationality is useful in everyday life. Your job description is to fully understand and then create a rational and moral agent. This is the exceptional case where the fuzzy philosophical benefits of rationality suddenly become practical.
One of the fundamental lessons of Overcoming Bias was “All this stuff philosophers have been debating fruitlessly for centuries actually becomes a whole lot clearer when we consider it in terms of actually designing a mind.” This isn’t surprising; you’re the first person who’s really gotten to use Near Mode thought on a problem previously considered only in Far Mode. So you’ve been thinking “Here’s this nice practical stuff about thinking that’s completely applicable to my goal of building a thinking machine”, and we’ve been thinking, “Oh, wow, this helps solve all of these complicated philosophical issues we’ve been worrying about for so long.”
But in other fields, the rationality is domain-specific and already exists, albeit without the same thunderbolt of enlightenment and awesomeness. Doctors, for example, have a tremendous literature on evidence and decision-making as they relate to medicine (which is one reason I get so annoyed with Robin sometimes). An x-rationalist who becomes a doctor would not, I think, necessarily be a significantly better doctor than the rest of the medical world, because the rest of the medical world already has an overabundance of great rationality techniques and methods of improving care that the majority of doctors just don’t use, and because medicine takes so many skills besides rationality that any minor benefits from the x-rationalist’s clearer thinking would get lost in the noise. To make this more concrete: I don’t think good doctors are more likely to be atheists than bad doctors, though I do think good AI scientists are more likely to be atheists than bad AI scientists. I think this paragraph about doctors also applies to businessmen, scientists, counselors, et cetera.
When I said that we had a non-trivial difference of opinion on your secret identity post, this was what I meant: that a great x-rationalist might be a mediocre doctor; that maybe if you’d gone into medicine instead of AI you would have been a mediocre doctor and then I wouldn’t be “allowed” to respect you for your x-rationality work.
Evidence-based medicine was developed by x-rationalists. And to this day, many doctors ignore it because they are not x-rationalists.
...huh. That comment was probably more helpful than you expected it to be. I’m pretty sure I’ve identified part of my problem as having too high a standard for what makes an x-rationalist. If you let the doctors who developed evidence-based medicine in...yes, that clears a few things up.
One thinks particularly of Robyn Dawes—I don’t know him from “evidence-based medicine” per se, but I know he was fighting the battle to get doctors to acknowledge that their “clinical experience” wasn’t better than simple linear models, and he was on the front lines against psychotherapy shown to perform no better than talking to any bright person.
If you read “Rational Choice in an Uncertain World” you will see that Dawes is pretty definitely on the level of “integrate Bayes into everyday life”, not just Traditional Rationality. I don’t know about the historical origins of evidence-based medicine, so it’s possible that a bunch of Traditional Rationalists invented it; but one does get the impression that probability theorists trying to get people to listen to the research about the limits of their own minds, were involved.
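For concreteness about what “simple linear models” means in the Dawes line of work: his famous argument (“The Robust Beauty of Improper Linear Models”) was that even a unit-weighted sum of standardized cues tends to predict outcomes as well as or better than expert judgment, because the expert combines the same cues inconsistently from case to case. Here is a minimal Python sketch of that comparison, with entirely made-up data and a made-up noisy “clinician”; it illustrates the mechanism and is not a replication of any actual study:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical standardized cues (e.g. test scores, symptom ratings) and a
# true outcome driven by them plus noise. All numbers are invented.
X = rng.normal(size=(n, 3))
true_weights = np.array([0.5, 0.3, 0.2])
outcome = X @ true_weights + rng.normal(scale=1.0, size=n)

# "Improper" linear model in Dawes' sense: add the standardized cues with
# equal (unit) weights -- no fitting, no optimized coefficients.
unit_weighted = X.sum(axis=1)

# Simulated expert: sees the same cues but weights them inconsistently from
# case to case (random jitter), the failure mode Dawes attributed to
# unaided clinical judgment.
jitter = rng.normal(scale=0.6, size=(n, 3))
expert = ((true_weights + jitter) * X).sum(axis=1)

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

print("unit-weighted model vs. outcome:", round(corr(unit_weighted, outcome), 3))
print("inconsistent expert vs. outcome:", round(corr(expert, outcome), 3))
# Typically ~0.5 for the unit-weighted sum vs. ~0.3 for the expert:
# mere consistency beats inconsistent use of the very same information.
```

The unit-weighted sum does no fitting at all; its advantage comes purely from combining the cues the same way every time, which is the point Dawes kept pressing against “clinical experience”.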
After thinking on this for a while, here are my thoughts. This should probably be a new post but I don’t want to start another whole chain of discussions on this issue.
1. I had the belief that many people on Less Wrong believed that our currently existing Art of Rationality was sufficient, or close to sufficient, to guarantee practical success or even to transform its practitioner into an übermensch like John Galt. I’m no longer sure anyone believes this. If they do, they are wrong. If anyone right now claims they participate in Less Wrong solely out of a calculated program to maximize practical benefits, and not because they like rationality, I think they are deluded.

2. Where x-rationality is defined as “formal, math-based rationality”, there are many cases of x-rationality being used to good practical effect. I missed these because they look more like three percent annual gains in productivity (see the arithmetic note after this list) than like Brennan discovering quantum gravity or Napoleon conquering Europe. For example, doctors can use evidence-based medicine to increase their cure rate.

3. The doctors who invented evidence-based medicine deserve our praise. Eliezer is willing to consider them x-rationalists. But there is no evidence that they took a particularly philosophical view of rationality, as opposed to just thinking “Hey, if we apply these tests, it will improve medicine a bit.” Depending on your view of socialism, the information that one of these inventors ran for parliament on a socialist platform may be an interesting data point.

4. These doctors probably had a mastery of statistics, a good understanding of the power of the experimental method, and a belief that formalizing things could do better than normal human expertise. All of these are rationalist virtues. Any new doctor who starts their career with these virtues will be in a better position to profit from, and maybe expand upon, evidence-based medicine than a less virtuous doctor, and will reap great benefits from them. Insofar as Less Wrong’s goal is to teach people to become such doctors, this is great...

5. ...except that epidemiology and statistics classes teach the same thing with a lot less fuss. Less Wrong’s goal seems to be much higher: a doctor who can do all that, and understand their mental processes in great detail, and think rationally about politics and religion, and turn the whole thing into a unified rationalist outlook.

6. Or maybe it doesn’t. Eliezer has already explained that a lot of his OB writing was just stuff he came across while trying to solve AI problems. Maybe this has turned us into a community of people who like talking about philosophy, and that really doesn’t matter much and shouldn’t be taught at rationality dojos. Maybe a rationality dojo should be an extra-well-taught applied statistics class plus some discussion of important cognitive biases and how to avoid them. It seems to me that such a course would be enough to transform an average doctor into the kind of doctor who could invent, or at least use, evidence-based medicine and whatever other x-rationality techniques might be useful in medicine. With a few modifications, the same goes for business, science, and any other practical field.

7. I predict the marginal utility of this sort of rationality training will decline quickly. The first year will probably do wonders; the second will be less impressive. I doubt a doctor who studies this rationality for ten years will be noticeably better off than one who studies it for five, although this may be my pessimism speaking; that doctor would probably be better off spending the second five years studying some other area of medicine. In the end, I predict these kinds of classes could improve performance in some fields by 10-20% for people who really understood them.

8. This would be a useful service, but it wouldn’t have the same kind of awesomeness Overcoming Bias did. There seems to be a second movement afoot here: one to use rationality to radically transform our lives and thought processes, moving so far beyond mere domain-specific reasoning ability that even in areas like religion, politics, morality, and philosophy we hold only rational beliefs and are completely inhospitable to any irrational thoughts. This is a very different sort of task.

9. This new level of rationality has benefits, but they are less practical: mental clarity, benefits to society when we stop encouraging harmful political and social movements, and benefits to the world when we give charity more efficiently. Once people finish the course mentioned in (6) and start on the one mentioned in (8), it seems less honest to keep telling them about the vast practical benefits they will attain.

10. This might have certain social benefits, but you would have to be pretty impressive for conscious-level social reasoning to do better than the dedicated unconscious modules we already use for that task.

11. I have a hard time judging opinion here, but it does seem like some people think sufficient study of z-rationality can turn someone into an übermensch. The practical benefits beyond those offered by y-rationality, though, seem low. I really like z-rationality, but only because I think it’s philosophically interesting and can improve society, not because I think it can help me personally.

12. In the original post I was using x-rationality in a confused way, but I think to some degree I was thinking of (8) rather than (6).
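A quick arithmetic aside on point (2): three percent annual gains compound, so $1.03^{10} \approx 1.34$; a practitioner who actually captured such gains would be roughly a third more productive after a decade (arithmetic added for illustration; only the 3% figure comes from the point above).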
he was fighting the battle to get doctors to acknowledge that their “clinical experience” wasn’t better than simple linear models, and he was on the front lines against psychotherapy shown to perform no better than talking to any bright person

Those studies sucked. That book had tons of fallacious reasoning and questionable results. It was while reading Dawes’ book that I became convinced that the heuristics-and-biases (H&B) literature is actively harmful for rationality. Now that you say Dawes was also behind the anti-psychotherapy stuff, I suddenly have a lot more faith in psychotherapy. (By the way, it’s not just that Dawes isn’t a careful researcher—he can also be actively misleading.)
I really hope Anna is right that the Center for Modern Rationality won’t be giving much weight to oft-cited, overblown H&B results (e.g., “confirmation bias”). Knowing about biases almost always hurts people.
ETA: Apologies for curmudgeonly tone; I’m just worried that unless executed with utmost care, the CMR idea will do evil.
Which illustrates an important heuristic: put minimal trust in researchers who seem to have ideological axes to grind.
That is an important heuristic (and upvoted), but I don’t think it’s one we should endorse without some pretty substantial caveats. If you deprecate any results that strike you as ideologically tainted, and your criteria for “ideologically tainted” are themselves skewed in one direction or another by identity effects, you can easily end up accepting less accurate information than you would by taking every result in the field at face value.
I probably don’t need to give any examples.
Agreed. I think your caveat is just a special case: put minimal trust in researchers who seem to have ideological axes to grind, including yourself. (And if you can’t discern when you might be grinding an axe then you’re probably screwed anyway.) (But yeah, I admit it’s a perversion of “researchers” to include meta-researchers.)
As The Last Psychiatrist would say, always be thinking about what the author wants to be true.
FYI to other readers: Citation does not support claim, it’s about linear models of wine-tasting rather than experimental support for psychotherapy.
I think the claim that “those studies sucked” and the accompanying link were in reference to:

the battle to get doctors to acknowledge that their “clinical experience” wasn’t better than simple linear models
The linked comment discusses a few different statistical prediction rules, not just wine-tasting. To the extent that the comment identifies systematic flaws in claims that linear models outperform experts, it does somewhat support the claim that “those studies sucked” (though I wouldn’t think it supports the claim sufficiently to actually justify making it).
(See Steven’s comment; the “those studies sucked” comment was meant to be a reference to the linear-models-versus-expert-judgment series, not the psychotherapy studies. Obviously the link was supposed to be representative of a disturbing trend, not the sum total justification for my claims.)
FWIW I still like a lot of H&B research—I’m a big Gigerenzer fan, and Tetlock has some cool stuff, for example—but most of the field, including much of Tversky and Kahneman’s stuff, is hogwash, i.e. less trustworthy than parapsychology results (which are generally held to a much higher standard). This is what we’d expect given the state of the social sciences, but for some reason people seem to give social psychology and cognitive science a free pass rather than applying a healthy dose of skepticism. I suspect this is because of confirmation bias: people are already trying to push an ideology about how almost everyone is irrational and the world is mad, and thus are much more willing to accept “explanations” that support this conclusion.
Tversky and Kahneman, hogwash? What? Can you explain? Or just mention something?
Start by reading Gigerenzer’s critiques. E.g. I really like the study on how overconfidence goes away if you ask for frequencies rather than subjective probabilities—this actually gives you a rationality technique that you can apply in real life! (In my experience it works, but that’s an impression, not a statistical finding.) I also quite liked his point about how just telling subjects to assume random sampling is misleading. You can find a summary of two of his critiques in a LW post by Kaj Sotala, “Heuristics and Biases Biases?” or summat. Also fastandfrugal.com should have some links or links to links. Also worth noting is that Gigerenzer’s been cited many thousands of times and has written a few popular books. I especially like Gigerenzer because unlike many H&B folk he has a thorough knowledge of statistics, and he uses that knowledge to make very sharp critiques of Kahneman’s compare-to-allegedly-ideal-Bayesian-reasoner approach. (Of course it’s still possible to use a Bayesian approach, but the most convincing Bayesian papers I’ve seen were sophisticated (e.g. didn’t skimp on information theory) and applied only to very simple problems.)
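To give a flavor of the frequency-format point: Gigerenzer’s signature move is restating single-event probability problems as “natural frequencies”, i.e. counts in a concrete reference population, which makes the Bayesian answer fall out by inspection. A small Python sketch using the standard textbook screening example (the numbers are the conventional illustration, not data from any particular study):

```python
from fractions import Fraction

# The classic screening problem in "natural frequency" format: out of 1,000
# people, 10 have the disease (1% base rate); 8 of those 10 test positive
# (80% sensitivity); 95 of the 990 healthy people also test positive
# (~9.6% false-positive rate). Textbook illustration numbers.
diseased_positive = 8
healthy_positive = 95

# In frequency format the posterior is just a ratio of counts:
posterior = Fraction(diseased_positive, diseased_positive + healthy_positive)
print(posterior, "=", round(float(posterior), 3))  # 8/103 = 0.078

# The same answer via Bayes' theorem on single-event probabilities -- the
# format in which subjects (doctors included) famously go wrong:
p = (0.01 * 0.8) / (0.01 * 0.8 + 0.99 * (95 / 990))
print(round(p, 3))  # 0.078
```

Stated as counts, the answer (8 of the 103 positives actually have the disease) is nearly immediate; stated as conditional probabilities, the identical computation routinely elicits answers that are off by an order of magnitude.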
I wouldn’t even say that the problem is overall in the H&B lit, it’s just that lots of H&B folk spin their results as if they somehow applied to real life situations. It’s intellectually dishonest, and leads to people like Eliezer having massive overconfidence in the relevance of H&B knowledge for personal rationality.
Awesome, big thanks!
All this rationality organizing talk has to have some misquotes :(
Are you more or less capable of that now? Do you have evidence that you are? Is the job tangibly closer to being completed?
I wouldn’t bother with those questions if I were you, thomblake. They’ve never been answered here, and are unlikely ever to be answered, here or elsewhere.
The goal here is to talk about being rational, not actually being so; to talk about building AIs, not show progress in doing so or even to define what that would be.
It’s about talking, not doing.
There are many different people here. I think talking about “the goal” is nonsense.
Why do you suppose that is?
I’ll admit I might be attacking a straw man, but if you read the posts linked at the very top, I think there are at least a few people out there who believe it, or who don’t consciously believe it but act as if it’s true.
How likely is it that x-rationality could be practically useful in, say, 50 years?

Depends how you reduce “practically useful”. Reduce it to “a person randomly assigned to take rationality classes two hours a week plus homework for a year will make on average ten percent more money than a similar person who doesn’t”, and my wild, completely unsubstantiated guess is that it’s 50% likely. But I’d give similar numbers for other types of self-improvement classes, like Carnegie seminars and that sort of thing.
What approaches are most likely to get us to a useful practice of rationality? Or is your point that any advances that are made will be radically different from our current lines of investigation?

If by “useful practice of rationality” you mean the way Eliezer imagines it, I think there should be more focus on applying the rationality we have rather than delving deeper and deeper into the theory, but if I could say more than that, I’d be rich and you’d be paying me outrageous hourly fees to talk about it :)
I do think non-godlike levels of rationality have far more potential to help us in politics than in daily life, but that’s a minefield. In terms of easy profits we should focus the movement there, but in terms of remaining cohesive and credible it’s not really an option.