After thinking on this for a while, here are my thoughts. This should probably be a new post but I don’t want to start another whole chain of discussions on this issue.
I had believed that many people on Less Wrong thought our currently existing Art of Rationality was sufficient, or close to sufficient, to guarantee practical success, or even to transform its practitioner into an ubermensch like John Galt. I’m no longer sure anyone believes this. If they do, they are wrong. If anyone right now claims they participate in Less Wrong solely out of a calculated program to maximize practical benefits, and not because they like rationality, I think they are deluded.
Where x-rationality is defined as “formal, math-based rationality”, there are many cases of x-rationality being used for good practical effect. I missed these because they look more like three percent annual gains in productivity than like Brennan discovering quantum gravity or Napoleon conquering Europe. For example, doctors can use evidence-based medicine to increase their cure rate.
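To make “formal, math-based rationality” concrete, here is a minimal sketch of the kind of calculation evidence-based medicine rests on. All figures are invented for illustration; the point is only that the decision comes from a significance test rather than from clinical impression.

```python
# A toy evidence-based-medicine calculation: do the data support
# switching to the new treatment, or is the difference just noise?
# (All figures are hypothetical, for illustration only.)
from scipy.stats import fisher_exact

# 2x2 contingency table: rows = treatment, columns = (cured, not cured)
old_treatment = [45, 55]  # 45 of 100 patients cured
new_treatment = [58, 42]  # 58 of 100 patients cured

odds_ratio, p_value = fisher_exact([old_treatment, new_treatment])
print(f"p-value: {p_value:.3f}")  # roughly 0.09: suggestive, not decisive

# The x-rational move is to act on this number (and gather more data if
# it is inconclusive) rather than on which treatment feels more impressive.
```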
The doctors who invented evidence-based medicine deserve our praise. Eliezer is willing to consider them x-rationalists. But there is no evidence that they took a particularly philosophical view towards rationality, as opposed to just thinking “Hey, if we apply these tests, it will improve medicine a bit.” Depending on your view of socialism, the information that one of these inventors ran for parliament on a socialist platform may be an interesting data point.
These doctors probably had a mastery of statistics, a good understanding of the power of the experimental method, and a belief that formalizing things could do better than normal human expertise. All of these are rationalist virtues. Any new doctor who starts their career with these virtues will be in a better position to profit from, and maybe expand upon, evidence-based medicine than a less virtuous doctor, and will reap great benefits from their virtues. Insofar as Less Wrong’s goal is to teach people to become such doctors, this is great...
...except that epidemiology and statistics classes teach the same thing with a lot less fuss. Less Wrong’s goal seems to be much higher. Less Wrong wants a doctor who can do all that, who understands their own mental processes in great detail, and who can think rationally about politics and religion, turning the whole thing into a unified rationalist outlook.
Or maybe it doesn’t. Eliezer has already explained that a lot of his OB writing was just material he came across while trying to solve AI problems. Maybe this has turned us into a community of people who like talking about philosophy, and maybe that doesn’t matter much practically and shouldn’t be taught at rationality dojos. Maybe a rationality dojo should be an extra-well-taught applied statistics class plus some discussion of important cognitive biases and how to avoid them. It seems to me that a statistics class plus some discussion of cognitive biases would be enough to transform an average doctor into the kind of doctor who could invent, or at least use, evidence-based medicine and whatever other x-rationality techniques might be useful in medicine. With a few modifications, the same goes for business, science, and any other practical field.
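As a sketch of what that curriculum buys, here is the classic base-rate calculation that untrained physicians famously get wrong. The figures are the standard textbook ones, not data from any real screening program:

```python
# Base-rate neglect, the textbook diagnostic example: a positive
# result on a decent test for a rare disease is still usually a
# false positive. (Standard illustrative numbers, not real data.)
prevalence  = 0.01   # P(disease): 1% of the screened population
sensitivity = 0.80   # P(positive | disease)
false_pos   = 0.096  # P(positive | no disease)

p_positive = prevalence * sensitivity + (1 - prevalence) * false_pos
p_disease_given_positive = (prevalence * sensitivity) / p_positive

print(f"P(disease | positive test) = {p_disease_given_positive:.1%}")
# About 7.8%, far below the 70-80% many doctors estimate when asked.
```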
I predict the marginal utility of this sort of rationality will decline quickly. The first year of training will probably do wonders. The second year will be less impressive. I doubt a doctor who studies this rationality for ten years will be noticeably better off than one who studies it for five, although this may be my pessimism speaking. Probably the doctor would be better off spending those second five years studying some other area of medicine. In the end, I predict these kinds of classes could improve performance in some fields by 10-20% for people who really understood them.
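That prediction can be made concrete with a toy model. Nothing here is measured; it just assumes the gains saturate exponentially toward the 10-20% ceiling guessed at above:

```python
import math

# Purely illustrative toy model of the diminishing-returns prediction:
# gains saturate exponentially toward a ceiling (here 15%, splitting
# the 10-20% guess above). The decay rate k = 1.0 is arbitrary.
def cumulative_gain(years, ceiling=0.15, k=1.0):
    return ceiling * (1 - math.exp(-k * years))

for y in (1, 2, 5, 10):
    print(f"after year {y:2d}: ~{cumulative_gain(y):.1%} improvement")
# Year 1 delivers most of the benefit; years 5 and 10 are nearly
# indistinguishable, matching the intuition in the paragraph above.
```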
This would be a useful service, but it wouldn’t have the same kind of awesomeness that Overcoming Bias did. There seems to be a second movement afoot here, one to use rationality to radically transform our lives and thought processes, moving so far beyond mere domain-specific reasoning ability that even in areas like religion, politics, morality, and philosophy we hold only rational beliefs and are completely inhospitable to any irrational thoughts. This is a very different sort of task.
This new level of rationality has benefits, but they are less practical. There are mental clarity benefits, benefits to society when we stop encouraging harmful political and social movements, and benefits to the world when we give to charity more efficiently. Once people finish the applied-statistics-and-biases course described above and start on the project of radical transformation, it seems less honest to keep telling them about the vast practical benefits they will attain.
This might have certain social benefits, but conscious-level social reasoning would have to be pretty impressive to outperform the dedicated unconscious modules we already use for that task.
I have a hard time judging opinion here, but it does seem like some people think sufficient study of this transformative rationality (call it z-rationality, as opposed to the practical, statistics-based y-rationality described earlier) can turn someone into an ubermensch. But the practical benefits of z-rationality beyond those offered by y-rationality seem low. I really like z-rationality, but only because I think it’s philosophically interesting and can improve society, not because I think it can help me personally.
In the original post, I was using x-rationality in a confused way, but I think to some degree I was thinking of z-rationality rather than y-rationality.