I think the larger question of rationality is: when is it good for us, and when is it bad for us?
I suffer more from too much rationality than too little. I have a hard time making decisions. I spend too much time thinking about things that other people handle competently without much thought. Rationality to the degree you desire may not be an evolutionarily stable strategy: your rationality may provide a net benefit to society and a net cost to you.
On the level of society, we don’t know whether a society of rational personal-utility maximizers could out-compete a society of members biased in ways that privilege the society over the individual. Defining “rational” as “maximizing personal utility” is a more radical step than most people realize.
On the even higher level of FAI, we run into the question of whether rationality is a good thing for God to have. Rationality only makes sense if you have values to maximize, and even if God had many values, maximizing them would probably make the universe a more homogeneous and less interesting place.
It’s possible that one can learn the wrong kind of rationality first, but I disagree with the idea that rationality can be a bad thing in general.
The first skill ought to be efficient use of computational resources. For humans, that means calories (no longer a scarce factor outside our ancestral environment) and time. LessWrong has taught me exceptional methods of distinguishing my exact preferences and mapping out every contour of my own utility function. Thus, I can decide exactly which flavor of ice cream will give me more utilons. This is neat, but useless.
But rationality is more than just thinking. It’s thinking about thinking. And thinking about thinking about thinking. And it keeps recursing. In learning the difference in utility between ice cream flavors, I learned never to care about which flavor I get; the utility saved by not consciously calculating it is worth more than the utility gained by calculating it. Before learning about rationality, I had been the sort to whinge and waver over flavors at Cold Stone. After learning some rationality, I was even worse. But learning a bit more rationality (the exact right bit, that is) made me perform at peak speed and peak utility. It’s a lesson that carries over into more than just food, has saved me countless hours and utilons, and will probably stay with me until the end of my days.
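To put rough numbers on the flavor example (all invented for illustration): deliberation only pays when the expected gain over grabbing a flavor at random exceeds what the deliberation itself costs.

```python
# Toy model of whether choosing a flavor carefully is worth it.
# All numbers are invented for illustration.
utilities = [10.0, 10.5, 9.8]   # hypothetical utilons per flavor
deliberation_cost = 2.0         # utilons of time/attention spent choosing carefully

gain = max(utilities) - sum(utilities) / len(utilities)  # gain vs. a random pick

if gain > deliberation_cost:
    print(f"Deliberate: gain {gain:.2f} beats cost {deliberation_cost:.2f}")
else:
    print(f"Just grab one: gain {gain:.2f} is below cost {deliberation_cost:.2f}")
```

With these numbers the gain is about 0.4 utilons against a cost of 2.0, so the right move is to stop calculating and grab a cone.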
There are many valleys of bad rationality. Whether they take the form of being just clever enough to choose the ‘rational’ position in game theory rather than the ‘superrational’ one, or just smart enough to catch a memetic autoimmune disease without fully deconverting from your religion, rationality valleys can be formidable traps. As always, though the valley may be long and hard, the far side of the valley is better than where you started.
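The game-theory trap alluded to here is presumably the prisoner’s dilemma: the merely ‘rational’ player defects, since defection dominates, while Hofstadter’s ‘superrational’ player assumes symmetric reasoners choose alike and so compares only the symmetric outcomes. A sketch with standard illustrative payoffs:

```python
# Symmetric prisoner's dilemma; payoffs are the row player's utilons.
payoff = {
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"):    0,
    ("defect",    "cooperate"): 5,
    ("defect",    "defect"):    1,
}
moves = ("cooperate", "defect")

# 'Rational': defect if it strictly dominates (better against either move).
rational = "defect" if all(
    payoff[("defect", o)] > payoff[("cooperate", o)] for o in moves
) else "cooperate"

# 'Superrational': symmetric reasoners pick the same move, so compare diagonals.
superrational = max(moves, key=lambda m: payoff[(m, m)])

print(rational, superrational)  # -> defect cooperate
```

The valley is being clever enough to see the dominance argument but not yet clever enough to notice the symmetry argument that beats it.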
In short: a method of answering questions should be judged not only on its benefits, but on its costs. So, another basic question of rationality is:
Q: When should we stop thinking about a question?
Definitely when:
* You are only going in circles.
** You need more data; to get it, you should perform an experiment.
* You can no longer remember or keep track of the best strategies you have come up with.
* You cannot judge the difference in value between new strategies and existing ones.
* You spend x percent of your time tracking and remembering the strategies you have created, where x is significant.
* There are better questions to consider.
* The value of answering the question will diminish greatly if you spend more time trying to optimize it.
** “It is great that you finished the test and got all the right answers, but the test was over a week ago.” An extreme example, but sometimes years, months, weeks, days, hours, minutes, or seconds count.

It can be a hard question to get right, in my experience; one crude way to operationalize it is sketched below.
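A minimal sketch of such a stopping rule, with invented numbers and a hypothetical `refine` step standing in for one more round of thought: keep deliberating only while the marginal improvement beats the cost of the round.

```python
# Minimal stopping-rule sketch. All numbers are invented; `refine` is a
# hypothetical stand-in for a round of thought, with diminishing returns.

def refine(value: float) -> float:
    """One round of deliberation: closes 30% of the gap to a ceiling of 10."""
    return value + (10.0 - value) * 0.3

cost_per_round = 0.5   # utilons of time/attention per round of thought
value = 5.0            # estimated value of the current best answer

while True:
    improved = refine(value)
    if improved - value <= cost_per_round:
        break          # marginal gain no longer covers the cost: stop thinking
    value = improved

print(f"Stopped deliberating at value {value:.2f}")
```

The diminishing-returns shape is the important assumption: once each extra round buys less than it costs, the rational move is to stop, even though the answer is still improvable.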
I think you mean, “When is it irrational to study rationality explicitly?”