How about testing the rationality of your life (and not just your beliefs)?
Are you satisfied with your job/marriage/health/exercise? Are you deeply in debt? Have you spent too much money on status symbols? Are you cheating on your life partner? Spending too much time on the net? Drinking too much?
I am sure there are many other life-tests.
Surely we want to distinguish “rational” from “winner.” Are winners on average more rational than others? This is not clear to me.
If we can’t demand perfect metrics then surely we should at least demand metrics that aren’t easily gamed. If people with the quality named “rationality” don’t on average win more often on life-problems like those named, what quality do they even have, and why is it worthwhile?
I understand “rational” people “win” at the goal of believing the truth, but that goal may be in conflict with more familiar “success” goals. So the people around us we see as succeeding may not have paid the costs required to believe the truth.
Suppose we did the experiments and found other policies more winning than rationality. Would you adopt the most winning policy?
If not, then admit that you value rationality, and stop demanding that it win.
If rationality is defined as making the decisions that maximise expected utility in a given situation, then it is by definition more winning, and the question would be nonsensical.
If another definition of rationality is implied, then I don’t think Eliezer demanded that it win.
That would be a rational thing to do!
I do have components in my utility function for certain rituals of cognition (as described in the segment on Fun Theory), but net wins beyond that point would compel me.
I predict that winners are on average less rational than rationalists. Risk level has an optimal point determined by expected payoff. But the maximal payoff keeps increasing as you increase risk. The winners we see are selected for high payoff. Thus they’re likely to be people who took more risks than were rational. We just don’t see all the losers who made the same decisions as the winners.
Those who take rational actions win more often than those who do not.
If we take a sample of those who have achieved the greatest utility, then we can expect that sample to be biased towards those who have taken the most risks.
Even in idealised situations, where success is determined solely by decisions made from available information and rationality is measured by how well those decisions maximise expected utility, we can expect the biggest winners not to be the most rational.
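A toy simulation makes the selection effect concrete (the specific payoffs and probabilities below are made up for illustration, not taken from anything in this thread): half the agents take the higher-expected-utility, low-variance option, half take a lower-expected-utility long-shot gamble, and yet the very top of the realized-payoff distribution is dominated by the gamblers.

```python
import random

random.seed(0)
N = 100_000
agents = []
for i in range(N):
    risky = (i % 2 == 0)  # half the agents take the high-variance gamble
    if risky:
        # long-shot bet: expected payoff 0.9, almost always loses
        payoff = 100.0 if random.random() < 0.009 else 0.0
    else:
        # "rational" choice: expected payoff 1.0, low variance
        payoff = random.gauss(1.0, 0.1)
    agents.append((payoff, risky))

agents.sort(reverse=True)            # rank everyone by realized payoff
top = agents[: N // 1000]            # the "winners" we actually notice (top 0.1%)
share_risky = sum(risky for _, risky in top) / len(top)
print(f"risk-takers among the top 0.1%: {share_risky:.0%}")  # close to 100%
```

Even though the risk-takers have the lower expected payoff, nearly everyone at the very top is a risk-taker, because we never tally the far larger crowd of gamblers who ended up with nothing.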
When it comes to actual humans the above remains in place, yet may well be dwarfed by other factors. Some lyrics from Ben Folds spring to mind:
Fate doesn’t hang on a wrong or right choice, fortune depends on the tone of your voice