winning is a combination of luck, resources/power, and instrumental rationality
Exactly. And the question is how much we can increase this result using CFAR’s rationality-improving techniques. Would better rationality on average increase your winning by 1%, 10%, 100%, or 1000%? Gains of 1% or 10% would probably be lost in the noise of luck.
Also, what is the distribution curve for the gains of rationality among the population? An average gain of 100% could mean that everyone gains 100%, in which case you would have a lot of “proofs that rationality works”; but it could also mean that 1 person in 10 gains 1000% and 9 out of 10 gain nothing, in which case you would have a lot of “proofs that rationality doesn’t work” and a few exceptions that could be explained away (e.g. by saying they were so talented that they would have gotten the same results even without CFAR).
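As a quick arithmetic check, both distributions really do have the same 100% average; here is a minimal sketch:

```python
# Two gain distributions with the same average, per the paragraph above.
# Gains are expressed as multiples: 1.0 means "+100% winning".
uniform = [1.0] * 10             # everyone gains 100%
skewed = [10.0] + [0.0] * 9      # one person gains 1000%, nine gain nothing

print(sum(uniform) / len(uniform))  # 1.0 -> average gain of 100%
print(sum(skewed) / len(skewed))    # 1.0 -> same average, very different optics
```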
It would also be interesting to know the curve of increases in winning per increase in rationality. Maybe rationality gives compound interest: becoming +1 rational gives you 10% more winning, but becoming +2 and +3 rational gives you 30% and 100% more winning, because your rationality techniques combine, and because removing the non-rational parts of your life frees up additional resources. Or maybe it is the other way round: becoming +1 rational gives you 100% more winning, while becoming +2 and +3 rational only gives you an additional 10% and 1%, because you have already picked all the low-hanging fruit.
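To make the two shapes concrete, here is a minimal sketch; it reads each figure above as the marginal gain at that level (an assumption on my part), and the numbers are purely illustrative:

```python
# Two toy models of how total winning might scale with rationality level.
# Each list holds the assumed marginal gain per level, from the text above.
compound = [0.10, 0.30, 1.00]            # gains grow as techniques combine
low_hanging_fruit = [1.00, 0.10, 0.01]   # big first gain, then diminishing returns

def total_winning(gains, baseline=1.0):
    """Multiply the baseline by (1 + gain) for each rationality level reached."""
    winning = baseline
    for g in gains:
        winning *= 1 + g
    return winning

print(total_winning(compound))           # 1.10 * 1.30 * 2.00 = 2.86
print(total_winning(low_hanging_fruit))  # 2.00 * 1.10 * 1.01 ≈ 2.22
```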
The shape of this curve, if known, could be important for CFAR’s strategy. If rationality follows the compound-interest model, then CFAR should pick some of its brightest students and focus fully on optimizing them. On the other hand, if the low-hanging-fruit model is more likely, CFAR should focus on easy-to-replicate elementary lessons and try to get as many volunteers as possible to teach them to everyone in sight.
By the way, for the effective altruist subset of the LW crowd, income (or rather the part of it donated to effective charities) is a good proxy for winning.
Also, rationality might mostly work by making disaster less common—it’s not so much that the victories are bigger as that fewer of them are lost.
That is a possible and likely model, but it seems to me that we should not stop the analysis here.
Let’s assume that rationality works mostly by preventing failures. As a simple mathematical model, we have a biased coin that generates the values “success” and “failure”. For a typical smart but not rational person, the coin generates 90% “success” and 10% “failure”. For an x-rationalist, the coin generates 99% “success” and 1% “failure”. If your experiment consists of one coin flip per person and counting the winners, most winners will not be x-rationalists, simply because of the base rates: x-rationalists are rare, so they contribute only a small fraction of all successes.
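A quick sanity check of the base-rate point, with a made-up population share of x-rationalists (the 1-in-1000 figure is purely an assumption for illustration):

```python
# Base-rate illustration: even with a much better coin, a rare group
# contributes few of the total winners. The 0.001 population share is
# an assumed number, not something from the discussion above.

p_rationalist = 0.001   # assumed fraction of x-rationalists
success_r = 0.99        # per-flip success rate, x-rationalist
success_t = 0.90        # per-flip success rate, typical smart person

winners_r = p_rationalist * success_r
winners_t = (1 - p_rationalist) * success_t

share = winners_r / (winners_r + winners_t)
print(f"{share:.4f}")  # ~0.0011: only ~0.1% of all winners are x-rationalists
```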
But are these coin flips always taken in isolation, or is it possible to create more complex games? For example, if the goal is to flip the coin 10 times and get 10 “successes”, then the players’ total chances are 0.9^10 ≈ 35% versus 0.99^10 ≈ 90%. That is a greater difference, although the base rates would still dwarf it.
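Repeating the base-rate computation for the 10-flip game (same assumed 1-in-1000 share of x-rationalists as above):

```python
# Ten-successes-in-a-row game, with the same assumed 0.001 population share.
p = 0.001                                # assumed fraction of x-rationalists
win_r, win_t = 0.99 ** 10, 0.90 ** 10    # ~0.904 vs ~0.349
share = p * win_r / (p * win_r + (1 - p) * win_t)
print(f"{share:.4f}")  # ~0.0026: better than the single-flip ~0.0011, still tiny
```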
My point is, if your magical power is merely preventing some unlikely failures, you should have a visible advantage in situations complex enough that hundreds of such failures are possible. A person without the magical power would be pretty likely to fail at some point, even if each individual failure is unlikely.
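A sketch of this “many unlikely pitfalls” argument; the per-pitfall failure rates and the pitfall count are assumptions chosen only for illustration:

```python
# Probability of surviving N independent unlikely pitfalls.
# The 1% vs 0.1% per-pitfall failure rates and N = 300 are illustrative
# assumptions, not estimates of anything real.

def survival(p_fail, n_pitfalls):
    """Chance of avoiding every one of n independent pitfalls."""
    return (1 - p_fail) ** n_pitfalls

print(f"{survival(0.010, 300):.3f}")  # ~0.049: usually fails somewhere
print(f"{survival(0.001, 300):.3f}")  # ~0.741: usually gets through unscathed
```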
I just don’t know what (if anything) in the real world corresponds to this. Maybe the problem is that preventing hundreds of different unlikely failures would simply take too much time for a single person.
I suspect rationality does a lot to prevent likely failures as well as unlikely failures.