“Rationality” as used around here indicates “succeeding more often”. Or if you prefer, “Rationality is winning”.
That’s the idea. From the looks of it, most of us either suck at it, or only needed it for minor things in the first place, or are improving slowly enough that it’s indistinguishable from “I used more flashcards this month”. (Or maybe I just suck at it and fail to notice actually impressive improvements people have made; that’s possible, too.)
[Edit: CFAR seems to have a better reputation for teaching instrumental rationality than LessWrong, which seems to make sense. Too bad it’s a geographically bound organization with a price tag.]
It would be very useful to somehow measure rationality and winning, so we could say something about the correlation. Or at least to measure winning, so we could say whether CFAR lessons contribute to winning.
Sometimes income is used as a proxy for winning. It has some problems. For our purposes I would guess a big problem is that changes in income within a year or two (the period over which CFAR has been running workshops) are mostly noise. (Also, for employees this metric could be more easily optimized by preparing them for job interviews, helping them optimize their CVs, and pressuring them into doing as many interviews as possible.)
The biggest issue with using income as a metric for ‘winning’ is that some people—in fact, most people—do not really have income as their sole goal, or even as their most important one. For most people, things like social standing, respect, and importance are far more important.
That, and income being massively externally controlled for the majority of people. The world, contrary to reports, is not a meritocracy.
Huh?
If you mean that people don’t necessarily get the income they want, well, duh...
No, it isn’t, but I don’t see the relevance to the previous point.
I think the point was government handout programs. This is a massive external control on many people’s incomes, and it is part of how the world is not a meritocracy.
(Please note, I ADBOC with CellBioGuy, so don’t take my description as anything more than a summary of what I think he is trying to say.)
He might also be saying that most people don’t have an obvious path for marginal increases to their income.
This is closer to what I was getting at. Above someone mentioned government assistance programs, which is also true to a point but not really what I meant (another ‘disagree connotatively’).
I was mostly going for the fact that circumstances of birth (family and status, not genetics), location, and locked-in life history have far more to do with income than most other factors. And those who make it REALLY big are almost without exception extremely lucky rather than extremely good.
You what with CellBioGuy..?
Should be “ADBOC”—“agree denotationally, but object connotatively”. (ygert is probably thinking of “disagree” instead of “object”.)
Ah, thanks. I usually think of such things as “technically correct but misleading”—that’s more or less the same thing, right?
Yes.
Yes, my mistake. I was in a rush, and didn’t have time to double check what the acronym was. Edited now.
I think I could make an argument that “object” has a semantic advantage over “disagree”, but another advantage is that “ADBOC” can be pronounced as a two-syllable word.
Yes, this is true. You cannot meaningfully compare incomes between people who, say, live in developed vs. developing countries.
The value of income varies pretty widely across time and place (let alone between different people), so using it as a metric for “winning” is highly problematic. For instance, I was mostly insensitive to my income before getting married (and especially having my first child) beyond being able to afford rent, internet, food, and a few other things. The problem is, I don’t know of any other single number that works better.
It would be very useful to somehow measure rationality and winning, so we could say something about the correlation.
Since in the local vernacular rationality is winning, you need no measures: the correlation is 1 by definition :-/
Sometimes income is used as a proxy for winning.
It’s a very bad proxy, as “winning” is, more or less, “achieving things you care about”, and income is a rather poor measure of that. For the LW crowd, anyway.
Talk of “rationality as winning” is about instrumental rationality; when Viliam talks about the correlation between rationality and winning, it’s not clear whether he means instrumental rationality (making the best decisions towards your goals) or epistemic rationality (having true beliefs), but the second one is more likely.
But even if it’s about instrumental rationality, I wouldn’t say that the correlation is 1 by definition: I’d say winning is a combination of luck, resources/power, and instrumental rationality.
Exactly. And the question is how much we can increase this result using CFAR’s rationality-improving techniques. Would better rationality on average increase your winning by 1%, 10%, 100%, or 1000%? Values of 1% or 10% would probably be lost in the noise of luck.
Also, what is the distribution curve for the gains from rationality across the population? An average gain of 100% could mean that everyone gains 100%, in which case you would have a lot of “proofs that rationality works”; but it could also mean that 1 person in 10 gains 1000% and 9 out of 10 gain nothing, in which case you would have a lot of “proofs that rationality doesn’t work” and a few exceptions that could be explained away (e.g. by saying that they were so talented that they would have gotten the same results without CFAR).
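To make this concrete, here is a toy simulation (a sketch only; the population size, the size of the “luck noise”, and the gain numbers are all invented for illustration, not data) showing how two populations with the same average gain can produce opposite-looking evidence:

```python
import random

random.seed(0)

N = 10_000          # hypothetical number of workshop attendees
LUCK_NOISE = 0.30   # assume luck alone moves outcomes by up to ~30% either way

def observed_gain(true_gain):
    """True gain from rationality training plus random luck."""
    return true_gain + random.uniform(-LUCK_NOISE, LUCK_NOISE)

# Scenario A: everyone gains 100%.
uniform_pop = [observed_gain(1.00) for _ in range(N)]

# Scenario B: 1 in 10 gains 1000%, the rest gain nothing (same 100% average).
skewed_pop = [observed_gain(10.00 if random.random() < 0.1 else 0.0)
              for _ in range(N)]

def share_visibly_winning(pop, threshold=0.5):
    """Fraction whose observed gain clearly rises above the luck noise."""
    return sum(g > threshold for g in pop) / len(pop)

print(share_visibly_winning(uniform_pop))  # ~1.0 -> looks like rationality works
print(share_visibly_winning(skewed_pop))   # ~0.1 -> looks like it mostly doesn't
```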
It would also be interesting to know the curve of increases in winning per increase in rationality. Maybe rationality gives compound interest: becoming +1 rational gives you 10% more winning, but becoming +2 and +3 rational gives you 30% and 100% more winning, because your rationality techniques combine, and because by removing the non-rational parts of your life you gain additional resources. Or maybe it is actually the other way round: becoming +1 rational gives you 100% more winning, while becoming +2 and +3 rational only gives you an additional 10% and 1% more winning, because you have already picked all the low-hanging fruit.
The shape of this curve, if known, could be important for CFAR’s strategy. If rationality follows the compound-interest model, then CFAR should pick some of their brightest students and focus fully on optimizing them. On the other hand, if the low-hanging-fruit model is more likely, CFAR should focus on some easy-to-replicate elementary lessons and try to get as many volunteers as possible to teach them to everyone in sight.
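As a rough illustration of why the curve’s shape matters for strategy, here is a sketch that assumes (purely for the sake of the example) that teaching one level to one student costs one unit of effort, and reuses the made-up per-level gains above:

```python
# Hypothetical incremental gains in "winning" per rationality level,
# using the made-up numbers from the comment above (not measured values).
COMPOUND    = [0.10, 0.30, 1.00]   # +1, +2, +3: gains combine and grow
LOW_HANGING = [1.00, 0.10, 0.01]   # +1, +2, +3: most benefit is in the basics

def deep(curve, n_levels=3):
    """One student trained all the way up: sum of the incremental gains."""
    return sum(curve[:n_levels])

def broad(curve, n_students=3):
    """n students each given only the elementary +1 lessons."""
    return n_students * curve[0]

for name, curve in [("compound", COMPOUND), ("low-hanging fruit", LOW_HANGING)]:
    print(f"{name}: deep={deep(curve):.2f}, broad={broad(curve):.2f}")
# compound:          deep=1.40, broad=0.30 -> optimize a few bright students
# low-hanging fruit: deep=1.11, broad=3.00 -> teach elementary lessons widely
```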
By the way, for the effective altruist subset of the LW crowd, income (the part of it donated to effective charity) is a good proxy for winning.
Also, rationality might mostly work by making disaster less common—it’s not so much that the victories are bigger as that fewer of them are lost.
That is a possible and likely model, but it seems to me that we should not stop the analysis here.
Let’s assume that rationality works mostly by preventing failures. As a simple mathematical model, we have a biased coin that generates values “success” and “failure”. For a typical smart but not rational person, the coin generates 90% “success” and 10% “failure”. For an x-rationalist, the coin generates 99% “success” and 1% “failure”. If your experiment consists of doing one coin flip and counting the winners, most winners will not be x-rationalists, simply because of the base rates.
But are these coin flips always taken in isolation, or is it possible to create more complex games? For example, if the goal is to flip the coin 10 times and get 10 “successes”, then the players have total chances of roughly 35% vs 90%. That seems like a greater difference, although the base rates would still dwarf it.
My point is, if your magical power is merely preventing some unlikely failures, you should have a visible advantage in situations which are complex in a way that makes hundreds of such failures possible. A person without the magical power would be pretty likely to fail at some point, even if each individual failure were unlikely.
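A quick check of the arithmetic, extended to longer games (the 90% and 99% per-step success rates are the hypothetical ones from this model, not real measurements):

```python
def win_prob(p_success: float, n_steps: int) -> float:
    """Probability of succeeding at every one of n independent steps."""
    return p_success ** n_steps

SMART      = 0.90   # per-step success rate: smart but not rational
X_RATIONAL = 0.99   # per-step success rate: x-rationalist

for n in (1, 10, 100):
    print(f"{n:3d} steps: {win_prob(SMART, n):.4f} vs {win_prob(X_RATIONAL, n):.4f}")
#   1 steps: 0.9000 vs 0.9900  -> the difference is swamped by base rates
#  10 steps: 0.3487 vs 0.9044  -> the ~35% vs ~90% mentioned above
# 100 steps: 0.0000 vs 0.3660  -> almost only the x-rationalist ever wins
```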
I just don’t know what (if anything) in the real world corresponds to this. Maybe the problem is that preventing hundreds of different unlikely failures would simply take too much time for a single person.
I suspect rationality does a lot to prevent likely failures as well as unlikely failures.
This is getting better, slowly. Workshops are going on in Melbourne sometime in early 2014 (February?), and they’re looking to do more international workshops going forward.