Also, we should not neglect the base rates. If more than 99% of people on this planet are irrational by LW standards, then we should not be surprised by seeing irrational people among the most successful ones, even if rationality increases the probability of success.
In other words, if you found (pulling the numbers out of a hat) that 99% of all people are irrational but “only” 90% of millionaires are, that would be evidence that rationality does lead to (an increased probability of) winning.
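To make the arithmetic concrete, here is a minimal sketch using the same hat-numbers (where “rational” stands for whatever binary standard you prefer):

```python
# Hat-numbers: 1% of everyone is rational, but 10% of millionaires are.
# Bayes' rule then says rational people are ~11x as likely to be
# millionaires as irrational people, whatever the (unknown) base rate of
# becoming a millionaire is. Note this is association, not yet causation;
# that caveat is exactly what the later comments argue about.
p_rational = 0.01             # P(rational) in the general population
p_rational_given_mill = 0.10  # P(rational | millionaire)

# P(millionaire | rational) / P(millionaire | irrational)
#   = odds(P(rational | millionaire)) / odds(P(rational))
odds = lambda p: p / (1 - p)
print(round(odds(p_rational_given_mill) / odds(p_rational), 1))  # -> 11.0
```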
Also, in real humans, rationality isn’t all-or-nothing. Compare Oprah with an average person from her reference group (before she became famous). Is she really less rational? I doubt it.
That seems entirely possible. Consider the old chestnut that entrepreneurs are systematically overoptimistic about their chances of success and that startups and similar risks are negative expected value. Rational people may well avoid such risks precisely because they do not pay, but of the group of people irrational enough to try, a few will become billionaires. Voila! Another example: any smart rational kid will look at career odds and payoffs to things like being a musician or a talk show host, and go ‘screw that! I’m going to become a doctor or economist!’, and so when we look at mega-millionaire musicians like Michael Jackson or billionaire talk show hosts like Oprah… (We are ignoring all the less rational kids who wanted to become an NFL quarterback or a rap star and wind up working at McDonald’s.)
Another point I’ve made in the past is that since marginal utility seems to diminish with wealth, you have to seriously question the rationality of anyone who does not diversify out of whatever made them wealthy, and instead goes double or nothing. Did Mark Zuckerberg really make the rational choice in holding onto as much Facebook ownership as possible, even when he was receiving offers of hundreds of millions? Yes, he’s now a billionaire because he held on and its worth increased by orders of magnitude, but social networks have often died, as he ought to know, having crushed more than his fair share of social networks under his heel! In retrospect, we know that no one (like Google+) has killed Facebook the way Facebook killed Myspace. But only in retrospect.
Since these past examples may not be convincing (it’s too easy to think “obviously holding onto Facebook was rational, gwern, don’t you remember how inevitable it looked back in 2006?”; no, I don’t, but I’m not sure how I could convince you otherwise), let’s use a more current example… Bitcoin.
At least one LWer currently holds something like >500 bitcoins, which at the current Mt. Gox price could be sold for ~$120,000. His net worth independent of his bitcoins is in the $1-10,000 range, as best as I can estimate. I am sure you see where I am going with this: if Bitcoin craters, he will lose something like 90% of his current net worth, but if Bitcoin gains another order of magnitude, he could become a millionaire.
So here’s my question for you, if you think that it’s obvious that Oprah must have been rational and was not merely an irrational risk-seeker who, among other things, got lucky: right now, without the benefit of hindsight or knowledge of the inevitability of Bitcoin’s incredibly-obvious-success/obviously-doomed-to-failure, is it rational for him to sell or to keep his bitcoins? Is he more like Zuckerberg, who by holding made billions, or more like all the failed startup founders who rejected lucrative buyouts and wound up with nothing?
It is rational for him to:
[pollid:428]
Suppose he holds, and Bitcoin craters down to the single dollar range or less for an extended time period; do you think people will regard his decision as:
[pollid:429]
Suppose he holds, and Bitcoin gains another order of magnitude (>$1000) for an extended time period; do you think people will regard his decision as:
[pollid:430]
Suppose he sells, and Bitcoin craters down to the single dollar range or less for an extended time period; do you think people will regard his decision as:
[pollid:431]
Suppose he sells, and Bitcoin gains another order of magnitude (>$1000) for an extended time period; do you think people will regard his decision as:
[pollid:432]
Do I think people will regard his decision, or would I regard his decision? Are these people general population, or LW? How much do they know about his reasoning process?
I intended general people, and I don’t think they would much care. If you want more detailed scenarios and hypotheticals, feel free to reply to my comment with your preferred poll questions.
if you think that it’s obvious that Oprah must have been rational
I wrote that she is probably more rational than an average person from her reference group (before she became famous), by which I meant: a poor black woman, pregnant at age 14. Being overoptimistic does not contradict that.
No, but it does put pressure on your claim. You have to be very optimistic or very risk-seeking to ride your risky career all the way up past instant-retirement/fuck-you-money levels (a few million) to the billions, and not sell out at any point before then to enjoy your gains. What fraction of the general population ever founds a startup or a new company, or takes an equivalent risk? Her career pushes Oprah way out onto the tail.
Now, maybe the average black pregnant teenager is so irrational in so many ways that their average problems make Oprah on net more rational even though she’s lunatically optimistic or risk-seeking (although here we should question how irrational having a kid is, given issues like welfare and local cultures and issues discussed in Promises I Can Keep and marriage gambits and that sort of thing), but it’s going to be much harder to establish that about an Oprah-with-lunatic-risk-appetite rather than what we started with, the Oprah-who-is-otherwise-looking-pretty-darn-rational.
Is retiring relatively young a more rational choice than continuing to work at something you like?
It seems like pretty remarkable luck if the thing you want to do most in the world is also what you’re currently being paid to do.
On the other hand, how good are people who retire at finding what they want most to do?
A person who’s more rational than average (especially about introspection) might do well to retire, but most people might be rationally concerned that they’d just drift.
I don’t know what population-wide aggregates might look like. At least in Silicon Valley, there apparently are many people who have retired early and have the ability and inclination to express any dissatisfaction online in places where I might read them, but I can’t think of any who have said things like “My life has been miserable since I cashed out my millions of dollars of Google shares and I have nothing to do with myself.”
Retiring early means you have the money for doing a great many things, and you are still in physical & mental shape to enjoy it; Twain:
“The whole scheme of things is turned wrong end to. Life should begin with age & its privileges and accumulations, & end with youth & its capacity to splendidly enjoy such advantages. As things are now, when in youth a dollar would bring a hundred pleasures, you can’t have it. When you are old, you get it & there is nothing worth buying with it then. It’s an epitome of life. The first half of it consists of the capacity to enjoy without the chance; the last half consists of the chance without the capacity.”
And what factors enabled this early retirement in the first place? A motivated, intelligent person (albeit with a bad appetite for risk and an inability to cash out) can find plenty of rewarding things to occupy themselves with, like charity or education. Steve Wozniak and Cliff Stoll immediately come to mind, but I’m sure you can name others.
There’s a selection effect on such wishes, though. Only a small fraction of humans ① survive to such an age and ② retire with “privileges and accumulations”; many who would desire such a goal do not achieve it.
I don’t follow. Everyone has wishes; the people who retire without privileges and accumulations tend to have started without privileges and accumulations, but the opposite is not true (the elderly are wealthier than the young).
So, I said he’d be considered rational in all cases except hold/fail. That’s because people will take his success as evidence that he knows what he’s doing, and if he sells then he’s doing what ‘everyone else’ (i.e. > 99.9% of the world) would do, so even if it doesn’t work out that way they’d probably give him some slack.
Also, I think it’s rational for him to diversify, but it’s not a bad idea for him to maintain significant holdings.
Why is buying and selling binary? He should clearly rebalance.
Expanding on RomeoStevens’ comment… Maths time! Suppose that he now has $10,000 and 500 bitcoins, that each bitcoin currently costs $100, and that by the end of the year a bitcoin will cost $10 with probability 1/3, $100 with probability 1/3, and $1000 with probability 1/3. Suppose also that his utility function is the logarithm of his net worth in dollars at the end of the year. How many bitcoins should he sell to maximize his expected utility? Hint: the answer isn’t close to 0 or to 500. And I don’t think that a more realistic model would change it by that much.
Khoth suggests modeling it as starting with an endowment of $60k (his current net worth) and then, in each of the 3 equally probable outcomes, adding or subtracting, for each coin held, the difference between the closing price and the original price; in this formulation the optimal number of coins to hold comes out to 300.
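As a concrete check, here is a minimal brute-force sketch of the toy model above ($10,000 in cash, 500 coins at $100, three equiprobable closing prices, log utility); the numbers are exactly the made-up ones from the comment, nothing more:

```python
import numpy as np

cash, coins, price = 10_000, 500, 100   # toy holdings from the comment above
closes = np.array([10, 100, 1000])      # equiprobable year-end prices

def expected_log_utility(keep):
    # Sell (coins - keep) now at $100, hold `keep` coins until year's end.
    dollars_now = cash + (coins - keep) * price
    return np.mean(np.log(dollars_now + keep * closes))

keeps = np.arange(coins + 1)
print(keeps[np.argmax([expected_log_utility(k) for k in keeps])])  # -> 300
```

By hand: holding k coins gives year-end wealth of 60000 − 90k, 60000, or 60000 + 900k, and setting the derivative of the mean log to zero, 900/(60000 + 900k) = 90/(60000 − 90k), yields k = 300 exactly: hold 300 coins and sell 200.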
Of course, your specific payoffs and probabilities imply that one should be buying bitcoins: in 1/3 of the outcomes the price is unchanged, in 1/3 one loses 90% of the invested money, and in the remaining 1/3 the invested money is multiplied tenfold, for an expected return of +270% per dollar invested...
I’ve fiddled around a bit, and ISTM that so long as the probability distribution of the logarithm of the eventual value of bitcoins is symmetric around the current value (and your utility function is logarithmic), you should buy or sell so that half of your current net worth is in dollars and half is in bitcoins.
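Here is a sketch of why, plus a Monte Carlo check (assuming a lognormal price purely for concreteness; any log-symmetric distribution works): pairing each log move +x with its mirror −x, the marginal value of shifting wealth into bitcoins at a 50/50 split is proportional to tanh(x/2) + tanh(−x/2) = 0, so half-and-half is a stationary point of expected log utility for every symmetric pair at once, and concavity makes it the maximum.

```python
import numpy as np

rng = np.random.default_rng(0)
# Price multiplier whose logarithm is symmetric around 0 (lognormal here);
# the spread is arbitrary for this check.
mult = np.exp(rng.normal(0.0, 1.0, 200_000))

def expected_log(f):
    # f = fraction of net worth held in bitcoins, 1 - f held in dollars.
    return np.mean(np.log((1 - f) + f * mult))

fs = np.linspace(0, 1, 101)
print(fs[np.argmax([expected_log(f) for f in fs])])  # -> ~0.5
```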
Nevermind, Gwern posted it before me.
I come from the future. Do I try to compensate for hindsight bias, or do I abstain from answering the polls altogether?
Even after the ‘crash’, the equivalent figure is still like $50k and so the question remains germane. If you want to answer it, feel free. (The raw poll data includes timestamps, so if anyone thinks that answers after time X are corrupting the results, they can always drop such entries.)
Okay. I answered the questions except the first (per RomeoStevens) and the last (I’d expect people to be roughly equally split in that situation).
I’d think that the latter would result in less expected pollution.
Basic statistics question: if we find that 99% of all people are irrational but “only” 90% of millionaires are irrational, is that evidence that rationality leads to (an increased probability of) winning, or is it only evidence that rationality is correlated with winning? For instance, how do I know that millionaires aren’t more rational simply because they can afford to go to CFAR workshops and have more free time to read LessWrong?
I.e. knowing only that 99% of all people are A but “only” 90% of millionaires are A, how do I adjust my respective probabilities that
A --> millionaires
Millionaires --> A
Unknown factor C causes both A and millionaires
It feels like I ought to assign some additional likelihood to each of these 3 cases, but I’m not sure how to split it up. Maybe the answer is simply, “gather more evidence to attempt to tease out the proper causal relationship”.
This is a causal question, not a statistical question. You answer by implementing the relevant intervention, usually by randomization, or maybe you find a natural experiment, or maybe [lots of other ways people thought of].
You can’t in general use observational data (e.g. what you call “evidence”) to figure out causal relationships. You need causal assumptions somewhere.
What do you think of this challenge, to detect causality from nothing but a set of pairs of values of unnamed variables?
You can do it with enough causal assumptions (i.e. not “from nothing”). There is a series of magical papers, e.g. this:
http://www.cs.helsinki.fi/u/phoyer/papers/pdf/hoyer2008nips.pdf
which show you can use additive noise assumptions to orient edges.
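A toy illustration of that trick (my own sketch, not the paper’s exact procedure: Hoyer et al. use nonparametric regression plus a kernel independence test, for which the polynomial fit and plain HSIC below are crude stand-ins): if y = f(x) + noise with the noise independent of x, regressing in the true direction leaves a residual independent of the regressor, while the reverse direction generally does not.

```python
import numpy as np

def hsic(a, b, sigma=1.0):
    # Biased HSIC estimate with Gaussian kernels: near 0 iff a, b independent.
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    n = len(a)
    K = np.exp(-np.subtract.outer(a, a) ** 2 / (2 * sigma**2))
    L = np.exp(-np.subtract.outer(b, b) ** 2 / (2 * sigma**2))
    H = np.eye(n) - 1.0 / n               # centering matrix
    return np.trace(K @ H @ L @ H) / n**2

def residual_dependence(cause, effect, deg=5):
    # Regress effect on cause, then measure dependence of residual on cause.
    resid = effect - np.polyval(np.polyfit(cause, effect, deg), cause)
    return hsic(cause, resid)

rng = np.random.default_rng(0)
x = rng.normal(size=400)
y = x**3 + rng.uniform(-1, 1, size=400)   # true model: x -> y, additive noise

print(residual_dependence(x, y))  # small: residual ~ independent of x
print(residual_dependence(y, x))  # larger: no additive-noise fit for y -> x
```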
I have a series of papers:
http://www.auai.org/uai2012/papers/248.pdf
http://arxiv.org/abs/1207.5058
which show you don’t even need conditional independences to orient edges. For example if the true dag is this:
1 → 2 → 3 → 4, 1 ← u1 → 3, 1 ← u2 → 4,
and we observe p(1, 2, 3, 4) (no conditional independences in this marginal), I can recover the graph exactly with enough data. (The graph would be causal if we assume the underlying true graph is, otherwise it’s just a statistical model).
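To see how strong that claim is, here is a simulation sketch of this very graph (linear-Gaussian mechanisms are my arbitrary choice for illustration): every partial correlation among the four observed variables stays clearly nonzero, confirming that no conditional independence is available to orient edges here.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n = 100_000
u1, u2 = rng.normal(size=(2, n))   # latent confounders
e = rng.normal(size=(4, n))
x1 = u1 + u2 + e[0]                # 1 <- u1, 1 <- u2
x2 = x1 + e[1]                     # 1 -> 2
x3 = x2 + u1 + e[2]                # 2 -> 3, u1 -> 3
x4 = x3 + u2 + e[3]                # 3 -> 4, u2 -> 4
X = np.column_stack([x1, x2, x3, x4])

def partial_corr(i, j, cond):
    # Correlate residuals of i and j after regressing out the conditioning set.
    Z = np.column_stack([np.ones(n)] + [X[:, k] for k in cond])
    ri = X[:, i] - Z @ np.linalg.lstsq(Z, X[:, i], rcond=None)[0]
    rj = X[:, j] - Z @ np.linalg.lstsq(Z, X[:, j], rcond=None)[0]
    return np.corrcoef(ri, rj)[0, 1]

for i, j in combinations(range(4), 2):
    rest = [k for k in range(4) if k not in (i, j)]
    subsets = [list(s) for r in range(3) for s in combinations(rest, r)]
    print(i + 1, j + 1, [round(partial_corr(i, j, s), 2) for s in subsets])
# None of the printed values is near zero, so an independence-oriented
# algorithm finds nothing to exploit in this marginal.
```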
People’s intuitions about what’s possible in causal discovery aren’t very good.
It would be good if statisticians and machine learning / comp. sci. people came together to hash out their differences regarding causal inference.
Gelman seems skeptical.
I saw that, but I didn’t see much substance to his remarks, nor in the comments.
Here is a paper surveying methods of causal analysis for such non-interventional data, and summarising the causal assumptions that they make:
“New methods for separating causes from effects in genomics data”
Alexander Statnikov, Mikael Henaff, Nikita I Lytkin, Constantin F Aliferis
It feels like I ought to assign some additional likelihood to each of these 3 cases, but I’m not sure how to split it up.
Two things:
1) Your prior probabilities. If before getting your evidence you expect that hypothesis H1 is twice as likely as H2, and the new evidence is equally likely under both H1 and H2, you should update so that H1 remains twice as likely as H2.
2) Conditional probabilities of the evidence under different hypotheses. Let’s suppose that hypothesis H1 predicts a specific piece of evidence E with probability 10%, and hypothesis H2 predicts E with probability 30%. After seeing E, the ratio between H1 and H2 should be multiplied by 1:3.
The first part means simply: before the (fictional) research on rationality among millionaires was done, what probability would you assign to each of your hypotheses?
The second part means: if we know that 99% of all people are irrational, what percentage of irrational millionaires would you expect, assuming that e.g. the first hypothesis “rationality causes millionaires” is true? Would you expect to see 95% or 90% or 80% or 50% or 10% or 1% irrational millionaires? Make your probability distribution. Now do the same thing for each of the remaining hypotheses. -- Ta-da, the research is over and we know that the percentage of irrational millionaires is 90%, not more, not less. How good were the individual hypotheses at predicting this specific outcome?
(I don’t mean to imply that doing either of these estimates is easy. It is just the way it should be done.)
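Putting the two steps together is just one multiplication; a minimal sketch with the made-up numbers from 1) and 2) above:

```python
# Odds form of Bayes' theorem: posterior odds = prior odds x likelihood ratio.
prior_odds = 2.0                # H1 a priori twice as likely as H2
likelihood_ratio = 0.10 / 0.30  # P(E | H1) / P(E | H2)
print(round(prior_odds * likelihood_ratio, 2))  # -> 0.67: H2 now ~1.5x H1
```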
Maybe the answer is simply, “gather more evidence
Gathering more evidence is always good (ignoring the costs of gathering it), but sometimes we need to make an estimate based on the data we already have.