Of course it’s not by chance: there is a common cause, namely high IQ.
Suppose his IQ is at the 1-in-10,000 level (in terms of standard deviations, it’s very unlikely on priors that he’s much higher than this). There are roughly 400 such Americans of each age. What fraction do you think do publishable math or TCS research as college sophomores?
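For concreteness, here’s the back-of-the-envelope behind those figures; the ~4 million birth-year cohort size is my assumption, and the conversion to standard deviations assumes a normal IQ distribution with mean 100 and SD 15:

```python
from statistics import NormalDist

# Assumption: roughly 4 million Americans per birth-year cohort.
cohort = 4_000_000
rarity = 1 / 10_000

print(cohort * rarity)  # -> 400.0 people per age at the 1-in-10k level

# The same rarity expressed in standard deviations and IQ points
# (assuming a normal distribution with mean 100, SD 15):
z = NormalDist().inv_cdf(1 - rarity)
print(round(z, 2))          # -> 3.72 SD above the mean
print(round(100 + 15 * z))  # -> about 156
```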
Bill Gates also famously dropped out of college. Does that warrant a Bayesian update, too? (Peter Thiel probably thinks so :-D)
Yeah, it’s an update in the direction of it being a good idea for people of his demonstrated ability by that age and with his ambitions to drop out.
Yeah, it’s an update in the direction of it being a good idea for people of his demonstrated ability by that age
Well, let’s look at Mark Zuckerberg… and we’re updating right back to where we started. And there is Sergey Brin… oh, now we’re updating in a different direction?
Every example (Brin, Gates, Zuckerberg) should inform the implicit statistical model that we create: every time we learn about one of them, we should update our model. If you don’t do that, you’re not fully utilizing the evidence available to you! ;-) Also, the model isn’t just of “is this a good idea or isn’t it?”; what we’re doing implicitly is determining probability distributions… And factors specific to individuals matter; the update is just of the type “all else being equal, this now looks like a better idea.”
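As a toy illustration of updating a distribution rather than flipping a yes/no verdict, here is a minimal Beta-Bernoulli sketch; the outcome codes (1 = dropping out worked out) are invented purely for illustration, e.g. coding Brin as 0 because he was in a Stanford PhD program rather than an undergrad dropout:

```python
# Toy model: P(dropping out is a good idea) with a uniform Beta(1, 1) prior.
# Each famous example is treated as one observation; outcomes are invented.
alpha, beta = 1, 1
for name, outcome in [("Gates", 1), ("Zuckerberg", 1), ("Brin", 0)]:
    alpha += outcome          # successes push the posterior up
    beta += 1 - outcome       # failures push it down
    print(f"after {name}: posterior mean = {alpha / (alpha + beta):.2f}")
```

The posterior mean moves with every example, in whichever direction that example points, rather than jumping between “good idea” and “bad idea.”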
Every example … should inform the implicit statistical model that we create: every time we learn about one of them, we should update our model. If you don’t do that, you’re not fully utilizing the evidence available to you!
This is a popular banner to fly at LW. I don’t agree with it.
The problem is that “evidence available to us” is vast. We are incapable of using all of it to update our models of the world. We necessarily select evidence to be used for updating—and herein lies the problem.
Unless your process of selecting evidence for updating is explicit, transparent, and understood, you run a very high risk of falling prey to some variety of selection bias. And if the evidence you pick is biased, so will your model be.
There is a well-known experimental result: when people are asked to name some random numbers, the numbers they name are not very random, to the surprise of no one at LW. In exactly the same way, you may think that you’re updating on randomly selected pieces of evidence and that the randomness should protect you from bias. I am afraid that doesn’t work.
I would update on evidence which I have reason to believe is representative. Updating on cherry-picked (even unconsciously) examples is worse than useless.
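To make the selection-bias point concrete, a minimal sketch with made-up numbers: when only the successes are visible to you, the “evidence available” points at a rate that has nothing to do with the true one.

```python
import random

random.seed(0)

# Hypothetical population: 1% of dropouts succeed (1), 99% don't (0).
population = [1] * 100 + [0] * 9_900

# (a) A genuinely random sample recovers something near the true 1% rate.
sample = random.sample(population, 1_000)
print(sum(sample) / len(sample))

# (b) "Evidence available to us": only the famous successes come to mind.
famous = [x for x in population if x == 1][:5]
print(sum(famous) / len(famous))  # -> 1.0, pure selection bias
```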
Ok, so I have to put more work into externalizing my intuitions, which will probably take dozens of blog posts. It’s not as though I haven’t considered your points: again, I’ve thought about these things for 10,000+ hours :-). Thanks for helping me to understand where you’re coming from.
I’m a bit confused about the point that you are trying to make here. As far as I can see there is nothing in this about social class in the traditional meaning of the phrase. It’s about your view that people who study quantitative subjects (rather than poor benighted arts students like me) do better in “business”, make more money, and become more successful.
You’ve cited some examples of people who, it is undeniable, are successful, but who also happen to fit your argument. But equally there are many successful businesspeople who did not study maths/CS/physics (John Paulson, the hedge fund manager, started at NYU doing film studies), and there are many examples of people with qualifications that you would probably argue show them to be intellectually gifted who have completely failed in business (the example par excellence here is Long-Term Capital Management, stuffed full of PhDs from top schools).
A key question in this whole discussion is how to define success. If you are just using money to keep score, then consider Tom Cruise, who didn’t attend any university and is worth around half a billion.
You’ve cited some examples of people who, it is undeniable, are successful, but who also happen to fit your argument. But equally there are many successful businesspeople who did not study maths/CS/physics
Cherry-picking examples is not a good way to go.
If only there were some way of quantifying this.
nice link
I’m not making a point. I do have responses to some of what you say, which I’ll be writing about later.