Every example (Brin, Gates, Zuckerberg) should inform the implicit statistical model that we create: every time we learn about one of them, we should update our model. If you don’t do that, you’re not fully utilizing the evidence available to you! ;-). Also, the model isn’t just of “is this a good idea or isn’t it?”; what we’re doing implicitly is determining probability distributions… And factors specific to individuals matter; the update is just of the type “all else being equal, this now looks to be a better idea.”
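The kind of update described above can be sketched with a toy Beta-Binomial model. This is purely illustrative: treating “is this a good idea?” as an unknown success probability, and each founder example as one observed success, is my own simplifying assumption, not a claim about the real base rates.

```python
# Hedged sketch: model the unknown "success probability" p of an idea
# with a Beta(alpha, beta) prior, and treat each founder example
# (Brin, Gates, Zuckerberg) as one observed success.
# All numbers here are illustrative assumptions, not real data.

alpha, beta = 1.0, 1.0  # uniform prior over p

for example_is_success in [True, True, True]:  # three successful founders
    if example_is_success:
        alpha += 1  # conjugate update: a success increments alpha
    else:
        beta += 1   # a failure would increment beta

posterior_mean = alpha / (alpha + beta)
print(posterior_mean)  # 4 / 5 = 0.8 after three successes on a uniform prior
```

Each example shifts the whole posterior distribution, which matches the “all else being equal, this now looks to be a better idea” framing: the update is incremental, not a binary verdict.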
Every example … should inform the implicit statistical model that we create: every time we learn about one of them, we should update our model. If you don’t do that, you’re not fully utilizing the evidence available to you!
This is a popular banner to fly at LW. I don’t agree with it.
The problem is that “evidence available to us” is vast. We are incapable of using all of it to update our models of the world. We necessarily select evidence to be used for updating—and herein lies the problem.
Unless your process of selecting evidence for updating is explicit, transparent, and understood, you run a very high risk of falling prey to some variety of selection bias. And if the evidence you picked is biased, so would be your model.
There is a well-known experimental result: when people are asked to name some random numbers, the numbers they name are, to the surprise of no one at LW, not very random. In exactly the same way, you may think that you’re updating on randomly selected pieces of evidence and that the randomness should protect you from bias. I am afraid that doesn’t work.
I would update on evidence which I have reason to believe is representative. Updating on cherry-picked (even unconsciously) examples is worse than useless.
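The gap between representative and cherry-picked evidence can be made concrete with a small simulation. This is a minimal sketch under assumed numbers (a 10% true success rate, and a cherry-picking process that only surfaces successes); the specific figures are mine, not from the discussion above.

```python
import random

# Hedged sketch of the selection-bias worry: estimate a success rate
# from (a) a representative sample and (b) a sample in which only
# successes get noticed (survivorship-style cherry-picking).
# All numbers are illustrative assumptions.

random.seed(0)
true_rate = 0.1  # assume 10% of ventures succeed
population = [random.random() < true_rate for _ in range(100_000)]

representative = random.sample(population, 1000)          # unbiased draw
cherry_picked = [x for x in population if x][:1000]       # only the successes

est_representative = sum(representative) / len(representative)
est_cherry = sum(cherry_picked) / len(cherry_picked)

print(est_representative)  # close to the true 0.1
print(est_cherry)          # 1.0 — the biased sample is maximally misleading
```

The representative sample recovers something near the true rate; the cherry-picked one is not merely noisy but systematically wrong, which is why updating on it can be worse than not updating at all.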
Ok, so I have to put more work into externalizing my intuitions, which will probably take dozens of blog posts. It’s not as though I haven’t considered your points: again, I’ve thought about these things for 10,000+ hours :-). Thanks for helping me to understand where you’re coming from.