Yvain’s argument was that “x-rationality” (roughly the sort of thing that’s taught in the Sequences) isn’t practically helpful, not that nothing is. I certainly have read lots of things that have significantly helped me make better decisions and have a better map of the territory. None of them were x-rational. Claiming that x-rationality can’t have big effects because the world is too noisy just seems like another excuse for avoiding reality.
I certainly have read lots of things that have significantly helped me make better decisions and have a better map of the territory.
What effect size, assessed how, against what counterfactuals? If it’s just “I read book X, thought about it when I made decision Y, and estimate that decision Y was right,” we’re in testimonial land, and there are piles of those for both epistemic and practical benefits (though far more epistemic than practical). Unfortunately, testimonials aren’t very reliable. I was specifically talking about non-testimonial evidence, e.g. aggregate effects measured against control groups or reference populations, in order to focus on easily transmissible data.
Claiming that x-rationality can’t have big effects because the world is too noisy just seems like another excuse for avoiding reality.
Imagine that we take the best general epistemic heuristics we can find today and send them back in book form to someone from 10 years ago. What effect size do you think they would have on income or academic productivity? What about 20 years? 50 years? Conditional on someone assembling, with some additions, a good set of heuristics, what’s your distribution of effect sizes?