Hi Michael,
Thanks for writing this! I’m glad to see my post getting engagement, and I wish I’d joined the discussion here sooner.
I feel like my argument got strawmanned (and I don’t think you did that intentionally). I fully agree with this bit:
“Methods like the above will result in better probability estimates than if we acted as though we knew nothing at all.”
I think it’s entirely reasonable for someone to say: “I feel safe walking out the door because I think there’s an extremely low probability that Zeus will strike me down with a thunderbolt when I walk outside.”
What I object to is the idea that reasonable people do (or in some sense ought to) make sense of all uncertainty in terms of probability estimates. I think combining hazy probability estimates with tools like probabilistic decision theory will generally have bad consequences.
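To illustrate the kind of failure I have in mind, here’s a toy expected-value calculation (purely made-up numbers, not drawn from your post or mine): the “recommendation” flips entirely depending on which hazy probability we happen to write down.

```python
# Toy illustration (made-up numbers): how a naive expected-value decision
# can hinge on a probability estimate we barely trust.

def expected_value(p_bad, value_if_fine, value_if_bad):
    """Naive expected value of an action with a small chance of a bad outcome."""
    return (1 - p_bad) * value_if_fine + p_bad * value_if_bad

stay_home_value = 0.0      # baseline option
go_outside_value = 1.0     # small everyday benefit of going out
bad_outcome_value = -1e5   # large (made-up) loss if the bad outcome occurs

# "Hazy" estimates: we think the probability is tiny, but our credence could
# reasonably span several orders of magnitude.
for p in [1e-3, 1e-6, 1e-9]:
    ev = expected_value(p, go_outside_value, bad_outcome_value)
    decision = "go outside" if ev > stay_home_value else "stay home"
    print(f"p = {p:.0e}: EV = {ev:8.3f} -> {decision}")

# At p = 1e-3 the calculation says stay home; at 1e-6 or 1e-9 it says go
# outside. The answer flips within the range of numbers someone might
# plausibly write down, which is why I worry about feeding hazy estimates
# into this kind of machinery.
```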
I very much agree with Dagon’s comment:
Models are maps. There’s no similarity molecules or probability fields that tie all die rolls together. It’s just that our models are easier (and still work fairly well) if we treat them similarly because, at the level we’re considering, they share some abstract properties in our models.
Hi Chris :)
Sorry you felt your argument got strawmanned. Perhaps I should’ve been clearer about the relationship between this post and yours. Basically, the Kyle example in your post prompted me to consider the questions in this post, and to see whether I could test the idea of assigning and making sense of probabilities against cases like those. The example in your post was more of a jumping-off point, which I decided to take as throwing down a gauntlet that I could test my own ideas against. I didn’t see this post as focusing on arguing against any core claims in yours.
This is part of why I actually mention you and your post only a couple of times, and don’t say things like “I think Smith is wrong”, but rather “I think ultimately we can make sense of Kyle’s probability estimate, and that Kyle can have at least some grounding for it.” That doesn’t directly conflict with what you say in that quote: you say you don’t know what sense to make of Kyle’s estimate, not that we can’t make sense of it. I suggest one way we could make sense of estimates in situations like that one, though I note that that’s probably not what people are actually doing.
As for what seems to me to be the core claim of your post, and definitely the claim you emphasise in this comment, the main thing I say in this post is:
I’m not claiming we should be confident in these probabilities, and in fact, I expect many people should massively reduce their confidence in their probability estimates. I’m also not claiming that the probabilities people actually assign are reliably better than chance—that’s an empirical question, and again there’d likely be issues of overconfidence.
I think that can be consistent with your view, though it doesn’t take a strong stance on it.
Does this clear up how I see the relationship between your post and this one? Or is there something that still feels strawmanny in here?
In any case, I did write a different post that actually does (sort of, tentatively, and with some nuance) disagree in one section with what I think you were arguing for in your optimizer’s curse post. (Though it also accepts some of your claims as true and important, and it was because of your post that I learned about the optimizer’s curse, so I’m grateful for that.) I’d be quite interested in your thoughts on that post (regarding how I captured your view, what you think about my view in that section, and, to be honest, the rest of the post too). The post is here.
(Also, as a meta point, a major goal with most of what I write is to try to capture as clearly as possible what I think is true, and then see what people say about it, so that I can learn from that myself. I suspect I’ll always do this, but it’s especially the case at the moment, as I’m relatively new to EA and don’t have a background in economics, decision theory, philosophy, etc. This also means there’ll probably be a positive correlation between 1) the length of my comments/posts somewhat disagreeing with someone and 2) how clever that person seems to me and how much they seem to have thought things through, even if I currently disagree with them. That’s because those are the people I suspect I’d be most likely to learn from interacting with.)