It’s embarrassing that I was confidently wrong about so many things in the same domain. I’ve updated towards thinking that microeconomics is trickier than other similarly simple-seeming subjects like physics, math, or computer science. I think the misconceptions above are more serious than any I’ve discovered about other technical fields over the last few years.
For some of these, I’m confused about your conviction that you were “confidently wrong” before. It seems that the general pattern here is that you used the Econ 101 model to interpret a situation, and then later discovered that there was a more complex model that provided different implications. But isn’t it kind of obvious that for something in the social sciences, there’s always going to be some sort of more complex model that gives slightly different predictions?
When I say that a basic model is wrong, I mean that it gives fundamentally incorrect predictions and that a model of similar complexity would provide better ones. However, at least in the cases of (3) and (4), I’m not sure I’d describe your previous models as “wrong” in this sense. And I think there’s a meaningful distinction between saying you were wrong and saying you gained a more nuanced understanding of something.
I agree that there’s some subtlety here, but I don’t think all that happened is that my model got more complex.
I’m trying to say something more like “I thought I understood the first-order considerations, but actually I didn’t”, or “I thought I understood the solution to this particular problem, but actually it had a different solution than I thought it did”. E.g., in the situations of (1), (2), and (3), I had a picture in my head of some idealized market, and I held false beliefs about what happens in that idealized market, just as I could be wrong about the Nash equilibrium of a game (see the sketch below).
I wouldn’t have included something on this list if I had just added complexity to the model in order to capture higher-order effects.
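To make the analogy concrete, here’s a minimal sketch of what “being wrong about the Nash equilibrium of a game” looks like. The game and numbers are my own illustration (a standard Prisoner’s Dilemma payoff matrix), not anything from the discussion above; the point is just that the idealized object has a definite answer one can be plainly wrong about, e.g. by confidently believing mutual cooperation is an equilibrium:

```python
from itertools import product

# payoffs[(row_action, col_action)] = (row_payoff, col_payoff)
# Hypothetical Prisoner's Dilemma numbers, chosen for illustration.
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}
actions = ["cooperate", "defect"]

def is_nash(row, col):
    """A profile is a pure-strategy Nash equilibrium iff neither player
    can gain by unilaterally deviating from it."""
    row_payoff, col_payoff = payoffs[(row, col)]
    row_ok = all(payoffs[(r, col)][0] <= row_payoff for r in actions)
    col_ok = all(payoffs[(row, c)][1] <= col_payoff for c in actions)
    return row_ok and col_ok

equilibria = [p for p in product(actions, actions) if is_nash(*p)]
print(equilibria)  # [('defect', 'defect')] -- not mutual cooperation
```

Someone who believed the equilibrium was mutual cooperation wasn’t using too simple a model; they held a false belief about a fully specified model, which the check above refutes. That’s the sense of “wrong” I mean.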
Agreed. It makes more sense to be proud of changing your mind when that means acquiring a model of similar or lower complexity that makes better predictions, rather than merely making your model more complex.