I agree that there’s some subtlety here, but I don’t think that all that happened is that my model got more complex.
I think I’m trying to say something more like “I thought that I understood the first-order considerations, but actually I didn’t,” or “I thought that I understood the solution to this particular problem, but actually that problem had a different solution than I thought it did.” E.g., in situations 1, 2, and 3, I had a picture in my head of some idealized market, and I had false beliefs about what happens in that idealized market, just as I could be wrong about the Nash equilibrium of a game.
I wouldn’t have included something on this list if I had just added complexity to the model in order to capture higher-order effects.