I don’t see the purpose of such thought experiments as modeling reality (we’ve already got a perfectly good actual reality for that), but as simplifying it. Hypothesizing omnipotent beings and superpowers may not seem like simplification, but it is in one key respect: it reduces the number of variables.
Reality is messy, and while we have to deal with it eventually, it’s useful to consider simpler, more comprehensible models first, then gradually reintroduce complexity once we understand how the simpler system works. So the thought experiments arbitrarily set certain variables (such as predictive ability) to 100% or 0%, simply to remove them from consideration.
This does give a fundamentally unrealistic situation, but that’s really the point: they are our equivalent of spherical cows. Dealing with all those variables at once is too hard. When it isn’t, and there are “real” situations we can fruitfully consider, there’s no need for the thought experiment in the first place. And once we understand the simpler system, we have somewhere to start from when we add the complexity back in.
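To make that concrete, here’s a minimal sketch (Python, written for this comment; the Newcomb-style predictor and the conventional $1,000,000/$1,000 payoffs are my choice of illustration, not anything from the discussion above). Pinning the predictor’s accuracy at 1.0 is the thought experiment; sweeping it afterward is the “adding the complexity back in” step:

```python
import random

def payoff(one_boxes, accuracy):
    """One round of a Newcomb-style game: the predictor fills the
    opaque box ($1,000,000) only if it predicts one-boxing, and its
    prediction is correct with probability `accuracy`."""
    predicted_one_box = one_boxes if random.random() < accuracy else not one_boxes
    opaque = 1_000_000 if predicted_one_box else 0
    # Two-boxers also take the transparent $1,000.
    return opaque if one_boxes else opaque + 1_000

def expected(one_boxes, accuracy, trials=100_000):
    return sum(payoff(one_boxes, accuracy) for _ in range(trials)) / trials

# The thought experiment: accuracy pinned at 1.0, so it stops being a
# variable at all. One-boxing wins by three orders of magnitude.
print(expected(True, 1.0), expected(False, 1.0))

# Adding the complexity back in: let accuracy vary and watch where the
# comparison flips (around accuracy = 0.5005 for these payoffs).
for acc in (1.0, 0.99, 0.9, 0.75, 0.5):
    print(acc, expected(True, acc), expected(False, acc))
```

With accuracy pinned, the conclusion is clean; with accuracy as a free variable, the answer becomes a messier function of the parameter, which is exactly the messiness the original simplification was buying its way out of.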
Models are also dangerously seductive. You’re gaining precision at the expense of correspondence to reality, which can only be a temporary trade-off if you’re ever going to put your knowledge to work.
I object most strongly to modeling as used in economics. There, modeling is no longer about getting traction on difficult concepts: building these stylized models has become a goal in and of itself, and mathematical formalization is almost a prerequisite for getting published in a major journal.
I don’t see the purpose of such thought experiments as being to model reality (we’ve already got a perfectly good actual reality for that), but to simplify it.
You seem to misunderstand what models are for. A model is not the actual thing; we do not say, “Why did you build a scale model of the solar system? We have the actual solar system for that!” Instead, models always leave something out: they abstract away the details we don’t think are important, to simplify thinking about the problem.
Other than that, I agree.