Generally (and therefore somewhat inaccurately) speaking, one way our brains seem to handle the sheer complexity of computing in the real world is a tendency to simplify the information we gather.
In many cases these sorts of extremely simple models didn’t start that way. They may have started with more parameters and complexity. But as they are repeated, explained, and applied, a model becomes, in effect, simpler. The example begins to represent the entire model, rather than serving to show only a piece of it.
Technically, the exponential decay model for the radioactivity of a mixture has most of the pieces you describe fairly directly. But that hardly means they will be appropriately applied, or even available, when we are thinking of how to use the model. We need to fight the simplification effect to make our models more nuanced and detailed, even though they are still almost certainly lossy compressions of the facts, observations, and phenomena they were built from.
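As a minimal sketch of how the full mixture model and its simplified single-exponential stand-in come apart, consider a hypothetical two-isotope mixture (the activities and half-lives below are invented for illustration, not taken from any real data): a single exponential fitted to the early observations tracks the mixture closely at first, then diverges by orders of magnitude.

```python
import math

# Hypothetical two-isotope mixture: activities in arbitrary units,
# half-lives in years. All numbers are illustrative, not measured data.
components = [
    (1000.0, 8.0),     # short-lived component: dominates early activity
    (10.0, 30000.0),   # long-lived component: negligible early, dominant late
]

def activity(t):
    """Total activity of the mixture at time t (a sum of exponentials)."""
    return sum(a0 * 0.5 ** (t / half_life) for a0, half_life in components)

# A naive single-exponential model fitted to the first two observations:
# the 'simplified' model the early data seems to justify.
t0, t1 = 0.0, 8.0
lam = math.log(activity(t0) / activity(t1)) / (t1 - t0)

def simple(t):
    """Single exponential extrapolated from the two early observations."""
    return activity(t0) * math.exp(-lam * t)

for t in (0, 8, 80, 800, 8000):
    print(f"t={t:>5}: mixture={activity(t):10.4g}  single-exp={simple(t):10.4g}")
```

Running this, the two models agree at the fitted points, are roughly a factor of ten apart by t=80, and hundreds of orders of magnitude apart by t=8000, with the simplified model always underestimating the remaining activity.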
On the other hand, the simplification serves its purpose too: if we devoted unlimited cognitive resources to a model, we would risk being unable to actually reach a decision from it.
So pretty much, this: http://en.wikipedia.org/wiki/Medawar_zone
No. The Medawar zone is more about scientific discoveries as marketable products to the scientific community, not about the cultural and cognitive pressures within those communities that affect how those products are used as they become adopted.
Different phenomena, although there are almost certainly common causes.
If the errors were a few percent, randomly up or down, it wouldn’t matter. But the inaccuracy is not tiny: over long timescales it is many orders of magnitude, and almost always in the same direction; growth and decay are slower over the long term than exponential models predict.
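One way to see why the bias points the same way every time: any mixture of exponential decays is heavier-tailed than a single exponential fitted to its early behavior. The sketch below assumes, purely for illustration, that the decay rates across components are Gamma-distributed, which gives a closed-form survival curve to compare against the best-matching single exponential.

```python
import math

# A mixture of exponential decays whose rates are Gamma(k, theta)-distributed
# has the closed-form survival curve
#   S(t) = (1 + theta * t) ** (-k)
# which is heavier-tailed than any single exponential. The parameters are
# illustrative, not drawn from the original discussion.
k, theta = 2.0, 1.0

def mixture_survival(t):
    """Survival of the Gamma-rate mixture (a power-law tail)."""
    return (1.0 + theta * t) ** (-k)

def exp_survival(t):
    """Single exponential matched to the mixture's instantaneous decay
    rate at t=0, which is k * theta."""
    return math.exp(-k * theta * t)

for t in (1, 10, 100):
    ratio = mixture_survival(t) / exp_survival(t)
    print(f"t={t:>4}: mixture/exponential = {ratio:.3g}")  # always > 1
```

The ratio grows without bound, and it is always greater than one: the fitted exponential never overestimates the tail, which matches the claim that the error runs in one direction.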
Oh yes, but it’s not just a predilection for simple models in the first place; it is also a tendency to culturally and cognitively simplify the model we actually access and use, even if the original model had extensions to handle the case at hand, and even at the cost of orders of magnitude of error.
Of course, sometimes it may be worth computing, in a very short amount of time, an estimate that is (unknown to you) orders of magnitude off. Certainly if the impact of the estimate is delayed and subtle, less conscious trade-offs may factor in between the cognitive effort of accessing and using a more detailed model and the consequences of error. Yet another form of akrasia.