I upvoted this partly because it was really well written and I would love to see more articles of this caliber.
As for the topic… I guess I don’t disagree on any particular point, and I think the insights are good to note. Personally, I seemed to head in the opposite direction when faced with this problem:
One might grant that while responding that running models in head-land is nevertheless the best predictor of real-land events that any individual has. And that’s true, but it doesn’t change our apparent tendency to place far more trust in our head-land models than their dismal accuracy could ever warrant.
Instead of throwing out the head-land models and simulations as unhelpful, I look for ways to make them more accurate. The success of specific head-land models is relatively easy to measure: did the predictions occur? The solution is two-pronged: look for better accuracy and ditch the apparent accuracy bias.
The danger of head-land catastrophes that poison real-land endeavors looms over every step of the path. The possibility of being metaphorically laughed out of the classroom, though probably only illusory to begin with, never quite leaves one’s mind.
Agreed; a major obstacle to measuring the success of head-land predictors comes when the predictions themselves affect the outcome of real-land. Namely, both fear of failures imagined and the relaxing opium of daydreams.
In my experience, it is possible to shoo the metaphorical laughter away. Furthermore, it is possible to let the head-land simulations run and remain emotionally abstracted from the results. Instead of responding to imagined failure with real-land fear, forge onward with the intent of measuring the success of your head-land.
Fearing head-land failures to the degree of not acting in the real world truly is poison. But shutting off our best predictor because it may inaccurately predict failure seems like letting a valuable tool fall away. It is better not to simulate than not to act, but is it not possible to increase our accuracy?
I suppose my point can be boiled down to this: my head-land has been known to guess correctly. Are these successes a false pattern, or are they evidence of a talent that can be honed into something useful to my real-land self? My head-land is telling me the latter.
Furthermore, it is possible to let the head-land simulations run and remain emotionally abstracted from the results.
This is wise. Getting the necessary distance would indeed work, as would improving head-land accuracy, though I’m dubious about the extent to which it can be improved. In any case, I’m not quite at either goal myself yet. And if your own head-land is making accurate predictions, that’s a good thing; I just can’t get those kinds of results out of mine. Yet.
Another random comment: Head-land is large and can be split into distinct patterns of behavior. Simulations about potential mates are probably going to be on different emotional circuits than strategizing about chess. (Unless, of course, you play chess differently than I do...) My hunches tell me that the chess simulations are going to be a little more accurate.
Rationality certainly helps when testing the accuracy of head-land. My math teacher used to warn me about turning my brain off when working through math problems: if the answer didn’t make intuitive sense, I should check my work for bizarre mistakes. It turns out my head-land simulation of basic math problems is relatively accurate. Knowing its level of accuracy is an excellent tool for determining whether we’re in the wrong jungle.