I think this is closer to what’s really going on than the rest of the thread. Most people do not understand that “Heart type II plus exercise causes pain tolerance change” does not imply that they can cause themselves to have a different heart type by altering their pain tolerance after exercise!
On some level—conscious or otherwise—they believe the uncertainty exists in the territory, not the map. Then they seek to change reality by cheating the evidence.
Or, alternatively, they know that they will feel better after the test if the test tells them that they have a healthier heart, so they act in such a way as to get that internal reward.
Human reward mechanisms just aren’t set up properly; in this case, the internal reward isn’t for living longer, it’s for passing the test, so you try to pass the test. In some ways, it’s similar to a standardized test in school: your actual aptitude or intelligence doesn’t change based on how hard you try on the test, but you try hard anyway, because your reward mechanisms aren’t keyed to your intelligence per se, they’re keyed to the test score, so that’s what you try to maximize.
After posting, I realized this could be interpreted as condemning the one-box approach to Newcomb’s problem. I think Newcomb’s is a different story, but I’m having difficulty identifying why. Perhaps causality under Newcomb’s feels different because it involves a zero-uncertainty map.
(I dislike Newcomb’s anyway. But I do one-box.)