If humans are bad at mental arithmetic, but good at, say, not dying—doesn’t that suggest that, as a practical matter, humans should try to rephrase mathematical questions into questions about danger?
E.g. imagine stepping across a field crisscrossed by dangerous laser beams arranged in a prime-number pattern, to get something valuable. I think someone who had a realistic fear of the laser beams, and a realistic understanding of the benefit of that valuable thing, would slow down and/or stop stepping onto suspicious spots.
Quantifying is ONE technique, and it’s been used very effectively in recent centuries—but those successes were inside a laboratory / factory / automation structure, not in an individual-rationality context.
If humans are bad at mental arithmetic, but good at, say, not dying—doesn’t that suggest that, as a practical matter, humans should try to rephrase mathematical questions into questions about danger?
I don’t think this would help at all. Humans have some built-in systems for responding to danger that is shaped like a tiger or a snake, or to learned stimuli, like when I see a patient go into a lethal arrhythmia on the heart monitor. This programmed response to danger pumps you full of adrenaline and makes you very motivated to run very fast, or to work very hard at some skill you’ve practiced over and over. Elite athletes perform better under the pressure of competition; beginners perform worse.
An elite mathematician might do math faster if they felt they were in danger, but an elite mathematician is probably motivated to do mental arithmetic in the first place. I place around 95% confidence that a generic bad-at-mental-arithmetic human would perform worse if they felt they were in danger than if they were in a safe classroom environment. If a patient is in cardiac arrest, I’m incredibly motivated to do something about it, but I don’t trust my brain with even the simplest mental arithmetic. (Which is irritating, actually.)
This also doesn’t address the reward part of your scenario, the “something valuable” at the end of the road. Without the danger, or with some mild thrill-adding danger, this might be a workable idea.
Elite athletes perform better under the pressure of competition; beginners perform worse.
Can you rule out the selection effect? (Which would be that people who don’t happen to perform better under the pressure of competition, don’t become elite athletes.)
#necro, but yes. Longitudinal observation shows that the same people perform worse under pressure as amateurs, and better under pressure once they reach the elite level.