You cannot precommit “no matter what” in real life. If you are an agent at all—if your variable appears in the problem—then you can renege on your precommitment, even at the cost of a terrible punishment. (Usually, though, the punishment stays on the same order of magnitude as the importance of the choice, which keeps the choice non-obvious—possibly the rulemaker’s tribute to human scope insensitivity. Not that even this condition is strictly necessary, since people regularly fail to realise the most predictable and immediate consequences of their actions. “X sounded like a good idea at the time”, even if X is carjacking a bulldozer.)
This is not a problem of IQ.