Right, good point about revealed reflective inconsistency. I’d guess that repeated experiments would turn any two-boxer into a one-boxer pretty quickly, provided the person actually cares about the payoff rather than about making a point (as Asimov supposedly would have, as quoted by William Craig in this essay pointed out by Will Newsome). And those who’d rather make a point than make money can be weeded out by punishing predicted two-boxing sufficiently harshly.