I agree, if the accuracy were high and there was a chance for learning. It would also be interesting to ask those who favor two-boxing how they think their views would evolve if they repeatedly experienced such situations. Some may find they are not reflectively consistent on the point.
Right, good point about revealed reflective inconsistency. I'd guess that repeated experiments would turn any two-boxer into a one-boxer pretty quickly, provided the person actually cares about the payoff rather than about making a point, as Asimov supposedly would have, as quoted by William Craig in this essay (pointed out by Will Newsome). And those who'd rather make a point than make money can be weeded out by punishing predicted two-boxing sufficiently harshly.
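To make the "repeated experiments" intuition concrete, here is a minimal sketch (my own illustration, not from either comment) that simulates repeated Newcomb-style rounds. It assumes a predictor with 90% accuracy and the standard $1,000,000 / $1,000 payoffs; the cumulative gap is what a payoff-motivated two-boxer would presumably notice.

```python
import random

# Assumed parameters for illustration only: a 90%-accurate predictor
# and the standard Newcomb payoffs.
ACCURACY = 0.9
BIG, SMALL = 1_000_000, 1_000
ROUNDS = 100

def play(one_box: bool) -> int:
    """Payoff for one round given the player's choice."""
    # The predictor guesses the choice correctly with probability ACCURACY.
    predicted_one_box = one_box if random.random() < ACCURACY else not one_box
    # The opaque box is filled only if one-boxing was predicted.
    opaque = BIG if predicted_one_box else 0
    return opaque if one_box else opaque + SMALL

random.seed(0)
one_box_total = sum(play(True) for _ in range(ROUNDS))
two_box_total = sum(play(False) for _ in range(ROUNDS))
print(f"one-boxing over {ROUNDS} rounds: ${one_box_total:,}")
print(f"two-boxing over {ROUNDS} rounds: ${two_box_total:,}")
```

With these assumed numbers, one-boxing averages roughly $900,000 per round versus about $101,000 for two-boxing, so the feedback from even a handful of rounds is hard to miss.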