Peter Baumann suggests that this isn’t really a problem, because Pascal’s probability that the mugger is honest should shrink in proportion to the amount of utility he is being promised.
If you have a nonzero probability that the mugger can produce arbitrary amounts of utility, that probability is fixed, and the mugger just has to offer enough utility to outweigh its smallness. So this defense doesn’t work.
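To make the scaling concrete, here is a minimal sketch; the values of epsilon and the cost are made-up illustrations, not estimates anyone has defended:

```python
def expected_gain(promised_utility: float, epsilon: float, cost: float) -> float:
    """Expected utility of paying, given a fixed credence `epsilon` that
    the mugger really can deliver, and a certain `cost` of paying."""
    return epsilon * promised_utility - cost

EPSILON = 1e-20  # fixed, nonzero credence in the mugger's powers
COST = 5.0       # utility lost by handing over the wallet

# For any fixed EPSILON > 0, a big enough promise tips the balance:
for promise in (1e6, 1e21, 1e30):
    print(f"{promise:.0e}: {expected_gain(promise, EPSILON, COST):+.2e}")
# 1e+06 -> negative: decline
# 1e+21 -> positive: the mugger wins
# 1e+30 -> hugely positive
```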
Edit: I guess you already said this.
Right, that was pretty much my counter-argument to his suggestion.
The counter-counter argument is then that you should indeed assign a zero probability to anyone’s ability to produce arbitrary amounts of utility.
Yes, I know it is rhetorically claimed that 0 and 1 are not probabilities. I suggest that this example refutes that claim. You must assign zero probability to such things, otherwise you get money-pumped, and lose.
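A toy illustration of the pump (all the numbers here are invented for the example): once your credence in each mugger’s powers is any fixed epsilon > 0, every mugger can name a promise large enough that paying has positive expected utility, and the policy “pay whenever expected utility is positive” empties your wallet:

```python
EPSILON = 1e-20  # fixed, nonzero credence in each mugger's powers
FEE = 5.0        # what each mugger asks for
wallet = 100.0

while wallet >= FEE:
    promised = 2 * FEE / EPSILON      # mugger names a big enough promise
    if EPSILON * promised - FEE > 0:  # true by construction, every time
        wallet -= FEE                 # ...so the policy says: pay
print(wallet)  # 0.0 -- pumped dry, with nothing to show for it
```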
Well, as someone else suggested, you could just ignore all probabilities below a certain noise floor. You don’t necessarily have to assign them probability 0; you could just make it a heuristic to ignore them.
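As a sketch of what that heuristic might look like (the floor value and the outcome representation are my own assumptions):

```python
NOISE_FLOOR = 1e-10  # arbitrary illustrative threshold

def expected_utility(outcomes):
    """Expected utility over (probability, utility) pairs, simply
    dropping any branch whose probability is below the noise floor,
    without assigning it credence 0."""
    return sum(p * u for p, u in outcomes if p >= NOISE_FLOOR)

# The mugger's branch falls below the floor and drops out of the sum:
print(expected_utility([(0.999, 0.0), (1e-20, 1e30)]))  # -> 0.0
```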
All that does is adopt a different decision theory without calling it that, sidestepping the requirement to formalise and justify it. It’s a patch, not a solution, like solving FAI by saying we can just keep the AI in a box.