Not a money pump unless there’s some path back to “trust me enough that I can extort you again”, but whether such a path exists is unlikely to be related to the ethical framework.
However, I have no clue why you think it’s NECESSARILY a bigger problem for consequentialists than for deontologists. Depending on the consequentialist’s utility function and the deontologist’s actual ruleset and priorities, it could be a bigger, smaller, or equal problem.
I don’t understand this. Why would paying out to an extortionist once make you disbelieve them when they threaten you a second time?
You may still believe they will (try to) kill you if you don’t pay. What you stop believing the second time is that they will not kill you if you do pay.
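To make that belief-update point concrete, here is a minimal two-round sketch in Python (the probabilities and payoffs are made-up assumptions, not anything claimed in this thread). A naive expected-utility maximizer pays the first time, but once the belief that paying buys safety collapses, paying no longer beats refusing and the “pump” stalls, matching the point that repeat extortion needs some path back to trust:

# Illustrative model of repeated extortion against a naive
# expected-utility maximizer. All numbers are made up.

VALUE_OF_LIFE = 1_000_000  # utility lost if the threat is carried out
DEMAND = 1_000             # money handed over per round

def eu_pay(p_safe_if_pay):
    # Paying always costs the demand; with probability (1 - p_safe_if_pay)
    # the extortionist kills you anyway.
    return -DEMAND - (1 - p_safe_if_pay) * VALUE_OF_LIFE

def eu_refuse(p_killed_if_refuse):
    return -p_killed_if_refuse * VALUE_OF_LIFE

P_KILLED_IF_REFUSE = 0.5

# Round 1: the victim believes paying almost certainly buys safety.
print("round 1, pay?", eu_pay(0.99) > eu_refuse(P_KILLED_IF_REFUSE))  # True

# Round 2: having been extorted once, the victim no longer believes
# paying buys safety, so paying no longer beats refusing.
print("round 2, pay?", eu_pay(0.20) > eu_refuse(P_KILLED_IF_REFUSE))  # False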
I agree that it depends on the consequentialist’s utility function (this is trivial, since ANY policy can be represented as a consequentialist with some utility function), and I agree that it depends on the deontologist’s specific constraints (e.g. they need various anti-blackmail/anti-exploitation constraints). So, I agree it’s not NECESSARILY a bigger problem for consequentialists than for deontologists.
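As an aside, here is the standard construction behind that “trivial” parenthetical, sketched in Python (the names and toy setup are mine, purely for illustration): given any policy whatsoever, the utility function that scores 1 for doing what the policy says and 0 otherwise makes that policy exactly what a utility maximizer would choose.

# Sketch: ANY policy is utility-maximizing under SOME utility function.

def rationalizing_utility(policy):
    # Utility is 1 iff the action agrees with the policy in this state.
    def utility(state, action):
        return 1.0 if action == policy(state) else 0.0
    return utility

def greedy_maximizer(utility, actions):
    # An agent that just picks the utility-maximizing action.
    def act(state):
        return max(actions, key=lambda a: utility(state, a))
    return act

# Even a rigid, "deontological-looking" policy...
always_refuse = lambda state: "refuse"

# ...is reproduced exactly by maximizing its rationalizing utility.
agent = greedy_maximizer(rationalizing_utility(always_refuse), ["pay", "refuse"])
assert agent("threatened") == "refuse"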
However, in practice I think consequentialists are going to be at bigger risk of facing this sort of money pump. I expect consequentialists to fairly quickly self-modify away from consequentialism as a result, maybe to something that looks like deontological anti-blackmail/anti-exploitation constraints, maybe to something more sophisticated. See “The Commitment Races problem” on LessWrong. Even more importantly, I don’t expect consequentialists to arise often in practice, because most creators will be smart enough not to make them.
(Terminological issue: Some people would say smart consequentialists would use acausal decision theory or some such thing that would get them out of these problems. Fair enough, but then they aren’t what I’d call a consequentialist, and now we are just in a terminological dispute. Feel free to substitute “naive consequentialist” for “consequentialist” in my first two paragraphs if you identify as a consequentialist but think there is some sort of sophisticated “true consequentialism” that wouldn’t be so easily exploitable.)
I think I’ve mostly stated my views here (that the categories “deontologist” and “consequentialist” are fuzzy and incomplete, and rarely apply cleanly to concrete decisions), so further discussion is unlikely to help. I’m bowing out—I’ll read and think upon any further comments, but probably not respond.
If the consequentialist doesn’t use any acausal decision theory, they will be more likely to pay out and thus be a better target for the “give me money, otherwise I’ll kill you” attack. If the extorted money plus the harm to reputation isn’t as bad as the threat of dying, then the consequentialist should pay out.
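As a worked version of that comparison (the numbers and the flat reputation-cost term are illustrative assumptions, not the commenter’s):

# One-shot "pay vs. refuse" comparison for a naive causal
# expected-utility maximizer. All numbers are made up.

VALUE_OF_LIFE = 1_000_000
DEMAND = 10_000
REPUTATION_COST = 50_000   # e.g. becoming a known soft target
P_KILLED_IF_REFUSE = 0.3   # credibility the agent assigns to the threat

eu_pay = -(DEMAND + REPUTATION_COST)              # -60,000
eu_refuse = -P_KILLED_IF_REFUSE * VALUE_OF_LIFE   # -300,000

# Paying wins, which is exactly what makes this agent a good target.
print("pay" if eu_pay > eu_refuse else "refuse")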