I agree that it depends on the consequentialist’s utility function (this is trivial since ANY policy can be represented as a consequentialist with a utility function) and I agree that it depends on the deontologists’ specific constraints (e.g. they need to have various anti-blackmail/exploitation constraints). So, I agree it’s not NECESSARILY a bigger problem for consequentialists than deontologists.
However, in practice I think consequentialists are going to be at greater risk of facing this sort of money pump. I expect consequentialists to fairly quickly self-modify away from consequentialism as a result, maybe to something that looks like deontological anti-blackmail/exploitation constraints, maybe to something more sophisticated. See "The Commitment Races problem" on LessWrong. Even more importantly, I don't expect consequentialists to arise often in practice, because most creators will be smart enough not to make them.
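To make the exploitation worry concrete, here is a toy simulation of the kind of money pump I have in mind. All payoffs, and the assumption that the blackmailer only keeps threatening agents it expects to pay, are illustrative stipulations of mine, not anything established above: a "naive consequentialist" re-runs the expected-value calculation each round and so pays every time, while a committed refuser eats the threatened cost once and is then left alone.

```python
# Toy money-pump model (all numbers and the blackmailer model are assumptions).
# A blackmailer repeatedly demands a small payment, threatening a larger loss
# if refused. The naive consequentialist evaluates each round in isolation and
# always pays; the committed agent never pays, and the blackmailer gives up
# after one refusal because threatening it is no longer profitable.

DEMAND = 1    # cost of complying each round
DAMAGE = 10   # one-time cost inflicted if the agent refuses

def naive_consequentialist(threatened: bool) -> bool:
    """Pay iff paying is cheaper than the threatened damage, ignoring incentives."""
    return threatened and DEMAND < DAMAGE

def committed_agent(threatened: bool) -> bool:
    """Never pay, regardless of the threat."""
    return False

def simulate(policy, rounds: int = 100) -> int:
    """Total loss over `rounds`; the blackmailer stops after a refusal."""
    loss, blackmailer_active = 0, True
    for _ in range(rounds):
        if not blackmailer_active:
            break
        if policy(True):
            loss += DEMAND            # pays up; blackmailer returns next round
        else:
            loss += DAMAGE            # takes the damage once...
            blackmailer_active = False  # ...and the blackmailer gives up
    return loss

print(simulate(naive_consequentialist))  # 100: exploited every round
print(simulate(committed_agent))         # 10: one-time cost, then left alone
```

The self-modification point above corresponds to noticing that, under these assumed payoffs, the naive policy is dominated for any horizon longer than `DAMAGE / DEMAND` rounds.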
(Terminological issue: Some people would say smart consequentialists would use acausal decision theory or some such thing that would get them out of these problems. Fair enough, but then they aren't what I'd call a consequentialist, and at that point we're just in a terminological dispute. Feel free to substitute "naive consequentialist" for "consequentialist" in my first two paragraphs if you identify as a consequentialist but think there is some sort of sophisticated "true consequentialism" that wouldn't be so easily exploitable.)
I think I’ve mostly stated my views here (that the categories “deontologist” and “consequentialist” are fuzzy and incomplete, and rarely apply cleanly to concrete decisions), so further discussion is unlikely to help. I’m bowing out—I’ll read and think upon any further comments, but probably not respond.