Consequentialists can avoid Pascal’s mugging by having bounded utility functions. If you add a deontological side-constraint implemented as “rule out every action that has a nonzero possibility of violating the constraint,” that trivially rules out every action, because zero is not a probability. So obviously that’s not how you’d implement it. I’m not sure how to implement it, but a first-pass attempt would be to rule out actions that have, say, a >1% chance of violating the constraint. A second-pass attempt would be to rule out actions that increase your credence in eventual constraint-violation-by-you by more than 1%. I have a gut feeling that both of these will turn out to be problematic somehow, so I’m excited to be discussing this!
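To make the two filters concrete, here is a minimal sketch in Python. Everything in it is a hypothetical stand-in, not a worked-out proposal: `p_violation`, `credence_after`, and the 1% threshold are just the quantities named in the paragraph above.

```python
THRESHOLD = 0.01  # the ">1%" cutoff from the text; an illustrative choice

def first_pass_ok(action, p_violation):
    """Keep the action only if its chance of violating the
    constraint is at most the threshold."""
    return p_violation(action) <= THRESHOLD

def second_pass_ok(action, credence_now, credence_after):
    """Keep the action only if it raises my credence in eventual
    constraint-violation-by-me by at most the threshold."""
    return credence_after(action) - credence_now <= THRESHOLD

def permissible(actions, p_violation, credence_now, credence_after):
    # Apply the filters before the usual expected-utility ranking;
    # the filters constrain the option set, they don't rank it.
    return [a for a in actions
            if first_pass_ok(a, p_violation)
            and second_pass_ok(a, credence_now, credence_after)]
```

Either predicate could be used on its own; the sketch applies both just to show where each would sit in the decision procedure.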
I can see two ways.

First, boring: assign bounded utilities over everything and a very large disutility to violating the constraint, such that a >1% chance of violating the constraint isn’t worth it.

Second: throw away most of the utilitarian framework and design the agent to work under rules in a limited environment; if the agent ever leaves that environment, it throws an exception and waits for your guidance.

The first is unexploitable because it’s simply a utility maximizer. The second is presumably unexploitable, because we (presumably) designed an exception for every possibility of being exploited.
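A minimal sketch of both designs, assuming illustrative numbers and a toy environment (the penalty size, `known_environment`, and the rule table are all hypothetical):

```python
# First ("boring") way: bounded utilities plus a large disutility on
# violating the constraint. All numbers are illustrative.
U_NORMAL_MAX = 1.0    # ordinary outcomes score in [0, 1]
U_VIOLATION = -150.0  # the (still bounded) penalty for a violation

def expected_utility(p_violation, u_if_ok):
    return p_violation * U_VIOLATION + (1 - p_violation) * u_if_ok

# A 1% violation chance already costs 1.5 in expectation, more than the
# best ordinary outcome can offer, so the maximizer never takes it.
assert expected_utility(0.01, U_NORMAL_MAX) < expected_utility(0.0, 0.0)

# Second way: a rule-following agent that only acts inside a whitelisted
# environment and escalates to a human otherwise.
class OutOfEnvironmentError(Exception):
    """Raised when the agent sees something its rules don't cover."""

def act(observation, known_environment, rules):
    if observation not in known_environment:
        # Outside the designed environment: throw and wait for guidance.
        raise OutOfEnvironmentError(observation)
    return rules[observation]
```

Note that bounding the utilities caps the penalty too; the trick is only that the penalty is large relative to the ordinary payoffs, which is what makes a >1% violation chance never worth taking.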
Is a consequentialist who has artificially bounded their utility function still truly a consequentialist? Likewise, if you make a deontological ruleset complicated and probabilistic enough, it starts to look a lot like a utility function.
There may still be modeling and self-image differences: the deontologist considers their choices to be terminally valuable, while the consequentialist considers them ONLY instrumental to the utility of future experiences.
Weirdly, the consequentialist DOES typically assign utility to the imagined universe-states that their experiences support, and it’s unclear why that’s all that different from the value of the experience of choosing correctly.
A consequentialist with an unbounded utility function is broken, due to Pascal’s mugging-related problems. At least that’s my opinion. See “Tiny Probabilities of Vast Utilities: A Problem for Longtermism?” - EA Forum (effectivealtruism.org)
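To see why unboundedness is the load-bearing assumption, here is the standard back-of-the-envelope arithmetic; all numbers are made up for illustration.

```python
# A mugger offers a vast payoff at a tiny credence (numbers are made up).
p_mugger = 1e-20       # credence that the mugger is telling the truth
claimed_payoff = 1e30  # the "vast utility" the mugger promises

# Unbounded utility: the vast payoff swamps the tiny probability, so the
# expected value of paying up is enormous and the agent gets mugged.
ev_unbounded = p_mugger * claimed_payoff  # 1e10

# Bounded utility: cap utility at some U_MAX, and the same offer becomes
# negligible no matter how large a payoff the mugger claims.
U_MAX = 100.0
ev_bounded = p_mugger * min(claimed_payoff, U_MAX)  # 1e-18

print(ev_unbounded, ev_bounded)
```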
I agree that any deontologist can be represented as a consequentialist, by making the utility function complicated enough. I also agree that certain very sophisticated and complicated deontologists can probably be represented as consequentialists with not-too-complex utility functions.
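One way to see the representation claim is to fold the rules into the utility function as dominating penalty terms. This is only a toy sketch: the violation predicate and the penalty size are hypothetical.

```python
RULE_PENALTY = 1e6  # chosen (arbitrarily) to dominate any ordinary payoff

def violations(history):
    # Stand-in predicate: count the agent's own rule breaches.
    return sum(1 for step in history if step["agent_broke_rule"])

def deontic_utility(history, base_utility):
    # Ordinary consequentialist value, minus a dominating term per breach:
    # a maximizer of this function behaves like a rule-follower whenever
    # keeping the rules is possible at all.
    return base_utility(history) - RULE_PENALTY * violations(history)
```

The complexity the comment gestures at lives in the `violations` predicate: spelling out exactly what counts as a breach is where the utility function gets complicated.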
Not sure if we are disagreeing about anything.