Having slept on it: I think “Consequentialism/maximization treats deontological constraints as damage and routes around them” is maybe missing the big picture; the big picture is that optimization treats deontological constraints as damage and routes around them. (This comes up in law, in human minds, and in AI thought experiments… one sign that it is happening in humans is when you hear them say things like “Aha! If we do X, it wouldn’t be illegal, right?” or “This is a grey area.”) The solution is to have some process by which the deontological constraints become more sophisticated over time, improving to match the optimization happening elsewhere in the agent. But getting this right is tricky. If the constraints strengthen too fast or in the wrong ways, they hurt your competitiveness too much. If the constraints strengthen too slowly or in the wrong ways, they eventually become toothless speed-bumps on the way to achieving the other optimization targets.