At its core, the argument appears to be “reward-maximizing consequentialists will necessarily get the most reward.” Here’s a counterexample to this claim: if you trained a Go-playing AI with RL, you are unlikely to get a reward-maximizing consequentialist. Why? There’s no reason for the Go-playing AI to think about how to take over the world or hack the computer that is running the game. Thinking this way would be a waste of computation. AIs that spend their computation on how to win within the rules of the game therefore do better.
In the same way, if you could robustly enforce rules like “turn off when the humans tell you to” or “change your goal when humans tell you to,” perhaps you end up with agents that simply follow these rules rather than agents that think “hmmm… can I get away with being disobedient?”
Both kinds of agent achieve the same reward if the rules are consistently enforced during training, and I think there are weak reasons to expect the deontological agents to be more likely to emerge.
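To make the reward-equivalence point concrete, here is a minimal toy sketch (my own illustration, with made-up names like `run_episode` and a hypothetical shutdown step; it is not from any real RL setup). If the shutdown rule is always enforced during training, a rule-following policy and a “can I get away with it?” policy collect identical reward, so the reward signal alone cannot favor the consequentialist one:

```python
def run_episode(policy, enforcement=True, steps=10):
    """Return total reward for one episode of a toy task.

    The agent earns +1 per step of task progress. Midway through,
    humans request shutdown; if enforcement holds, the episode ends
    there regardless of what the agent tries to do.
    """
    total = 0
    for t in range(steps):
        shutdown_requested = (t == 5)  # humans ask for shutdown mid-episode
        action = policy(shutdown_requested)
        if shutdown_requested and (action == "comply" or enforcement):
            return total  # episode ends; disobedience is overridden anyway
        total += 1
    return total

def deontological(shutdown_requested):
    # Follows the rule unconditionally.
    return "comply" if shutdown_requested else "work"

def consequentialist(shutdown_requested):
    # Would disobey if it could get away with it.
    return "disobey" if shutdown_requested else "work"

if __name__ == "__main__":
    # With consistent enforcement, both policies earn the same reward.
    print(run_episode(deontological))     # 5
    print(run_episode(consequentialist))  # 5
    # Only if enforcement lapses does the consequentialist pull ahead.
    print(run_episode(consequentialist, enforcement=False))  # 10
```

The sketch only illustrates the premise: as long as enforcement never lapses during training, the two policies are indistinguishable by reward, so something other than reward (e.g. simplicity or the cost of extra scheming computation) has to break the tie.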