I like this exchange and the clarifications on both sides.
Yeah, it feels like it’s getting at a crux between the “backchaining / coherence theorems / solve-for-the-equilibrium / law thinking” cluster of world models and the “OODA loop / shard theory / interpolate and extrapolate / toolbox thinking” cluster of world models.
You’re right that coherence arguments work by assuming a goal is about the future. But preferences over a single future timeslice are too specific a framing: the arguments still work if the preferences are over multiple timeslices, or an integral over time, or larger time periods that are still in the future. The argument only starts breaking down when the agent has strong preferences over immediate actions, and those preferences are stronger than any preferences over the future-that-is-causally-downstream-from-those-actions.
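To make the distinction concrete (the notation below is a sketch of mine, not something from the exchange): the cases the coherence arguments still cover are utilities that depend only on future timeslices of the trajectory, while the breakdown case is a ranking dominated by the immediate action itself.

```latex
% Cases the coherence arguments still cover: utility depends only on future
% timeslices s_1, ..., s_T of the trajectory -- a single slice, a weighted sum
% of slices, or an integral over a future time period.
\[
U = u(s_T), \qquad
U = \sum_{t=1}^{T} w_t \, u(s_t), \qquad
U = \int_{t_0}^{T} u(s_t)\, dt .
\]

% The case where the arguments start breaking down: the ranking of options is
% dominated (here, lexically) by the immediate action a_0 itself,
% regardless of how the causally-downstream futures compare.
% A_ok is an illustrative set of "acceptable" immediate actions.
\[
(a_0, \text{future}) \succ (a_0', \text{future}')
\quad \text{whenever } a_0 \in A_{\mathrm{ok}} \text{ and } a_0' \notin A_{\mathrm{ok}} .
\]
```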
Humans do seem to have strong preferences over immediate actions. For example, many people prefer not to lie, even if they think that lying will help them achieve their goals and they are confident that they will not get caught.
I expect that in multi-agent environments, there is significant pressure towards legibly having these kinds of strong preferences over immediate actions. As such, I expect that kind of structure to show up in future intelligent agents, rather than being a human-specific anomaly.
But even then it could be reasonable to model the system as a coherent agent during the times when its actions aren’t determined by near-term constraints, when longer-term goals dominate. [...] The whole point of building an intelligent agent is that you know more about the future-outcomes you want than you do about the process to get there.
I expect that agents which predictably behave in the way EY describes as “going hard” (i.e. attempting to achieve their long-term goal at any cost) will find it harder to find other agents who will cooperate with them. It’s not a binary choice between “care about process” and “care about outcomes”—it is possible and common to care about outcomes, and also to care about the process used to achieve those outcomes.
On the other hand, it does look like the anti-corrigibility results can be overcome by sometimes having strong preferences over intermediate times (i.e. over particular ways the world should go) rather than final-outcomes.
Yeah. Or strong preferences over processes (although I suppose you can frame a preference over process as a preference over there not being any intermediate time where the agent is actively executing some specific undesired behavior).
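Spelled out (again with notation that is mine, purely illustrative): a process-level rule like “never be executing the undesired behavior b” can be written as a preference over intermediate timeslices, where S_b is the set of world-states in which the agent is mid-execution of b.

```latex
% Illustrative reframing of a process preference as a preference over
% intermediate timeslices; S_b and the -infinity penalty are assumptions
% made for the sketch, not part of the original exchange.
\[
U(s_0, \ldots, s_T) =
\begin{cases}
  u(s_T) & \text{if } s_t \notin S_b \text{ for all } t < T, \\
  -\infty & \text{otherwise.}
\end{cases}
\]
```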
But this only helps us if we have a lot of control over the preferences & constraints of the agent,
It does seem to me that “we have a lot of control over the approaches the agent tends to take” is true and becoming more true over time.
and it has a couple of stability properties.
I doubt that systems trained with ML techniques have these properties. But I don’t think e.g. humans or organizations built out of humans + scaffolding have these properties either, and I have a sneaking suspicion that the properties in question are incompatible with competitiveness.