The agent would have an incentive to stop anyone from doing anything new in response to what the agent did
I think that the stepwise counterfactual is sufficient to address this kind of clinginess: the agent will not have an incentive to take further actions to stop humans from doing anything new in response to its original action, since after the original action happens, the human reactions are part of the stepwise inaction baseline.
The penalty for the original action will take into account human reactions in the inaction rollout after this action, so the agent will prefer actions that result in humans changing fewer things in response. I’m not sure whether to consider this clinginess—if so, it might be useful to call it “ex ante clinginess” to distinguish from “ex post clinginess” (similar to your corresponding distinction for offsetting). The “ex ante” kind of clinginess is the same property that causes the agent to avoid scapegoating butterfly effects, so I think it’s a desirable property overall. Do you disagree?
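To make the distinction concrete, here is a minimal toy sketch (all states and dynamics are invented for illustration; this is not any particular published impact measure). A "mess" action triggers a human cleanup reaction on the following step. Under a stepwise inaction baseline, the penalty for an action compares "act, then no-op" against "no-op from the current state", so the original action is penalized ex ante for the reaction it provokes, but once the action has happened the reaction unfolds in the baseline too and later inaction carries no penalty:

```python
# Toy sketch of a stepwise inaction baseline. States are (mess, cleaned).
# The human reaction (cleaning an uncleaned mess) fires at the start of
# each transition, so it lags the agent's action by one step.

def transition(state, action):
    mess, cleaned = state
    # Human reaction to the current state: clean any uncleaned mess.
    if mess and not cleaned:
        cleaned = True
    # Then the agent's action takes effect.
    if action == "make_mess":
        mess = True
    return (mess, cleaned)

def inaction_rollout(state, steps):
    # Let the world (including human reactions) evolve under agent no-ops.
    for _ in range(steps):
        state = transition(state, "noop")
    return state

def stepwise_penalty(state, action, horizon=3):
    # Compare "act, then no-op" against "no-op from here" (stepwise baseline).
    after_action = inaction_rollout(transition(state, action), horizon)
    baseline = inaction_rollout(state, horizon)
    return 0 if after_action == baseline else 1

start = (False, False)

# Ex ante: the mess-making action's penalty already reflects the human's
# cleanup reaction in the inaction rollout, so the agent prefers actions
# that provoke fewer reactions.
print(stepwise_penalty(start, "make_mess"))  # 1: rollout differs from baseline

# Ex post: once the mess exists, the cleanup happens in the baseline branch
# too, so inaction is unpenalized -- no incentive to stop the reaction.
after = transition(start, "make_mess")
print(stepwise_penalty(after, "noop"))       # 0: reaction occurs in both branches
```

The second print is the "not clingy ex post" claim in miniature: any further action that diverged from the baseline (e.g. interfering with the cleanup) would itself incur penalty, while leaving the reaction alone costs nothing.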
I think it’s generally a good property, as a reasonable person would execute it. The problem, however, is bad ex ante clinginess plans, where the agent has an incentive to pre-emptively constrain our reactions as hard as it can (and it could push this constraint very far).
The problem is lessened if the agent is agnostic to the specific details of the world, but like I said, it seems like we really need IV (or an improved successor to it) to cleanly cut off these perverse incentives.
I’m not sure I understand the connection to scapegoating for the agents we’re talking about; scapegoating is only permitted if credit assignment is explicitly part of the approach and there are privileged “agents” in the provided ontology.
Thanks, glad you liked the breakdown!