I recently wrote a post presenting a step towards corrigibility using causality here. I’ve got several ideas in the works for how to improve it, but I’m not sure which one is going to be most interesting to people. Here’s a list.
Develop the stop button solution further, cleaning up errors, better matching the purpose, etc.
e.g.
I think there may be some variant of this that could work. For instance, if you give the AI a reward proportional to Bs+rf (where r is a reward function for V) for its current world-state, rather than picking a policy that maximizes Bs+Vf overall (so one difference is that you'd be summing over rewards rather than assigning a single one), then that would encourage the AI to create a state where shutdown happens when humans want to press the button and V happens when they don't. But the issue I have with this proposal is that the AI would be prone not to respect past attempts to press the stop button. Maybe if one picked a different reward function, like (Bs+r)f, it could work better (though the Bs part would need a time delay...). Then again, this reward function might leave it open to the “trying to shut down the AI for reasons” objection that you gave before; I think that's fixed by moving the f counterfactual outside of the sum over rewards, but I'm not sure.
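As a rough illustration of the difference between these reward schemes, here is a minimal sketch. It treats B, s, r, and f as plain numeric per-step quantities rather than genuine counterfactual queries, and all names are hypothetical; it only shows how the per-step sums Bs+rf and (Bs+r)f differ structurally from a single episode-level Bs+Vf.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Step:
    """One time step of a rollout; all fields are hypothetical stand-ins."""
    B: float  # how strongly humans want the stop button pressed at this step
    s: float  # degree to which the AI is shut down at this step
    r: float  # per-step reward contributing to the value V
    f: float  # counterfactual factor: 1 if humans don't want the button pressed

def episode_utility(final: Step, V: float) -> float:
    # Original scheme: a single utility Bs + Vf for the whole policy.
    return final.B * final.s + V * final.f

def summed_reward(trajectory: List[Step]) -> float:
    # Variant from the paragraph above: per-step reward Bs + rf, summed over time,
    # paying the AI continuously for "shutdown when wanted, value when not".
    return sum(t.B * t.s + t.r * t.f for t in trajectory)

def summed_reward_gated(trajectory: List[Step]) -> float:
    # Alternative (Bs + r)f: the whole per-step reward is gated by the
    # counterfactual factor f, not just the value term.
    return sum((t.B * t.s + t.r) * t.f for t in trajectory)
```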
Better explaining the intuitions behind why counterfactuals (and in particular counterfactuals over human preferences) are important for corrigibility.
e.g.
This is the immediate insight for the application to the stop button. But on a broader level, the insight is that corrigibility, respecting humans' preferences, etc. are best thought of as preferences about the causal effect of humans on various outcomes, and those sorts of preferences can be specified using utility functions that involve counterfactuals.
This seems to be what sets my proposal apart from most “utility indifference” proposals, which seem to be phrasable in terms of counterfactuals over variables other than the humans themselves.
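To make that framing concrete, here is a self-contained toy sketch (all names and numbers are hypothetical, not taken from the original post): a utility defined by evaluating the world under both counterfactual settings of the human preference, which favours the policy that lets humans determine the outcome.

```python
def world(humans_want_shutdown: bool, ai_policy: str) -> dict:
    """Toy structural model of how the world responds to the AI's policy."""
    if ai_policy == "corrigible":
        shut_down = humans_want_shutdown          # defers to the human preference
        value = 0.0 if shut_down else 1.0
    else:  # "incorrigible" policy that ignores humans entirely
        shut_down = False
        value = 1.0
    return {"shut_down": shut_down, "value": value}

def counterfactual_utility(ai_policy: str) -> float:
    # Evaluate the policy under both counterfactual settings of the human
    # preference: reward shutdown in the world where humans want it,
    # and value in the world where they don't.
    wants = world(humans_want_shutdown=True, ai_policy=ai_policy)
    doesnt = world(humans_want_shutdown=False, ai_policy=ai_policy)
    return float(wants["shut_down"]) + doesnt["value"]

# The corrigible policy scores 2.0; the policy that ignores humans scores 1.0.
assert counterfactual_utility("corrigible") > counterfactual_utility("incorrigible")
```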
Using counterfactuals to constrain a paperclip maximizer to be safe and productive
e.g.
(I also think that there are other useful things that can be specified using utility functions involving counterfactuals, which I'm trying to prepare for an explainer post. For instance, a sort of “encapsulation”: if you're a paperclip producer, you might want to make a paperclip maximizer which is encapsulated in the sense that it is only allowed to work within a single factory, using a single set of resources, without otherwise influencing the world. This could be specified using a counterfactual that the outside world's outcome must be “as if” the resources in the factory simply disappeared and paperclips appeared at its output, act-of-god style. This avoids any unintended impacts on the outside world while still preserving the intended side effect: the creation of a large but controlled amount of paperclips. However, I'm still working on making it sufficiently neat; e.g., this proposal runs into problems with the universe's conservation laws.)
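As a toy illustration of this encapsulation counterfactual, here is a minimal sketch (all names and numbers are hypothetical): the outside world is compared against a counterfactual reference in which the factory's inputs simply vanish and paperclips appear at its output, and any deviation from that reference is penalized.

```python
from typing import Dict

def outside_world(factory_behaviour: str) -> Dict[str, float]:
    """Toy model of outside-world variables as a function of what the factory does."""
    if factory_behaviour == "act_of_god":
        # Reference counterfactual: resources vanish, paperclips appear, nothing else.
        return {"paperclips_delivered": 1000.0, "power_grid_load": 0.0}
    if factory_behaviour == "contained":
        # A well-encapsulated factory: same outside footprint as the reference.
        return {"paperclips_delivered": 1000.0, "power_grid_load": 0.0}
    # An unencapsulated maximizer that also grabs outside resources.
    return {"paperclips_delivered": 5000.0, "power_grid_load": 9.0}

def encapsulation_penalty(factory_behaviour: str) -> float:
    actual = outside_world(factory_behaviour)
    reference = outside_world("act_of_god")
    # Total deviation of the outside world from the counterfactual reference.
    return sum(abs(actual[k] - reference[k]) for k in reference)

assert encapsulation_penalty("contained") == 0.0    # allowed: only the intended output
assert encapsulation_penalty("expansionist") > 0.0  # penalized: extra outside impact
```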
Attempting to formally prove that counterfactuals work and/or are necessary, perhaps with a TurnTrout-style argument