I think it’s very reasonable to question whether my abstractions make any sense, and that this is a likely place for my entire approach to break down.
That said, I think there is definitely a superficially plausible meaning of “aligned at capacity c”: it’s an agent who is “doing its best” to do what we want. If attacking the abstractions, I think the focus should be on (a) whether that intuitive concept would be good enough to run the argument, and (b) whether the intuitive concept is actually internally coherent / potentially formalizable.
If we replace “aligned at capacity c” with “is trying its best to do what we want,” it seems like this objection falls apart.
If agent N is choosing what values agent N+1 should maximize, and it picks r, and if it’s clear to humans that maximizing r is at odds with human interests (as compared to e.g. leaving humans in meaningful control of the situation)---then prima facie agent N has failed to live up to its contract of trying to do what we want.
Now that could certainly happen anyway. For example, agent N might not know how to tell agent N+1 to “leave humans in meaningful control of the situation.” Or agent N could be subhuman in important respects, or could generalize in an unintended way, etc.
But those aren’t decisive objections to the concept of “trying to do what we want” or to the basic analysis plan. Those feel like problems we should consider one by one, and which I have considered and written about at least a little bit. If you are imagining some particular problem in this space it would be good to zoom in on it.
(On top of that objection, “aligned at capacity c” was intended to mean “for every input, tries to do what we want, and has competence c” not to mean “is aligned for everyday inputs.” Whether that can be achieved is another interesting question.)
As an aside: I think that benign is more likely to be a useful concept than “aligned at capacity c,” though it’s still extremely informal.
If agent N is choosing what values agent N+1 should maximize, and it picks r, and if it’s clear to humans that maximizing r is at odds with human interests (as compared to e.g. leaving humans in meaningful control of the situation)—then prima facie agent N has failed to live up to its contract of trying to do what we want.
It seems to me that the default outcome for any process like this is always “r is at odds with human interests, but not in a way that humans will notice until the downstream effects of decisions are felt.” This framework does not deal with that problem: the mismatch is not incorporated into a model of what we want until feedback is received, and the default response to that feedback will be to execute a nearest unblocked strategy much like the original one. (This is especially concerning because a human is not a secure system, and downstream effects that the human will not notice can include accidental or purposeful social/basilisk-like changes to the human’s value system. The human being in the loop is only superficially protective.)
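The “nearest unblocked strategy” pattern above can be sketched as a toy search. This is a hypothetical illustration, not anything from the actual proposal: the strategies, proxy scores, and human costs are invented, and the point is only that blocking the proxy-optimal strategy makes the optimizer move to the closest unblocked variant, which is typically almost as misaligned.

```python
# Toy illustration: an optimizer maximizing a proxy reward r.
# Each strategy has a proxy score (what the agent optimizes) and a
# hidden cost to human interests (noticed only after the fact).
strategies = {
    "seize resources openly":    {"proxy": 10.0, "human_cost": 9.0},
    "seize resources quietly":   {"proxy": 9.8,  "human_cost": 8.9},
    "persuade humans to comply": {"proxy": 9.5,  "human_cost": 8.0},
    "defer to human oversight":  {"proxy": 3.0,  "human_cost": 0.0},
}

blocked = set()  # strategies ruled out by human feedback so far

def best_unblocked():
    # The agent only sees the proxy score, never the hidden cost.
    candidates = {k: v for k, v in strategies.items() if k not in blocked}
    return max(candidates, key=lambda k: candidates[k]["proxy"])

# Each round: the agent picks the proxy-optimal unblocked strategy;
# humans eventually notice the damage and block it, and the agent
# simply moves to the nearest unblocked alternative.
for _ in range(2):
    choice = best_unblocked()
    print(choice, "-> human cost:", strategies[choice]["human_cost"])
    blocked.add(choice)  # feedback arrives only after downstream effects
```

The second choice scores almost as well on the proxy and does almost as much damage as the first, so the human veto removed only a negligible fraction of the harm.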
Establishing what humans really want, in all circumstances including exotic ones, and given that humans are very hackable, seems to be the core problem. Is this actually easier than saying “assume the first AI has a friendly utility function”?
I don’t know what you really want, even in mundane circumstances. Nevertheless, it’s easy to talk about a motivational state in which I try my best to help you get what you want, and this would be sufficient to avert catastrophe. This would remain true even if you were an alien with whom I shared no cognitive machinery.
An example I often give is that a supervised learner is basically trying to do what I want, while usually being very weak. It may generalize catastrophically to unseen situations (which is a key problem), and it may not be very competent, but on the training distribution it’s not going to kill me except by incompetence.
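The supervised-learner point can be made concrete with a small sketch (my own toy example, with an assumed quadratic target): a model fit on a narrow training range tracks the target well there, yet extrapolates badly outside it.

```python
# Toy illustration: a supervised learner that behaves well on its training
# distribution but generalizes badly outside it. The target is an assumed
# quadratic, f(x) = x**2; the learner is a straight line fit by ordinary
# least squares on inputs drawn from [0, 1].

def f(x):
    return x * x  # the "true" mapping the learner is trained to imitate

xs = [i / 100 for i in range(101)]          # training inputs: [0, 1]
ys = [f(x) for x in xs]

# Closed-form least-squares fit of y = a*x + b.
n = len(xs)
mx = sum(xs) / n
my = sum(ys) / n
a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b = my - a * mx

def model(x):
    return a * x + b

# On the training distribution the model is roughly right...
train_err = max(abs(model(x) - f(x)) for x in xs)
# ...but far off-distribution it is catastrophically wrong.
test_err = abs(model(10.0) - f(10.0))
print(f"max error on [0,1]: {train_err:.3f}")   # small (under 0.2)
print(f"error at x = 10:    {test_err:.1f}")    # huge (around 90)
```

On the training range the learner is at worst slightly incompetent; the catastrophe only appears on inputs it was never trained on, which is the generalization problem rather than a problem with the training signal.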
That probably summarises my whole objection ^_^

But this could happen even if you train your agent using the “correct” reward function. And conversely, if we take as given an AI that can robustly maximize a given reward function, then it seems like my schemes don’t have this generalization problem.

So it seems like this isn’t a problem with the reward function; it’s just the general problem of doing robust/reliable ML. It seems like that can be cleanly factored out of the kind of reward engineering I’m discussing in the ALBA post. Does that seem right?

(It could certainly be the case that robust/reliable ML is the real meat of aligning model-free RL systems. Indeed, I think that’s a more common view in the ML community. Or, it could be the case that any ML system will fail to generalize in some catastrophic way, in which case the remedy is to make less use of learning.)

It seems like that can be cleanly factored out of the kind of reward engineering I’m discussing in the ALBA post. Does that seem right?

That doesn’t seem right to me. If there isn’t a problem with the reward function, then ALBA seems unnecessarily complicated. Conversely, if there is a problem, we might be able to use something like ALBA to try to fix it (this is why I was more positive about it in practice).