I’m a bit confused because you’re citing this in comparison with approval-directed agency, but doesn’t approval-directed agency also have this problem?
Human action models, mistake models, etc., are also difficult in this way.
Approval-directed agency also has to correctly learn or specify what “considered it at length” means (i.e., to learn/specify reflection or preferences-for-reflection) and a baseline model of the current human user as a starting point for reflection, so it’s not obvious to me that it’s much more robust.
I think overall I do lean towards Paul’s approach though, if only because I understand it a lot better. I wonder why Professor Russell doesn’t describe his agenda in more technical detail, or engage much with the technical AI safety community, to the extent that even grad students at CHAI apparently do not know much about his approach. (Edit: This last paragraph was temporarily deleted while I consulted with Rohin, then added back.)
I wonder why Professor Russell doesn’t describe his agenda in more technical detail, or engage much with the technical AI safety community, to the extent that even grad students at CHAI apparently do not know much about his approach.
For the sake of explaining this: for quite a while, he’s been engaging with academics and policymakers, and writing a book; it’s not that he’s been doing research and not talking to anyone about it.
Fyi, when you quote people who work at an organization saying something that has a negative implication about that organization, you make it less likely that people will say things like that in the future. I’m not saying that you did anything wrong here; I just want to make sure that you know of this effect, and that it does make me in particular more likely to be silent the next time you ask about CHAI rather than responding.
Clarification: For me, the general worry is something like “if I get quoted, I need to make sure that it’s not misleading (which can happen even if the person quoting me didn’t mean to be misleading), and that takes time and effort and noticing all the places where I’m quoted, and it’s just easier to not say things at all”.
(Other people may have more worries, like “If I say something that could be interpreted as being critical of the organization, and that becomes sufficiently well-publicized, then I might get fired, so I’ll just never say anything like that.”)
Note: I’ve only started to delve into the literature about Paul’s agenda, so these opinions are lightly held.
Before I respond to specific points, recall that I wrote
I’m not necessarily worried about the difficulties themselves, but that the [uncertainty] framework seems so sensitive to them.
and
the approval-policy does what a predictor says to do at each time step, which is different from maximizing a signal. Its shape feels different to me; the policy isn’t shaped to maximize some reward signal (and pursue instrumental subgoals). Errors in prediction almost certainly don’t produce a policy adversarial to human interests.
The approval agent is taking actions according to the output of an ML-trained approval predictor; the fact that the policy isn’t selected to maximize a signal is critical, and part of why I find approval-based methods so intriguing. There’s a very specific kind of policy you need in order to pursue instrumental subgoals, which is reliably produced by maximization, but which otherwise seems to be vanishingly unlikely.
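To make that structural point concrete, here is a minimal sketch; the function names, type aliases, and brute-force search below are my own illustration (not anything from Paul's writeup) of the difference between a policy produced by maximizing a signal over futures and a policy that simply does what an approval predictor rates highest at the current step:

```python
from typing import Callable, Iterable

State = str
Action = str

def reward_maximizing_action(
    state: State,
    actions: Iterable[Action],
    transition: Callable[[State, Action], State],
    reward: Callable[[State, Action], float],
    horizon: int,
) -> Action:
    """Search over multi-step futures for the plan with the highest summed reward.
    This optimization over futures is the ingredient that produces instrumental subgoals."""
    actions = list(actions)

    def best_return(s: State, depth: int) -> float:
        if depth == 0:
            return 0.0
        return max(reward(s, a) + best_return(transition(s, a), depth - 1) for a in actions)

    return max(actions, key=lambda a: reward(state, a) + best_return(transition(state, a), horizon - 1))

def approval_directed_action(
    state: State,
    actions: Iterable[Action],
    approval: Callable[[State, Action], float],
) -> Action:
    """Take whichever single action the learned approval predictor rates highest right now.
    There is no search over futures, so a mediocre predictor yields mediocre actions
    rather than a plan optimized against the predictor's errors."""
    return max(actions, key=lambda a: approval(state, a))
```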
I’m a bit confused because you’re citing this in comparison with approval-directed agency, but doesn’t approval-directed agency also have this problem?
The contrast is in failing gracefully, not (necessarily) in the specific problems.
In addition to the above (even if approval-directed agents have this problem, it doesn’t mean disaster, just reduced performance), my understanding is that approval doesn’t require actually locating the person, just imitating the output of their approval after reflection. That should be trainable in the normal fashion, right? (See the learning-from-examples section.)
Suppose we train the predictor Approval using examples and high-powered ML. Then we have the agent take the action most highly rated by Approval at each time step. This seems to fail much more gracefully as the quality of Approval degrades?
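For concreteness, a minimal sketch of that setup; the linear least-squares fit, the feature encoding, and the toy data are stand-ins I chose ("high-powered ML" would of course replace them):

```python
import numpy as np

def train_approval(features: np.ndarray, scores: np.ndarray):
    """Fit a predictor of human approval from labeled examples.
    features: (n_examples, d) encodings of (state, action) pairs; scores: (n_examples,)."""
    weights, *_ = np.linalg.lstsq(features, scores, rcond=None)
    return lambda feature_vec: float(feature_vec @ weights)

def act(candidate_features: list, approval) -> int:
    """At each time step, return the index of the candidate action the predictor rates highest.
    The policy just reads off the predictor's output; it is not itself trained to maximize it,
    so degradation in the predictor shows up as worse choices, not adversarial ones."""
    return int(np.argmax([approval(f) for f in candidate_features]))

# Toy usage: three labeled examples, then choose among two candidate actions.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(3, 4)), np.array([0.2, 0.9, 0.5])
approval = train_approval(X, y)
chosen = act([rng.normal(size=4), rng.normal(size=4)], approval)
```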