Note: I’ve only started to delve into the literature about Paul’s agenda, so these opinions are lightly held.
Before I respond to specific points, recall that I wrote:
> I’m not necessarily worried about the difficulties themselves, but that the [uncertainty] framework seems so sensitive to them.
and
> the approval-policy does what a predictor says to do at each time step, which is different from maximizing a signal. Its shape feels different to me; the policy isn’t shaped to maximize some reward signal (and pursue instrumental subgoals). Errors in prediction almost certainly don’t produce a policy adversarial to human interests.
The approval agent is taking actions according to the output of an ML-trained approval predictor; the fact that the policy isn’t selected to maximize a signal is critical, and part of why I find approval-based methods so intriguing. There’s a very specific kind of policy you need in order to pursue instrumental subgoals, which is reliably produced by maximization, but which otherwise seems to be vanishingly unlikely.
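To make that "shape" distinction concrete, here is a minimal sketch of the action rule being described: the policy simply defers to a learned approval predictor at each step, rather than being itself optimized against a return. The names (`approval_directed_step`, `candidate_actions`, the predictor interface) are my own illustrative assumptions, not anything specified in Paul's posts.

```python
from typing import Callable, Sequence, TypeVar

State = TypeVar("State")
Action = TypeVar("Action")


def approval_directed_step(
    state: State,
    candidate_actions: Sequence[Action],
    approval: Callable[[State, Action], float],
) -> Action:
    """Take the single action the learned approval predictor rates highest.

    The policy is defined by deferring to the predictor at each time step;
    the policy itself is never optimized against a long-horizon reward
    signal, so nothing selects it for pursuing instrumental subgoals.
    """
    return max(candidate_actions, key=lambda a: approval(state, a))
```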
> I’m a bit confused because you’re citing this in comparison with approval-directed agency, but doesn’t approval-directed agency also have this problem?
The contrast is in the graceful failure, not (necessarily) in the specific problems.
In addition to the above (even if approval-directed agents have this problem, this doesn’t mean disaster, just reduced performance), my understanding is that approval doesn’t require actually locating the person, just imitating the output of their approval after reflection. This should be trainable in the normal fashion, right? (see the learning from examples section)
Suppose we train the predictor `Approval` using examples and high-powered ML. Then we have the agent take the action most highly rated by `Approval` at each time step. This seems to fail much more gracefully as the quality of `Approval` degrades?
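As a hedged sketch of that setup: fit an approval model on labeled (state, action, approval) examples, then have the agent take the argmax over candidates at each step. The regressor choice, feature encoding, and placeholder data below are assumptions for illustration only, not a claim about how `Approval` would actually be trained.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Placeholder supervised data: encoded (state, action) pairs and the human's
# (post-reflection) approval rating for each pair. Real data would come from
# the "learning from examples" procedure referenced above.
rng = np.random.default_rng(0)
X_train = rng.random((1000, 8))   # 4 state features + 4 action features (arbitrary)
y_train = rng.random(1000)        # stand-in approval labels

approval_model = GradientBoostingRegressor().fit(X_train, y_train)


def act(state_features: np.ndarray, candidate_action_features: np.ndarray) -> int:
    """Return the index of the candidate action the approval model rates highest.

    Prediction errors only shift which candidate wins this argmax; nothing in
    the loop is optimizing the world to push the predictor's output upward,
    which is the "fails gracefully" point above.
    """
    n = len(candidate_action_features)
    pairs = np.hstack([np.repeat(state_features[None, :], n, axis=0),
                       candidate_action_features])
    return int(np.argmax(approval_model.predict(pairs)))


# Example call: one 4-feature state, five candidate actions with 4 features each.
best = act(rng.random(4), rng.random((5, 4)))
```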