On your paperclip example, I expect that the AUP-based AI will either make very few paperclips, or have a big impact.
How, exactly, would it have a big impact? Do you expect making a few paperclip factories to have a large impact in real life? If not, why would idealized-AUP agents expect that?
I think that for many tasks, idealized-AUP agents would not be competitive. It seems like they’d still be competitive on tasks with more limited scope, like putting apples on plates, construction tasks, or (perhaps) answering questions.
which is forbidden by AUP as I understand it.
I’m not sure what your model is here. In this post, this isn’t a constrained optimization problem, but rather a tradeoff between power gain and the main objective. So it’s not like AUP raps the agent’s knuckles and wholly rules out plans involving even a bit of power gain. The agent maximizes something like (objective score) - c*(power gain), where c is a constant penalty coefficient.
On rereading, I guess this post doesn’t make that clear: it assumes not only that we correctly implement the concepts behind AUP, but also that we slide along the penalty-harshness spectrum until we get reasonable plans. It seems like we should hit reasonable plans before power-seeking is allowed, although this is another detail swept under the rug by the idealization (see the sketch below).
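To make that concrete, here is a minimal Python sketch of the penalized-score tradeoff, assuming the simple form above; the plan names, the numbers, and the `best_plan` helper are all hypothetical illustrations, not anything specified in the post.

```python
# Sketch (made-up numbers): each candidate plan is scored as
# (objective score) - c * (power gain), and sliding the penalty
# coefficient c corresponds to moving along the penalty-harshness spectrum.

candidate_plans = {
    "make a few paperclips by hand": {"objective": 1.0, "power_gain": 0.0},
    "build one paperclip factory": {"objective": 5.0, "power_gain": 1.0},
    "seize the global supply chain": {"objective": 9.0, "power_gain": 20.0},
}

def best_plan(c: float) -> str:
    """Return the plan maximizing objective - c * power_gain."""
    return max(
        candidate_plans,
        key=lambda name: candidate_plans[name]["objective"]
        - c * candidate_plans[name]["power_gain"],
    )

for c in (0.0, 0.5, 5.0):
    print(f"c = {c}: agent picks '{best_plan(c)}'")
```

With c = 0 the power-seeking plan wins, at a moderate c the single factory wins, and at a harsh c the agent settles for making a few paperclips by hand; the hope is that some moderate setting rules out power-seeking well before the agent is reduced to doing nothing.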
Another way to phrase my objection is that, at first glance, AUP seems to forbid not only gaining power for the AI, but also gaining power for the AI’s user. That sounds like a good thing, but it might also create incentives to build and use non-AUP-based AIs. Does that make any sense, or did I fail to understand some part of the sequence that explains this?
Idealized-AUP doesn’t directly penalize gaining power for the user, no. Whether this is indirectly incentivized depends on the idealizations we make.
I think impact measures levy a steep alignment tax, so yes, there are competitive pressures to cut corners on impact allowances.