I have said nice things about AUP in past papers I wrote, and I will continue to say them. I can definitely see real-life cases where adding an AUP term to a reward function makes the resulting AI or AGI more aligned. Therefore, I see AUP as a useful and welcome tool in the AI alignment/safety toolbox. Sure, this tool alone does not solve every problem, but that hardly makes it a pointless tool.
From your off-the-cuff remarks, I am guessing that you are currently inhabiting the strange place where ‘pivotal acts’ are your preferred alignment solution. I will grant that, if you are in that place, then AUP might appear more pointless to you than it does to me.