Where did I purport that it was safe for AGI in the paper, or in the post? I specifically disclaim making that point yet, although I'm pretty sure we can get there.
There is a deeper explanation which I didn’t have space to fit in the paper, and I didn’t have the foresight to focus on when I wrote this post. I agree that it calls out for more investigation, and (this feels like a refrain for me at this point) I’ll be answering this call in a more in-depth sequence on what is actually going on at a deep level with AUP, and how fundamental the phenomenon is to agent-environment interaction.
I don't remember how I found the first version; I think it was through a Google search somehow?
Okay fair. I just mean to make some requests for the next version of the argument.