2) … If you can imagine making your actions more and more granular (at least, up to a reasonably fine level), it seems like there should be a well-defined limit that the coarser representations approximate.
Yeah, I agree there’s an easy way to avoid this problem. My main point in bringing it up was that if your justification that AUP is safe doesn’t depend on the condition “and the action space is sufficiently small,” then that justification must have gaps. Since AUP definitely isn’t safe for sufficiently large action spaces, any argument that purports to show AUP is safe regardless of the size of the action space (including the one presented in the paper) must contain at least one flaw.
You must have read the first version of BoMAI (since you quoted it here :) how did you find it, by the way?). I’d level the same criticism against that draft. I believed I had a solid argument that it was safe, but then I discovered ν†, which proved there was an error somewhere in my reasoning. So I started by patching the error, but I was still haunted by how certain I had felt that it was safe without the patch. I decided I needed to explicitly work out every assumption involved, and in the process I discovered assumptions I hadn’t realized I was making. Likewise, this patch does seem sufficient to avoid the action-granularity problem, but I think the problem shows that a more rigorous argument is needed.
Where did I purport that it was safe for AGI, in the paper or in the post? I specifically note that I’m not making that point yet, although I’m pretty sure we can get there.
There is a deeper explanation which I didn’t have space to fit in the paper and didn’t have the foresight to focus on when I wrote this post. I agree that it calls out for more investigation, and (this feels like a refrain for me at this point) I’ll be answering that call in a more in-depth sequence on what is actually going on at a deep level with AUP, and how fundamental the phenomenon is to agent-environment interaction.
I don’t remember how I found the first version; I think it turned up in a Google search somehow?
Okay fair. I just mean to make some requests for the next version of the argument.