I second Charlie Steiner’s questions, and add my own: why collaboration? A nice property of an (aligned) AGI would be that we could defer activities to it… I would even say that the full extent of “do what we want” at superhuman level would encompass pretty much everything we care about (assuming, again, alignment).
Because human deference is usually motivated by more than deferring for its own sake. So even in that case, there will still need to be some collaboration.