I continue to agree with my original comment on this post (though it is a bit long-winded and goes off on more tangents than I would like), and I think it can serve as a review.
If this post were to be rewritten, I’d be particularly interested to hear example “deployment scenarios” where we use an AGI without human models and this makes the future go well. I know of two examples:
1. We use strong global coordination to ensure that no powerful AI systems with human models are ever deployed.
2. We build an AGI that can do science / engineering really well (STEM AI), use it to build technology that allows us to take over the world, and then proceed carefully to make the future good.
I don’t know whether anyone endorses either of these as a plan for the future; in any case, I have serious qualms with both.