Great post.
A couple of months ago I wrote a doc from the prompt "suppose in the future we look back and think AI went well; what might have happened?" I hope to publish that soon; in the meantime, some other classes of victory conditions (according to me) are:
- Timelines are long because of good public policy
- A classic pivotal act is executed (and aligning it isn't too hard)
- Everyone comes to believe that advanced AI is dangerous; there is disinclination to do risky work at the researcher, lab, and society levels
(Edit: I’m curious why this was downvoted; I’d love anonymous feedback if relevant.)