We have good-enough alignment for the AIs we have. We don't have a general solution to alignment that will work for the ASIs we don't have. We also don't know whether we need one, i.e., we don't know that we need to solve ASI alignment beyond getting ASIs to work acceptably.
I constantly see conflations of AI and ASI. That doesn't give me much faith in amateur (i.e., unconnected to industry) efforts at AI safety.