‘high-level machine intelligence’ (HLMI) and ‘full automation of labor’ (FAOL)
I continue to believe that predicting milestones like these is not particularly useful for predicting when AI will achieve a decisive strategic advantage and/or kill literally everyone. AI could totally kill literally everyone without us ever getting to observe HLMI or FAOL first, and I think progress toward HLMI / FAOL does not say much about how close we are to AI that kills literally everyone.