In April 2020, my Metaculus median for the date a weakly general AI system is publicly known was Dec 2026. The super-team announcement hasn’t really changed my timelines.
Running more than one copy of this system at superhuman speed safely is something no one has any idea how to even approach, and unless this insanity is stopped so that we have many more than four years to solve alignment, we’re all dead.
My implication was that the quoted claim of yours was extreme and very likely incorrect (“we’re all dead” and “unless this insanity is stopped”, for example). I guess I failed to make that clear in my reply; perhaps LW comment norms require eschewing ambiguity and implication. I was not making an object-level claim about your timeline models.
Thanks for clarifying; I didn’t get that from a comment about the timelines.
“Insanity” refers to the situation where humanity allows AI labs to race ahead, hoping they’ll solve alignment along the way. I’m pretty sure that if the race isn’t stopped, everyone will die once the first smart-enough AI is launched.
Is this “extreme” because everyone dies, or because I’m confident this is what happens?
On the upside, now you have a concrete timeline for how long we have to solve the alignment problem, and how long we are likely to live!