running more than one copy of this system at a superhuman speed safely is something no one has any idea how to even approach, and unless this insanity is stopped so we have many more than four years to solve alignment, we’re all dead
My implication was that the quoted claim of yours was extreme and very likely incorrect (“we’re all dead” and “unless this insanity is stopped”, for example). I guess I failed to make that clear in my reply—perhaps LW comment norms require you to eschew ambiguity and implication. I was not making an object-level claim about your timeline models.
Thanks for clarifying; I didn’t get that from a comment about the timelines.
“Insanity” refers to the situation where humanity allows AI labs to race ahead, hoping they’ll solve alignment on the way. I’m pretty sure that if the race isn’t stopped, everyone will die once the first sufficiently smart AI is launched.
Is this “extreme” because everyone dies, or because I’m confident this is what happens?