It’s not clear whether that will mean the end of humanity in the sense of the systems we’ve created destroying us. It’s not clear if that’s the case, but it’s certainly conceivable. If not, it also just renders humanity a very small phenomenon compared to something else that is far more intelligent and will become incomprehensible to us, as incomprehensible to us as we are to cockroaches.
It’s interesting that he seems to be in such despair over this now. To the extent that he’s worried about existential/catastrophic risks, I wonder if he is unaware of efforts to mitigate those, or if he is aware but thinks they are hopeless (or at least not guaranteed to succeed, which is fair enough). To the extent that he’s more broadly worried about human obsolescence (or, anyway, something more metaphysical), well, there are people trying to slow or stop AI, and others trying to enhance human capabilities; maybe he’s pessimistic about those efforts, too.
I am working on human capability enhancement via genetics. I think it’s quite plausible that within a decade we could create humans smarter than any who have ever lived. But even I think that digital intelligence wins in the end.
It just seems obvious to me. The only reason I’m even working in the field is that I think enhanced humans could play a critical role in the development of aligned AI. Of course, this requires time for them to grow up and do research, time we are increasingly short of. But in case AGI takes longer than projected, or we get our act together and ban further AI capability improvements until alignment is solved, the work still seems worth continuing to me.