(For non-x-risk-focused transhumanists, some of whom may be tech execs or ML researchers.)
Some people treat the possibility of human extinction with a philosophical detachment: who are we to obstruct the destiny of the evolution of intelligent life? If the “natural” course of events for a biological species like ours is to be transcended by our artificial “mind children”, shouldn’t we be happy for them?
I actually do have some sympathy for this view, in the sense that the history where we build AI that kills us is plausibly better than the history where the Industrial Revolution never happens at all. Still—if you had the choice between a superintelligence that kills you and everyone you know, and one that grants all your hopes and dreams for a happy billion-year lifespan, isn’t it worth some effort trying to figure out how to get the latter?