That’s true, but shouldn’t we also give weight to the billions of people who might die if we screw up and create some sort of dangerous AI? Or, in a less exotic scenario, if we end up fighting a war with some kind of world-destroying weapon we invent? We’ve already had some close calls in that department. So far the benefits of the accelerating changes have outweighed the harms, but we’ve been really lucky.
Or, more pertinent to the OP, what about the lives that would be lost if we create a bunch of AIs we don’t consider morally significant, erase them, and then later realize we were wrong to consider them morally insignificant?
“please please slow all this change down”
No way no how. Bring the change on, baby. Bring.It.On.
For those who complain about being on your toes all the time, I say take ballet.
Also, think of all the millions of children you’re killing because we didn’t cure their diseases fast enough.