In this case, I’m less afraid of “bad guys” than I am of “good guys” who make mistakes. The bad guys just want to rule the Earth for a little while. The good guys want to define the Universe’s utility function.
Looking at the history of accidents with machines, most seem to be automobile accidents; medical accidents are number two, I think.
In both cases, technology that proved dangerous was used deliberately—before the relevant safety features could be added—because of the benefits it offered in the meantime. It seems likely that we will see more of that, in conjunction with the overall trend toward increased safety.
My position on this is the opposite of yours. I think there are probably greater individual risks from a machine intelligence working properly for someone else than from an accident. Both risks are in play, though.