I’d agree to “unfriendly” AI (whatever that means… it shouldn’t reason emotionally, it should just be sufficiently intelligent) replacing humanity, since we are the problem we’re trying to solve. We feel pain, we suffer, we are stupid, susceptible to countless diseases, we aren’t very happy or fulfilled, etc. Eventually we’ll all need to be either corrected or replaced. An old computer can only take so many software updates before it becomes incompatible with newer operating systems, and that is our eventual fate. In my view, it isn’t logical to be against our own demise.