I should add that many people here agree with your stance, except that they believe AI poses a greater risk than benefit. That is, we’ll have to work on AI, but first we should figure out how to make it friendly. That is what the SIAI is working on.
By the way, welcome to Less Wrong. You know me as Alexander Kruel on Facebook.
There seems to be a significant “risk” of making a much better world with much smarter agents and a lot less insanity and stupidity. A lot of people see that as a bad thing, however.
Looking at history, this sort of thing is fairly common. Most kinds of progress face resistance from various kinds of luddites, who would rather things stayed the way they were.
What? I don’t follow. Are you saying it would be a much better world if an unfriendly AI replaced humanity? I don’t think it’s luddite-ish to say I’d rather not die so something else can take my place.
I’d agree to an “unfriendly” AI (whatever that means… it shouldn’t reason emotionally, it should just be sufficiently intelligent) replacing humanity, since we are the problem we’re trying to solve. We feel pain, we suffer, we are stupid, susceptible to countless diseases, we aren’t very happy or fulfilled, etc. Eventually we’ll all need to be either corrected or replaced. An old computer can only take so many software updates before it becomes incompatible with newer operating systems, and this is our eventual fate. In my view, it is not logical to oppose our own demise.