There is a profound difference between being persuasive and manipulating all of a human's sensory input. Is your argument not that the AI would try to persuade, but that it would hook every human up to a computer that controlled everything we perceived? If you want to make that your argument, I'm game for discussing it, but it should be made clear that this is a very different argument from an AI changing people's minds through persuasion. But let's discuss it. Manipulating the senses of every human implies a massive deployment and integration of technology by the AI that isn't available today. That's fine; we should expect technology to improve enormously by the time we can build strong AI. But if we assume such vastly improved, deeply integrated technology exists and would let the AI pull the wool over everyone's eyes, we must also assume that humans have used that same technology to better themselves and to build highly intelligent computer security systems. Given that, it seems a stretch to posit that an AI could do this without anyone noticing.
How is this different from saying the AI won't let me take actions that cause extreme outrage? I hope you aren't planning to build an AI that has a sense of personal responsibility but doesn't care whether humans subvert its utility function, so long as it didn't cause them to do so.
I suppose if your actions caused extreme enough outrage, a case might be made, even by the AI's own reasoning, that those actions need to be thwarted. I don't know you, but my guess is you're thinking of religious fundamentalists' feelings about you? Such outrage on its own is (1) somewhat limited and counterbalanced by others, and (2) counterproductive for humanity to act upon, in which case the better response is not to thwart your actions but to work toward tolerance. Now contrast this with an AI trying to effectively replace mankind with easily satisfied humans, and consider how people would respond to that. I think it's clear that humans would work toward shutting such an AI down and would respond with extreme concern for their livelihood. The fact that we're sitting here talking about this as a doomsday scenario seems to be evidence of that concern. Given that, it just doesn't seem to be in the AI's interest to make that choice; the profound concern it would provoke would cause too great a collapse in humanity's well-being.