Human opinions cannot all be created by persuasion alone, because opinions have to start somewhere. People can and do think for themselves, and that's what creates opinions.
This is completely wrong. Again, you give “persuasion” a very narrow scope.
A baby is born without language and certainly without many opinions. It can be shaped by its environment ("persuasion") to be almost anything. Very few of the extremely diverse cultures and subcultures known from history have had any trouble raising their children to behave like the adults around them, with only a small proportion of adolescents typically leaving for another society. And those people had no understanding of how the brain really works, unlike what a superintelligent AI might have.
Short version: it doesn’t matter if people do think for themselves, because they only get to think about their sensory inputs and the AI can control those. Even a perfect Bayesian superintelligence would reach any conclusion you wished if you truly fully controlled all the information it ever received (as long as it had no priors of 0 or 1).
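To spell out that last step, since it's doing the real work: write Bayes' theorem in odds form (a standard identity, nothing specific to this argument):

$$\frac{P(H \mid E_{1:n})}{P(\lnot H \mid E_{1:n})} \;=\; \frac{P(H)}{P(\lnot H)} \prod_{i=1}^{n} \frac{P(E_i \mid H, E_{1:i-1})}{P(E_i \mid \lnot H, E_{1:i-1})}$$

If whoever controls the input always selects the next observation $E_i$ to carry a likelihood ratio of at least some $k > 1$ in favor of $H$, the posterior odds grow at least as fast as $k^n$, so $P(H \mid E_{1:n}) \to 1$ for any prior $P(H)$ strictly between 0 and 1. A prior of exactly 0 or 1 is the one escape hatch: the odds on the left stay frozen no matter what evidence arrives, which is why the parenthetical matters.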
This entire site is based around getting people to not be arbitrarily malleable and to require rationality in making decisions [...] Is this site and community a failure then?
If you end up in an environment controlled by an unfriendly AI, having read this site won’t help you; it’s game over. LW rationality skills work in some worlds, not in any possible world.
Regarding actions that cause outrage, I never said you were constrained by the outrage of others. I said an AI that maximizes human well-being is not going to take actions that cause extreme outrage.
How is this different from saying it’s not going to let me take actions that cause extreme outrage? I hope you aren’t planning on building an AI that has a sense of personal responsibility and doesn’t care if humans subvert its utility function as long as it didn’t cause them to do so.
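To make the collapse concrete, here's a toy sketch (names, numbers, and dynamics are all made up for illustration) of an agent maximizing a state-based utility like "human well-being minus outrage." Because the utility sees only the resulting world state, the agent cannot distinguish outrage it caused from outrage it merely failed to prevent, so "won't take actions that cause outrage" and "won't let you take actions that cause outrage" come out identical:

```python
# Toy consequentialist agent; every name here is hypothetical.

def utility(state):
    """State-based utility: well-being minus outrage in the resulting
    world state. Nothing in it records *who* caused what."""
    return state["wellbeing"] - state["outrage"]

def transition(state, action):
    """Hypothetical dynamics: if the AI does nothing, a human goes on
    to take an outrage-provoking action; if it intervenes, the human
    is blocked and no outrage occurs."""
    if action == "do_nothing":
        return {"wellbeing": 10, "outrage": 8}  # human-caused outrage
    else:  # "intervene"
        return {"wellbeing": 10, "outrage": 0}

def choose_action(actions, state):
    """Pick the action whose resulting state scores highest."""
    return max(actions, key=lambda a: utility(transition(state, a)))

state = {"wellbeing": 10, "outrage": 0}
print(choose_action(["do_nothing", "intervene"], state))
# -> "intervene": the maximizer thwarts the human, because the utility
#    function has no way to express "outrage is fine as long as I
#    didn't cause it."
```

Giving the agent "a sense of personal responsibility" would mean making the utility depend on the causal history of the state rather than the state itself, which is exactly the design choice being questioned here.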
There is a profound difference between being persuasive and manipulating all of a human's sensory input. Is your argument not that the AI would try to persuade, but that it would hook every human up to a computer that controlled everything we perceived? If you want to make that your argument, I'm game for discussing it, but it should be made clear that this is a very different argument from an AI trying to change people's minds through persuasion. But let's discuss it. Manipulating the senses of humans implies massive deployment and integration of technology by the AI that isn't available today. That's fine; we should expect technology to improve enormously by the time we can build strong AI. But so long as we're assuming that such hugely improved, deeply integrated technology is available and would let the AI pull the wool over everyone's eyes, we must also consider that humans will have used that same technology to better themselves and to build extremely capable computer security systems, which makes it a stretch to posit that an AI could do this without anyone noticing.
How is this different from saying it’s not going to let me take actions that cause extreme outrage? I hope you aren’t planning on building an AI that has a sense of personal responsibility and doesn’t care if humans subvert its utility function as long as it didn’t cause them to do so.
I suppose if your actions were extreme enough in the outrage they caused, we might make a case for those actions needing to be thwarted, even by the reasoning of the AI. I don't know you, but my guess is you're thinking of religious fundamentalists' feelings about you? Such outrage on its own is (1) somewhat limited and counterbalanced by others and (2) counterproductive for humanity to act upon, in which case the better response is not to thwart your actions but to work toward tolerance. But let's contrast this with an AI trying to effectively replace mankind with easily satisfied humans, and consider how people would respond to that. I think it's clear that humans would work toward shutting such an AI down and would respond with extreme concern for their livelihood. The fact that we're sitting here talking about this as a doomsday scenario seems to be evidence of that concern. Given that, it just doesn't seem to be in the AI's interest to make that choice; it would simply cause too much of a collapse in the well-being of humanity, given their profound concern about the situation.