But the AI isn’t being dropped into a completely undeveloped society. It will be dropped into an extremely developed society with values already in place. If the AI were dropped back into the era of early man, I could see major cause for concern. I don’t see humanity, with the values we’ve developed, being radically and entirely changed into something we consider so unsavory by persuasion alone. That doesn’t mean no one could be affected, but I can’t see such a thing happening without outrage from large sects of humanity, which is not what the AI wants.
You underestimate “persuasion alone”. Please consider that (by your definition) all human opinions on all subjects that have existed to date have been created pretty much “by persuasion alone”.
Also, I don’t want to live in a world where what I’m allowed to do or be is constrained by whether it provokes “outrage from large sects of humanity”. There are plenty of sects (properly so called ;-) today that don’t want me to continue existing even the way I already am, at least not without major brainwashing.
Human opinions cannot all have been created by persuasion alone, because opinions have to start somewhere. People can and do think for themselves, and that’s what creates opinions. They might then persuade others to hold those opinions as well, but persuasion is clearly not the sole source, and even then persuasion is not a one-way process where you hit the persuade button and the other person is switched. Your argument seems to be that any human can be persuaded of any opinion at any time, and I just can’t buy that. Humans are malleable, and we’ve made a huge number of mistakes in the past, but I don’t see us as so bad that anyone can have their mind changed to anything regardless of the merit behind it. This entire site is based around getting people to not be arbitrarily malleable and to require rationality in making decisions: that there are objective conclusions and we should strive for them. Is this site and community a failure then? Are all of its people subject to mere persuasion in spite of rationality, unable to think for themselves?
Regarding actions that cause outrage, I never said you were constrained by the outrage of others. I said an AI that maximizes human well-being is not going to take actions that cause extreme outrage.
Human opinions cannot all have been created by persuasion alone, because opinions have to start somewhere. People can and do think for themselves, and that’s what creates opinions.
This is completely wrong. Again, you give “persuasion” a very narrow scope.
A baby is born without language and certainly without many opinions. It can be shaped by its environment (“persuasion”) into almost anything. Very few of the extremely diverse cultures and sub-cultures known from history have had any trouble raising their kids to behave like their adults, with typically only a small proportion of adolescents leaving for another society. And those people had no understanding of how the brain really works, unlike what a superintelligent AI might have.
Short version: it doesn’t matter whether people think for themselves, because they can only think about their sensory inputs, and the AI can control those. Even a perfect Bayesian superintelligence would reach any conclusion you wished if you truly, fully controlled all the information it ever received (as long as it had no priors of 0 or 1).
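As a toy illustration of that last claim (my own sketch, not anything from the original exchange): a Bayesian agent multiplies its prior odds by the likelihood ratio of each observation. If an adversary chooses every observation, it also chooses every likelihood ratio, so any prior strictly between 0 and 1 can be pushed arbitrarily close to any target; a prior of exactly 0 or 1 corresponds to odds of 0 or infinity, which no finite likelihood ratio can move.

```python
def bayes_update(prior, likelihood_ratio):
    """One Bayes update: posterior odds = prior odds * likelihood ratio."""
    odds = (prior / (1.0 - prior)) * likelihood_ratio
    return odds / (1.0 + odds)

# An adversary controlling all input picks each observation's likelihood
# ratio. Here every observation favors hypothesis H by 3:1.
belief = 0.5          # any prior in (0, 1) works
for _ in range(20):
    belief = bayes_update(belief, 3.0)
print(belief)         # ~0.9999999997: near-certainty of H after 20 updates

# With a prior of exactly 0, the odds stay 0 forever, hence the caveat
# about priors of 0 or 1.
print(bayes_update(0.0, 3.0))   # 0.0
```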
This entire site is based around getting people to not be arbitrarily malleable and to require rationality in making decisions [...] Is this site and community a failure then?
If you end up in an environment controlled by an unfriendly AI, having read this site won’t help you; it’s game over. LW rationality skills work in some worlds, not in any possible world.
Regarding actions that cause outrage, I never said you were constrained by the outrage of others. I said an AI that maximizes human well-being is not going to take actions that cause extreme outrage.
How is this different from saying it’s not going to let me take actions that cause extreme outrage? I hope you aren’t planning on building an AI that has a sense of personal responsibility and doesn’t care if humans subvert its utility function as long as it didn’t cause them to do so.
There is a profound difference between being persuasive and manipulating all sensory input of a human. Is your argument not that the AI would try to persuade, but that it would hook every human up to a computer that controls everything we perceive? If you want to make that your argument, I’m game for discussing it, but it should be made clear that this is a very different argument from an AI changing people’s minds through persuasion. But let’s discuss it. Manipulating the senses of all humans implies massive, deeply integrated technology that the AI would command and that is not available today. That’s okay; we should expect technology to improve incredibly by the time we can build strong AI. But if we assume improved technology so pervasive that it would let the AI pull the wool over everyone’s eyes, we must also assume that humans have used that same technology to better themselves and to build extremely capable computer security systems, so it seems a stretch to me to posit that an AI could do this without anyone noticing.
How is this different from saying it’s not going to let me take actions that cause extreme outrage? I hope you aren’t planning on building an AI that has a sense of personal responsibility and doesn’t care if humans subvert its utility function as long as it didn’t cause them to do so.
I suppose if your actions were extreme enough in the outrage they caused, we might make a case for those actions needing to be thwarted, even by the reasoning of the AI. I don’t know you, but my guess is you’re thinking of religious fundamentalists’ feelings about you? Such outrage on its own is (1) somewhat limited and counterbalanced by others, and (2) counterproductive for humanity to act upon, in which case the better course is not to thwart your actions but to work toward tolerance. But let’s contrast this with an AI trying to effectively replace mankind with easily satisfied humans and consider how people would respond to that. I think it’s clear that humans would work toward shutting such an AI down and would respond with extreme concern for their livelihood. The fact that we’re sitting here talking about this as a doomsday scenario seems to be evidence of that concern. Given that, it just doesn’t seem to be in the AI’s interest to make that choice; it would simply cause too great a collapse in the well-being of humanity, given their profound concern over the situation.