I wouldn’t be so quick to discard the idea of the AI persuading us that things are pretty nice the way they are. There are probably strong limits to the persuadability of human beings, so it wouldn’t be a disaster. And there is a long tradition of advice regarding the (claimed) wisdom of learning to enjoy life as you find it.
I wouldn’t be so quick to discard the idea of the AI persuading us that things are pretty nice the way they are.
Suppose the AI we build (AI1) finds itself insufficiently intelligent to persuade us. It decides to build a more powerful AI (AI2) to give it advice. AI2 wakes up and modifies AI1 into being perfectly satisfied with the way things are. Then, mission accomplished, they both shut down and leave humanity unchanged.
I think what went wrong here is that this formulation of utilitarianism isn't reflectively consistent: when the goal is handed to a successor, "make the goal-holder satisfied" can be met by modifying the goal-holder rather than the world.
There are probably strong limits to the persuadability of human beings, so it wouldn’t be a disaster.
If there are, then the AI would modify us physically instead.
Why do you say these “strong limits” exist? What are they?
I do think that everyone being persuaded to be Bodhisattvas is a pretty good possible future, but there are better futures that might be given up along that path. (Immortal cyborg-Bodhisattvas?)
Strong limits? You mean the limit of how much the atoms in a human can be rearranged and still be called ‘human’?