Re: scenario 3, see The Evitable Conflict, the last story in Isaac Asimov’s “I, Robot”:
“Stephen, how do we know what the ultimate good of Humanity will entail? We haven’t at our disposal the infinite factors that the Machine has at its! Perhaps, to give you a not unfamiliar example, our entire technical civilization has created more unhappiness and misery than it has removed. Perhaps an agrarian or pastoral civilization, with less culture and less people would be better. If so, the Machines must move in that direction, preferably without telling us, since in our ignorant prejudices we only know that what we are used to, is good – and we would then fight change. Or perhaps a complete urbanization, or a completely caste-ridden society, or complete anarchy, is the answer. We don’t know. Only the Machines know, and they are going there and taking us with them.”
I’m not sure I understand the point of this quote in relation to what I wrote. (Keep in mind that I haven’t read the story, in case the rest of it supplies the necessary context.) One guess is that you’re suggesting that AIs might be more moral than humans “by default”, without special effort on the part of effective altruists, so it might not be an existential disaster if AI values rather than human values end up controlling most of the universe. This seems somewhat plausible, but surely it isn’t a reasonable mainline expectation?