I’m not sure I understand the point of this quote in relation to what I wrote. (Keep in mind that I haven’t read the story, in case the rest of the story offers the necessary context.) One guess is that you’re suggesting that AIs might be more moral than humans “by default” without special effort on the part of effective altruists, so it might not be an existential disaster if AI values end up controlling most of the universe instead of human values. This seems somewhat plausible but surely isn’t a reasonable mainline expectation?