The more competent AIs will be conquering the universe, so it is the value of the universe being optimized in each of the possible ways that is weighed against the low measure.
My argument is about utility, even though the probability is low. On the other hand, with enough computational power, a sufficiently clever evolutionary dynamic might well blow up the universe.
If that’s what we’re worried about, then we might as well ask whether it’s risky to randomly program a classical computer and then run it.