So, we’re being asked to imagine an arbitrary superhuman AI whose properties and abilities we can’t guess at, except by specifying them arbitrarily.
Quite a lot of discussion of future superintelligent AI takes this form: “we can’t understand it, therefore you can’t prove it wouldn’t do any arbitrary thing I assert.” This already makes discussion difficult.