Yeah, that upshot sounds pretty reasonable to me. (Though idk if it’s reasonable to think of that as endorsed by “all of MIRI”.)
Therefore, any optimal strategy will make use of those additional resources (killing humans in the process).
Note that this requires the utility function to be completely indifferent to humans (or actively against them).
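To spell that step out with a minimal sketch (my own formalization, not something from the original argument): suppose the agent picks the plan maximizing a utility $U(r, h)$ over resources acquired $r$ and human welfare $h$, and suppose converting human-held resources raises $r$ by some $\Delta r > 0$ while lowering $h$. If $U$ is strictly increasing in $r$ and indifferent to $h$, then

$$
\frac{\partial U}{\partial h} = 0,\quad \frac{\partial U}{\partial r} > 0 \;\Longrightarrow\; U(r + \Delta r,\, h') > U(r,\, h),
$$

so every human-preserving plan is dominated by one that converts those resources, and no optimal plan spares them. If instead the $h$-term is weighted heavily enough that the welfare loss outweighs the resource gain, the inequality flips, which is exactly the "not completely indifferent (or actively against)" condition in the note above.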