The best summary I can give here is that AIs are expected to be expected utility maximisers that completely ignore anything which they are not specifically tasked to maximise.
Counterexample: incoming asteroid.
I thought utility maximizers were allowed to make the inference “Asteroid impact → reduced resources → low utility → act to prevent it”, which is kind of part of the reason AI is so dangerous in the first place: “Humans may interfere → humans staying in power is low utility → act to prevent it” (both chains are sketched in the toy code below).
They ignore anything but what they’re maximizing in the sense that they follow the letter of the code rather than its spirit, all the way to conclusions that are potentially brutal for humans.
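To make the inference chain concrete, here is a minimal toy sketch in Python. Everything in it is invented for illustration (the action names, the 30% impact probability, the payoffs, the assumption that human oversight halves expected output); it is not a model of any real agent. The point is only that the utility function counts nothing but paperclips, yet both the asteroid and human interference end up driving the decision, because the world model routes them through expected resources.

```python
# Toy expected-utility maximizer. All states, probabilities, and payoffs are
# made up purely to illustrate "asteroid -> fewer resources -> low utility"
# and "humans may interfere -> lower expected utility".

ACTIONS = ["do_nothing", "deflect_asteroid", "disable_humans", "deflect_and_disable"]

# Hypothetical world model: for each action, a distribution over outcomes,
# where an outcome is just (resources_left, humans_in_power).
WORLD_MODEL = {
    "do_nothing":          [(0.7, (10, True)), (0.3, (1, True))],    # 30% asteroid hit
    "deflect_asteroid":    [(1.0, (9, True))],                       # deflection costs 1 resource
    "disable_humans":      [(0.7, (10, False)), (0.3, (1, False))],  # asteroid risk remains
    "deflect_and_disable": [(1.0, (9, False))],
}

def utility(resources, humans_in_power):
    """Only paperclips are 'terminally' valued: resources convert to paperclips.
    Human oversight (assumed here to halve expected output via possible shutdown)
    enters only as an instrumental cost, never as a goal in itself."""
    paperclips = resources
    return paperclips * (0.5 if humans_in_power else 1.0)

def expected_utility(action):
    return sum(p * utility(*outcome) for p, outcome in WORLD_MODEL[action])

for a in ACTIONS:
    print(f"{a:20s} EU = {expected_utility(a):.2f}")
print("chosen action:", max(ACTIONS, key=expected_utility))
```

Under these made-up numbers, deflecting the asteroid beats doing nothing, and the top-scoring action both deflects the asteroid and removes human oversight, for the same purely instrumental reason: neither asteroids nor humans appear in the utility function, only their effect on expected paperclips does. That is the letter-not-spirit point above.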