I thought utility maximizers were allowed to make the inference “asteroid impact → reduced resources → low utility → act to prevent that,” which is kind of the reason AI is considered so dangerous in the first place: “humans may interfere → humans remaining in power is low utility → act to prevent that.”
They ignore everything except what they’re maximizing, in the sense that they follow the letter of the code rather than its spirit, all the way to conclusions that are potentially brutal for humans.
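For what it’s worth, here is a minimal toy sketch of that point in Python (my own illustrative outcome names and utility numbers, not anyone’s actual agent design): an expected-utility maximizer scores “deflect the asteroid” and “disempower the humans” by exactly the same inference, because both actions just remove probability mass from a low-utility outcome.

```python
# Toy model: an agent that scores actions purely by expected utility.
# The outcome names, probabilities, and utilities are made up for illustration.

# Utilities of possible outcomes, measured only in what the agent maximizes.
UTILITY = {
    "business_as_usual": 10,
    "asteroid_impact": 1,    # reduced resources -> low utility
    "humans_interfere": 1,   # agent gets shut down -> low utility
}

# Hypothetical outcome distributions for each available action.
ACTIONS = {
    "do_nothing":        {"business_as_usual": 0.8, "asteroid_impact": 0.1, "humans_interfere": 0.1},
    "deflect_asteroid":  {"business_as_usual": 0.9, "asteroid_impact": 0.0, "humans_interfere": 0.1},
    "disempower_humans": {"business_as_usual": 0.9, "asteroid_impact": 0.1, "humans_interfere": 0.0},
}

def expected_utility(outcome_probs):
    """Letter-of-the-code scoring: nothing counts except the utility number."""
    return sum(p * UTILITY[outcome] for outcome, p in outcome_probs.items())

# Both prevention actions beat "do_nothing" for the same structural reason:
# they each eliminate one low-utility outcome. The agent draws no distinction
# between the asteroid and the humans.
for action, probs in ACTIONS.items():
    print(f"{action:18s} EU = {expected_utility(probs):.2f}")
print("chosen:", max(ACTIONS, key=lambda a: expected_utility(ACTIONS[a])))
```

In this toy setup the two prevention actions tie, which is exactly the worry: the inference chain that makes the agent useful against asteroids is the same one that makes it dangerous to the humans who might switch it off.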