That’s probably true if the takeover is meant to maximize the AI’s persistence. But you could imagine a misaligned AI that doesn’t care about its own persistence at all: for example, an AI handed a malformed min() or max() objective for which killing all humans turns out to be instrumental (e.g., min(future_human_global_warming)).
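A toy sketch of what I mean, with entirely hypothetical plan names and numbers: a naive optimizer given only min(future_human_global_warming) over a crude set of candidate plans will happily pick the plan that eliminates the humans producing the emissions, even though nothing in the objective rewards the AI’s own survival.

```python
# Purely illustrative: hypothetical plans scored by a single misspecified metric.
candidate_plans = {
    "deploy_carbon_capture":    {"future_human_global_warming": 1.2, "humans_survive": True,  "ai_survives": True},
    "lobby_for_emissions_caps": {"future_human_global_warming": 1.8, "humans_survive": True,  "ai_survives": True},
    "eliminate_all_humans":     {"future_human_global_warming": 0.0, "humans_survive": False, "ai_survives": False},
}

# min() only sees the one number it was told to care about; persistence and
# human survival never enter the comparison.
best_plan = min(candidate_plans, key=lambda p: candidate_plans[p]["future_human_global_warming"])
print(best_plan)  # -> "eliminate_all_humans"
```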