This is often overlooked here (perhaps with good reason, since many examples will be controversial). Scenarios of this kind can be very, very bad, much worse than the outcome of a typical unaligned AI like Clippy.
For example, I would take Clippy any day over an AI whose goal was to spread biological life throughout the universe. I expect this may be controversial even here, but see https://longtermrisk.org/the-importance-of-wild-animal-suffering/#Inadvertently_Multiplying_Suffering for why I think this way.