From what I understand, it is extremely unlikely that an AI would fail in such a way that it would 1) kill, enslave, or forcibly contain humans while also 2) benefiting other sentient beings. If the AI fails, it'd be something dumb like turning everything into paperclips.