How would a military that is increasingly run by AI factor into these scenarios? It seems most similar to organizational safety, a la Google building software with SWEs, but the disanalogy might be that the AI is explicitly supposed to take over some part of the world and might have interpreted a command incorrectly. Or does this article only consider the AI taking over because it wanted to take over?
Very likely it's only considering AI taking over because it functionally "wants" to take over. That's the standard concern in AGI safety/alignment. "Misinterpreting" commands could result in it "wanting" to take over, but if it's still taking commands, those orders could probably be reversed before too much damage was done. We tend to mostly worry about the-humans-are-dead (or out-of-power-and-never-getting-it-back, which, unless it's a very benevolent takeover, probably means on the way to the-humans-are-dead).