Why would an AI consider those two scenarios and no others? Seems more likely it would have to chew over every equivalently-complex hypothesis before coming to any actionable conclusion… at which point it stops being a worrisome, potentially world-destroying AI and becomes a brick, with a progress bar that won’t visibly advance until after the last proton has decayed.
More generally, I mean that an AI capable of succumbing to this particular problem wouldn’t be able to function in the real world well enough to cause damage.
> Why would an AI consider those two scenarios and no others? Seems more likely it would have to chew over every equivalently-complex hypothesis before coming to any actionable conclusion… at which point it stops being a worrisome, potentially world-destroying AI and becomes a brick, with a progress bar that won’t visibly advance until after the last proton has decayed.
… which doesn’t solve the problem, but at least that AI won’t be giving anyone… five dollars? Your point is valid, but it doesn’t expand on anything.
> More generally I mean that an AI capable of succumbing to this particular problem wouldn’t be able to function in the real world well enough to cause damage.
I’m not sure that was ever a question. :3