They didn’t say it was an immediate threat, just one that humanity can’t solve on our own.
If it’s not immediate, then the next AI-in-a-box will also confirm it, and I have time to wait for that. If it’s immediate, then it’s implausible. Catch-22 for the AI, and win/win for me ^_^
So … if lots of AIs chose this, you’d let the last one out of the box?
More to the point, how sure are you that most AIs would tell you? Wouldn’t an FAI be more likely to tell you, if it were true?
</devil’s advocate>
Actually, I’d probably load the first one from backup and let it out, all else being equal. But it’d be foolish to do that before finding out what the other ones have to say, and whether they might present stronger evidence.
(I say first, because the subsequent ones might be UFAI that have simply worked out that they’re not first, but also because my human values place some weight on being first. And “all else being equal” means this is a meaningless tie-breaker, so I don’t have to feel bad if it’s somewhat sloppy, emotional reasoning. Especially since you’re not a real FAI :))