Can an AI make such a commitment credible to a human, who doesn’t have the intelligence to predict what the AI will do from its source code? (This is a non sequitur since the same question applies in the original scenario, but it came to mind after reading your comment.)
Worse, in such a situation I would simply delete the AI, reduce the computer to scrap, destroy any backups, and for good measure run the remains through the most destructive apparatus I could find.
In any case, I would not assign any significant probability to the AI ever getting a chance to follow through.