One IRC-character-limit text string (510 bytes)… maybe?
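For concreteness, here's roughly where that 510 comes from (a sketch, not gospel: RFC 1459 caps a raw IRC line at 512 bytes including the trailing CRLF, and in practice the command prefix eats into the budget too, which I'm ignoring):

    # RFC 1459 caps a raw IRC line at 512 bytes, *including* the trailing
    # CRLF, so the usable payload of one message is at most 510 bytes.
    # (A "PRIVMSG #channel :" prefix would shrink this further; ignored
    # here for simplicity.)
    IRC_LINE_LIMIT = 512
    PAYLOAD_LIMIT = IRC_LINE_LIMIT - len(b"\r\n")  # 510 bytes

    def fits_in_one_message(text: str) -> bool:
        """Would the boxed AI's entire output fit in a single IRC message?"""
        return len(text.encode("utf-8")) <= PAYLOAD_LIMIT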
“I’ve figured out the Theory Of Everything, although it’s orders of magnitude too complicated for human intelligence. The difficult technological feats it makes possible include scanning and recreating configurations of matter from the past. Are there any deaths I should undo for you?”
To be clear: I think I can make a rational case that my proposed claim should greatly reduce your incentives to listen to an AI of questionable Friendliness. However, I'm not certain my reasoning is correct; and even if it were, I suspect the emotional impact could stop some gatekeepers from thinking rationally for long enough that the AI buys time for further persuasion.
Upvoted for the highest ratio of persuasiveness to AI power required.
Isn’t this just Pascal’s Mugging?
Sure, except instead of some homeless-looking guy, this is a superintelligent AI making the offer, and thus much more credible. (Also, the lack of huge, mind-boggling numbers like 3^^^3 means the leverage penalty doesn’t apply nearly as heavily.)
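To spell out the arithmetic (assuming the leverage penalty roughly as Eliezer formulates it, i.e. a prior penalty of about 1/N on hypotheses in which you get to affect N other people, and my own ballpark of 10^11 for the number of humans who have ever died):

\[
P(H_N) \lesssim \frac{1}{N}, \qquad
N_{\text{mugging}} = 3\uparrow\uparrow\uparrow 3 \;\Rightarrow\; \text{penalty} \approx \frac{1}{3\uparrow\uparrow\uparrow 3}, \qquad
N_{\text{resurrection}} \approx 10^{11} \;\Rightarrow\; \text{penalty} \approx 10^{-11}.
\]

A factor of 10^-11 is a hurdle that ordinary evidence of superintelligence can clear; a factor of 1/3↑↑↑3 isn't.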
I don’t have any reason to believe it, and it’s the sort of “generic” claim I’d expect a transhuman intelligence to make. Since I haven’t learned anything novel, AI DESTROYED
(Goodness, I’m starting to build generalized techniques for destroying AIs...)