Thank you for your comment and some great points!

> So there is no inherent tension between the amount of prime-number-computing it can do and the amount of work it can do on other useful tasks (e.g. running a power plant), because doing a useful task might change what computing resources are available to the AI.
I agree with you on this. Would you buy my argument about comparative advantage if we assume that superintelligent systems cannot modify, improve, or replicate themselves, i.e., that they are “fixed-size entities”? If you still wouldn't, could you highlight the additional points you disagree with?
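To make the comparative-advantage intuition concrete for the fixed-size case, here is a minimal numeric sketch. All productivity figures below are invented purely for illustration (they are not taken from the post): even if the AI is absolutely better at both prime generation and running the power plant, letting the human take over the task where its relative disadvantage is smallest frees up AI hours and strictly increases total output of one task while holding the other fixed.

```python
# Toy comparative-advantage sketch: two "fixed-size" agents, two tasks.
# All productivity numbers are invented purely for illustration.

HOURS = 8.0  # hours available to each agent per day

# Units of output per hour of work.
ai    = {"primes": 100.0, "power": 10.0}
human = {"primes":   1.0, "power":  0.5}

def totals(ai_hours_primes, human_hours_primes):
    """Total (primes, power) output given how many of its HOURS each
    agent spends on primes; the remainder goes to the power plant."""
    primes = ai["primes"] * ai_hours_primes + human["primes"] * human_hours_primes
    power = (ai["power"] * (HOURS - ai_hours_primes)
             + human["power"] * (HOURS - human_hours_primes))
    return primes, power

# Baseline: each agent splits its time evenly between the two tasks.
base_primes, base_power = totals(HOURS / 2, HOURS / 2)

# With trade: the human (whose opportunity cost per unit of power is only
# 2 primes, vs. 10 primes for the AI) specializes fully in the power plant.
# The AI shifts just enough hours to power to keep total power unchanged,
# and spends everything else on primes.
human_power = human["power"] * HOURS
ai_power_hours = (base_power - human_power) / ai["power"]
trade_primes, trade_power = totals(HOURS - ai_power_hours, 0.0)

print(f"Even split:     primes={base_primes:.0f}, power={base_power:.1f}")
print(f"Specialization: primes={trade_primes:.0f}, power={trade_power:.1f}")
# Same power output, strictly more primes -- both tasks are served at least
# as well, even though the AI is absolutely better at both.
```

The point is just the standard Ricardian one: what matters is relative, not absolute, productivity, so a fixed-size AI still gains from leaving some tasks to humans.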
> But if one of the people is actually an alien capable of splitting down the middle into two copies, then as soon as they’re more productive than the human they have an incentive to kill the human and use the food to copy themselves.
Also a good point. But said alien would likely not attack the human unless it were absolutely confident it could kill the human with minimal damage to itself; otherwise, the alien risks a debilitating injury, losing the fight, antagonizing the human, and so on. I see a similar line of reasoning for why a “moderately” superintelligent system (sorry for being so imprecise here; I am conveying an idea I am developing on the fly) would not modify/improve/replicate itself if it knew that attempting to do so would trigger a response that risks bad outcomes (e.g., being turned off, having a significant portion of its resources destroyed, or having to spend resources on a lengthy conflict with humans instead of generating prime numbers). Of course, a “highly” superintelligent system would not have to worry about this; it could likely wipe out humanity without much recourse from us.
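Here is a minimal expected-value sketch of that threshold intuition. All probabilities and payoffs below are made up for illustration: a system with an uncertain chance of success and a real cost of conflict prefers the steady payoff of cooperating, while a system with near-certain success and negligible cost does not.

```python
# Toy expected-value sketch of the "attack / self-modify vs. keep cooperating"
# decision. All numbers are invented purely for illustration.

def expected_value_of_defecting(p_success, payoff_success, cost_of_conflict,
                                payoff_if_caught):
    """Expected payoff of attempting self-modification/attack, where failure
    means being detected and shut down (or otherwise punished)."""
    return (p_success * (payoff_success - cost_of_conflict)
            + (1 - p_success) * payoff_if_caught)

PAYOFF_COOPERATE = 100   # value of continuing to generate primes undisturbed
PAYOFF_SUCCESS   = 500   # value of unrestricted resources after a successful grab
PAYOFF_CAUGHT    = 0     # value after being turned off / resources destroyed

# A "moderately" capable system: uncertain success, real cost of conflict.
moderate = expected_value_of_defecting(p_success=0.3,
                                       payoff_success=PAYOFF_SUCCESS,
                                       cost_of_conflict=200,
                                       payoff_if_caught=PAYOFF_CAUGHT)

# A "highly" capable system: near-certain success, negligible cost.
strong = expected_value_of_defecting(p_success=0.99,
                                     payoff_success=PAYOFF_SUCCESS,
                                     cost_of_conflict=10,
                                     payoff_if_caught=PAYOFF_CAUGHT)

print(f"Cooperate:               {PAYOFF_COOPERATE}")
print(f"Moderate system defects: {moderate:.0f}  -> worse than cooperating")
print(f"Strong system defects:   {strong:.0f}  -> better than cooperating")
```

This is only a one-shot comparison; a fuller treatment would model repeated interaction and discounting, but the threshold structure of the decision is the same.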