The vast majority of that hardware won’t be accessible to our AGI unless something has already gone drastically wrong. I agree that an AGI that could gain control of a large fraction of internet-accessible computers would likely become very powerful very quickly, entirely apart from computational complexity questions.
What’s the problem? Google got quite a few people to contribute to Google Compute.
You think that a machine intelligence would be unsuccessful at coming up with better bait for this? Or that attempts to use user cycles are necessarily evil?
Not necessarily. But if using such cycles is a major prerequisite for the AI to go foom, that’s still a reason to lower our estimate that an AI will foom.