You claim that superintelligence is not enough to wipe out humanity, and I’m saying that superintelligence trivially gets you resources. If you think that superintelligence and resources are still not enough to wipe out humanity, what more do you want?
What about plans like “hack cryptocurrency for coins worth hundreds of millions of dollars” or “make ransomware attacks” is not trivial? Cybercrimes like these are regularly committed by humans, and so a superintelligence will naturally have a much easier time with them.
If we postulate a superintelligence with nothing but Internet access, it should be many orders of magnitude better at making money in the pure Internet economy (e.g. cybercrime, cryptocurrency, lots of investment stuff, online gambling, prediction markets) than humans are, and some humans already make a lot of money there.
Oh yes, I don’t have any issues with a plan where the machine hacks crypto, though I am not sure how capable it would be of doing that without raising alarms from any group in the world, or how it could guarantee that no one is monitoring it. After that, remember you still need a lot of inferential steps to get from there to successfully deploying that cryptocurrency into things that can exterminate humans. And keep in mind that you need to do all of that without being discovered and in a super short amount of time.
While I expect that this would be the case, I don’t consider it a crux. As long as the AGI can keep itself safe, it doesn’t particularly matter if it’s discovered, as long as it has become powerful enough, and/or distributed enough, that our civilization can no longer stop it. And given our civilization’s level of competence, those are low bars to clear.
The thing is, I don’t really disagree with this. Can you read again what I am arguing against?
You claim that superintelligence is not enough to wipe out humanity, and I’m saying that superintelligence trivially gets you resources. If you think that superintelligence and resources are still not enough to wipe out humanity, what more do you want?
Well, if you say that it trivially gets you resources, we do have a crux.